The reliability and electromigration resistance of planarized metallization patterns, e.g., of copper, in-laid in the surface of a layer of dielectric material, are enhanced by a process comprising blanket-depositing on the planarized, upper surfaces of the metallization features and the dielectric layer at least one thin layer comprising at least one alloying element for the metal of the features, and then uniformly diffusing at least a minimum amount of the at least one thin layer for a minimum depth below the upper surfaces of the metallization features to effect alloying therewith. The alloyed portions of the metallization features advantageously reduce electromigration therefrom. Planarization, as by CMP, may be performed subsequent to diffusion/alloying to remove any remaining elevated, alloyed or unalloyed portions of the at least one thin layer. The invention finds particular utility in "back-end" metallization processing of high-density integrated circuit semiconductor devices having sub-micron dimensioned metallization features.
What is claimed is: 1. A method of manufacturing an electrical device, which method comprises the sequential steps of:(a) providing a substrate including at least one damascene-type metal feature in-laid in the upper, exposed surface of a layer of dielectric material overlying at least a portion of said substrate, the at least one metal feature including an upper, exposed surface substantially co-planar with said upper surface of said layer of dielectric material; (b) blanket-depositing at least one layer comprising at least one electromigration reducing alloying element for said metal feature on said exposed, upper surface of said at least one metal feature and on said upper surface of said layer of dielectric material; (c) annealing to substantially uniformly diffuse at least a predetermined minimum amount of said at least one alloying element from said at least one layer comprising said at least one alloying element into said at least one metal feature for at least a predetermined minimum depth below said upper surface thereof, whereby electromigration of the metal of said at least one metal feature is minimized or substantially prevented; and (d) removing any remaining, alloyed and/or unalloyed portion(s) of said at least one layer comprising said at least one alloying element which extend(s) above said surface of said layer of dielectric material, thereby making said upper surface of said at least one metal feature substantially co-planar with said upper surface of said dielectric layer. 2. The method as in claim 1, further comprising the step of:(a') exposing said upper surface of said at least one metal feature to a reducing agent or atmosphere prior to performing step (b). 3. The method as in claim 1, wherein:step (d) comprises removing any remaining, alloyed and/or unalloyed portion(s) of said at least one layer comprising at least one alloying element by etching or chemical-mechanical polishing (CMP). 4. The method as in claim 1, wherein said electrical device comprises a semiconductor integrated circuit device, and:step (a) comprises providing as said substrate a semiconductor wafer of monocrystalline silicon (Si) or gallium arsenide (GaAs) having a major surface, said dielectric layer is formed over at least a portion of said major surface, and said at least one damascene-type, in-laid metal feature comprises a plurality of features of different widths and/or depths for providing vias, interlevel metallization, and/or interconnection lines of at least one active device region or component formed on or within said semiconductor wafer. 5. The method as in claim 4, wherein:said metal of said at least one in-laid metal feature is unalloyed copper (Cu). 6. The method as in claim 5, wherein:step (b) comprises blanket-depositing at least one layer comprising at least one alloying element selected from the group consisting of: tin (Sn), boron (B), magnesium (Mg), carbon (C), palladium (Pd), cobalt (Co), nickel (Ni), and cadmium (Cd). 7. The method as in claim 6, wherein:step (b) comprises blanket-depositing said at least one layer comprising at least one alloying element by a physical vapor deposition (PVD) process. 8. The method as in claim 7, wherein:step (b) comprises blanket-depositing said at least one layer comprising at least one alloying element by sputtering, ion plating, or vacuum evaporation. 9. 
The method as in claim 6, wherein:step (b) comprises blanket-depositing said at least one layer comprising at least one alloying element in a predetermined minimum thickness at least sufficient to provide a predetermined minimum concentration of said at least one alloying element of from about 0.1 to about 4 at. % for at least a predetermined minimum depth below said upper surface of said at least one Cu metal feature. 10. The method as in claim 9, wherein:step (c) comprises annealing at a temperature of from about 200° C. to about 450° C. for from about 60 sec. to about 90 min. in an inert atmosphere. 11. The method as in claim 9, wherein:step (d) comprises removing any remaining, alloyed and/or unalloyed portion(s) of said at least one layer comprising at least one alloying element by etching or chemical-mechanical polishing (CMP). 12. The method as in claim 5, further comprising the step of:(a') exposing said upper surface of said at least one Cu metal feature to a reducing agent or atmosphere for reducing any copper oxide present thereat, prior to performing step (b). 13. The method as in claim 12, wherein:step (a') comprises exposing said upper surface of said at least one Cu metal feature to a hydrogen plasma. 14. The method as in claim 1, wherein:step (a) for providing said substrate including at least one damascene-type, in-laid metal feature comprises the preliminary steps of: i. forming a dielectric layer on a surface of a substrate, said dielectric layer having an exposed, upper surface; ii. forming at least one recess in said exposed, upper surface of said dielectric layer; iii. depositing a metal layer filling the at least one recess and extending over said upper surface of said dielectric layer; iv. removing the portion(s) of the metal layer extending over said upper surface of said dielectric layer; and v. removing any excess thickness portion(s) of the metal layer filling the at least one recess which extend(s) above said upper surface of said dielectric layer, thereby making the upper surface of said at least one in-laid metal feature substantially co-planar with said upper surface of said dielectric layer. 15. The method as in claim 14, wherein:preliminary step v. comprises planarizing by chemical-mechanical polishing (CMP). 16. 
A method of manufacturing a semiconductor integrated circuit device, which method comprises the sequential steps of:(a) providing a substrate comprising a semiconductor wafer of monocrystalline Si or GaAs and having a major surface, a dielectric layer formed on at least a portion of said major surface and having an exposed, upper surface, at least one damascene-type, unalloyed Cu metal feature in-laid in said upper surface of said dielectric layer, the at least one Cu metal feature including an exposed, upper surface substantially co-planar with said upper surface of said dielectric layer; (b) blanket-depositing at least one layer comprising at least one electromigration reducing alloying element for said Cu metal feature on said exposed, upper surface of said at least one Cu metal feature and on said exposed, upper surface of said dielectric layer, said at least one alloying element being selected from the group consisting of: Sn, B, Mg, C, Pd, Co, Ni, and Cd; (c) annealing to substantially uniformly diffuse the at least one alloying element into said at least one Cu metal feature for at least a minimum depth below said upper surface thereof, thereby to minimize or substantially prevent electromigration of Cu atoms and/or ions therefrom, the thickness of said at least one layer comprising said at least one alloying element being sufficient to provide a predetermined minimum concentration thereof of from about 0.1 to about 4 at. % for a predetermined minimum depth of at least about 20 Å below said upper surface of said at least one Cu metal feature; and (d) removing any remaining alloyed and/or unalloyed portion(s) of said at least one layer comprising said at least one alloying element which extend(s) above said upper surface of said dielectric layer, thereby making said upper surface of said at least one Cu metal feature substantially co-planar with said upper surface of said dielectric layer. 17. The method as in claim 16, further comprising the step of:(a') exposing said upper surface of said at least one Cu metal feature to a reducing agent or atmosphere to reduce any copper oxide present thereat prior to performing step (b). 18. The method as in claim 17, wherein:step (a') comprises exposing said upper surface of said at least one Cu metal feature to a hydrogen plasma. 19. The method as in claim 16, wherein:step (c) comprises annealing at a temperature of from about 200° C. to about 450° C. for from about 60 sec. to about 90 min. in an inert atmosphere. 20. The method as in claim 16, wherein:step (a) comprises providing a semiconductor wafer having a dielectric layer on a major surface thereof which comprises a plurality of in-laid, unalloyed Cu metal features of different widths and/or depths for providing vias, interlevel metallization, and/or interconnection lines of at least one active device region or component formed on or within said semiconductor wafer.
CROSS-REFERENCE TO RELATED APPLICATION
This application contains subject matter related to subject matter disclosed in co-pending U.S. patent application Ser. No. 09/477,821, filed on Jan. 5, 2000.
FIELD OF THE INVENTION
The present invention relates to electrical devices, e.g., semiconductor integrated circuit devices, having in-laid ("damascene"-type) metallization patterns, e.g., interconnection lines, etc., and to a method for minimizing, or substantially preventing, deleterious electromigration of the metallic element(s) of the metallization pattern. More specifically, the present invention relates to semiconductor devices comprising copper (Cu) interconnection patterns and to the manufacture of high-speed integrated circuits having sub-micron dimensioned design features and high electrical conductivity interconnect structures.
BACKGROUND OF THE INVENTION
The present invention relates to a method for forming metal films as part of metallization processing of particular utility in the manufacture of electrical and electronic devices, e.g., circuit boards and semiconductor integrated circuits, and is especially adapted for use in processing employing "in-laid" or "damascene"-type technology. The escalating requirements for high density and performance associated with ultra-large scale integration (ULSI) semiconductor device wiring are difficult to satisfy in terms of providing sub-micron-sized (e.g., 0.18 µm and under), low resistance-capacitance (RC) time constant metallization patterns, particularly wherein the sub-micron-sized metallization features, such as vias, contact areas, lines, etc. require grooves, trenches, and other shaped openings or recesses having very high aspect (i.e., depth-to-width) ratios due to microminiaturization. Semiconductor devices of the type contemplated herein typically comprise a semiconductor wafer substrate, usually of doped monocrystalline silicon (Si) or, in some instances, gallium arsenide (GaAs), and a plurality of sequentially formed interlayer dielectrics and electrically conductive patterns formed therein and/or therebetween. An integrated circuit is formed therefrom containing a plurality of patterns of conductive lines separated by interwiring spacings, and a plurality of interconnect lines, such as bus lines, bit lines, word lines, and logic interconnect lines. Typically, the conductive patterns of vertically spaced-apart metallization layers or strata are electrically interconnected by a vertically oriented conductive plug filling a via hole formed in the inter-layer dielectric layer separating the layers or strata, while another conductive plug filling a contact area hole establishes electrical contact with an active device region, such as a source/drain region of a transistor, formed in or on the semiconductor substrate. Conductive lines formed in groove- or trench-like openings in overlying inter-layer dielectrics extend substantially parallel to the semiconductor substrate. 
Semiconductor devices of such type fabricated according to current technology may comprise five or more layers or strata of such metallization in order to satisfy device geometry and microminiaturization requirements. Electrically conductive films or layers of the type contemplated for use in, e.g., "back-end" semiconductor manufacturing technology for fabricating devices having multi-level metallization patterns such as described supra, typically comprise a metal such as titanium (Ti), tantalum (Ta), tungsten (W), aluminum (Al), chromium (Cr), nickel (Ni), cobalt (Co), silver (Ag), gold (Au), copper (Cu) and their alloys. In use, each of the enumerated metals presents advantages as well as drawbacks. For example, Al is relatively inexpensive, exhibits low resistivity, and is relatively easy to etch. However, in addition to being difficult to deposit by lower cost, lower temperature, more rapid "wet" type technology such as electrodeposition, step coverage with Al is poor when the metallization features are scaled down to sub-micron size, resulting in decreased reliability of interconnections, high current densities at certain locations, and increased electromigration. In addition, certain low dielectric constant materials, e.g., polyimides, when employed as dielectric inter-layers, create moisture/bias reliability problems when in contact with Al. Copper (Cu) and Cu-based alloys are particularly attractive for use in large scale integration (LSI), very large-scale integration (VLSI), and ultra-large scale integration (ULSI) semiconductor devices requiring multi-level metallization systems for "back-end" processing of the semiconductor wafers on which the devices are based. Cu- and Cu alloy-based metallization systems have very low resistivities, i.e., significantly lower than that of W and even lower than those of previously preferred systems utilizing Al and its alloys, as well as a higher (but not complete) resistance to electromigration. Moreover, Cu and its alloys enjoy a considerable cost advantage over a number of the above-enumerated metals, notably Ag and Au. Also, in contrast to Al and the refractory-type metals (e.g., Ti, Ta, and W), Cu and its alloys can be readily deposited at low temperatures in good quality, bright layer form by well-known "wet" plating techniques, such as electroless and electroplating techniques, at deposition rates fully compatible with the requirements of device manufacturing throughput. Electroless plating of Cu generally involves the controlled auto-catalytic deposition of a continuous film of Cu or an alloy thereof on a catalytic surface by the interaction of at least a Cu-containing salt and a chemical reducing agent contained in a suitable solution, whereas electroplating comprises employing electrons supplied to an electrode (comprising the surface(s) to be plated) from an external source (i.e., a power supply) for reducing Cu ions in solution and depositing reduced Cu metal atoms on the plating surface(s). In either case, a nucleation/seed layer is required for catalysis and/or deposition on the types of substrates contemplated herein. Finally, while electroplating requires a continuous nucleation/seed layer, very thin and discontinuous islands of a catalytic metal may be employed with electroless plating. As indicated above, a commonly employed method for forming "in-laid" metallization patterns as are required for "back-end" metallization processing of semiconductor wafers employs "damascene"-type technology. 
Generally, in such processing methodology, a recess (i.e., an opening) for forming, e.g., a via hole in a dielectric layer for electrically connecting vertically separated metallization layers, or a groove or trench for a metallization line, is created in the dielectric layer by conventional photolithographic and etching techniques, and filled with a selected metal. Any excess metal overfilling the recess and/or extending over the surface of the dielectric layer is then removed by, e.g., chemical-mechanical polishing (CMP), wherein a moving pad is biased against the surface to be polished/planarized, with the interposition of a slurry containing abrasive particles (and other ingredients) therebetween.A variant of the above-described technique, termed "dual damascene" processing, involves the formation of an opening comprising a lower contact or via hole section in communication with an upper groove or trench section, which opening is filled with a conductive material, typically a metal, to simultaneously form a conductive via plug in electrical contact with a conductive line.Referring now to FIG. 1, schematically shown therein in simplified cross-sectional view, is a conventional damascene-type processing sequence employing relatively low cost, high manufacturing throughput plating and CMP techniques for forming recessed "back-end" metallization patterns (illustratively of Cu-based metallurgy but not limited thereto) in a semiconductor device formed in or on a semiconductor wafer substrate 1. In a first step, the desired arrangement of conductors is defined as a pattern of recesses 2 such as via holes, grooves, trenches, etc. formed (as by conventional photolithographic and etching techniques) in the surface 4 of a dielectric layer 3 (e.g., a silicon oxide and/or nitride or an organic polymeric material) deposited or otherwise formed over the semiconductor substrate 1. In a second step, a layer of Cu or Cu-based alloy 5 is deposited by conventional plating techniques, e.g., electroless or electroplating techniques, to fill the recesses 2. In order to ensure complete filling of the recesses, the Cu-containing layer 5 is deposited as a blanket (or "overburden") layer of excess thickness t so as to overfill the recesses 2 and cover the upper surface 4 of the dielectric layer 3. Next, the entire excess thickness t of the metal overburden layer 5 over the surface of the dielectric layer 3 is removed by a CMP process utilizing an alumina (Al2O3)-based slurry, leaving metal portions 5' in the recesses 2 with their exposed upper surfaces 6 substantially co-planar with the surface 4 of the dielectric layer 3.The above-described conventional damascene-type process forms in-laid conductors 5' in the dielectric layer 3 while avoiding problems associated with other types of metallization patterning processing, e.g., blanket metal layer deposition, followed by photolithographic masking/etching and dielectric gap filling. 
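The overfill-and-planarize bookkeeping of the FIG. 1 sequence can be made concrete with a short sketch. The following Python snippet is an illustration added here, not part of the patent; the function name and example dimensions are invented (the depths fall within the ranges quoted later in the description), and it models only the geometry, not the plating or CMP chemistry.

```python
# Illustrative bookkeeping of the damascene sequence (not from the patent):
# recesses are patterned, metal is overfilled with an overburden of thickness
# t, and CMP removes exactly the overburden, leaving each in-laid conductor
# co-planar with the dielectric surface.

def damascene_fill_and_planarize(recess_depths_um, overburden_t_um):
    """Return per-recess metal thickness after plating and after CMP (in um)."""
    after_plating = [d + overburden_t_um for d in recess_depths_um]
    # CMP stops at the dielectric surface: the overburden t is removed
    # everywhere, so each conductor's final thickness equals its recess depth.
    after_cmp = [t - overburden_t_um for t in after_plating]
    return after_plating, after_cmp

if __name__ == "__main__":
    recesses = [0.4, 1.0, 2.0]   # example depths, um
    plated, inlaid = damascene_fill_and_planarize(recesses, overburden_t_um=0.5)
    for d, p, i in zip(recesses, plated, inlaid):
        print(f"recess {d} um: plated {p} um, in-laid after CMP {i} um")
        assert abs(i - d) < 1e-9  # co-planarity condition
```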
In addition, such single or dual damascene-type processing can be performed with a variety of other types of substrates, e.g., printed circuit boards, with and/or without intervening dielectric layers, and with a plurality of metallization levels, i.e., five or more levels. A drawback associated with Cu-based "back-end" metallization is the possibility of Cu diffusion into adjacent structures, e.g., an underlying semiconductor substrate (typically Si) or a dielectric layer, resulting in degradation of semiconductive or insulative properties, as well as poor adhesion of the deposited Cu or Cu alloy layer to various materials employed as dielectric inter-layers, etc. As a consequence of these phenomena associated with Cu-based metallurgy, it is generally necessary to provide an adhesion promoting and/or diffusion barrier layer intermediate the semiconductor substrate and the overlying Cu-based metallization layer. Suitable materials for such adhesion/barrier layers include, e.g., Ti, W, Cr, Ta, and TaN. Another drawback associated with the use of Cu or Cu-based metallurgy for "back-end" metallization processing of semiconductor devices results from the undesirable formation of copper oxide(s), e.g., Cu2O, CuO, CuO2, etc., on the planarized Cu or Cu-based alloy surfaces of the in-laid metallization features as a result of oxidation, etc., due to the strong chemical oxidizing agents conventionally included in CMP slurries for enhancing Cu dissolution/removal rates or as a result of exposure of the freshly abraded Cu-based surfaces to an oxidizing atmosphere, e.g., air or oxygen. The thickness of the copper oxide layer can vary depending upon the particular CMP processing conditions, e.g., stronger oxidizing agents contained in the CMP slurry result in thicker oxide layers, as does increased duration of exposure of freshly abraded, post-CMP Cu surfaces to oxidizing atmospheres, e.g., air. Copper oxide-containing layer(s), when formed as described above, disadvantageously increase contact resistance and reduce or prevent adhesion of layers thereto, e.g., silicon nitride-based capping layers. Moreover, the copper oxide layers are brittle, increasing the likelihood of circuit disconnect or reduced conductivity due to separation, as by peeling, of the copper oxide layer from conductor layers in contact therewith. Yet another disadvantage attributable to the presence of copper oxide at the interface between adjacent electrical conductors results from the rapid diffusion of Cu atoms and/or ions along the oxide layer. The latter characteristic of copper oxide layers disadvantageously results in enhanced material transport during electrical current flow and thus increases the electromigration rate of Cu atoms and/or ions along Cu-based conductor lines. Electromigration occurs in extended runs or lengths of metal conductor lines carrying significant currents. According to a conventional theory for explaining the mechanism of electromigration, the current flow within the conductor line can be sufficient to result in movement of Cu atoms and/or ions along the line via momentum transfer engendered by collision of the Cu atoms and/or ions with energetic, flowing electrons. The current flow also creates a thermal gradient along the conductor length which increases the mobility of the metal ions and/or atoms. 
As a consequence of the momentum transfer and the thermally enhanced mobility, metal (Cu) ions and/or atoms diffuse in the direction of the gradient, and metal (Cu) loss at the source end of the conductor eventually results in thinning of the conductor line. The electromigration effect can continue until the conductor line becomes so thin that it separates from the current input or forms an open circuit, resulting in circuit (i.e., semiconductor chip) failure. As this usually occurs over an extended period of operation, the failure is often seen by the end-user. As design rules for high integration density become smaller, high-speed semiconductor devices extend deeper into the sub-micron range, e.g., about 0.18 µm and under, e.g., about 0.15 µm and below, and the number of metallization levels increases, the reliability of the "back-end" interconnection patterns and systems becomes particularly critical for the obtainment of desired operating characteristics and performance. As is known, Cu electromigration can be reduced by the addition thereto of certain alloying elements, e.g., tin (Sn), boron (B), magnesium (Mg), carbon (C), palladium (Pd), cobalt (Co), nickel (Ni), and cadmium (Cd). However, the use of alloyed Cu according to conventional damascene-type methodology, such as described supra with respect to FIG. 1, is problematic in that the commonly utilized high manufacturing throughput technique (i.e., plating) for filling the recesses 2 is not amenable to depositing such Cu-based alloys with adequate control of composition and/or uniformity. In addition, formation of appropriately constituted Cu-based alloys by a process comprising electrolessly depositing a seed layer containing the alloying element(s) at the bottom of the recesses and then upwardly diffusing the alloying element(s) into subsequently electroplated unalloyed Cu filling the recesses similarly results in lack of adequate alloy composition uniformity for reliable reduction or mitigation of Cu electromigration. Moreover, the effectiveness of such techniques depends, in part, on line width; however, inasmuch as line widths are variable, the problem of variation of alloy composition uniformity is further exacerbated. Thus, there exists a need for metallization process methodology which avoids the above-mentioned drawbacks associated with oxide layer formation, electromigration, poor control of alloy composition resulting in poor alloy uniformity, etc., and which enables formation of metallization members, e.g., interconnect and routing lines, particularly of Cu or Cu-based alloys, having high reliability, high product yield, improved electromigration resistance, and high performance. In particular, there exists a need for eliminating the problems associated with electromigration and oxide layer formation resulting from CMP processing to form "in-laid", "damascene"-type Cu-based metallization patterns. 
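To put the momentum-transfer mechanism described above in quantitative terms, reliability engineering commonly uses Black's equation, MTTF = A·j⁻ⁿ·exp(Ea/kT). The sketch below is standard background added for illustration, not part of the patent; the constants A, n, and Ea are placeholders rather than measured values for Cu lines.

```python
import math

# Black's equation for electromigration-limited median time to failure:
#   MTTF = A * j**(-n) * exp(Ea / (k * T))
# Standard reliability model, not from the patent; A, n, and Ea below are
# illustrative placeholders, not measured values.
K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K

def mttf_black(j_a_per_cm2, temp_k, a_const=1.0, n=2.0, ea_ev=0.9):
    return a_const * j_a_per_cm2 ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_k))

# Relative comparison only: halving the line width at constant current doubles
# the current density j, which with n = 2 cuts the MTTF by roughly 4x.
base = mttf_black(1e6, 373.0)
scaled = mttf_black(2e6, 373.0)
print(f"MTTF ratio at 2x current density: {scaled / base:.2f}")  # ~0.25
```

Because the modeled lifetime falls roughly as the square of current density, the shrinking line widths discussed above make electromigration countermeasures such as alloying increasingly important.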
Moreover, there exists a need for improved metallization processing technology which is fully compatible with conventional process flow, methodology, and throughput requirements in the manufacture of integrated circuit semiconductor devices and other devices requiring "in-laid" metallization patterns.
DISCLOSURE OF THE INVENTION
An advantage of the present invention is a method of manufacturing an electrical or electronic device having highly reliable, electromigration-resistant metallization patterns. Another advantage of the present invention is a method of manufacturing a semiconductor integrated circuit device having highly reliable, electromigration-resistant Cu-based metallization patterns. Yet another advantage of the present invention is a method of manufacturing "in-laid", "damascene"-type Cu-based metallization patterns having improved reliability, high conductivity, and improved electromigration resistance. Still another advantage of the present invention is an improved method of forming high-density, "in-laid" metallization patterns by a "damascene"-type, CMP-based process which is fully compatible with existing process methodology for forming integrated circuit semiconductor devices and printed circuit boards. Additional advantages and other features of the present invention will be set forth in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or will be learned from the practice of the invention. The advantages of the present invention may be realized and obtained as particularly pointed out in the appended claims. According to one aspect of the present invention, the foregoing and other advantages are achieved in part by a method of manufacturing an electrical device, the method comprising the sequential steps of: (a) providing a substrate including at least one damascene-type metal feature in-laid in the upper, exposed surface of a layer of dielectric material overlying at least a portion of the substrate, the at least one metal feature including an upper, exposed surface substantially co-planar with the upper surface of the layer of dielectric material; (b) blanket-depositing at least one layer comprising at least one alloying element for the metal feature on the exposed, upper surface of the at least one metal feature and on the surface of the layer of dielectric material; (c) annealing to substantially uniformly diffuse at least a predetermined minimum amount of the at least one alloying element from the at least one layer comprising the at least one alloying element into the at least one metal feature for at least a minimum depth below the upper surface thereof, whereby electromigration of the metal of the at least one metal feature is minimized or substantially prevented; and (d) removing remaining, alloyed and/or unalloyed portions of the at least one layer comprising at least one alloying element which extend above the upper surface of the layer of dielectric material, thereby making the upper surface of the at least one metal feature substantially co-planar with the upper surface of the dielectric layer. According to embodiments of the present invention, the method further comprises the step of: (a') exposing the upper surface of the at least one metal feature to a reducing agent or atmosphere prior to performing step (b). In accordance with embodiments of the present invention, the electrical device comprises a semiconductor integrated circuit device, and step (a) comprises providing as 
the substrate a semiconductor wafer of monocrystalline silicon (Si) or gallium arsenide (GaAs) having a major surface, the dielectric layer is formed over at least a portion of the major surface, and the at least one damascene-type, in-laid metal feature comprises a plurality of features of different widths and/or depths for providing vias, interlevel metallization, and/or interconnection lines of at least one active device region or component formed on or within the semiconductor wafer, and the metal of the at least one in-laid metal feature is unalloyed copper (Cu); step (b) comprises blanket-depositing, as by a physical vapor deposition (PVD) process, e.g., sputtering, ion plating, or vacuum evaporation, at least one layer comprising at least one alloying element selected from the group consisting of: tin (Sn), boron (B), magnesium (Mg), carbon (C), palladium (Pd), cobalt (Co), nickel (Ni), and cadmium (Cd), the at least one layer having a thickness at least sufficient to provide a predetermined minimum concentration of the at least one alloying element of from about 0.1 to about 4 at. % for at least a predetermined minimum depth below the upper surface of the at least one in-laid Cu metal feature; step (c) comprises annealing at a temperature of from about 200° C. to about 450° C. for from about 60 sec. to about 90 min. in an inert atmosphere; and step (d) comprises removing (e.g., by etching or chemical-mechanical polishing (CMP)) the remaining alloyed and/or unalloyed portion(s) of the at least one layer comprising the at least one alloying element which extend(s) above the upper surface of the dielectric layer, thereby making the upper surface of the at least one metal feature substantially co-planar with the upper surface of the dielectric layer. According to further embodiments of the present invention, step (a') comprises exposing the upper surface of the at least one Cu metal feature to a reducing agent or atmosphere, e.g., a hydrogen plasma, for reducing any copper oxide present thereat, prior to performing step (b); and step (a) for providing the substrate including at least one damascene-type, in-laid metal feature comprises the preliminary steps of: i. forming a dielectric layer on a surface of a substrate, the dielectric layer having an exposed, upper surface; ii. forming at least one recess in the exposed, upper surface of the dielectric layer; iii. depositing a metal layer (e.g., unalloyed Cu) filling the at least one recess and extending over the upper surface of the dielectric layer; iv. removing the portion(s) of the metal layer extending over the upper surface of the dielectric layer; and v. 
removing any excess thickness portion(s) of the metal layer filling the at least one recess which extend(s) above the upper surface of the dielectric layer, e.g., by CMP, thereby making the upper surface of the at least one in-laid metal feature substantially co-planar with the upper surface of the dielectric layer. According to another aspect of the present invention, a method of manufacturing a semiconductor integrated circuit device comprises the sequential steps of: (a) providing a substrate comprising a semiconductor wafer of monocrystalline Si or GaAs and having a major surface, a dielectric layer formed on at least a portion of the major surface and having an exposed, upper surface, at least one damascene-type, unalloyed Cu metal feature in-laid in the upper surface of the dielectric layer, the at least one Cu metal feature including an exposed, upper surface substantially co-planar with the upper surface of the dielectric layer; (b) blanket-depositing at least one layer comprising at least one alloying element for the Cu metal feature on the exposed, upper surface of the at least one Cu metal feature and on the exposed, upper surface of the dielectric layer, the at least one alloying element being selected from the group consisting of: Sn, B, Mg, C, Pd, Co, Ni, and Cd; (c) annealing to substantially uniformly diffuse the at least one alloying element into the at least one Cu metal feature for at least a predetermined minimum depth below the upper surface thereof, thereby to minimize or substantially prevent electromigration of Cu atoms and/or ions therefrom, the thickness of the at least one layer comprising the at least one alloying element being sufficient to provide a predetermined minimum concentration thereof of from about 0.1 to about 4 at. % for a predetermined minimum depth of at least about 20 Å below the upper surface of the at least one Cu metal feature; and (d) removing any remaining, alloyed and/or unalloyed portion(s) of the at least one layer comprising the at least one alloying element which extend(s) above the upper surface of the dielectric layer, thereby making the upper surface of the at least one Cu metal feature substantially co-planar with the upper surface of the dielectric layer. According to embodiments of the present invention, the method further comprises the steps of: (a') exposing the upper surface of the at least one Cu metal feature to a reducing agent or atmosphere, e.g., a hydrogen plasma, for reducing any copper oxide present thereat prior to performing step (b); and step (c) comprises annealing at a temperature of from about 200° C. to about 450° C. for from about 60 sec. to about 90 min. in an inert atmosphere. According to further embodiments of the present invention, step (a) comprises providing a semiconductor wafer having a dielectric layer on a major surface thereof which comprises a plurality of in-laid, unalloyed Cu metal features of different widths and/or depths for providing vias, inter-level metallization, and/or interconnection lines of at least one active device region or component formed on or within the semiconductor wafer. Additional advantages of the present invention will readily become apparent to those skilled in the art from the following detailed description, wherein only the preferred embodiment of the present invention is shown and described, simply by way of illustration of the best mode contemplated for carrying out the method of the present invention. 
As will be understood, the present invention is capable of other and different embodiments, and its several details are capable of modification in various obvious respects, all without departing from the present invention. Accordingly, the drawing and description are to be regarded as illustrative in nature, and not as limitative.
BRIEF DESCRIPTION OF THE DRAWINGS
The following detailed description of an embodiment of the present invention can best be understood when read in conjunction with the following drawings, in which like reference numerals are employed throughout to designate similar features, wherein: FIG. 1 illustrates, in simplified, cross-sectional schematic form, a sequence of processing steps for forming a pattern of damascene-type, in-laid Cu metallization features according to conventional practices for manufacture of semiconductor integrated circuit devices; and FIG. 2 illustrates, in simplified, cross-sectional schematic form, a sequence of processing steps for alloying the pattern of Cu in-laid metallization features of FIG. 1 according to the inventive methodology.
DESCRIPTION OF THE INVENTION
The present invention addresses and solves problems arising from manufacturing electrical devices comprising in-laid metallization patterns, e.g., semiconductor integrated circuit devices, wherein, as part of the fabrication methodology, a plurality of recesses formed in the surface of a dielectric layer overlying a semiconductor substrate comprising at least one active device region or component are filled with a metal, illustratively Cu, which is subject to electromigration when the device is in use. More specifically, the present invention enables the formation of in-laid metallization patterns, e.g., of Cu-based metallurgy, in which the tendency for electromigration of the principal metallic element or component is minimized or substantially prevented, and which provide good adhesion to and low contact resistance with adjacent metallization patterns and/or levels in contact therewith. The present invention enables the formation of in-laid metallization patterns comprising metal alloys of well-controlled composition and/or compositional uniformity, by means of techniques which are fully compatible with the requirements of automated manufacturing technology and product throughput. Briefly stated, according to the present invention, conventional damascene-type methodology (such as illustrated in FIG. 1) is employed for forming an in-laid metallization pattern in a dielectric layer overlying a suitable substrate, e.g., a semiconductor wafer comprising at least one active device region or component, by which processing an unalloyed metal, e.g., Cu, is utilized for filling the pattern of recesses in the dielectric layer. Subsequent to planarization processing, as by chemical-mechanical polishing (CMP), and after optional exposure of the metal surface(s) to a reducing atmosphere for removing any deleterious oxide therefrom, at least one thin layer comprising at least one alloying element for the unalloyed metal is blanket-deposited on the exposed, upper surface(s) of the feature(s) of the metallization pattern and the exposed, upper surface of the dielectric layer, and the thus-produced structure is subjected to thermal processing, e.g., annealing in an inert atmosphere, to substantially uniformly diffuse into, and alloy with, at least a portion of the metal (e.g., Cu) filling the recess pattern, whereby electromigration of the metal is minimized or substantially prevented. 
Any excess alloyed and/or unalloyed, elevated portion(s) of the at least one layer comprising at least one alloying element remaining after diffusion/alloying is (are) removed, as by CMP, thereby making the exposed, upper surface of the in-laid metal feature(s) of the metallization pattern substantially co-planar with the exposed, upper surface of the dielectric layer. An embodiment of the present invention will now be described with reference to FIG. 2, which shows, in simplified, cross-sectional, schematic fashion, an illustrative, but not limitative, embodiment of the present invention comprising a sequence of processing steps performed on a semiconductor wafer substrate-based workpiece produced according to the process sequence illustrated in FIG. 1, wherein similar reference numerals are used throughout to denote similar features. As will be apparent to one of ordinary skill in the art, the inventive process is readily adapted for use in the manufacture of a variety of electrical and electronic devices utilizing in-laid metallization patterns, e.g., printed circuit boards and integrated circuit devices. It should also be recognized that the process steps and structures described below do not necessarily form a complete process flow for manufacturing such devices. However, the present invention can be used in conjunction with conventional technology currently employed in the art, e.g., integrated circuit fabrication methodology, and, consequently, only so much of the commonly practiced process steps are included here as are necessary for an understanding of the present invention. As employed throughout the disclosure and claims, the term "substrate" and/or "semiconductor wafer substrate" includes, e.g., a semiconductor substrate per se or an epitaxial layer formed on a suitable semiconductor substrate. Finally, the drawing figures representing cross-sections of portions of a semiconductor device during fabrication processing are not drawn to scale, but instead are drawn so as to best illustrate the features of the present invention. Referring now to FIG. 2, in a preliminary step according to the present invention, a semiconductor substrate-based workpiece similar to that shown in the third view of FIG. 1 is provided, having a desired in-laid metallization pattern, comprising a semiconductor wafer substrate 1, a dielectric layer 3 overlying substrate 1 and having a plurality of recesses of different widths and/or depths formed in the exposed, upper surface 4 thereof, and a layer 5 of an unalloyed metal, illustratively Cu, filling the recesses 2, the exposed, upper surfaces 6 of the metal being substantially co-planar with the exposed, upper surface 4 of the dielectric layer 3. In the illustrated structure, semiconductor substrate 1 typically comprises a wafer of monocrystalline Si or GaAs, layer 3 comprises an insulative material typically utilized as an inter-layer dielectric ("ILD"), i.e., an inorganic material such as a silicon oxide, nitride, or oxynitride, or an organic-based or derived material, such as parylene, benzocyclobutene (BCB), etc. 
Recesses 2 formed in the upper, exposed surface 4 of dielectric layer 3 are utilized for forming vias, inter-level metallization, and/or interconnection routing of at least one active device region or component formed on or within semiconductor wafer substrate 1 and typically have high aspect (i.e., depth-to-width) ratios greater than one and sub-micron or micron-sized dimensions, i.e., widths of from about 0.08 to about 3.0 µm and depths of from about 0.4 to about 2.0 µm. In a first step according to the inventive methodology, at least one thin layer 7 comprising at least one alloying element for the metal 5 of the in-laid metal feature(s) of the metallization pattern is blanket-deposited on the exposed, upper surfaces, 6 and 4, respectively, of the metallization features and the dielectric layer 3, as by a suitable physical vapor deposition (PVD) technique, including, inter alia, sputtering, ion plating, and vacuum evaporation. According to the present invention, the thickness(es) of the at least one thin layer 7 comprising at least one alloying element is (are) sufficient to provide, after a subsequent thermal treatment for effecting diffusion into the underlying metal 5 of the metal feature(s), a predetermined minimum concentration (cmin.) of the alloying element(s) for a predetermined minimum depth dmin. below surface(s) 6 of the metal feature(s), sufficient to substantially reduce or eliminate electromigration of the metal 5 of the metallization feature(s). By way of illustration, but not limitation, in the case of metallization features filled with unalloyed Cu metal 5, a layer 7 of Co from about 50 to about 200 Å thick is sufficient to provide a Cu-Co alloy layer 8 having a substantially uniform, minimum concentration cmin. of Co of from about 0.1 to about 4 at. % extending for a minimum depth dmin. of from about 20 to about 100 Å below surface 6. Significantly, it is also within the ambit of the present invention to substantially uniformly diffuse the at least one alloying element from the at least one layer 7 into the metal 5 of the metal feature(s) for the entire depth thereof, and therefore form an alloy layer 8 encompassing the entire depth and width extent of the metal feature(s). Given the disclosure and the objective(s) of the present invention, appropriate thicknesses for the at least one alloying element layer 7, as well as alloy depth and concentration, can be determined and optimized for use in a particular application. Layer 7 can, depending, inter alia, upon the particular metal 5 and choice of alloying element(s), comprise a single layer including one or more alloying elements, e.g., two alloying elements, or alternatively, can comprise two or more overlying layers, each containing a single alloying element. The latter alternative may be preferred when co-deposition of multiple alloying elements in single layer form is impractical or results in poor control of the relative amounts of the alloying elements, and therefore, poor composition control and/or uniformity of the desired alloy. Referring still to FIG. 2, in the next step according to the inventive methodology, the at least one alloying element layer 7 is subjected to a treatment for effecting diffusion into and alloying with the underlying metal 5 of the metal feature(s), as by a thermal treatment. 
More specifically, diffusion/alloying can be effected by annealing at an elevated temperature in an inert atmosphere, e.g., nitrogen (N2) or a rare gas such as argon (Ar). By way of illustration, but not limitation, in the case of a layer 7 of Co and underlying metal feature(s) of unalloyed Cu metal 5, diffusion/alloying to provide an alloy layer 8 having a minimum alloying element concentration cmin. for a minimum depth dmin. below surface 6 as described supra can be provided by annealing in an inert atmosphere at a temperature of from about 200 to about 450° C. for from about 60 sec. to about 90 min. As before, given the disclosure and objective(s) of the present invention, suitable annealing conditions for use with other alloying elements and metal features can be optimized for use in a particular application. As illustrated in FIG. 2, layer portions 7' which extend above the level of upper surface 4 of dielectric layer 3, composed of alloyed and/or unalloyed components or portions of layer(s) 7, may remain on or over the upper surfaces 6 and 4, respectively, of the alloyed layer portion(s) 8 of the metal feature(s) and the dielectric layer 3 after completion of the diffusion/alloying treatment. In the next step according to the inventive methodology, any such remaining portion(s) 7' is (are) removed, e.g., by etching or chemical-mechanical polishing (CMP), thereby re-establishing co-planarity of the upper surface 6 of the in-laid metal feature(s) and the upper surface 4 of the dielectric layer 3. The thus-produced, planarized, in-laid metallization pattern having alloy layer 8 at the upper surface 6 for minimizing or substantially preventing electromigration therefrom may then be subjected to further "back-end" metallization processing, e.g., adherent formation thereon, as by damascene techniques, of at least one additional layer or stratum of in-laid metallization. In some instances, e.g., as with unalloyed Cu in-laid metal features, a layer comprising at least one copper oxide (Cu2O, CuO, and/or CuO2) may be present on the upper surface(s) 6 of the metal 5 of the metal feature(s) of the workpiece prior to the step for blanket deposition thereon of the at least one layer comprising at least one alloying element, typically as a result of oxidation by oxidants included in the CMP abrasive slurry or by exposure of the freshly abraded surface(s) to an oxidizing atmosphere (e.g., air) after planarization processing. In any event, the copper oxide layer, if present during the blanket deposition step, would result, inter alia, in poor adhesion of the at least one alloying layer and impaired diffusion/alloying, and, therefore, must be removed prior to the blanket deposition step, as by exposure to a reducing agent or atmosphere. By way of illustration, but not limitation, copper oxide layers on unalloyed Cu metal 5 features may be removed by exposure to a hydrogen plasma for from about 20 to about 90 sec. The present invention thus provides a simple, convenient, and reliable method for reducing, or substantially preventing, deleterious electromigration of metal from in-laid metallization patterns by introducing at least a minimum concentration of at least one electromigration inhibiting, alloying element into the metallization features for at least a minimum depth below the upper surfaces (top interface) thereof. 
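The film-thickness, concentration, depth, and anneal figures quoted above can be sanity-checked with two back-of-the-envelope calculations: a mass balance on the deposited alloying element, and a thermally activated diffusion-length estimate. The Python sketch below is an illustration added here, not part of the patent; the atomic densities are standard handbook values, while D0 and Ea are assumed placeholders for a slow substitutional diffuser in bulk Cu (grain-boundary diffusion, which would dominate at the cool end of the anneal window, is much faster).

```python
import math

# Two order-of-magnitude checks on the numbers quoted above (illustrative
# only, not from the patent).
K_EV = 8.617e-5     # Boltzmann constant, eV/K
N_CU = 8.49e22      # Cu atoms per cm^3 (handbook value)
N_CO = 9.09e22      # Co atoms per cm^3 (handbook value)
D0_CM2_S = 1.0      # ASSUMED bulk-diffusion pre-exponential factor, cm^2/s
EA_EV = 2.1         # ASSUMED bulk-diffusion activation energy, eV

def supported_depth_angstrom(t_co_angstrom, c_atomic_fraction):
    """Depth over which a Co film of thickness t can sustain atomic fraction c.

    Dilute-limit mass balance: c ~ (t * n_Co) / (d * n_Cu), solved for d.
    """
    return t_co_angstrom * (N_CO / N_CU) / c_atomic_fraction

def diffusion_length_angstrom(temp_c, time_s):
    """Bulk diffusion length L ~ sqrt(D*t), with D = D0 * exp(-Ea / kT)."""
    d = D0_CM2_S * math.exp(-EA_EV / (K_EV * (temp_c + 273.15)))
    return math.sqrt(d * time_s) * 1e8   # cm -> angstrom

# A 50-200 A Co film can supply 0.1-4 at.% far deeper than the 20-100 A
# minimum -- consistent with alloying an entire 0.4-2.0 um feature depth.
for t_co in (50.0, 200.0):
    for c in (0.001, 0.04):
        print(f"t_Co = {t_co:5.0f} A, c = {c * 100:3.1f} at.% -> "
              f"d <= {supported_depth_angstrom(t_co, c):9.0f} A")

# With the assumed bulk parameters, the hot end of the anneal window moves
# solute tens to hundreds of angstroms; at 200 C bulk diffusion is frozen
# out, and grain-boundary transport would have to dominate.
for temp_c, time_s in ((200.0, 90 * 60), (450.0, 60.0), (450.0, 90 * 60)):
    print(f"{temp_c:5.0f} C, {time_s:6.0f} s -> "
          f"L ~ {diffusion_length_angstrom(temp_c, time_s):12.4g} A")
```

Within those stated assumptions, both estimates land comfortably inside the ranges recited in the claims, which is all the sketch is meant to show.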
The present invention enables the formation of extremely reliable interconnect members and patterns, illustratively, but not limited to, Cu, by providing a method for reliably reducing, or substantially preventing, deleterious electromigration. The inventive process also provides a substantial increase in the reliability and adhesion of damascene-type metallization patterns utilized in semiconductor "back-end" processing and is equally applicable to "dual-damascene" type processing. The inventive methodology enjoys particular utility in the manufacture of semiconductor devices having sub-micron dimensioned metallization features and high aspect ratio openings. Moreover, the inventive method can be practiced at manufacturing rates consistent with the requirements for economic competitiveness, and is fully compatible with conventional process flow for automated manufacture of high-density integration semiconductor devices. In addition, the invention is particularly well suited to the manufacture of circuit boards and other types of electrical and electronic devices and/or components. In the previous description, numerous specific details are set forth, such as specific materials, structures, reactants, processes, etc., in order to provide a better understanding of the present invention. However, the present invention can be practiced without resorting to the details specifically set forth. In other instances, well known processing materials and techniques have not been described in detail in order not to unnecessarily obscure the present invention. Only the preferred embodiment of the present invention and but a few examples of its versatility are shown and described in the present disclosure. It is to be understood that the present invention is capable of use in various other combinations and environments and is susceptible of changes or modifications within the scope of the inventive concept as expressed herein.
A method of an aspect includes generating real time instruction trace (RTIT) packets for a first logical processor of a processor. The RTIT packets indicate a flow of software executed by the first logical processor. The RTIT packets are stored in an RTIT queue corresponding to the first logical processor. The RTIT packets are transferred from the RTIT queue to memory predominantly with firmware of the processor. Other methods, apparatus, and systems are also disclosed.
CLAIMS What is claimed is: 1. A processor comprising: at least a first logical processor; and real time instruction trace (RTIT) logic coupled with the first logical processor, the RTIT logic including: RTIT packetizer logic to generate RTIT packets for the first logical processor, the RTIT packets to indicate a flow of software executed by the first logical processor; an RTIT queue corresponding to the first logical processor, the RTIT queue coupled with the RTIT packetizer logic, the RTIT queue to store the RTIT packets; and RTIT queue contents transfer logic coupled with the RTIT queue, the RTIT queue contents transfer logic to transfer the RTIT packets to memory, wherein the RTIT queue contents transfer logic is implemented predominantly in firmware. 2. The processor of claim 1, wherein the RTIT queue contents transfer logic comprises a firmware service sub-routine. 3. The processor of claim 1, wherein the RTIT queue contents transfer logic is to transfer the RTIT packets to a set of architectural registers and then transfer the RTIT packets from the set of architectural registers to the memory through a store operation. 4. The processor of claim 3, wherein the store operation comprises one selected from an uncacheable speculative write combining operation and a cacheable store operation. 5. The processor of claim 1, wherein at least a portion of the RTIT queue is capable of being configured as a last branch record (LBR). 6. The processor of claim 1, further comprising a non-renamed bus coupled with the RTIT packetizer logic, the non-renamed bus having a width in bits that is at least as large as a width of a line of the RTIT queue. 7. The processor of claim 1, wherein a size of the RTIT queue ranges from 0.3 to 4 kilobytes corresponding to the first logical processor. 8. The processor of claim 7, wherein the size of the RTIT queue ranges from 0.4 to 4 kilobytes corresponding to the first logical processor. 9. The processor of claim 1, wherein the RTIT packetizer logic is implemented predominantly in hardware. 10. The processor of claim 1, wherein the RTIT packetizer logic is to perform an intermediate level of compression in which non-operations (NOPs) are left between RTIT packets in the RRQ. 11. The processor of claim 1, wherein the RTIT packetizer logic is to store packets of a given type in fixed locations of chunks. 12. The processor of claim 1, wherein the RTIT logic is to provide a level of intrusiveness that ranges from 2% to 20% for the first logical processor. 13. A method comprising: generating real time instruction trace (RTIT) packets for a first logical processor of a processor, the RTIT packets to indicate a flow of software executed by the first logical processor; storing the RTIT packets in an RTIT queue corresponding to the first logical processor; and transferring the RTIT packets from the RTIT queue to memory with predominantly firmware of the processor. 14. The method of claim 13, wherein transferring comprises transferring the RTIT packets with a firmware service sub-routine. 15. The method of claim 13, wherein transferring comprises: transferring the RTIT packets from the RTIT queue to a set of architectural registers; and transferring the RTIT packets from the set of architectural registers to the memory through a store operation. 16. 
The method of claim 15, wherein transferring comprises transferring the RTIT packets from the set of architectural registers to the memory through a store operation selected from an uncacheable speculative write combining operation and a cacheable store operation. 17. The method of claim 13, further comprising using at least a portion of the RTIT queue as a last branch record (LBR). 18. The method of claim 13, further comprising transmitting a line of the RTIT queue on a non-renamed bus having a width in bits that is at least as wide as a width of the line of the RTIT queue. 19. The method of claim 13, wherein storing comprises storing the RTIT packets in an RTIT queue having a size that ranges from 0.3 to 4 kilobytes corresponding to the first logical processor. 20. The method of claim 19, wherein the size of the RTIT queue ranges from 0.4 to 4 kilobytes corresponding to the first logical processor. 21. The method of claim 13, wherein generating the RTIT packets is performed predominantly by hardware of the processor. 22. The method of claim 13, wherein storing the RTIT packets in the RTIT queue leaves non-operations (NOPs) between RTIT packets. 23. The method of claim 13, further comprising providing a level of intrusiveness of RTIT that ranges from 2% to 20% for the first logical processor. 24. A system comprising: an interconnect; a dynamic random access memory (DRAM) coupled with the interconnect; and a processor coupled with the interconnect, the processor including: at least a first logical processor; and real time instruction trace (RTIT) logic coupled with the first logical processor, the RTIT logic including: RTIT packetizer logic to generate RTIT packets for the first logical processor, the RTIT packets to indicate a flow of software executed by the first logical processor; an RTIT queue corresponding to the first logical processor, the RTIT queue coupled with the RTIT packetizer logic, the RTIT queue to store the RTIT packets; and RTIT queue contents transfer logic coupled with the RTIT queue, the RTIT queue contents transfer logic to transfer the RTIT packets to the DRAM, wherein the RTIT queue contents transfer logic is implemented predominantly in firmware of the processor. 25. The system of claim 24, wherein the RTIT queue contents transfer logic comprises a firmware service sub-routine, and wherein the RTIT queue contents transfer logic is to transfer the RTIT packets to a set of architectural registers and then transfer the RTIT packets from the set of architectural registers to the DRAM through a store operation. 26. The system of claim 24, wherein at least a portion of the RTIT queue is capable of being configured as a last branch record (LBR), and further comprising a non-renamed bus coupled with the RTIT packetizer logic, the non-renamed bus having a width in bits that is at least as large as a width of a line of the RTIT queue. 27. A processor comprising: a real time instruction trace (RTIT) queue to store RTIT packets for a first logical processor, the RTIT packets to indicate a flow of software executed by the first logical processor; and RTIT queue contents transfer logic coupled with the RTIT queue, the RTIT queue contents transfer logic to: transfer the RTIT packets from the RTIT queue to a set of architectural registers; and transfer the RTIT packets from the set of architectural registers to memory through a store operation. 28. The processor of claim 27, wherein the RTIT queue contents transfer logic comprises a firmware service sub-routine. 29. 
The processor of claim 27, wherein at least a portion of the RTIT queue is capable of being configured as a last branch record (LBR). |
REAL TIME INSTRUCTION TRACE PROCESSORS, METHODS, AND SYSTEMS BACKGROUND Field Embodiments relate to the field of instruction trace. In particular, embodiments relate to the field of real time instruction trace in processors. Background Information Multi-threaded and/or multi-core processors are commonplace today. They are used in various types of computing devices such as servers, desktops, laptops, netbooks, tablets, smartphones, and cell phones, to name just a few examples. It is currently expected that, at least for some processor segments, the trend toward more threads and/or cores will continue into the future. The multiple threads and/or cores generally help to improve performance by providing hardware parallelism that allows more instructions to be executed concurrently or in parallel. The multiple threads and/or cores have encouraged the development of multi-threaded or parallel processing software. For example, a multi-threaded application may include multiple threads that execute concurrently on different hardware threads, cores, or other logical processors. During the execution of software, various types of events may alter the control flow of the software. Examples of such events include the execution of conditional branch instructions, jump instructions, subroutine call instructions, and asynchronous events (e.g., interrupts, exceptions, etc.). Tracing is often used to log or record information about the execution of software, including information describing the control flow. However, one challenge, especially with such multi-threaded and/or multi-core processors, is that debug tends to be more difficult than with single-threaded and/or single-core processors. Knowing the real time code execution flow is often challenging. As a result, debug may tend to take more time, which may lead to higher development costs and/or potential delays in bringing products to market. In addition, many existing methods of tracing tend to be highly performance intrusive. BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings: Figure 1 is a block diagram of a computer system including an embodiment of a processor having an embodiment of real time instruction trace (RTIT) logic and a memory. Figure 2 is a block diagram of an embodiment of a processor having an embodiment of RTIT logic. Figure 3 is a block diagram of an embodiment of a processor having an example embodiment of RTIT reorder buffer queue (RRQ) contents transfer logic that is operable to transfer contents of an RRQ to memory. Figure 4A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 4B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. Figure 5A is a block diagram of a single processor core, along with its connection to the on-die interconnect network and with its local subset of the Level 2 (L2) cache, according to embodiments of the invention. Figure 5B is an expanded view of part of the processor core in Figure 5A according to embodiments of the invention. 
Figure 6 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. Figure 7 is a block diagram of a system in accordance with one embodiment of the present invention. Figure 8 is a block diagram of a first more specific exemplary system in accordance with an embodiment of the present invention. Figure 9 is a block diagram of a second more specific exemplary system in accordance with an embodiment of the present invention. Figure 10 is a block diagram of a SoC in accordance with an embodiment of the present invention. Figure 11 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. DETAILED DESCRIPTION Disclosed herein are methods, processors, and systems for real time instruction trace (RTIT). In the following description, numerous specific details are set forth (for example, specific RTIT logic implementations, RTIT packet formats, hardware/firmware partitioning details, logic partitioning/integration details, processor configurations, microarchitectural details, sequences of operations, types and interrelationships of system components, and the like). However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description. Figure 1 is a block diagram of a computer system 100 including an embodiment of a processor 101 and a memory 105. The processor and the memory are coupled, or otherwise in communication with one another, by a conventional coupling mechanism 112 (e.g., through one or more buses, hubs, memory controllers, chipset components, or the like). The memory may include one or more memory devices and/or one or more different types of memory. In some embodiments, the processor 101 may be a general-purpose processor (e.g., of the type used in desktop, laptop, netbook, tablet, smartphone, cell phone, server, and like computer systems). Alternatively, the processor may be a special-purpose processor. Examples of suitable special-purpose processors include, but are not limited to, communications processors, network processors, cryptographic processors, graphics processors, co-processors, embedded processors, digital signal processors (DSPs), and controllers (e.g., microcontrollers), to name just a few examples. The processor includes at least a first logical processor 102-1 optionally up to an Nth logical processor 102-N, where N may be any appropriate number (e.g., from two to tens or even hundreds). Each logical processor may include logic to support and/or be independently associated with a software thread. Examples of suitable logical processors include, but are not limited to, a core, a hardware thread, a thread unit, a thread slot, a context unit, and/or other hardware and/or logic capable of executing instructions and holding state (e.g., an execution state and/or an architectural state). Referring again to Figure 1, the memory 105 includes software 106. In the illustrated embodiment, the software includes an operating system 107 and one or more applications 108. During operation, a portion of the software may execute on the processor as executing software 103. 
For example, the first logical processor may have first executing software (e.g., a first thread) 103-1 and the optional Nth logical processor may have an Nth executing software (e.g., an Nth thread) 103-N. Although embodiments may be used for any number of logical processors (e.g., including a single logical processor), commonly the greatest benefit may be experienced when there are multiple or many such logical processors. In general, the more logical processors, the more complicated the execution, and the more difficult debugging tends to be without the real time instruction trace embodiments disclosed herein. The executing software may include macroinstructions or instruction set architecture (ISA) level instructions that are loaded from the software 106 and executed on the processor (e.g., scheduled, decoded, executed, etc.). By way of example, the instructions may include arithmetic instructions, load instructions, store instructions, and the like. In addition, the instructions may include one or more types of instructions that alter the flow of the software by branching, jumping, or otherwise moving around in the software. Examples of such instructions include, but are not limited to, branch instructions, conditional branch instructions, jump instructions, call instructions, and the like. In different architectures these instructions are sometimes referred to by different names. Generally these instructions involve moving to an instruction other than the next sequential instruction (e.g., by jumping over intervening instructions). Faults, interrupts, exceptions, or other similar asynchronous events may also alter program flow when they occur (e.g., by moving to a handler routine). Referring again to Figure 1, the processor also includes the embodiment of the real time instruction trace (RTIT) logic 109. The RTIT logic may be operable to generate and log, record, or store RTIT data about the execution of the software 103 including information about the control flow of the executing software. In some embodiments, the RTIT logic may store the RTIT data 111 in the memory 105. In some embodiments, different portions of the memory (e.g., different address ranges) may be used for each of the logical processors. In other embodiments, rather than storing the RTIT data to memory (e.g., as content in the memory that allows later post-processing software to translate the RTIT data trace to the actual execution flow), the RTIT data may be output on processor pins (e.g., and used by the post-processing software to translate the RTIT data trace to the actual execution flow). In some embodiments, the RTIT logic may be operable to record trace information for all non-statically known program or control flow changes during the execution of the software. For example, the RTIT data 111 may include information to indicate whether conditional branches were taken or not taken, destination addresses of indirect jump and call instructions, origination and destination addresses for exceptions, interrupts, and like asynchronous events, etc. In some embodiments, the RTIT data may represent a full record or full live back trace of where the software actually executed in real time within the processor. Advantageously, the RTIT logic and RTIT data may allow a user to follow an almost endless number of control flow changes (e.g., from the beginning of the program flow to a failure, or area of slow performance) provided there is sufficient memory available to store that amount of RTIT data. 
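To make the post-processing step concrete, the following is a minimal sketch, in C, of how trace-consuming software might replay a control flow from taken/not-taken bits. It is not an implementation of the embodiments described here: it assumes the post processor already knows each conditional branch's address, taken target, and fall-through from the program binary, and all names (tnt_stream, replay, and so on) are invented for illustration.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Invented structure: packed taken/not-taken (TNT) bits from a trace. */
typedef struct {
    const uint8_t *bits;  /* TNT bit stream, least significant bit first */
    size_t next;          /* index of the next bit to consume */
} tnt_stream;

/* Consume one TNT bit: 1 = taken, 0 = not taken (one possible convention). */
static int tnt_next(tnt_stream *s) {
    int bit = (s->bits[s->next / 8] >> (s->next % 8)) & 1;
    s->next++;
    return bit;
}

/* Invented description of a conditional branch known from the binary. */
typedef struct { uint64_t branch_ip, taken_ip, fallthrough_ip; } cond_branch;

/* Replay the recorded flow: at each conditional branch encountered in
 * execution order, one TNT bit decides which way the program went. */
static void replay(const cond_branch *b, size_t n, tnt_stream *s) {
    for (size_t i = 0; i < n; i++) {
        uint64_t next_ip = tnt_next(s) ? b[i].taken_ip : b[i].fallthrough_ip;
        printf("branch %#llx -> %#llx\n",
               (unsigned long long)b[i].branch_ip,
               (unsigned long long)next_ip);
    }
}

int main(void) {
    const cond_branch prog[] = {
        { 0x1000, 0x1040, 0x1008 },
        { 0x1040, 0x1000, 0x1048 },
    };
    uint8_t trace_bits[] = { 0x01 };  /* first branch taken, second not */
    tnt_stream s = { trace_bits, 0 };
    replay(prog, 2, &s);
    return 0;
}

Indirect branches and asynchronous events would additionally require consuming recorded destination addresses, since their targets cannot be inferred from the binary alone.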
The RTIT data 111 may be used for various purposes. The scope of the invention is not limited to any particular such use of the RTIT data. Examples of such possible uses include, but are not limited to, debugging (e.g., software functional debug and/or hardware debug), post-silicon validation, diagnostic purposes, performance analysis and tuning, power analysis and tuning, and the like. The RTIT data may be used both during software/hardware development and after release of the software/hardware. In some cases, the software may include one or more software applications 110 that use the RTIT data 111. By way of example, in the case of debugging, a practitioner may use debugging software to access the RTIT data and use it to obtain details about where the software actually executed for purposes of debugging. As another example, in the case of performance analysis and tuning, the practitioner may use performance analysis and tuning software to access and use the RTIT data to obtain details about where and how fast the software actually executed to analyze and tune performance. The RTIT logic 109 is on-die and/or on-processor. The on-die/processor logic is fixed, resident, or persistent on-die/processor (e.g., as opposed to software instructions that are loaded into the processor from the memory). Commonly, the on-die/processor logic is present on the die/processor even when the processor is powered off, prior to booting, and/or at the time of completion of manufacture. In some embodiments, the on-die/processor logic includes a combination of hardware (e.g., integrated circuitry, transistors, registers, etc.), firmware (e.g., microcode), and/or other on-die/processor logic. The firmware may include a combination of persistent and/or non-volatile memory of the processor (e.g., read only memory (ROM), electrically programmable ROM (EPROM), flash memory, or the like) and instructions (e.g., microcode, microinstructions, microarchitectural instructions, circuit level instructions that are lower-level than ISA instructions, or the like) stored in the persistent and/or non-volatile memory. In some embodiments, the combination of hardware and firmware for the RTIT logic 109 may be selected to help to balance performance impact objectives with die size, power, and related objectives. It is also possible to implement the RTIT logic completely or almost completely in hardware. However, implementing the RTIT logic completely or almost completely in hardware may have a number of significant drawbacks. For one thing, this may involve a significant amount of hardware logic that may tend to increase the size (e.g., the processor silicon die area), manufacturing cost, and power consumption of the processor. In contrast to hardware, firmware generally requires significantly less size (e.g., less die area), has lower manufacturing cost, and generally also has less power consumption. However, in contrast to hardware, firmware generally has less performance and/or may tend to be more performance intrusive, since it shares resources with the processor's main execution flow. Accordingly, in some embodiments, the RTIT logic 109 may be implemented through a combination of hardware and firmware that is able to achieve a desired balance between performance intrusion and size, manufacturing cost, and power consumption. 
In some embodiments, the RTIT logic may be operable to provide a level of intrusiveness that ranges from about 2% to about 20% per logical processor, or from about 2% to about 15% per logical processor, or from about 2% to about 10% per logical processor, although this is not required. The level of intrusiveness may represent the decrease in performance when the RTIT logic is implemented compared to when the RTIT logic is not implemented (e.g., is disabled) for a given workload. The aforementioned levels of intrusiveness (e.g., the percentages) are suitable for embodiments but are not required. Other embodiments may use other levels of intrusiveness suitable for the particular implementation. Figure 2 is a block diagram of an embodiment of a processor 201 having an embodiment of RTIT logic 209. In some embodiments, the processor and RTIT logic of Figure 2 may be included in the system of Figure 1. Alternatively, the processor and RTIT logic of Figure 2 may be included in a similar or different system. Moreover, the system of Figure 1 may include either the same, similar, or different processor and RTIT logic than those of Figure 2. The RTIT logic 209 includes RTIT packetizer logic 223, an RTIT reorder buffer queue (RRQ) 224, RTIT filter logic 225, timing logic 226, and RRQ contents transfer logic 227. In addition to the RTIT logic, the processor also includes a reorder buffer (ROB) 220, a branch order buffer (BOB) 221, an extended instruction pointer 222, a non-renamed bus 228, and an address and control signal bus 229. These components are coupled with one another by the arrows and buses. The ROB, BOB, and address and control signal bus 229 represent substantially conventional logic found in out-of-order (OOO) processors. For example, the ROB and BOB may be used to reorder instructions, which have been executed out of order, back into original program order. The BOB 221 holds the information associated with each branch including the target address and other information such as the taken/non-taken indication. The ROB 220 provides the RTIT packetizer logic 223 with information about which operation is a branch; in that case, the BOB is read to provide the information associated with the branch, such as whether the branch was taken and its destination address. For more complicated branches (e.g., indirect branches), firmware is also involved in providing the to and from branch addresses. The RTIT packetizer logic 223 is operable to generate RTIT packets and store the packets in the RTIT reorder buffer queue (RRQ) 224. In some embodiments, the RTIT packetizer logic may order and store the packets in the RRQ in a way that sufficiently fits the hardware resources. Different types of RTIT packets are contemplated. One possible type of RTIT packet is a taken or not taken (TNT) packet. The TNT packet may indicate whether each of multiple conditional branches is taken or not taken. In some embodiments, the TNT packet may use a single bit per conditional branch. According to one possible convention, the bit may be given a first value (e.g., be set to binary 1) to indicate that a branch was taken, or the bit may be given a second value (e.g., be cleared to binary 0) to indicate that the branch was not taken. The opposite convention is also possible. Each TNT packet may record such information for a group of conditional branches. According to one example embodiment, each TNT packet may be an 8-bit byte and may be able to record the outcomes of up to six conditional branches (e.g., 
have six bits to indicate the outcome of up to six branches). Other embodiments may have wider or narrower TNT packets to record either fewer or more conditional branch outcomes. Another possible type of RTIT packet is a target instruction pointer (TIP) packet. The TIP packet may indicate targets of indirect branches, jumps, transfers, far events, calls, and the like. The TIP packet may include a variable length destination address. According to one example embodiment, each TIP packet may be from about two to about seven bytes, although this is not required. Yet another possible type of RTIT packet is a flow update (FUP) packet. The FUP packet may indicate a source address of an asynchronous event (e.g., an interrupt, exception, etc.) to log where execution was before the event. According to one example embodiment, each FUP packet may be from about three to about seven bytes, although this is not required. Other possible examples of RTIT packets include, but are not limited to, timing and/or synchronization packets, packets to provide core-to-bus frequency ratios, packets to provide numbers of core cycles between packets, packets to stop or otherwise control instruction trace, packets to identify packet stream boundaries, and the like. These are just a few illustrative examples of suitable types of RTIT packets. Other embodiments may utilize different types of RTIT packets, additional RTIT packets, etc. In some embodiments, the RTIT packetizer logic 223 may be implemented substantially entirely in hardware (i.e., at least 90% in hardware), or predominantly in hardware (i.e., more than 50% in hardware). In some embodiments, the RTIT packetizer logic may include some firmware (e.g., less than 50%), since allowing firmware to generate part of the RTIT packets may help to reduce the size of the RTIT packetizer hardware logic. In some embodiments, the RTIT packetizer logic may be lightweight RTIT packetizer logic. The lightweight RTIT packetizer logic may be operable to generate and store packets in the RTIT reorder buffer queue (RRQ) with flexible or intelligent compression in a way that balances performance impact (or intrusion) with logic size, cost, and power consumption. Advantageously, the flexible or intelligent compression may reduce the amount of hardware logic without significantly impacting performance. For example, rather than storing the packets in the RRQ in a way that achieves highest levels of packing or compression, the lightweight RTIT packetizer logic may provide an intermediate level of compression or packing that leaves a certain amount of unused space between the packets in the RRQ. A significant amount of logic is generally needed in order to achieve the highest level and/or full compression (e.g., by eliminating the unused space or holes, etc.). In some embodiments, the lightweight RTIT packetizer logic may not provide full compression, because the input multiplexers and fill buffers needed for full compression would generally increase the size and power consumption more than warranted by the increase in performance that would be achieved. Providing an intermediate level of compression or packing may help to reduce some of this logic while still achieving a sufficiently low performance impact. In some embodiments, in addition to and/or instead of such flexible/intelligent compression, another way to reduce the amount of logic of the RTIT packetizer logic is to have certain packets in a fixed location. 
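Before turning to the fixed-location example below, the TNT packet format described above can be made concrete. The following C sketch packs up to six conditional-branch outcomes into one 8-bit TNT packet. The placement of a trailing marker bit is an assumption made for illustration; the text above only specifies one byte per packet with one bit per branch outcome.

#include <stdint.h>

/* Hedged sketch of TNT packet encoding. Bit i holds the outcome of the
 * i-th conditional branch (1 = taken, 0 = not taken, per one convention
 * described above). The stop bit marking how many outcome bits are valid
 * is an invented detail, not something specified by this description. */
static uint8_t tnt_pack(const int *taken, int count /* 1 to 6 */) {
    uint8_t pkt = 0;
    for (int i = 0; i < count; i++) {
        if (taken[i])
            pkt |= (uint8_t)(1u << i);   /* record a taken branch */
    }
    pkt |= (uint8_t)(1u << count);       /* assumed stop/valid-count bit */
    return pkt;
}

Packing up to six outcomes per byte is what lets a modest RRQ summarize a comparatively long stretch of control flow without elaborate packing hardware.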
As an example of the fixed-location approach, cycle packets may be inserted only in the first byte of 32-byte chunks. This means that the cycle packet may be located only in byte0, byte1, and byte2, or in byte32, byte33, and byte34. But the cycle packet may have a length of one, two, or three bytes depending on the time elapsed since the last packet. As a result, for a one-byte cycle packet, byte1 and byte2 may be left empty, which means holes are inserted, and in some embodiments such holes may be interpreted by the decoder as no-operations (NOPs). In some embodiments, a special NOP packet may be used to implement the holes, as illustrated in the sketch following this discussion. Referring again to Figure 2, the RTIT reorder buffer queue (RRQ) 224 is coupled with the RTIT packetizer logic 223. The RRQ is operable to store the RTIT packets 232. The RRQ may be filled out by the ROB hardware in the case of an indirect branch or by firmware in the case of a FAR branch or exception. In some embodiments, a separate RRQ may be included for each of one or more logical processors (e.g., hardware threads) of the processor and may be used to store RTIT packets for the corresponding logical processor. In some embodiments, each RRQ may be significantly larger than a conventional last branch record (LBR). The LBR is generally able to hold only a very limited number of branches (e.g., in some cases no more than about 10 to 20). Such a limited number of branch records may be encountered in a very short amount of time (e.g., a fraction of a second) and is often insufficient. In contrast, in some embodiments, each RRQ may have a size of at least 0.3 kilobytes, such as, for example, from about 0.3 to about 4 kilobytes, or from about 0.4 to about 4 kilobytes, or from about 0.5 to about 3 kilobytes, whereas an LBR is often not larger than about 0.2 kilobytes. These sizes are per logical processor. In some embodiments, the RRQ may be operable to be used by two or more logical processors concurrently. For example, the RRQ may correspond to a given core and may have different portions allocated to two or more different logical processors of that given core. In some embodiments, the portions of the RRQ may be fixedly or statically allocated to the different logical processors, which may help to reduce logic and/or provide a simpler implementation. In other embodiments, the portions of the RRQ may be capable of being dynamically allocated among the different logical processors, which may allow greater flexibility. For example, this may allow a portion of the RRQ that is allocated to a non-active logical processor to be reclaimed so that it may be used by an active logical processor. In some embodiments, an existing last branch record (LBR) buffer may be reused and extended in size in order to implement the RRQ buffer. Such reuse of the LBR buffer may help to avoid an unnecessary increase in die area, manufacturing cost, etc. In this embodiment, the RRQ and LBR generally would not be used concurrently, but rather would be used alternatively. For example, a user may configure the system to use either the RRQ or the LBR. In other embodiments, separate LBR and RRQ buffers may optionally be included. In such embodiments, the separate LBR and RRQ may optionally and/or potentially be used concurrently. 
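The fixed-location cycle packets and NOP holes described above can be sketched as follows. This is a minimal illustration in C, not the embodiment's implementation: the 32-byte chunk size comes from the example above, while the NOP encoding value and the buffer interface are invented for the sketch.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define CHUNK_SIZE 32
#define NOP_BYTE   0x00  /* assumed encoding for a NOP packet; invented */

/* Hedged sketch: a cycle packet (1 to 3 bytes) is placed only at the
 * first byte of a 32-byte chunk. Any unused bytes before that boundary
 * are filled with NOP packets, which the trace decoder simply skips.
 * Returns the new write position in the trace buffer. */
static size_t emit_cycle_packet(uint8_t *buf, size_t pos,
                                const uint8_t *cyc, size_t cyc_len) {
    size_t boundary = ((pos + CHUNK_SIZE - 1) / CHUNK_SIZE) * CHUNK_SIZE;
    memset(buf + pos, NOP_BYTE, boundary - pos); /* the "holes" become NOPs */
    memcpy(buf + boundary, cyc, cyc_len);        /* packet at fixed offset  */
    return boundary + cyc_len;
}

Restricting cycle packets to fixed offsets trades a few padding bytes for much simpler packetizer hardware, which is the intermediate-compression trade-off discussed above.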
Referring again to Figure 2, the RTIT logic includes RTIT filter logic 225. The RTIT filter logic is coupled with the RTIT packetizer logic 223 and the RRQ 224. In some embodiments, the RTIT filter logic may be implemented predominantly in hardware, although this is not required. The RTIT filter logic is operable to filter execution that is to be traced from execution that is not to be traced. In various embodiments, the processor and/or the RTIT logic may allow a user or software (e.g., a debugging program) to specify which execution is to be traced and/or which is not to be traced. For example, this may allow execution of all software to be traced, execution of the operating system to be traced, or execution of one or more particular applications to be traced. In some embodiments, the RTIT filter logic may perform filtering based on address range and/or based on an operation mode. For example, in the case of address range filtering, the RTIT filter logic may compare addresses of executing software with a given or specified address range in order to determine whether or not the executing software is within the given address range. In one aspect, the address range may represent a page directory base value (e.g., from a CR3 control register in an Intel Architecture processor), or the like. In the case of operation mode, the filtering logic may filter based on privilege level (e.g., ring level 0, ring level 3, etc.). Advantageously, the RTIT filter logic may allow real time trace to be performed selectively on specific software, address ranges, operation modes, or the like, which may help to allow a practitioner or software application to focus the tracing on particular software and/or on particular bugs, errors, or the like. Referring again to Figure 2, the processor also includes a non-renamed bus 228. The non-renamed bus is coupled with the ROB 220, the extended instruction pointer 222, the BOB 221, and the RRQ 224. By way of example, the non-renamed bus may be used to transfer values stored in non-renamed registers (e.g., often located in the ROB) to a storage area associated with an operation stored temporarily in the reservation station until dispatch. These values may be read as a source of that operation when the operation is dispatched from the reservation station. However, conventionally the non-renamed bus width may be smaller than the RRQ line width. In some embodiments, the non-renamed bus may have a width equal to the RRQ line width in order to allow the packets to be read efficiently from the RRQ (e.g., one line read from the RRQ per clock cycle). For example, in some embodiments, the non-renamed bus may have a width of 64 bits for a 64-bit line width of the RRQ, although this particular width is not required. The RTIT logic also includes timing logic 226. The timing logic may be operable to generate and provide packets that provide timing information useful for the RTIT logic. The timing logic may receive a reference clock signal 230 that is used to generate the timing information. Different types of timing information are contemplated. One possible type of timing information is a time stamp counter (TSC) value or packet representing the official processor wall clock timer. This may represent an architectural feature and may be synchronized on multi-core and even on multi-socket systems sharing the same reset signal. Another possible type of timing information is a sub-sampling of such a time stamp counter value. This is referred to as a mini time stamp counter (MTC) value or packet. For example, the mini time stamp counter value may be an 8-bit subset of a 56-bit time stamp counter value. 
Such a mini time stamp counter value may allow logging information relevant to the full time stamp counter value, with the same synchronization, but in fewer bits; a minimal sketch of one way to derive such a value appears below. Yet another type of timing information is cycle information. The cycle information may be appended to other packets and may indicate the number of core cycles elapsed between consecutive packets. The cycle packets may be issued with core clock resolution. Such timing information is useful for estimating when instructions were executed. In the case of multiple cores, the timing information may be useful for calculating when the instructions were executed on cores with respect to other cores and with respect to wall clock time. Such timing information is also useful to allow the RTIT logic to find and correct performance issues and/or for performance tuning. For example, the timestamp information may be used to determine which portions of code execute quickly and which execute slowly. When the RTIT logic is used, the traced program execution rate/speed is typically affected (i.e., typically reduced) as compared to when the RTIT logic is not used. As a result, the timing information in the packets generally does not perfectly/precisely indicate the real program execution rate/speed, but rather may serve as a useful estimate thereof. The RTIT logic also includes RRQ contents transfer logic 227. In some embodiments, the RRQ contents transfer logic is implemented predominantly in firmware potentially combined with a lesser amount of hardware. As shown, the RRQ contents transfer logic 227 includes firmware 299. The RRQ contents transfer logic is operable to transfer contents from the RRQ to memory (e.g., memory 205). In some embodiments, this may be done when the RRQ is full or almost full (e.g., when the RRQ has a capacity that meets a fullness threshold). In other embodiments, this may be done periodically or continuously to help prevent the RRQ from becoming completely full. Further details of a suitable embodiment of the RRQ contents transfer logic will be shown and described in conjunction with Figure 3. The processor also includes an address and control signal bus 229. The address and control signal bus is coupled with the BOB 221, the ROB 220, and the RTIT packetizer logic 223. The ROB is often responsible for committed branches. The BOB may hold the information associated with a branch. Using a dedicated array for branches (e.g., the BOB) may help to save die area since not all entries in the ROB need to save the information associated with a branch. On a taken branch, the ROB may read the branch target address from the BOB on the address and control signal bus, and use it to calculate the instruction pointer. The address and control signal bus 229 may provide to the RTIT packetizer logic 223 information about which branch was taken and/or not-taken and its address. In some embodiments, the RTIT logic 209 may be fully contained within a core. This may offer certain advantages, since branches may occur per logical processor (of which in some embodiments each core may have a plurality). In other embodiments, in order to save die area, a portion of the RTIT logic may optionally be implemented in an uncore portion of the processor outside of the cores. For example, the RRQ logic and logic to indicate each logical processor RTIT trace may be included in the uncore portion of the processor, since each logical processor trace may be stored in a different corresponding memory location. 
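As promised above, here is a minimal sketch of deriving a mini time stamp counter (MTC) value from the full time stamp counter. It assumes the MTC is an 8-bit sub-field of a 56-bit TSC, per the example above; which eight bits are sampled is an invented detail.

#include <stdint.h>

/* Hedged sketch: the MTC as an 8-bit sub-sample of the 56-bit TSC. The
 * sampled bit position (bits 8..15 here) is an assumption; the description
 * only says the MTC is an 8-bit subset that shares the TSC's
 * synchronization across cores and sockets. */
static inline uint8_t mtc_from_tsc(uint64_t tsc /* 56 significant bits */) {
    const unsigned MTC_SHIFT = 8;        /* invented sub-sampling position */
    return (uint8_t)((tsc >> MTC_SHIFT) & 0xFFu);
}

Sampling higher-order bits yields a value that changes less often and so can be logged compactly, while still being correlatable with the full TSC during post-processing.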
As an alternative to placing only the RRQ logic in the uncore, other portions of the RTIT logic may optionally be included there as well. The choice between locating logic in the core or uncore may be determined in a way that is appropriate for the particular implementation (e.g., in a way that appropriately trades off die area saving in the core vs. die area, routing and complexity in the uncore). To avoid obscuring the description, a relatively simple processor has been shown and described. In other embodiments, the processor may optionally include other well-known components, such as, for example, an instruction fetch unit, an instruction scheduling unit, a branch prediction unit, an instruction decoder, microinstruction queues, an execution unit, microinstruction sequencers, registers, a register renaming unit, instruction and data caches, instruction and data translation lookaside buffers, bus interface units, second or higher level caches, a retirement unit, other components included in processors, and various combinations thereof. There are numerous different combinations and configurations of components in processors, and embodiments are not limited to any particular combination or configuration. The processor may represent an integrated circuit or set of one or more semiconductor dies or chips (e.g., a single die or chip, or a package incorporating two or more dies or chips). In some embodiments, the processor may represent a system-on-chip (SoC). Figure 3 is a block diagram of an embodiment of a processor 301 having an example embodiment of RRQ contents transfer logic 327 that is operable to transfer contents of an RRQ 324 to memory 305. In some embodiments, the RRQ contents transfer logic may initiate the transfer when the RRQ is full or almost full, when it meets a fullness threshold, or continuously or periodically over time, or the like. For example, when the RRQ is full or almost full (e.g., meets a fullness threshold), hardware of the processor may set an RRQ full flag or bit 340 that results in RRQ contents transfer (e.g., causes a special assistance request to the RRQ contents transfer logic). The RRQ contents transfer logic 327 includes firmware 399. In some embodiments, the RRQ contents transfer logic may be implemented predominantly (i.e., more than 50%) in firmware (e.g., microcode, microinstructions, circuit-level instructions stored in non-volatile memory, etc.) potentially with a lesser amount of hardware. For example, in some embodiments, the RRQ contents transfer logic may be implemented in from about 50% to 90% firmware with the remainder being made up of hardware (e.g., to interface with the RRQ, perform other functions best suited for hardware, etc.). In some embodiments, the RTIT contents transfer logic may include a firmware service sub-routine. It would also be possible to implement the RRQ contents transfer logic entirely or predominantly in hardware, although this generally has certain drawbacks. For one thing, a significant amount of hardware logic is generally needed in order to implement the RRQ contents transfer logic entirely or predominantly in hardware. Such a large amount of hardware logic may tend to increase the size (e.g., the die area), manufacturing cost, and power consumption of the processor. In contrast, firmware generally requires significantly less die area, has lower manufacturing cost, and consumes less power than hardware logic. 
Although firmware may have less performance than hardware, implementing the RRQ contents transfer logic predominantly in firmware (e.g., from 51% to 90%) generally provides an appropriate level of performance without unnecessarily increasing the size, manufacturing cost, and power consumption. As shown at numeral (1), in some embodiments, the RRQ contents transfer logic 327 may transfer or store a set of one or more RTIT packets 332 from the RRQ to one or more architectural registers 342. The architectural registers may be those registers referenced as sources and/or destinations by ISA level instructions of the processor (e.g., write instructions, store instructions, etc.). Then, as shown at numerals (2) and (3), in some embodiments, a write or other operation may be performed to transfer, or otherwise store, the set of RTIT packets from the architectural registers 342 to RTIT data 311 in the memory 305. In some embodiments, the RRQ transfer logic, which may be implemented predominantly by a firmware routine or function, may fast evict the contents of the RRQ to the architectural registers in a tight loop, and then perform a write operation to transfer the contents of the architectural registers to memory 305. Transferring the contents of the RRQ to memory may help to free additional space in the RRQ. In some embodiments, the RRQ contents transfer logic may continue to transfer the contents of the RRQ to the memory until the RRQ has a sufficient amount of free space. In some embodiments, the write or other operation shown at numerals (2) and (3) may indicate the set of RTIT packets in the architectural registers as uncacheable speculative write combining (USWC). USWC is a known cache attribute type in Intel Architecture processors. Analogous attributes in other architectures may also be used. Indicating the set of RTIT packets as having the USWC attribute may allow the RTIT packets to be stored directly to the memory, bypassing one or more cache levels of the processor. The USWC operation may accumulate stores in internal buffers before going out to memory, which may help to reduce memory bus transactions. This may help to avoid polluting the one or more levels of cache with the RTIT packets (i.e., the RTIT packets will not tie up cache entries). Use of USWC helps to reduce intrusiveness. Alternatively, in other embodiments, the write or store operation of the packets from the architectural registers to the memory may be a cacheable store operation. In addition, in some embodiments, physical addresses rather than linear addresses may be stored, which may help to bypass the paging translation and may tend to be more convenient for debugging systems, although this is not required. 
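The transfer flow at numerals (1) through (3) can be approximated in C as follows. This is only a hedged sketch: in the embodiment the loop runs as processor firmware (microcode); the "architectural register" step is modeled here by a local variable; rrq_read_line() is an invented stand-in for the hardware interface to the RRQ; and the USWC write is approximated with an x86 non-temporal streaming store, which similarly bypasses the caches and accumulates in write-combining buffers.

#include <stdint.h>
#include <stddef.h>
#include <emmintrin.h>  /* _mm_stream_si64: non-temporal store (x86-64, SSE2) */

/* Invented stand-in for reading one 64-bit line out of the RRQ. */
extern uint64_t rrq_read_line(size_t index);

/* Hedged sketch of the firmware eviction loop: (1) move each RRQ line
 * into a register, then (2)-(3) store it to the trace buffer in memory
 * with a cache-bypassing store, analogous to the USWC write described
 * above. */
static void rrq_drain(long long *trace_buf, size_t nlines) {
    for (size_t i = 0; i < nlines; i++) {
        long long line = (long long)rrq_read_line(i); /* RRQ -> register */
        _mm_stream_si64(&trace_buf[i], line);         /* register -> memory */
    }
    _mm_sfence();  /* drain the write-combining buffers to memory */
}

Draining in a tight loop keeps the window during which the RRQ is unavailable short, and the cache-bypassing stores keep the trace from evicting the traced program's own working set, which is part of what keeps intrusiveness low.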
Exemplary Core Architectures, Processors, and Computer Architectures Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures. Exemplary Core Architectures In-order and out-of-order core block diagram Figure 4A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 4B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 4A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described. In Figure 4A, a processor pipeline 400 includes a fetch stage 402, a length decode stage 404, a decode stage 406, an allocation stage 408, a renaming stage 410, a scheduling (also known as a dispatch or issue) stage 412, a register read/memory read stage 414, an execute stage 416, a write back/memory write stage 418, an exception handling stage 422, and a commit stage 424. Figure 4B shows processor core 490 including a front end unit 430 coupled to an execution engine unit 450, and both are coupled to a memory unit 470. The core 490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 490 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like. The front end unit 430 includes a branch prediction unit 432 coupled to an instruction cache unit 434, which is coupled to an instruction translation lookaside buffer (TLB) 436, which is coupled to an instruction fetch unit 438, which is coupled to a decode unit 440. 
The decode unit 440 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 440 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 490 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 440 or otherwise within the front end unit 430). The decode unit 440 is coupled to a rename/allocator unit 452 in the execution engine unit 450. The execution engine unit 450 includes the rename/allocator unit 452 coupled to a retirement unit 454 and a set of one or more scheduler unit(s) 456. The scheduler unit(s) 456 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 456 is coupled to the physical register file(s) unit(s) 458. Each of the physical register file(s) units 458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 458 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 458 is overlapped by the retirement unit 454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 454 and the physical register file(s) unit(s) 458 are coupled to the execution cluster(s) 460. The execution cluster(s) 460 includes a set of one or more execution units 462 and a set of one or more memory access units 464. The execution units 462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. 
The scheduler unit(s) 456, physical register file(s) unit(s) 458, and execution cluster(s) 460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order. The set of memory access units 464 is coupled to the memory unit 470, which includes a data TLB unit 472 coupled to a data cache unit 474 coupled to a level 2 (L2) cache unit 476. In one exemplary embodiment, the memory access units 464 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 472 in the memory unit 470. The instruction cache unit 434 is further coupled to a level 2 (L2) cache unit 476 in the memory unit 470. The L2 cache unit 476 is coupled to one or more other levels of cache and eventually to a main memory. By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 400 as follows: 1) the instruction fetch 438 performs the fetch and length decoding stages 402 and 404; 2) the decode unit 440 performs the decode stage 406; 3) the rename/allocator unit 452 performs the allocation stage 408 and renaming stage 410; 4) the scheduler unit(s) 456 performs the schedule stage 412; 5) the physical register file(s) unit(s) 458 and the memory unit 470 perform the register read/memory read stage 414; the execution cluster 460 performs the execute stage 416; 6) the memory unit 470 and the physical register file(s) unit(s) 458 perform the write back/memory write stage 418; 7) various units may be involved in the exception handling stage 422; and 8) the retirement unit 454 and the physical register file(s) unit(s) 458 perform the commit stage 424. The core 490 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 490 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data. It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology). 
While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 434/474 and a shared L2 cache unit 476, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor. Specific Exemplary In-Order Core Architecture Figures 5A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application. Figure 5A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 502 and with its local subset of the Level 2 (L2) cache 504, according to embodiments of the invention. In one embodiment, an instruction decoder 500 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 506 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 508 and a vector unit 510 use separate register sets (respectively, scalar registers 512 and vector registers 514) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 506, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back). The local subset of the L2 cache 504 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 504. Data read by a processor core is stored in its L2 cache subset 504 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 504 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring datapath is 1012 bits wide per direction. Figure 5B is an expanded view of part of the processor core in Figure 5A according to embodiments of the invention. Figure 5B includes an L1 data cache 506A, part of the L1 cache 504, as well as more detail regarding the vector unit 510 and the vector registers 514. Specifically, the vector unit 510 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 528), which executes one or more of integer, single-precision float, and double-precision float instructions. 
The VPU supports swizzling the register inputs with swizzle unit 520, numeric conversion with numeric convert units 522A-B, and replication with replication unit 524 on the memory input. Write mask registers 526 allow predicating resulting vector writes. Processor with integrated memory controller and graphics Figure 6 is a block diagram of a processor 600 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in Figure 6 illustrate a processor 600 with a single core 602A, a system agent 610, a set of one or more bus controller units 616, while the optional addition of the dashed lined boxes illustrates an alternative processor 600 with multiple cores 602A-N, a set of one or more integrated memory controller unit(s) 614 in the system agent unit 610, and special purpose logic 608. Thus, different implementations of the processor 600 may include: 1) a CPU with the special purpose logic 608 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 602A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 602A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 602A-N being a large number of general purpose in-order cores. Thus, the processor 600 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 600 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS. The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 606, and external memory (not shown) coupled to the set of integrated memory controller units 614. The set of shared cache units 606 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 612 interconnects the integrated graphics logic 608, the set of shared cache units 606, and the system agent unit 610/integrated memory controller unit(s) 614, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 606 and cores 602A-N. In some embodiments, one or more of the cores 602A-N are capable of multi-threading. The system agent 610 includes those components coordinating and operating cores 602A-N. The system agent unit 610 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 602A-N and the integrated graphics logic 608. The display unit is for driving one or more externally connected displays. 
The cores 602A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 602A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set. Exemplary Computer Architectures Figures 7-10 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are suitable. Referring now to Figure 7, shown is a block diagram of a system 700 in accordance with one embodiment of the present invention. The system 700 may include one or more processors 710, 715, which are coupled to a controller hub 720. In one embodiment, the controller hub 720 includes a graphics memory controller hub (GMCH) 790 and an Input/Output Hub (IOH) 750 (which may be on separate chips); the GMCH 790 includes memory and graphics controllers to which are coupled memory 740 and a coprocessor 745; the IOH 750 couples input/output (I/O) devices 760 to the GMCH 790. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 740 and the coprocessor 745 are coupled directly to the processor 710, and the controller hub 720 is in a single chip with the IOH 750. The optional nature of additional processors 715 is denoted in Figure 7 with broken lines. Each processor 710, 715 may include one or more of the processing cores described herein and may be some version of the processor 600. The memory 740 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 720 communicates with the processor(s) 710, 715 via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 795. In one embodiment, the coprocessor 745 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 720 may include an integrated graphics accelerator. There can be a variety of differences between the physical resources 710, 715 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. In one embodiment, the processor 710 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 710 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 745. Accordingly, the processor 710 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 745. 
Coprocessor(s) 745 accept and execute the received coprocessor instructions. Referring now to Figure 8, shown is a block diagram of a first more specific exemplary system 800 in accordance with an embodiment of the present invention. As shown in Figure 8, multiprocessor system 800 is a point-to-point interconnect system, and includes a first processor 870 and a second processor 880 coupled via a point-to-point interconnect 850. Each of processors 870 and 880 may be some version of the processor 600. In one embodiment of the invention, processors 870 and 880 are respectively processors 710 and 715, while coprocessor 838 is coprocessor 745. In another embodiment, processors 870 and 880 are respectively processor 710 and coprocessor 745. Processors 870 and 880 are shown including integrated memory controller (IMC) units 872 and 882, respectively. Processor 870 also includes as part of its bus controller units point-to-point (P-P) interfaces 876 and 878; similarly, second processor 880 includes P-P interfaces 886 and 888. Processors 870, 880 may exchange information via a point-to-point (P-P) interface 850 using P-P interface circuits 878, 888. As shown in Figure 8, IMCs 872 and 882 couple the processors to respective memories, namely a memory 832 and a memory 834, which may be portions of main memory locally attached to the respective processors. Processors 870, 880 may each exchange information with a chipset 890 via individual P-P interfaces 852, 854 using point-to-point interface circuits 876, 894, 886, 898. Chipset 890 may optionally exchange information with the coprocessor 838 via a high-performance interface 839. In one embodiment, the coprocessor 838 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode. Chipset 890 may be coupled to a first bus 816 via an interface 896. In one embodiment, first bus 816 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited. As shown in Figure 8, various I/O devices 814 may be coupled to first bus 816, along with a bus bridge 818 which couples first bus 816 to a second bus 820. In one embodiment, one or more additional processor(s) 815, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 816. In one embodiment, second bus 820 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 820 including, for example, a keyboard and/or mouse 822, communication devices 827 and a storage unit 828 such as a disk drive or other mass storage device which may include instructions/code and data 830, in one embodiment. Further, an audio I/O 824 may be coupled to the second bus 820. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 8, a system may implement a multidrop bus or other such architecture.
Referring now to Figure 9, shown is a block diagram of a second more specific exemplary system 900 in accordance with an embodiment of the present invention. Like elements in Figures 8 and 9 bear like reference numerals, and certain aspects of Figure 8 have been omitted from Figure 9 in order to avoid obscuring other aspects of Figure 9. Figure 9 illustrates that the processors 870, 880 may include integrated memory and I/O control logic ("CL") 872 and 882, respectively. Thus, the CL 872, 882 include integrated memory controller units and include I/O control logic. Figure 9 illustrates that not only are the memories 832, 834 coupled to the CL 872, 882, but also that I/O devices 914 are also coupled to the control logic 872, 882. Legacy I/O devices 915 are coupled to the chipset 890. Referring now to Figure 10, shown is a block diagram of a SoC 1000 in accordance with an embodiment of the present invention. Similar elements in Figure 6 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 10, an interconnect unit(s) 1002 is coupled to: an application processor 1010 which includes a set of one or more cores 602A-N and shared cache unit(s) 606; a system agent unit 610; a bus controller unit(s) 616; an integrated memory controller unit(s) 614; a set of one or more coprocessors 1020 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1030; a direct memory access (DMA) unit 1032; and a display unit 1040 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1020 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like. Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code, such as code 830 illustrated in Figure 8, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor. The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language. One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions. Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.
Emulation (including binary translation, code morphing, etc.)
In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor. Figure 11 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 11 shows that a program in a high level language 1102 may be compiled using an x86 compiler 1104 to generate x86 binary code 1106 that may be natively executed by a processor with at least one x86 instruction set core 1116. The processor with at least one x86 instruction set core 1116 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core.
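Purely for illustration, the general shape of the mapping that a software instruction converter of the kind contrasted in Figure 11 performs can be sketched as a toy, table-driven static translator. Both "instruction sets" below are invented single-byte opcodes; this is not x86, not any real instruction set, and not the actual converter of Figure 11:

```c
/*
 * Toy illustration only -- invented opcodes, not a real converter.
 * Shows the general shape of table-driven static binary translation:
 * walk the source binary and emit target encodings.
 */
#include <stddef.h>
#include <stdio.h>

/* Invented source opcodes. */
enum { SRC_NOP = 0x00, SRC_ADD = 0x01, SRC_LOAD = 0x02 };
/* Invented target opcodes. */
enum { TGT_NOP = 0x90, TGT_ADD = 0xA1, TGT_LOAD = 0xB2 };

/* One source instruction may in general map to several target
 * instructions; this toy maps one-to-one via a lookup table. */
static int translate(const unsigned char *src, size_t n, unsigned char *dst)
{
    static const unsigned char map[] = {
        [SRC_NOP] = TGT_NOP, [SRC_ADD] = TGT_ADD, [SRC_LOAD] = TGT_LOAD,
    };
    for (size_t i = 0; i < n; i++) {
        if (src[i] >= sizeof map)
            return -1; /* unknown opcode: fall back to emulation, etc. */
        dst[i] = map[src[i]];
    }
    return 0;
}

int main(void)
{
    unsigned char source[] = { SRC_LOAD, SRC_ADD, SRC_NOP };
    unsigned char target[sizeof source];
    if (translate(source, sizeof source, target) == 0)
        for (size_t i = 0; i < sizeof target; i++)
            printf("0x%02X\n", target[i]);
    return 0;
}
```

A real converter must also handle variable-length encodings, one-to-many instruction expansions, and runtime state, which is why, as noted below, converted code is unlikely to match natively compiled code.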
The x86 compiler 1104 represents a compiler that is operable to generate x86 binary code 1106 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1116. Similarly, Figure 11 shows that the program in the high level language 1102 may be compiled using an alternative instruction set compiler 1108 to generate alternative instruction set binary code 1110 that may be natively executed by a processor without at least one x86 instruction set core 1114 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1112 is used to convert the x86 binary code 1106 into code that may be natively executed by the processor without an x86 instruction set core 1114. This converted code is not likely to be the same as the alternative instruction set binary code 1110 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1112 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1106. In the description and claims, the terms "coupled" and "connected," along with their derivatives, may have been used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The term "and/or" may have been used. As used herein, the term "and/or" means one or the other or both (e.g., A and/or B means A or B or both A and B). In the description above, for the purposes of explanation, numerous specific details have been set forth in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art, that one or more other embodiments may be practiced without some of these specific details. The particular embodiments described are not provided to limit the invention but to illustrate it. The scope of the invention is not to be determined by the specific examples provided above but only by the claims below. In other instances, well-known circuits, structures, devices, and operations have been shown in block diagram form or without detail in order to avoid obscuring the understanding of the description. Where considered appropriate, reference numerals or terminal portions of reference numerals have been repeated among the figures to indicate corresponding or analogous elements, which may optionally have similar or the same characteristics unless specified or clearly apparent otherwise. In the drawings, arrows represent couplings and bidirectional arrows represent bidirectional couplings. Various operations and methods have been described.
Some of the methods have been described in a relatively basic form in the flow diagrams, but operations may optionally be added to and/or removed from the methods. In addition, while the flow diagrams show a particular order of the operations according to example embodiments, it is to be understood that the particular order is exemplary. Alternate embodiments may optionally perform the operations in a different order, combine certain operations, overlap certain operations, etc. The components, features, and specific optional details described herein for the apparatus may also optionally apply to the methods described herein, which may in embodiments be performed by and/or within such apparatus. Some embodiments include an article of manufacture (e.g., a computer program product) that includes a machine-readable medium. The medium may include a mechanism that provides, for example stores, information in a form that is readable by the machine. The machine-readable medium may provide, or have stored thereon, a sequence of instructions, which if executed by a machine causes the machine to perform one or more operations, methods, or techniques disclosed herein. In some embodiments, the machine-readable medium may include a tangible non-transitory machine-readable storage medium. For example, the tangible non-transitory machine-readable storage medium may include a floppy diskette, an optical storage medium, an optical disk, a CD-ROM, a magnetic disk, a magneto-optical disk, a read only memory (ROM), a programmable ROM (PROM), an erasable-and-programmable ROM (EPROM), an electrically-erasable-and-programmable ROM (EEPROM), a random access memory (RAM), a static-RAM (SRAM), a dynamic-RAM (DRAM), a Flash memory, a phase-change memory, or the like. The tangible medium may include one or more solid or tangible physical materials, such as, for example, a semiconductor material, a phase change material, a magnetic material, etc. Examples of suitable machines include, but are not limited to, desktops, laptops, notebooks, netbooks, nettops, tablets, smartphones, cell phones, Mobile Internet devices (MIDs), servers, network elements (e.g., routers, switches, etc.), set-top boxes, video game controllers, and like computing systems, and other electronic devices having one or more processors. It should also be appreciated that reference throughout this specification to "one embodiment", "an embodiment", or "one or more embodiments", for example, means that a particular feature may be included in the practice of the invention. Similarly, it should be appreciated that in the description various features are sometimes grouped together in a single embodiment, Figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects may lie in less than all features of a single disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of the invention. |
A variety of applications can include apparatus and/or methods of operating the apparatus that include a memory device having read levels that can be calibrated. A calibration controller implemented with the memory device can trigger a read level calibration based on inputs from one or more trackers monitoring parameters associated with the memory device and a determination of an occurrence of at least one event from a set of events related to the monitored parameters. The monitored parameters can include parameters related to a selected time interval and measurements of read, erase, or write operations of the memory device. Additional apparatus, systems, and methods are disclosed. |
Claims
What is claimed is:
1. An apparatus comprising: a memory device to receive read and write commands to read from and write to memory cells of an array of memory cells of the memory device; one or more trackers to monitor parameters including a selected time interval, a number of read operations to read at least a portion of the memory device, and a number of at least one of write operations and erase operations to the at least the portion of the memory device; and a calibration controller to trigger a read level calibration based on inputs from the one or more trackers and a determination of an occurrence of at least one event from a set of events including a monitored time equal to or exceeding the selected time interval, the number of the read operations equal to or exceeding a predetermined threshold for a number of read operations within the selected time interval, and the number of the at least one of write operations and erase operations equal to or exceeding a threshold for a number of at least one of write operations and erase operations within the selected time interval.
2. The apparatus of claim 1, wherein the calibration controller includes firmware with stored instructions to determine the occurrence based on the inputs from the one or more trackers.
3. The apparatus of claim 1, wherein the one or more trackers includes a read counter to count read commands sent to the memory device.
4. The apparatus of claim 1, wherein the one or more trackers includes at least one counter to count write and erase messages sent from the memory device in response to conducting at least one of a write and an erase operation in the memory array.
5. The apparatus of claim 1, wherein the one or more trackers includes a timer that is resettable to a reset value by the calibration controller to begin another wait interval for a read level calibration at the selected time interval from the reset value, and the calibration controller is operable to reset the one or more trackers to track read operations and track at least one of write operations and erase operations from the reset value of the timer.
6. The apparatus of claim 1, wherein the triggered read level calibration includes a sampling of memory raw bit error rates at different read voltages to select a set of read voltages associated with a least raw bit error rate.
7. The apparatus of claim 1, wherein the calibration controller is operable to track memory cell threshold voltage movement of the memory cells under stress conditions.
8. The apparatus of claim 1, wherein the array of memory cells of the memory device is structured in a three-dimensional NAND configuration.
9.
A system comprising: a host processor; a controller coupled to communicate with the host processor; a set of memory devices coupled to the controller, the set of memory devices including a NAND memory device having an array of memory cells to which read and write commands are received from the controller to read from and write to memory cells of the NAND memory device; a set of trackers to monitor time, to track read operations to the memory device, and to track write and/or erase operations communicated from the NAND memory device; and a calibration controller to trigger read level calibration based on inputs from the set of trackers and a determination of an occurrence of at least one event from a set of events including the monitored time exceeding a selected time interval, a number of the read operations equal to or exceeding a predetermined threshold for a number of read operations within the selected time interval, and a number of the write and/or erase operations exceeding a threshold for a number of write and/or erase operations within the selected time interval.
10. The system of claim 9, wherein the calibration controller includes firmware with stored instructions to determine the occurrence based on the inputs from the set of trackers.
11. The system of claim 9, wherein the tracker to track read operations to the NAND memory device includes a read counter to count read commands sent from the controller to the NAND memory device, and the tracker to track write and/or erase operations communicated from the NAND memory device includes a write and/or erase counter to count write and/or erase messages sent by the NAND memory device in response to conducting write and/or erase operations in the memory array.
12. The system of claim 9, wherein the tracker to monitor time includes a timer that is resettable to a reset value by the calibration controller to begin another wait interval for a read level calibration at the selected time interval from the reset value, and the calibration controller is operable to reset the tracker to track read operations and the tracker to track write and/or erase operations to track from the reset value of the timer and within the selected time interval.
13. The system of claim 9, wherein the system includes a flash translation layer that generates read and write operations to the NAND memory device via the controller to manage garbage collection of the array of memory cells of the NAND memory device.
14. A method comprising: determining a number of read operations of a memory array of a memory structure and a number of at least one of write operations and erase operations of the memory array; determining an occurrence of at least one event from a set of events including a monitored time equal to or exceeding a selected time interval, the determined number of read operations of the memory array equal to or exceeding a threshold for a number of read operations of the memory array within the selected time interval, and the determined number of at least one of write operations and erase operations of the memory array equal to or exceeding a threshold for a number of at least one of write operations and erase operations within the selected time interval; and triggering a read level calibration of the memory array in response to the determination of the occurrence.
15. The method of claim 14, wherein determining the number of read operations of the memory array includes counting read commands sent to read the memory array.
16.
The method of claim 14, wherein determining the number of at least one of write operations and erase operations of the memory array includes counting at least one of write messages and erase messages sent in response to conducting at least one of write operations and erase operations in the memory array.
17. The method of claim 14, wherein the method includes conducting a read level calibration in response to the triggering, resetting a timer to begin another wait interval for a read level calibration at the selected time interval from the reset value of the timer, and resetting one or more trackers of a number of read operations and a number of at least one of write operations and erase operations of the memory array from the reset value of the timer and within the selected time interval.
18. The method of claim 17, wherein resetting the one or more trackers includes resetting a read counter of a number of read operations and a counter of at least one of write operations and erase operations of the memory array.
19. The method of claim 14, wherein the method includes conducting the triggered read level calibration by sampling memory raw bit error rates at different read voltages to select a set of read voltages with a least raw bit error rate.
20. The method of claim 14, wherein the method includes tracking memory cell threshold voltage movement of the array of memory cells under stress conditions. |
OPTIMIZED SCAN INTERVAL
PRIORITY APPLICATION
[0001] This application claims the benefit of priority to U.S. Application Serial Number 15/692,407, filed 31 August 2017, which is incorporated herein by reference in its entirety.
BACKGROUND
[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory, including volatile and non-volatile memory. Volatile memory requires power to maintain its data, and examples of volatile memory include random-access memory (RAM), dynamic random-access memory (DRAM), and synchronous dynamic random-access memory (SDRAM), among others. Non-volatile memory can retain stored data when not powered, and examples of non-volatile memory include flash memory, read-only memory (ROM), electrically erasable programmable ROM (EEPROM), static RAM (SRAM), erasable programmable ROM (EPROM), resistance variable memory, such as phase-change random-access memory (PCRAM), resistive random-access memory (RRAM), magnetoresistive random-access memory (MRAM), and three-dimensional (3D) XPoint™ memory, among others.
[0003] Flash memory is utilized as non-volatile memory for a wide range of electronic applications. Flash memory devices typically include one or more groups of one-transistor, floating gate or charge trap memory cells that allow for high memory densities, high reliability, and low power consumption. Two common types of flash memory array architectures include NAND and NOR architectures, named after the logic form in which the basic memory cell configuration of each is arranged. The memory cells of the memory array are typically arranged in a matrix. In an example, the gates of each floating gate memory cell in a row of the array are coupled to an access line (e.g., a word line). In a NOR architecture, the drains of each memory cell in a column of the array are coupled to a data line (e.g., a bit line). In a NAND architecture, the memory cells in a string of the array are coupled together in series, source to drain, between a source line and a bit line.
[0004] Both NOR and NAND architecture semiconductor memory arrays are accessed through decoders that activate specific memory cells by selecting the word line coupled to their gates. In a NOR architecture semiconductor memory array, once activated, the selected memory cells place their data values on bit lines, causing different currents to flow depending on the state at which a particular cell is programmed. In a NAND architecture semiconductor memory array, a high bias voltage is applied to a drain-side select gate (SGD) line. Word lines coupled to the gates of the unselected memory cells of each group are driven at a specified pass voltage (e.g., Vpass) to operate the unselected memory cells of each group as pass transistors (e.g., to pass current in a manner that is unrestricted by their stored data values). Current then flows from the source line to the bit line through each series coupled group, restricted only by the selected memory cells of each group, placing current encoded data values of selected memory cells on the bit lines.
[0005] Each flash memory cell in a NOR or NAND architecture semiconductor memory array can be programmed individually or collectively to one or a number of programmed states. For example, a single-level cell (SLC) can represent one of two programmed states (e.g., 1 or 0), representing one bit of data.
However, flash memory cells can also represent one of more than two programmed states, allowing the manufacture of higher density memories without increasing the number of memory cells, as each cell can represent more than one binary digit (e.g., more than one bit). Such cells can be referred to as multi-state memory cells, multi-digit cells, or multi-level cells (MLCs). In certain examples, MLC can refer to a memory cell that can store two bits of data per cell (e.g., one of four programmed states), a triple-level cell (TLC) can refer to a memory cell that can store three bits of data per cell (e.g., one of eight programmed states), and a quad-level cell (QLC) can store four bits of data per cell. MLC is used herein in its broader context, to refer to any memory cell that can store more than one bit of data per cell (i.e., that can represent more than two programmed states).
[0006] Traditional memory arrays are two-dimensional (2D) structures arranged on a surface of a semiconductor substrate. To increase memory capacity for a given area, and to decrease cost, the size of the individual memory cells has decreased. However, there is a technological limit to the reduction in size of the individual memory cells, and thus, to the memory density of 2D memory arrays. In response, three-dimensional (3D) memory structures, such as 3D NAND architecture semiconductor memory devices, are being developed to further increase memory density and lower memory cost.
[0007] Such 3D NAND devices often include strings of storage cells, coupled in series (e.g., drain to source), between one or more source-side select gates (SGSs) proximate a source, and one or more drain-side select gates (SGDs) proximate a bit line. In an example, the SGSs or the SGDs can include one or more field-effect transistors (FETs) or metal-oxide semiconductor (MOS) structure devices, etc. In some examples, the strings will extend vertically, through multiple vertically spaced tiers containing respective word lines. A semiconductor structure (e.g., a polysilicon structure) may extend adjacent a string of storage cells to form a channel for the storage cells of the string. In the example of a vertical string, the polysilicon structure may be in the form of a vertically extending pillar. In some examples, the string may be "folded," and thus arranged relative to a U-shaped pillar. In other examples, multiple vertical structures may be stacked upon one another to form stacked arrays of storage cell strings.
[0008] Memory arrays or devices can be combined together to form a storage volume of a memory system, such as a solid-state drive (SSD), a Universal Flash Storage (UFS™) device, a MultiMediaCard (MMC) solid-state storage device, an embedded MMC device (eMMC™), etc. An SSD can be used as, among other things, the main storage device of a computer, having advantages over traditional hard drives with moving parts with respect to, for example, performance, size, weight, ruggedness, operating temperature range, and power consumption. For example, SSDs can have reduced seek time, latency, or other delay associated with magnetic disk drives (e.g., electromechanical, etc.).
SSDs use non-volatile memory cells, such as flash memory cells, to obviate internal battery supply requirements, thus allowing the drive to be more versatile and compact.
[0009] An SSD can include a number of memory devices, including a number of dies or logical units (e.g., logical unit numbers or LUNs), and can include one or more processors or other controllers performing logic functions required to operate the memory devices or interface with external systems. Such SSDs may include one or more flash memory die, including a number of memory arrays and peripheral circuitry thereon. The flash memory arrays can include a number of blocks of memory cells organized into a number of physical pages. In many examples, the SSDs will also include DRAM or SRAM (or other forms of memory die or other memory structures). The SSD can receive commands from a host in association with memory operations, such as read or write operations to transfer data (e.g., user data and associated integrity data, such as error data and address data, etc.) between the memory devices and the host, or erase operations to erase data from the memory devices.
[0010] In NAND flash based storage systems, a memory cell arranged as SLC or MLC typically contains a charge storage transistor in which the charge stored in the charge storage transistor sets a threshold voltage, Vt, of the charge storage transistor. Internal logic of the NAND fixes an association of a different threshold voltage with each state. However, NAND Vts are constantly subjected to shifts due to any of a number of factors. Such factors include read disturb, retention, cross-temperature, etc. A count of failed bits can be a function of the mismatch between a value of the read voltage and the NAND Vt. As a result, improvements of NAND flash based storage systems can include improvements in recalibration of read voltages.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The drawings, which are not necessarily drawn to scale, illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.
[0012] Figure 1 illustrates an example of an environment including a memory device, according to various embodiments.
[0013] Figures 2 and 3 illustrate schematic diagrams of an example of a three-dimensional NAND architecture semiconductor memory array, according to various embodiments.
[0014] Figure 4 illustrates an example block diagram of a memory module, according to various embodiments.
[0015] Figure 5 is a block diagram illustrating an example of a machine upon which one or more embodiments may be implemented, according to various embodiments.
[0016] Figure 6 is a block diagram of features of an example system having a memory device, one or more trackers, and a calibration controller, according to various embodiments.
[0017] Figure 7 is a representation of a procedure using combined input/output per second sampling and time based sampling of a NAND memory device, according to various embodiments.
[0018] Figure 8 illustrates timing associated with time based sampling, read based sampling, and write/erase based sampling to trigger read level processing, according to various embodiments.
[0019] Figure 9 is a flow diagram of features of an example method of optimizing a scan of a memory device for read level calibration, according to various embodiments.
DETAILED DESCRIPTION
[0020] In NAND flash based storage systems, the fail bit count, which arises from mismatch between the read voltage and a NAND Vt, can be minimized by adjusting the read voltage in accordance with the NAND Vt. Adjustment of the read voltage can include a read level voltage calibration. Such a read level voltage calibration can involve real time sampling of the NAND raw bit error rate (RBER) at different read voltages. Firmware can be implemented to calibrate the NAND read voltages by issuing scan reads at various read voltages, measuring the RBERs, and determining optimal read voltages for an optimal RBER.
[0021] Since sampling the RBER at different read voltages manifests as a host performance impact, the sampling is typically done at time intervals providing a slow enough rate such that sampling activities with respect to the NAND are hardly detected by the host as signal interrupts or delays. In normal operation, reads and writes occur in relatively small numbers such that the intervals for calibrating can be over hours, days, or longer, and can be scheduled according to a usage model such as a day-to-day usage model. In certain cases, such as targeted benchmarks, the NAND Vt shifts happen faster than the slower time based sampling can track, so time based sampling may not be effective in these cases. In benchmarking, a large number of reads or writes can be sent to the NAND, a number that can be orders of magnitude larger than that associated with a day-to-day usage model of the NAND. Benchmarking can provide testing of the capability of the NAND and performance levels of the NAND. However, in cases like benchmarking, read level calibration at intervals associated with typical user usage models may not be appropriate, and calibration can instead be conducted at shorter intervals for improved performance of the NAND.
[0022] In various embodiments, read calibration can be triggered using input/output operations per second (IOPS) sampling in addition to time based sampling. IOPS is an input/output performance measurement. In the IOPS based sampling, the firmware for the NAND can track reads and writes separately on the NAND. In response to the tracking, the firmware can trigger the read voltage sampling after certain numbers of reads and/or writes have been performed. By using the NAND reads/writes as input parameters, the firmware is better positioned to track the cell Vt movement under targeted benchmark stress conditions on the NAND. The time based sampling can be performed at a slower rate, i.e., at longer intervals, than the IOPS based sampling of reads/writes. The IOPS based sampling can be conducted at a strategically faster rate such that the increased number of samples is buried in the IOPS; a minimal sketch of such a combined trigger follows.
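The combined trigger of [0022] can be made concrete with a minimal firmware-style sketch. This is an illustration only, not the actual firmware: all names are invented, the thresholds and the millisecond clock are assumed, and calibrate_read_levels() merely stands in for the RBER sweep of [0020].

```c
/*
 * Illustrative sketch only -- hypothetical names and thresholds, chosen
 * to mirror the combined time based + IOPS based trigger of [0022].
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t read_count;        /* reads issued since last calibration  */
    uint64_t write_erase_count; /* writes/erases since last calibration */
    uint64_t window_start_ms;   /* timer reset value                    */
} trackers_t;

typedef struct {
    uint64_t max_reads;         /* read threshold within the interval   */
    uint64_t max_writes_erases; /* write/erase threshold within interval*/
    uint64_t interval_ms;       /* selected time interval               */
} calib_policy_t;

/* Stub standing in for the RBER sweep: sample raw bit error rates at
 * several candidate read voltages and keep the set with the least RBER. */
static void calibrate_read_levels(void)
{
    printf("read level calibration triggered\n");
}

static void reset_trackers(trackers_t *t, uint64_t now_ms)
{
    t->read_count = 0;
    t->write_erase_count = 0;
    t->window_start_ms = now_ms; /* begin another wait interval */
}

/* Called by firmware on each read, write/erase, or periodic tick. */
static void on_event(trackers_t *t, const calib_policy_t *p,
                     uint64_t now_ms, bool is_read, bool is_write_erase)
{
    if (is_read)
        t->read_count++;
    if (is_write_erase)
        t->write_erase_count++;

    bool timed_out      = (now_ms - t->window_start_ms) >= p->interval_ms;
    bool too_many_reads = t->read_count >= p->max_reads;
    bool too_many_wr_er = t->write_erase_count >= p->max_writes_erases;

    /* Any one event from the set is sufficient to trigger calibration. */
    if (timed_out || too_many_reads || too_many_wr_er) {
        calibrate_read_levels();
        reset_trackers(t, now_ms);
    }
}

int main(void)
{
    trackers_t t = { 0, 0, 0 };
    calib_policy_t p = { 100000, 20000, 3600000 }; /* illustrative values */
    /* Benchmark-like burst: calibration fires from the read count alone,
     * long before the one hour timer expires. */
    for (uint64_t i = 1; i <= 200000; i++)
        on_event(&t, &p, i / 100 /* fake ms clock */, true, false);
    return 0;
}
```

Any one event from the set (timer expiry over the selected interval, the read threshold, or the write/erase threshold) suffices to trigger calibration and reset the trackers, mirroring the event set recited in claim 1.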
Using an approach as taught herein combining time based sampling with IOPS can allow systems to accommodate targeted benchmarks with accelerated sampling without affecting the performance and latency of typical user workloads.
[0023] Electronic devices, such as mobile electronic devices (e.g., smart phones, tablets, etc.), electronic devices for use in automotive applications (e.g., automotive sensors, control units, driver-assistance systems, passenger safety or comfort systems, etc.), and internet-connected appliances or devices (e.g., internet-of-things (IoT) devices, etc.), have varying storage needs depending on, among other things, the type of electronic device, use environment, performance expectations, etc.
[0024] Electronic devices can be broken down into several main components: a processor (e.g., a central processing unit (CPU) or other main processor); memory (e.g., one or more volatile or non-volatile random-access memory (RAM) memory device, such as dynamic RAM (DRAM), mobile or low-power double-data-rate synchronous DRAM (DDR SDRAM), etc.); and a storage device (e.g., non-volatile memory (NVM) device, such as flash memory, read-only memory (ROM), an SSD, an MMC, or other memory card structure or assembly, etc.). In certain examples, electronic devices can include a user interface (e.g., a display, touchscreen, keyboard, one or more buttons, etc.), a graphics processing unit (GPU), a power management circuit, a baseband processor or one or more transceiver circuits, etc.
[0025] Figure 1 illustrates an example of an environment 100 including a host device 105 and a memory device 110 configured to communicate over a communication interface. The host device 105 or the memory device 110 may be included in a variety of products 150, such as Internet of Things (IoT) devices (e.g., a refrigerator or other appliance, sensor, motor or actuator, mobile communication device, automobile, drone, etc.) to support processing, communications, or control of the product 150.
[0026] The memory device 110 includes a memory controller 115 and a memory array 120 including, for example, a number of individual memory die (e.g., a stack of three-dimensional (3D) NAND die). In 3D architecture semiconductor memory technology, vertical structures are stacked, increasing the number of tiers, physical pages, and accordingly, the density of a memory device (e.g., a storage device). In an example, the memory device 110 can be a discrete memory or storage device component of the host device 105. In other examples, the memory device 110 can be a portion of an integrated circuit (e.g., system on a chip (SOC), etc.), stacked or otherwise included with one or more other components of the host device 105.
[0027] One or more communication interfaces can be used to transfer data between the memory device 110 and one or more other components of the host device 105, such as a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, a Universal Flash Storage (UFS) interface, an eMMC™ interface, or one or more other connectors or interfaces. The host device 105 can include a host system, an electronic device, a processor, a memory card reader, or one or more other electronic devices external to the memory device 110.
In some examples, the host 105 may be a machine having some portion, or all, of the components discussed in reference to the machine 500 of Figure 5.
[0028] The memory controller 115 can receive instructions from the host 105, and can communicate with the memory array, such as to transfer data to (e.g., write or erase) or from (e.g., read) one or more of the memory cells, planes, sub-blocks, blocks, or pages of the memory array. The memory controller 115 can include, among other things, circuitry or firmware, including one or more components or integrated circuits. For example, the memory controller 115 can include one or more memory control units, circuits, or components configured to control access across the memory array 120 and to provide a translation layer between the host 105 and the memory device 110. The memory controller 115 can include one or more input/output (I/O) circuits, lines, or interfaces to transfer data to or from the memory array 120. The memory controller 115 can include a memory manager 125 and an array controller 135.
[0029] The memory manager 125 can include, among other things, circuitry or firmware, such as a number of components or integrated circuits associated with various memory management functions. For purposes of the present description, example memory operation and management functions will be described in the context of NAND memory. Persons skilled in the art will recognize that other forms of non-volatile memory may have analogous memory operations or management functions. Such NAND management functions include wear leveling (e.g., garbage collection or reclamation), error detection or correction, block retirement, or one or more other memory management functions. The memory manager 125 can parse or format host commands (e.g., commands received from a host) into device commands (e.g., commands associated with operation of a memory array, etc.), or generate device commands (e.g., to accomplish various memory management functions) for the array controller 135 or one or more other components of the memory device 110.
[0030] The memory manager 125 can include a set of management tables 130 configured to maintain various information associated with one or more components of the memory device 110 (e.g., various information associated with a memory array or one or more memory cells coupled to the memory controller 115). For example, the management tables 130 can include information regarding block age, block erase count, error history, or one or more error counts (e.g., a write operation error count, a read bit error count, a read operation error count, an erase error count, etc.) for one or more blocks of memory cells coupled to the memory controller 115. In certain examples, if the number of detected errors for one or more of the error counts is above a threshold, the bit error can be referred to as an uncorrectable bit error. The management tables 130 can maintain a count of correctable or uncorrectable bit errors, among other things; an illustrative sketch of such per-block bookkeeping follows this passage.
[0031] The array controller 135 can include, among other things, circuitry or components configured to control memory operations associated with writing data to, reading data from, or erasing one or more memory cells of the memory device 110 coupled to the memory controller 115.
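As a purely illustrative aside, the kind of per-block information that management tables such as 130 can maintain, together with the threshold test of [0030], might be sketched as follows. The field names and the single shared threshold are assumptions, not the patent's table layout:

```c
/*
 * Illustrative sketch only -- a hypothetical per-block record of the
 * sort of counts listed in [0030], with a simple retirement test.
 */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t block_age;            /* e.g., program/erase cycles seen   */
    uint32_t erase_count;
    uint32_t write_error_count;
    uint32_t read_bit_error_count;
    uint32_t read_op_error_count;
    uint32_t erase_error_count;
} block_stats_t;

/* An error count above the threshold is treated as indicating
 * uncorrectable errors, making the block a retirement candidate. */
static bool block_needs_retirement(const block_stats_t *s,
                                   uint32_t uncorrectable_threshold)
{
    return s->read_bit_error_count > uncorrectable_threshold ||
           s->write_error_count    > uncorrectable_threshold ||
           s->erase_error_count    > uncorrectable_threshold;
}
```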
The memory operations can be based on, for example, host commands received from the host 105, or internally generated by the memory manager 125 (e.g., in association with wear leveling, error detection or correction, etc.).
[0032] The array controller 135 can include an error correction code (ECC) component 140, which can include, among other things, an ECC engine or other circuitry configured to detect or correct errors associated with writing data to or reading data from one or more memory cells of the memory device 110 coupled to the memory controller 115. The memory controller 115 can be configured to actively detect and recover from error occurrences (e.g., bit errors, operation errors, etc.) associated with various operations or storage of data, while maintaining integrity of the data transferred between the host 105 and the memory device 110, or maintaining integrity of stored data (e.g., using redundant RAID storage, etc.), and can remove (e.g., retire) failing memory resources (e.g., memory cells, memory arrays, pages, blocks, etc.) to prevent future errors.
[0033] The memory array 120 can include several memory cells arranged in, for example, a number of devices, planes, sub-blocks, blocks, or pages. As one example, a 48 GB TLC NAND memory device can include 18,592 bytes (B) of data per page (16,384 + 2208 bytes), 1536 pages per block, 548 blocks per plane, and 4 or more planes per device. As another example, a 32 GB MLC memory device (storing two bits of data per cell (i.e., 4 programmable states)) can include 18,592 bytes (B) of data per page (16,384 + 2208 bytes), 1024 pages per block, 548 blocks per plane, and 4 planes per device, but with half the required write time and twice the program/erase (P/E) cycles as a corresponding TLC memory device. Other examples can include other numbers or arrangements. In some examples, a memory device, or a portion thereof, may be selectively operated in SLC mode, or in a desired MLC mode (such as TLC, QLC, etc.).
[0034] In operation, data is typically written to or read from the NAND memory device 110 in pages, and erased in blocks. However, one or more memory operations (e.g., read, write, erase, etc.) can be performed on larger or smaller groups of memory cells, as desired. The data transfer size of a NAND memory device 110 is typically referred to as a page; whereas the data transfer size of a host is typically referred to as a sector.
[0035] Although a page of data can include a number of bytes of user data (e.g., a data payload including a number of sectors of data) and its corresponding metadata, the size of the page often refers only to the number of bytes used to store the user data. As an example, a page of data having a page size of 4 KB may include 4 KB of user data (e.g., 8 sectors assuming a sector size of 512 B) as well as a number of bytes (e.g., 32 B, 54 B, 224 B, etc.) of metadata corresponding to the user data, such as integrity data (e.g., error detecting or correcting code data), address data (e.g., logical address data, etc.), or other metadata associated with the user data (this arithmetic is sketched after this passage).
[0036] Different types of memory cells or memory arrays 120 can provide for different page sizes, or may require different amounts of metadata associated therewith.
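The page-size arithmetic of [0035] can be checked with a short sketch; the metadata figure below is one of the example values (224 B) and is device dependent, chosen only for illustration:

```c
/*
 * Illustrative arithmetic only, following the example of [0035]: a page
 * holding 4 KB of user data is 8 sectors of 512 B, plus a device-specific
 * number of metadata bytes (e.g., 32 B, 54 B, or 224 B).
 */
#include <stdio.h>

int main(void)
{
    const unsigned user_bytes = 4 * 1024; /* page size as usually quoted */
    const unsigned sector_bytes = 512;    /* host sector size            */
    const unsigned metadata_bytes = 224;  /* assumed, device dependent   */

    unsigned sectors_per_page = user_bytes / sector_bytes; /* = 8    */
    unsigned total_bytes = user_bytes + metadata_bytes;    /* = 4320 */

    printf("%u sectors per page, %u bytes stored per page\n",
           sectors_per_page, total_bytes);
    return 0;
}
```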
For example, different memory device types may have different bit error rates, which can lead to different amounts of metadata necessary to ensure integrity of the page of data (e.g., a memory device with a higher bit error rate may require more bytes of error correction code data than a memory device with a lower bit error rate). As an example, a multi-level cell (MLC) NAND flash device may have a higher bit error rate than a corresponding single-level cell (SLC) NAND flash device. As such, the MLC device may require more metadata bytes for error data than the corresponding SLC device.
[0037] Figure 2 illustrates an example schematic diagram of a 3D NAND architecture semiconductor memory array 200 including a number of strings of memory cells (e.g., first-third A0 memory strings 205A0-207A0, first-third An memory strings 205An-207An, first-third B0 memory strings 205B0-207B0, first-third Bn memory strings 205Bn-207Bn, etc.), organized in blocks (e.g., block A 201A, block B 201B, etc.) and sub-blocks (e.g., sub-block A0 201A0, sub-block An 201An, sub-block B0 201B0, sub-block Bn 201Bn, etc.). The memory array 200 represents a portion of a greater number of similar structures that would typically be found in a block, device, or other unit of a memory device.
[0038] Each string of memory cells includes a number of tiers of charge storage transistors (e.g., floating gate transistors, charge-trapping structures, etc.) stacked in the Z direction, source to drain, between a source line (SRC) 235 or a source-side select gate (SGS) (e.g., first-third A0 SGS 231A0-233A0, first-third An SGS 231An-233An, first-third B0 SGS 231B0-233B0, first-third Bn SGS 231Bn-233Bn, etc.) and a drain-side select gate (SGD) (e.g., first-third A0 SGD 226A0-228A0, first-third An SGD 226An-228An, first-third B0 SGD 226B0-228B0, first-third Bn SGD 226Bn-228Bn, etc.). Each string of memory cells in the 3D memory array can be arranged along the X direction as data lines (e.g., bit lines (BL) BL0-BL2 220-222), and along the Y direction as physical pages.
[0039] Within a physical page, each tier represents a row of memory cells, and each string of memory cells represents a column. A sub-block can include one or more physical pages. A block can include a number of sub-blocks (or physical pages) (e.g., 128, 256, 384, etc.). Although illustrated herein as having two blocks, each block having two sub-blocks, each sub-block having a single physical page, each physical page having three strings of memory cells, and each string having 8 tiers of memory cells, in other examples, the memory array 200 can include more or fewer blocks, sub-blocks, physical pages, strings of memory cells, memory cells, or tiers. For example, each string of memory cells can include more or fewer tiers (e.g., 16, 32, 64, 128, etc.), as well as one or more additional tiers of semiconductor material above or below the charge storage transistors (e.g., select gates, data lines, etc.), as desired. As an example, a 48 GB TLC NAND memory device can include 18,592 bytes (B) of data per page (16,384 + 2208 bytes), 1536 pages per block, 548 blocks per plane, and 4 or more planes per device.
[0040] Each memory cell in the memory array 200 includes a control gate (CG) coupled to (e.g., electrically or otherwise operatively connected to) an access line (e.g., word lines (WL) WL00-WL70 210A-217A, WL01-WL71 210B-217B, etc.), which collectively couples the control gates (CGs) across a specific tier, or a portion of a tier, as desired.
Specific tiers in the 3D memory array, and accordingly, specific memory cells in a string, can be accessed or controlled using respective access lines. Groups of select gates can be accessed using various select lines. For example, first-third A0 SGD 226A0-228A0 can be accessed using an A0 SGD line SGDA0 225A0, first-third An SGD 226An-228An can be accessed using an An SGD line SGDAn 225An, first-third B0 SGD 226B0-228B0 can be accessed using a B0 SGD line SGDB0 225B0, and first-third Bn SGD 226Bn-228Bn can be accessed using a Bn SGD line SGDBn 225Bn. First-third A0 SGS 231A0-233A0 and first-third An SGS 231An-233An can be accessed using a gate select line SGS0 230A, and first-third B0 SGS 231B0-233B0 and first-third Bn SGS 231Bn-233Bn can be accessed using a gate select line SGS1 230B.
[0041] In an example, the memory array 200 can include a number of levels of semiconductor material (e.g., polysilicon, etc.) configured to couple the control gates (CGs) of each memory cell or select gate (or a portion of the CGs or select gates) of a respective tier of the array. Specific strings of memory cells in the array can be accessed, selected, or controlled using a combination of bit lines (BLs) and select gates, etc., and specific memory cells at one or more tiers in the specific strings can be accessed, selected, or controlled using one or more access lines (e.g., word lines).
[0042] Figure 3 illustrates an example schematic diagram of a portion of a NAND architecture semiconductor memory array 300 including a plurality of memory cells 302 arranged in a two-dimensional array of strings (e.g., first-third strings 305-307) and tiers (e.g., illustrated as respective word lines (WL) WL0-WL7 310-317, a drain-side select gate (SGD) line 325, a source-side select gate (SGS) line 330, etc.), and sense amplifiers or devices 360. For example, the memory array 300 can illustrate an example schematic diagram of a portion of one physical page of memory cells of a 3D NAND architecture semiconductor memory device, such as illustrated in Figure 2.
[0043] Each string of memory cells is coupled to a source line (SRC) using a respective source-side select gate (SGS) (e.g., first-third SGS 331-333), and to a respective data line (e.g., first-third bit lines (BL) BL0-BL2 320-322) using a respective drain-side select gate (SGD) (e.g., first-third SGD 326-328). Although illustrated with 8 tiers (e.g., using word lines (WL) WL0-WL7 310-317) and three data lines (BL0-BL2 320-322) in the example of Figure 3, other examples can include strings of memory cells having more or fewer tiers or data lines, as desired.
[0044] In a NAND architecture semiconductor memory array, such as the example memory array 300, the state of a selected memory cell 302 can be accessed by sensing a current or voltage variation associated with a particular data line containing the selected memory cell. The memory array 300 can be accessed (e.g., by a control circuit, one or more processors, digital logic, etc.) using one or more drivers. In an example, one or more drivers can activate a specific memory cell, or set of memory cells, by driving a particular potential to one or more data lines (e.g., bit lines BL0-BL2), access lines (e.g., word lines WL0-WL7), or select gates, depending on the type of operation desired to be performed on the specific memory cell or set of memory cells.
[0045] To program or write data to a memory cell, a programming voltage (Vpgm) (e.g., one or more programming pulses, etc.)
can be applied to selected word lines (e.g., WL4), and thus, to a control gate of each memory cell coupled to the selected word lines (e.g., first-third control gates (CGs) 341-343 of the memory cells coupled to WL4). Programming pulses can begin, for example, at or near 15 V, and, in certain examples, can increase in magnitude during each programming pulse application. While the program voltage is applied to the selected word lines, a potential, such as a ground potential (e.g., Vss), can be applied to the data lines (e.g., bit lines) and substrates (and thus the channels, between the sources and drains) of the memory cells targeted for programming, resulting in a charge transfer (e.g., direct injection or Fowler-Nordheim (FN) tunneling, etc.) from the channels to the floating gates of the targeted memory cells.
[0046] In contrast, a pass voltage (Vpass) can be applied to one or more word lines having memory cells that are not targeted for programming, or an inhibit voltage (e.g., Vcc) can be applied to data lines (e.g., bit lines) having memory cells that are not targeted for programming, for example, to inhibit charge from being transferred from the channels to the floating gates of such non-targeted memory cells. The pass voltage can be variable, depending, for example, on the proximity of the applied pass voltages to a word line targeted for programming. The inhibit voltage can include a supply voltage (Vcc), such as a voltage from an external source or supply (e.g., a battery, an AC-to-DC converter, etc.), relative to a ground potential (e.g., Vss).
[0047] As an example, if a programming voltage (e.g., 15V or more) is applied to a specific word line, such as WL4, a pass voltage of 10V can be applied to one or more other word lines, such as WL3, WL5, etc., to inhibit programming of non-targeted memory cells, or to retain the values stored on such memory cells not targeted for programming. As the distance between an applied program voltage and the non-targeted memory cells increases, the pass voltage required to refrain from programming the non-targeted memory cells can decrease. For example, where a programming voltage of 15 V is applied to WL4, a pass voltage of 10V can be applied to WL3 and WL5, a pass voltage of 8V can be applied to WL2 and WL6, a pass voltage of 7V can be applied to WL1 and WL7, etc. In other examples, the pass voltages, or number of word lines, etc., can be higher or lower, or more or less; a sketch of this distance-based selection follows this passage.
[0048] The sense amplifiers 360, coupled to one or more of the data lines (e.g., first, second, or third bit lines (BL0-BL2) 320-322), can detect the state of each memory cell in respective data lines by sensing a voltage or current on a particular data line.
[0049] Between applications of one or more programming pulses (e.g., Vpgm), a verify operation can be performed to determine if a selected memory cell has reached its intended programmed state. If the selected memory cell has reached its intended programmed state, it can be inhibited from further programming. If the selected memory cell has not reached its intended programmed state, additional programming pulses can be applied.
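The distance-based pass voltage selection of [0047] can be sketched as follows. The lookup table simply encodes the example values given above (15 V program voltage, then 10 V, 8 V, and 7 V pass voltages with increasing distance); it is an illustration, not a device specification:

```c
/*
 * Illustrative sketch only, following the example values of [0047]:
 * Vpgm = 15 V on the selected word line, and pass voltages that step
 * down with distance from it (10 V, 8 V, 7 V). The table and the word
 * line count are assumptions for illustration, not device parameters.
 */
#include <stdio.h>
#include <stdlib.h>

#define NUM_WORDLINES 8 /* WL0-WL7, as in Figure 3 */

/* Pass voltage (in volts) by distance from the selected word line. */
static double vpass_for_distance(int distance)
{
    static const double table[] = { 15.0 /* Vpgm */, 10.0, 8.0, 7.0 };
    int n = (int)(sizeof table / sizeof table[0]);
    return distance < n ? table[distance] : table[n - 1];
}

int main(void)
{
    int selected = 4; /* programming WL4, as in the example above */
    for (int wl = 0; wl < NUM_WORDLINES; wl++) {
        int d = abs(wl - selected);
        printf("WL%d: %.0f V%s\n", wl, vpass_for_distance(d),
               d == 0 ? " (Vpgm)" : " (Vpass)");
    }
    return 0;
}
```

Run for selected word line WL4, the sketch reproduces the example: 10 V at WL3 and WL5, 8 V at WL2 and WL6, and 7 V at WL1 and WL7, with more distant word lines held at the last table entry.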
[0050] To erase a memory cell or a group of memory cells (e.g., erasure is typically performed in blocks or sub-blocks), an erasure voltage (Vers) (e.g., typically Vpgm) can be applied to the substrates (and thus the channels, between the sources and drains) of the memory cells targeted for erasure (e.g., using one or more bit lines, select gates, etc.), while the word lines of the targeted memory cells are kept at a potential, such as a ground potential (e.g., Vss), resulting in a charge transfer (e.g., direct injection or Fowler-Nordheim (FN) tunneling, etc.) from the floating gates of the targeted memory cells to the channels.

[0051] Figure 4 illustrates an example block diagram of a memory device 400 including a memory array 402 having a plurality of memory cells 404, and one or more circuits or components to provide communication with, or perform one or more memory operations on, the memory array 402. The memory device 400 can include a row decoder 412, a column decoder 414, sense amplifiers 420, a page buffer 422, a selector 424, an input/output (I/O) circuit 426, and a memory control unit 430.

[0052] The memory cells 404 of the memory array 402 can be arranged in blocks, such as first and second blocks 402A, 402B. Each block can include sub-blocks. For example, the first block 402A can include first and second sub-blocks 402A0, 402An, and the second block 402B can include first and second sub-blocks 402B0, 402Bn. Each sub-block can include a number of physical pages, each page including a number of memory cells 404. Although illustrated herein as having two blocks, each block having two sub-blocks, and each sub-block having a number of memory cells 404, in other examples, the memory array 402 can include more or fewer blocks, sub-blocks, memory cells, etc. In other examples, the memory cells 404 can be arranged in a number of rows, columns, pages, sub-blocks, blocks, etc., and accessed using, for example, access lines 406, first data lines 410, or one or more select gates, source lines, etc.

[0053] The memory control unit 430 can control memory operations of the memory device 400 according to one or more signals or instructions received on control lines 432, including, for example, one or more clock signals or control signals that indicate a desired operation (e.g., write, read, erase, etc.), or address signals (A0-AX) received on one or more address lines 416. One or more devices external to the memory device 400 can control the values of the control signals on the control lines 432, or the address signals on the address line 416. Examples of devices external to the memory device 400 can include, but are not limited to, a host, a memory controller, a processor, or one or more circuits or components not illustrated in Figure 4.

[0054] The memory device 400 can use access lines 406 and first data lines 410 to transfer data to (e.g., write or erase) or from (e.g., read) one or more of the memory cells 404.
The row decoder 412 and the column decoder 414 can receive and decode the address signals (A0-AX) from the address line 416, can determine which of the memory cells 404 are to be accessed, and can provide signals to one or more of the access lines 406 (e.g., one or more of a plurality of word lines (WL0-WLm)) or the first data lines 410 (e.g., one or more of a plurality of bit lines (BL0-BLn)), such as described above.

[0055] The memory device 400 can include sense circuitry, such as the sense amplifiers 420, configured to determine the values of data on (e.g., read), or to determine the values of data to be written to, the memory cells 404 using the first data lines 410. For example, in a selected string of memory cells 404, one or more of the sense amplifiers 420 can read a logic level in the selected memory cell 404 in response to a read current flowing in the memory array 402 through the selected string to the data lines 410.

[0056] One or more devices external to the memory device 400 can communicate with the memory device 400 using the I/O lines (DQ0-DQN) 408, address lines 416 (A0-AX), or control lines 432. The input/output (I/O) circuit 426 can transfer values of data in or out of the memory device 400, such as in or out of the page buffer 422 or the memory array 402, using the I/O lines 408, according to, for example, the control lines 432 and address lines 416. The page buffer 422 can store data received from the one or more devices external to the memory device 400 before the data is programmed into relevant portions of the memory array 402, or can store data read from the memory array 402 before the data is transmitted to the one or more devices external to the memory device 400.

[0057] The column decoder 414 can receive and decode address signals (A0-AX) into one or more column select signals (CSEL1-CSELn). The selector 424 (e.g., a select circuit) can receive the column select signals (CSEL1-CSELn) and select data in the page buffer 422 representing values of data to be read from or to be programmed into memory cells 404. Selected data can be transferred between the page buffer 422 and the I/O circuit 426 using second data lines 418.

[0058] The memory control unit 430 can receive positive and negative supply signals, such as a supply voltage (Vcc) 434 and a negative supply (Vss) 436 (e.g., a ground potential), from an external source or supply (e.g., an internal or external battery, an AC-to-DC converter, etc.). In certain examples, the memory control unit 430 can include a regulator 428 to internally provide positive or negative supply signals.

[0059] Figure 5 illustrates a block diagram of an example machine 500 upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform. In alternative embodiments, the machine 500 may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine 500 may operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine 500 may act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine 500 may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, an IoT device, an automotive system, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.
Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.

[0060] Examples, as described herein, may include, or may operate by, logic, components, devices, packages, or mechanisms. Circuitry is a collection (e.g., set) of circuits implemented in tangible entities that include hardware (e.g., simple circuits, gates, logic, etc.). Circuitry membership may be flexible over time and underlying hardware variability. Circuitries include members that may, alone or in combination, perform specific tasks when operating. In an example, hardware of the circuitry may be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuitry may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable participating hardware (e.g., the execution units or a loading mechanism) to create members of the circuitry in hardware via the variable connections to carry out portions of the specific tasks when in operation. Accordingly, the computer readable medium is communicatively coupled to the other components of the circuitry when the device is operating. In an example, any of the physical components may be used in more than one member of more than one circuitry. For example, under operation, execution units may be used in a first circuit of a first circuitry at one point in time and reused by a second circuit in the first circuitry, or by a third circuit in a second circuitry, at a different time.

[0061] The machine (e.g., computer system) 500 (e.g., the host device 105, the memory device 110, etc.) may include a hardware processor 502 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof, such as the memory controller 115, etc.), a main memory 504, and a static memory 506, some or all of which may communicate with each other via an interlink (e.g., bus) 508. The machine 500 may further include a display unit 510, an alphanumeric input device 512 (e.g., a keyboard), and a user interface (UI) navigation device 514 (e.g., a mouse). In an example, the display unit 510, input device 512, and UI navigation device 514 may be a touch screen display. The machine 500 may additionally include a storage device (e.g., drive unit) 521, a signal generation device 518 (e.g., a speaker), a network interface device 520, and one or more sensors 516, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine 500 may include an output controller 528, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.)
connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).

[0062] The storage device 521 may include a machine readable medium 522 on which is stored one or more sets of data structures or instructions 524 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 524 may also reside, completely or at least partially, within the main memory 504, within the static memory 506, or within the hardware processor 502 during execution thereof by the machine 500. In an example, one or any combination of the hardware processor 502, the main memory 504, the static memory 506, or the storage device 521 may constitute the machine readable medium 522.

[0063] While the machine readable medium 522 is illustrated as a single medium, the term "machine readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) configured to store the one or more instructions 524.

[0064] The term "machine readable medium" may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 500 and that cause the machine 500 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions. Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media. In an example, a massed machine readable medium comprises a machine-readable medium with a plurality of particles having invariant (e.g., rest) mass. Accordingly, massed machine-readable media are not transitory propagating signals. Specific examples of massed machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.

[0065] The instructions 524 (e.g., software, programs, an operating system (OS), etc.) or other data stored on the storage device 521 can be accessed by the memory 504 for use by the processor 502. The memory 504 (e.g., DRAM) is typically fast, but volatile, and is thus a different type of storage than the storage device 521 (e.g., an SSD), which is suitable for long-term storage, including while in an "off" condition. The instructions 524 or data in use by a user or the machine 500 are typically loaded in the memory 504 for use by the processor 502. When the memory 504 is full, virtual space from the storage device 521 can be allocated to supplement the memory 504; however, because the storage device 521 is typically slower than the memory 504, and write speeds are typically at least twice as slow as read speeds, use of virtual memory can greatly degrade the user experience due to storage device latency (in contrast to the memory 504, e.g., DRAM). Further, use of the storage device 521 for virtual memory can greatly reduce the usable lifespan of the storage device 521.

[0066] In contrast to virtual memory, virtual memory compression (e.g., the Linux® kernel feature "ZRAM") uses part of the memory as compressed block storage to avoid paging to the storage device 521. Paging takes place in the compressed block until it is necessary to write such data to the storage device 521.
Virtual memory compression increases the usable size of the memory 504, while reducing wear on the storage device 521.

[0067] Storage devices optimized for mobile electronic devices, or mobile storage, traditionally include MMC solid-state storage devices (e.g., micro Secure Digital (microSD™) cards, etc.). MMC devices include a number of parallel interfaces (e.g., an 8-bit parallel interface) with a host device, and are often removable and separate components from the host device. In contrast, eMMC™ devices are attached to a circuit board and considered a component of the host device, with read speeds that rival serial ATA™ (Serial AT (Advanced Technology) Attachment, or SATA) based SSD devices. However, demand for mobile device performance continues to increase, such as to fully enable virtual or augmented-reality devices, utilize increasing network speeds, etc. In response to this demand, storage devices have shifted from parallel to serial communication interfaces. Universal Flash Storage (UFS) devices, including controllers and firmware, communicate with a host device using a low-voltage differential signaling (LVDS) serial interface with dedicated read/write paths, further advancing greater read/write speeds.

[0068] The instructions 524 may further be transmitted or received over a communications network 526 using a transmission medium via the network interface device 520 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, the IEEE 802.16 family of standards known as WiMax®, the IEEE 802.15.4 family of standards, and peer-to-peer (P2P) networks), among others. In an example, the network interface device 520 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 526. In an example, the network interface device 520 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine 500, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.

[0069] Figure 6 is a block diagram of features of a system 600 having a memory device 642, one or more trackers 646, 648, and 649, and a calibration controller 644. Memory device 642 is arranged to receive read and write commands to read from and write to memory cells of an array of memory cells of memory device 642. Memory device 642 can be a NAND memory device. Such a NAND memory device may have an array of memory cells, where the array is structured as a three-dimensional array.
[0070] The one or more trackers 646, 648, and 649 can be arranged to monitor parameters including a selected time interval, a number of read operations to read at least a portion of the memory device, and a number of at least one of write operations and erase operations to the at least the portion of the memory device. The one or more trackers can be realized as individual trackers or as trackers structured to monitor more than one parameter. When structured as individual trackers, read tracker 646 can include a counter to count a number of read operations to read at least a portion of the memory device; erase, write tracker 648 can include a counter to count at least one of write operations and erase operations to the at least the portion of the memory device; and time tracker 649 can include a timer to monitor time with respect to an initialization/reset time and a selected time interval. Trackers 646, 648, and 649 can be structured as part of firmware that manages at least some features of memory device 642, or structured as a combination of firmware and counter circuitry.

[0071] Calibration controller 644 can be arranged to trigger a read level calibration of memory device 642 based on inputs from the one or more trackers 646, 648, and 649 and a determination of an occurrence of at least one event from a set of events. The set of events can include a monitored time equal to or exceeding the selected time interval, the number of the read operations equal to or exceeding a predetermined threshold for a number of read operations within the selected time interval, and the number of the at least one of write operations and erase operations equal to or exceeding a threshold for a number of at least one of write operations and erase operations within the selected time interval. Calibration controller 644 can include firmware with stored instructions to determine the occurrence based on the inputs from the one or more trackers and stored criteria for a scan of the memory device that can provide for a read level calibration. The one or more trackers 646, 648, and 649 and calibration controller 644 may be structured with instructions in a common set of firmware.

[0072] Controller logic 647, which communicates with memory device 642 with respect to operating on commands from a host 641, such as memory reads and writes of data, can be arranged with calibration controller 644 to receive commands and/or instructions with respect to system read level calibration and generate read commands to memory device 642 for the system read level calibration. In addition, controller logic 647 can provide read and write commands to memory device 642 from a flash translation layer (FTL) 643 that includes reads and writes in addition to host-defined reads and writes. The term reads can be used to refer to read operations or read commands, the term writes can be used to refer to write operations or write commands, and the term erases can be used to refer to erase operations or erase commands. FTL 643 is firmware that can provide some management tasks for memory device 642. FTL 643 can include instructions and routines for firmware-generated scans of memory device 642, garbage collection, and other management tasks of memory device 642. Tasks such as garbage collection can be conducted according to a number of conventional techniques. Read tracker 646 can be arranged effectively at the output of controller logic 647 to determine the number of read operations sent to memory device 642. Erase/write tracker 648 can be arranged at the output of controller logic 647 to determine the number of at least one of write operations and erase operations conducted by memory device 642. In various embodiments, erase/write tracker 648 may be realized as two trackers, with one for erase operations and one for write operations.
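The tracker and controller arrangement just described lends itself to a small data model. The following C sketch is illustrative only; the structure, field, and function names are assumptions, and the event test simply restates the set of events monitored by calibration controller 644.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical state mirroring trackers 646 (reads), 648 (erases/writes),
     * and 649 (time since the last initialization/reset). */
    struct calib_trackers {
        uint64_t read_count;
        uint64_t erase_write_count;
        uint64_t elapsed_ms;
    };

    /* Hypothetical criteria stored by the calibration controller. */
    struct calib_criteria {
        uint64_t read_threshold;
        uint64_t erase_write_threshold;
        uint64_t tscan_ms; /* the selected time interval */
    };

    /* Returns true if at least one event from the set of events has occurred. */
    static bool calibration_event_occurred(const struct calib_trackers *t,
                                           const struct calib_criteria *c)
    {
        return t->elapsed_ms >= c->tscan_ms ||
               t->read_count >= c->read_threshold ||
               t->erase_write_count >= c->erase_write_threshold;
    }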
[0073] Figure 7 is a representation of a procedure using combined IOPS sampling and time-based sampling of a NAND memory device 742. One or more NAND trackers 746 and 748, time tracker 749, a calibration controller 744, controller logic 747, FTL 743, and a host 741 of Figure 7 can be arranged similar to the one or more trackers 646, 648, and 649, calibration controller 644, controller logic 647, FTL 643, and host 641, respectively, of Figure 6. Reads, writes, and erases can be provided to NAND 742 from host 741 via controller logic 747. Read commands for read operations from controller logic 747 can be counted by a NAND read tracker, which can be a NAND read counter 746. Signals from NAND 742 regarding write operations and/or erase operations can be counted by a NAND erase, write tracker, which can be a NAND erase, write counter 748. In various embodiments, NAND erase, write counter 748 may be realized as two counters, with one for erase operations and one for write operations. A time tracker 749 can be structured to provide the time from a reference time. The reference time can be zero, correlated to a beginning of the procedure.

[0074] The time from the reference time, for instance time zero, can be monitored and compared to a selected time interval. The selected time interval can be the time between scheduled read level calibrations of NAND 742. The time between scheduled read level calibrations can be referred to as tscan. Tscan can be set based on a user usage model. Tscan can be set in calibration controller 744 and can be changed with implementation or modification of a user usage model of NAND 742. At 759, a determination can be made as to whether the current monitored time is greater than tscan. If the current monitored time is not greater than tscan, this status need not be provided to calibration controller 744, and time tracker 749 continues to track the time from the reference time. If the current monitored time is greater than tscan, this status can be provided to calibration controller 744, and calibration controller 744 can trigger a read level calibration of NAND 742. Upon triggering the read level calibration or on completion of the read level calibration, calibration controller 744 can reset the time tracker, which can be a timer, to reference zero, from which time tracker 749 continues to monitor time. In addition to setting time tracker 749 to reference zero, calibration controller 744 can reset NAND read counter 746 to a reference count, which can be a zero count, and reset NAND erase, write counter 748 to another reference count, which also can be a zero count. In an embodiment, with the monitored time of time tracker 749 equal to tscan, calibration controller 744 can operate in the same manner as for the monitored time being greater than tscan. In an alternative embodiment, with the monitored time of time tracker 749 equal to tscan, calibration controller 744 can operate in the same manner as for the monitored time being less than tscan.
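Continuing the sketch above (and assuming its calib_trackers and calib_criteria definitions), the trigger-and-reset behavior just described for time tracker 749, which the following paragraphs apply equally to the two counters, can be expressed as a short routine; trigger_read_level_calibration() stands in for whatever calibration mechanism the firmware invokes and is an assumed hook.

    extern void trigger_read_level_calibration(void); /* assumed firmware hook */

    /* Illustrative service routine: on any triggering event, run the read level
     * calibration and reset all three trackers to their reference values so the
     * next sampling window starts from a common time zero and zero counts. */
    static void service_calibration(struct calib_trackers *t,
                                    const struct calib_criteria *c)
    {
        if (!calibration_event_occurred(t, c))
            return; /* no event yet: trackers keep counting and timing */
        trigger_read_level_calibration();
        t->elapsed_ms = 0;        /* timer back to reference zero */
        t->read_count = 0;        /* read counter back to its reference count */
        t->erase_write_count = 0; /* erase/write counter back to its reference count */
    }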
[0075] Signals from NAND 742 that indicate completion of write operations and/or erase operations can be monitored to count write operations and/or erase operations by NAND erase, write counter 748. The number of erases counted and the number of writes counted can have different granularity. The number of erases can typically be the average erase count of NAND 742. The number of writes can typically be the number of pages written to NAND 742. At 758, a determination can be made as to whether the number of erases counted and/or the number of writes counted is greater than a predetermined threshold for erases and/or writes. If the current count of erases and/or writes is not greater than the predetermined threshold for erases and/or writes, this status need not be provided to calibration controller 744, and NAND erase, write counter 748 continues to track the number of erases and/or the number of writes. If the current count of erases and/or writes is greater than the predetermined threshold for erases and/or writes, this status can be provided to calibration controller 744, and calibration controller 744 can trigger a read level calibration of NAND 742.

[0076] Upon triggering the read level calibration or on completion of the read level calibration, calibration controller 744 can reset NAND erase, write counter 748 to its reference count, which also can be a zero count, from which NAND erase, write counter 748 again starts to count the number of erases and/or the number of writes by NAND 742. In addition to setting NAND erase, write counter 748 to reference count zero, calibration controller 744 can reset time tracker 749 to reference zero and reset NAND read counter 746 to a reference count of zero. If the count of NAND erase, write counter 748 does not reach the predetermined threshold for the number of erases and/or the number of writes by NAND 742 by tscan, then the event of the monitored time by time tracker 749 exceeding tscan will result in the NAND erase, write counter 748 being reset to reference zero. With NAND erase, write counter 748 working in conjunction with time tracker 749, the highest count of the NAND erase, write counter 748 occurs within the selected time, tscan. NAND erase, write counter 748 may be arranged as two counters with two predetermined thresholds. In an embodiment, with the monitored count of NAND erase, write counter 748 equal to the predetermined threshold, calibration controller 744 can operate in the same manner as for the count being greater than the threshold. In an alternative embodiment, with the monitored count of NAND erase, write counter 748 equal to its respective predetermined threshold, calibration controller 744 can operate in the same manner as for the count being less than the threshold.

[0077] Read operations from controller logic 747 to NAND 742 for reading from NAND 742 can be monitored to count read operations by NAND read counter 746. At 768, a determination can be made as to whether the number of read operations counted is greater than a predetermined threshold for read operations for NAND 742. If the current count of reads is not greater than the predetermined threshold for reads from NAND 742, this status need not be provided to calibration controller 744, and NAND read counter 746 continues to track the number of reads. If the current count of reads is greater than the predetermined threshold for reads, this status can be provided to calibration controller 744, and calibration controller 744 can trigger a read level calibration of NAND 742.
[0078] Upon triggering the read level calibration or on completion of the read level calibration, calibration controller 744 can reset NAND read counter 746 to its reference count, which also can be a zero count, from which NAND read counter 746 again starts to count the number of reads to NAND 742. In addition to setting NAND read counter 746 to reference count zero, calibration controller 744 can reset time tracker 749 to reference zero and reset NAND erase, write counter 748 to a reference count of zero. If the count of NAND read counter 746 does not reach the predetermined threshold for the number of reads by NAND 742 by tscan, then the event of the monitored time by time tracker 749 exceeding tscan will result in the NAND read counter 746 resetting to reference zero. With NAND read counter 746 working in conjunction with time tracker 749, the highest count of the NAND read counter 746 occurs within the selected time, tscan. In an embodiment, with the monitored count of NAND read counter 746 equal to its respective predetermined threshold, calibration controller 744 can operate in the same manner as for the count being greater than the predetermined threshold. In an alternative embodiment, with the monitored count of NAND read counter 746 equal to its respective predetermined threshold, calibration controller 744 can operate in the same manner as for the count being less than the threshold.

[0079] In the procedure shown in Figure 7, calibration controller 744 can control read level calibration and management of the reset of the sampling criteria. Once triggered, the read level calibration can be conducted by any of the conventional calibrations of read voltages for a NAND. Firmware of calibration controller 744 can include parameters for tscan and thresholds for the various count mechanisms. A number of different values for tscan and the threshold values may be stored in calibration controller 744, with selection criteria for selection and implementation of particular values. In addition, the comparison of monitored/measured counts and time to their respective thresholds can be conducted in the firmware of calibration controller 744, resulting in the determination of a number of events occurring. Such a set of events can include a monitored time equal to or exceeding the selected time interval, the number of the read operations equal to or exceeding a predetermined threshold for a number of read operations within the selected time interval, and the number of the at least one of write operations and erase operations equal to or exceeding a threshold for a number of at least one of write operations and erase operations within the selected time interval. Occurrence of one event from the set of events can control the triggering of the read level calibration of NAND 742, followed by the reset, by calibration controller 744, of all the trackers 746, 748, and 749.

[0080] Figure 8 illustrates the timing associated with time-based sampling, read-based sampling, and write/erase-based sampling to trigger read level processing. For each of these samplings, the sampling starts at time t0, where time t0 provides a time stamp of the previous sample. In read-based sampling, the target can be high-quality determination of read-intensive workloads. If such activities occur, the update frequency of read level calibration, for example, can be every few hours, at a time tR defined by an IOPS attaining a read criterion, from t0. In write/erase-based sampling, the target can be endurance workloads.
If such activities occur, the update frequency of read level calibration, for example, can be every day, at a time tW defined by an IOPS attaining an erase/write criterion, from t0. In time-based sampling, the target is typical user workloads. If such activities occur, the update frequency of read level calibration, for example, can be every few days, at a time tscan defined by a scheduling that can be based on a user usage model, from t0. In operation, read-based sampling and write/erase-based sampling can be related to targeted benchmarks that are not representative of the typical user workloads. Figure 8, in some regards, may be viewed as a scheduling of read level calibration every time period tscan, with possible interrupts based on IOPS criteria initiating the read level calibration earlier than tscan and restarting the beginning of the tscan period from a new t0.

[0081] Read voltage calibration conducted based on a combination of IOPS sampling and time-based sampling, as taught herein, can achieve better performance and/or latency for targeted benchmarks with the least or zero impact on normal user workloads. In addition, such a combined technique, as taught herein, can help the NAND trigger rates for read level calibration take into consideration targeted benchmarks, by triggering calibrations when such targeted benchmarks stress the NAND before the time for a scheduled calibration. This procedure can help eliminate NAND over-design, which otherwise could have resulted in a NAND endurance or performance penalty.

[0082] Figure 9 is a flow diagram of features of an embodiment of an example method 900 of optimizing a scan of a memory device for read level calibration. At 910, a number of read operations of a memory array of a memory structure is determined. Determining the number of read operations of the memory array can include counting read commands sent to read the memory array. At 920, a number of at least one of write operations and erase operations of the memory array is determined. Determining the number of at least one of write operations and erase operations of the memory array can include counting at least one of write messages and erase messages sent in response to conducting at least one of write operations and erase operations in the memory array. At 930, an occurrence of at least one event from a set of events is determined. The set of events can include a monitored time equal to or exceeding a selected time interval, the determined number of read operations of the memory array equal to or exceeding a threshold for a number of read operations of the memory array within the selected time interval, and the determined number of at least one of write operations and erase operations of the memory array equal to or exceeding a threshold for a number of at least one of write operations and erase operations within the selected time interval. At 940, a read level calibration of the memory array is triggered in response to the determination of the occurrence.

[0083] Variations of method 900 or methods similar to method 900 can include a number of different embodiments that may be combined depending on the application of such methods and/or the architecture of the systems in which such methods are implemented.
Such methods can include conducting a read level calibration in response to the triggering, resetting a timer to begin another wait interval for a read level calibration at the selected time interval from the reset value of the timer, and resetting one or more trackers of a number of read operations and a number of at least one of write operations and erase operations of the memory array from the reset value of the timer and within the selected time interval. Resetting the one or more trackers can include resetting a read counter of a number of read operations and a counter of at least one of write operations and erase operations of the memory array.

[0084] Method 900 or similar methods can include conducting the triggered read level calibration by sampling memory raw bit error rates at different read voltages to select a set of read voltages with a least raw bit error rate. Method 900 or similar methods can include tracking memory cell threshold voltage movement of the array of memory cells under stress conditions.

[0085] Firmware can comprise instructions, such as microcode, which when executed by a controller, can cause performance of operations comprising: determining a number of read operations of a memory array of a memory structure and a number of at least one of write operations and erase operations of the memory array; determining an occurrence of at least one event from a set of events including a monitored time equal to or exceeding a selected time interval, the determined number of read operations of the memory array equal to or exceeding a threshold for a number of read operations of the memory array within the selected time interval, and the determined number of at least one of write operations and erase operations of the memory array equal to or exceeding a threshold for a number of at least one of write operations and erase operations within the selected time interval; and triggering a read level calibration of the memory array in response to the determination of the occurrence. Determining the number of read operations of the memory array can include counting read commands sent to read the memory array. Determining the number of at least one of write operations and erase operations of the memory array can include counting at least one of write messages and erase messages sent in response to conducting at least one of write operations and erase operations in the memory array.

[0086] Instructions of the firmware, which when executed by a controller, can cause performance of operations, which operations can include conducting a read level calibration in response to the triggering, resetting a timer to begin another wait interval for a read level calibration at the selected time interval from the reset value of the timer, and resetting one or more trackers of a number of read operations and a number of at least one of write operations and erase operations of the memory array from the reset value of the timer and within the selected time interval. Resetting the one or more trackers can include resetting a read counter of a number of read operations and a counter of at least one of write operations and erase operations of the memory array.

[0087] Instructions of the firmware, which when executed by a controller, can cause performance of operations, where operations can include conducting the triggered read level calibration by sampling memory raw bit error rates at different read voltages to select a set of read voltages with a least raw bit error rate. In addition, instructions, which when executed by a controller, can cause performance of operations, where operations can include tracking memory cell threshold voltage movement of the array of memory cells under stress conditions.
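One way to picture the triggered calibration itself, per the raw-bit-error-rate sampling just described, is to try each candidate read voltage and keep the one with the lowest sampled rate. In the following C sketch, measure_raw_bit_error_rate() is an assumed device primitive and the millivolt encoding is an illustrative choice, not part of the described method.

    #include <stddef.h>

    extern double measure_raw_bit_error_rate(int read_voltage_mv); /* assumed */

    /* Illustrative read level calibration: returns the candidate read voltage
     * (in mV) whose sampled raw bit error rate is the least.
     * Assumes n >= 1 candidates. */
    static int calibrate_read_voltage(const int *candidates_mv, size_t n)
    {
        int best_mv = candidates_mv[0];
        double best_rber = measure_raw_bit_error_rate(best_mv);
        for (size_t i = 1; i < n; i++) {
            double rber = measure_raw_bit_error_rate(candidates_mv[i]);
            if (rber < best_rber) {
                best_rber = rber;
                best_mv = candidates_mv[i];
            }
        }
        return best_mv;
    }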
[0088] In various embodiments, an apparatus can comprise: a memory device to receive read and write commands to read from and write to memory cells of an array of memory cells of the memory device; one or more trackers to monitor parameters including a selected time interval, a number of read operations to read at least a portion of the memory device, and a number of at least one of write operations and erase operations to the at least the portion of the memory device; and a calibration controller to trigger a read level calibration based on inputs from the one or more trackers and a determination of an occurrence of at least one event from a set of events including a monitored time equal to or exceeding the selected time interval, the number of the read operations equal to or exceeding a predetermined threshold for a number of read operations within the selected time interval, and the number of the at least one of write operations and erase operations equal to or exceeding a threshold for a number of at least one of write operations and erase operations within the selected time interval.

[0089] The calibration controller can include firmware with stored instructions to determine the occurrence based on the inputs from the one or more trackers. The calibration controller can be operable to track memory cell threshold voltage movement of the memory cells under stress conditions. In such an apparatus, the triggered read level calibration can include a sampling of memory raw bit error rates at different read voltages to select a set of read voltages associated with a least raw bit error rate. In such an apparatus, the array of memory cells of the memory device can be structured in a three-dimensional NAND configuration.

[0090] In such an apparatus, the one or more trackers can include a read counter to count read commands sent to the memory device. The one or more trackers can include at least one counter to count write and erase messages sent from the memory device in response to conducting at least one of a write operation and an erase operation in the memory array. The one or more trackers can include a timer that is resettable to a reset value by the calibration controller to begin another wait interval for a read level calibration at the selected time interval from the reset value, and the calibration controller can be operable to reset the one or more trackers to track read operations and track at least one of write operations and erase operations from the reset value of the timer.
[0091] In various embodiments, a system can comprise: a host processor; a controller coupled to communicate with the host processor; a set of memory devices coupled to the controller, the set of memory devices including a NAND memory device having an array of memory cells to which read and write commands are received from the controller to read from and write to memory cells of the NAND memory device; a set of trackers to monitor time, to track read operations to the memory device, and to track write and/or erase operations communicated from the NAND memory device; and a calibration controller to trigger read level calibration based on inputs from the set of trackers and a determination of an occurrence of at least one event from a set of events including the monitored time exceeding a selected time interval, a number of the read operations equal to or exceeding a predetermined threshold for a number of read operations within the selected time interval, and a number of the write and/or erase operations equal to or exceeding a threshold for a number of write and/or erase operations within the selected time interval. The system can include a flash translation layer that generates read and write operations to the NAND memory device via the controller to manage garbage collection of the array of memory cells of the NAND memory device.

[0092] The calibration controller can include firmware with stored instructions to determine the occurrence based on the inputs from the set of trackers. The tracker to track read operations to the NAND memory device can include a read counter to count read commands sent from the controller to the NAND memory device, and the tracker to track write and/or erase operations communicated from the NAND memory device can include a write and/or erase counter to count write and/or erase messages sent by the NAND memory device in response to conducting write and/or erase operations in the memory array. The tracker to monitor time can include a timer that is resettable to a reset value by the calibration controller to begin another wait interval for a read level calibration at the selected time interval from the reset value, and the calibration controller is operable to reset the tracker to track read operations and the tracker to track write and/or erase operations from the reset value of the timer and within the selected time interval.

[0093] The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as "examples". Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.

[0094] In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more."
In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" may include "A but not B," "B but not A," and "A and B," unless otherwise indicated. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein". Also, in the following claims, the terms "including" and "comprising" are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.

[0095] In various examples, the components, controllers, processors, units, engines, or tables described herein can include, among other things, physical circuitry or firmware stored on a physical device. As used herein, "processor" means any type of computational circuit such as, but not limited to, a microprocessor, a microcontroller, a graphics processor, a digital signal processor (DSP), or any other type of processor or processing circuit, including a group of processors or multi-core devices.

[0096] Operating a memory cell, as used herein, includes reading from, writing to, or erasing the memory cell. The operation of placing a memory cell in an intended state is referred to herein as "programming," and can include both writing to and erasing from the memory cell (e.g., the memory cell may be programmed to an erased state).

[0097] According to one or more embodiments, a memory controller (e.g., a processor, controller, firmware, etc.) located internal or external to a memory device is capable of determining (e.g., selecting, setting, adjusting, computing, changing, clearing, communicating, adapting, deriving, defining, utilizing, modifying, applying, etc.) a quantity of wear cycles, or a wear state (e.g., recording wear cycles, counting operations of the memory device as they occur, tracking the operations of the memory device it initiates, evaluating the memory device characteristics corresponding to a wear state, etc.).

[0098] According to one or more embodiments, a memory access device may be configured to provide wear cycle information to the memory device with each memory operation. The memory device control circuitry (e.g., control logic) may be programmed to compensate for memory device performance changes corresponding to the wear cycle information. The memory device may receive the wear cycle information and determine one or more operating parameters (e.g., a value, characteristic) in response to the wear cycle information.

[0099] It will be understood that when an element is referred to as being "on," "connected to," or "coupled with" another element, it can be directly on, connected, or coupled with the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly on," "directly connected to," or "directly coupled with" another element, there are no intervening elements or layers present. If two elements are shown in the drawings with a line connecting them, the two elements can either be coupled or directly coupled, unless otherwise indicated.

[00100] Method examples described herein can be machine or computer-implemented at least in part.
Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, the code can be tangibly stored on one or more volatile or non-volatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media can include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact discs and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), solid state drives (SSDs), Universal Flash Storage (UFS) devices, embedded MMC (eMMC) devices, and the like.

[00101] The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon studying the above description. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations. |
Methods and apparatus relating to techniques for increasing per core memory bandwidth by using forget store operations are described. In an embodiment, a cache stores a buffer. Execution circuitry executes an instruction. The instruction causes one or more cachelines in the cache to be marked based on a start address for the buffer and a size of the buffer. A marked cacheline in the cache is to be prevented from being written back to memory. Other embodiments are also disclosed and claimed. |
1. An apparatus comprising: a cache to store a buffer; and execution circuitry to execute an instruction, the instruction to cause one or more cachelines in the cache to be marked based on a start address for the buffer and a size of the buffer, wherein a marked cacheline in the cache is to be prevented from being written back to memory.
2. The apparatus of claim 1, wherein marking of the one or more cachelines comprises invalidating the one or more cachelines.
3. The apparatus of claim 1, wherein marking of the one or more cachelines comprises modifying a state of the one or more cachelines to an Exclusive state.
4. The apparatus of claim 1, wherein marking of the one or more cachelines comprises indicating the one or more cachelines as victim candidates to allow early eviction of the one or more cachelines.
5. The apparatus of claim 1, wherein the instruction is to cause a look up of the one or more cachelines in the cache based on a mask.
6. The apparatus of claim 1, wherein an accelerator is to utilize the buffer as a scratchpad, wherein the instruction is to cause the accelerator to reclaim the scratchpad.
7. The apparatus of claim 1, wherein the cache is a Level 2 (L2) cache, wherein the instruction is to cause a look up of the one or more cachelines in the L2 cache, and upon a miss in the L2 cache, no further operations associated with the instruction are to be performed.
8. The apparatus of claim 1, further comprising decode circuitry to decode the instruction into a plurality of store operations, wherein each of the plurality of store operations is to invalidate a corresponding cacheline in the cache.
9. The apparatus of claim 1, wherein the memory comprises a main memory or a dynamic random access memory.
10. The apparatus of claim 1, wherein the cache comprises one or more of a level 1 cache, a level 2 cache, and a last level cache.
11. The apparatus of claim 1, wherein a processor core comprises the execution circuitry and the cache.
12. The apparatus of claim 11, wherein the processor core comprises a Graphics Processing Unit (GPU) core.
13. An apparatus comprising means to perform an operation as set forth in any preceding claim.
14. Machine-readable storage including machine-readable instructions, when executed, to implement an operation or realize an apparatus as set forth in any preceding claim. |
FIELD

The present disclosure generally relates to the field of electronics. More particularly, some embodiments relate to techniques for increasing per core memory bandwidth by using forget store operations.

BACKGROUND

Generally, Dynamic Random Access Memory (DRAM) and/or interconnect bandwidth limitations can be a major performance bottleneck for present Central Processing Unit (CPU) cores. These bandwidth limitations cause delays in data transfer to and from CPU cores. Hence, if DRAM and/or interconnect bandwidth limitations are reduced or eliminated, CPU performance can be greatly increased.

BRIEF DESCRIPTION OF THE DRAWINGS

So that the manner in which the herein recited features of the present embodiments can be understood in detail, a more particular description of the embodiments may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments and are therefore not to be considered limiting of their scope.

Fig. 1 illustrates a block diagram of a processor with a private cache level and a shared last-level cache, which may be utilized in some embodiments.
Fig. 2 illustrates sample operands for a decompression instruction, according to an embodiment.
Fig. 3A illustrates a block diagram of a processor with a private cache level and a shared last-level cache, according to an embodiment.
Fig. 3B illustrates sample operands for a decompression instruction, according to an embodiment.
Fig. 3C illustrates two sample decoded operations for a decompression instruction, according to an embodiment.
Fig. 4 illustrates a high level diagram of various components of a processor core, according to an embodiment.
Fig. 5 illustrates a flow diagram of a method to provide decompression closer to a processor core, according to an embodiment.
Fig. 6 shows sample evaluation results, according to an embodiment.
FIG. 7A is a block diagram illustrating an exemplary instruction format according to embodiments.
FIG. 7B is a block diagram illustrating the fields of the instruction format that make up the full opcode field according to one embodiment.
FIG. 7C is a block diagram illustrating the fields of the instruction format that make up the register index field according to one embodiment.
FIG. 7D is a block diagram illustrating the fields of the instruction format that make up the augmentation operation field according to one embodiment.
FIG. 8 is a block diagram of a register architecture according to one embodiment.
FIG. 9A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments.
FIG. 9B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments.
FIG. 10 illustrates a block diagram of an SOC (System On Chip) package in accordance with an embodiment.
FIG. 11 is a block diagram of a processing system, according to an embodiment.
FIG. 12 is a block diagram of an embodiment of a processor having one or more processor cores, according to some embodiments.
FIG. 13 is a block diagram of a graphics processor, according to an embodiment.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments.
However, various embodiments may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments. Further, various aspects of embodiments may be performed using various means, such as integrated semiconductor circuits ("hardware"), computer-readable instructions organized into one or more programs ("software"), or some combination of hardware and software. For the purposes of this disclosure, reference to "logic" shall mean either hardware, software, firmware, or some combination thereof.

As mentioned above, performance of present processor or Central Processing Unit (CPU) cores is significantly limited by Dynamic Random Access Memory (DRAM) and/or interconnect bandwidth. To scale up DRAM or interconnect bandwidth, one effective way is to reduce the amount of data transferred to and from the cores using compression and decompression. However, large compression/decompression latency can limit the efficacy of such a solution, even when using accelerators to speed up the compression/decompression.

Moreover, in some implementations, software allocates buffers for accelerators to write results. The simplest method for any software is to provide an arbitrary buffer for this. Highly tuned software may use a small pool of temporary buffers and carefully manage them, reusing them as much as possible to reduce the overall memory usage footprint. Even with carefully re-used temporary buffers, some data will need to be written back to memory (e.g., to the main memory or DRAM), wasting precious memory bandwidth. This problem is exacerbated for simple software that tends to unsystematically use any buffer. Such software will have a larger address footprint, and since there is low re-use, almost all data must be written back to memory.

Moreover, in current implementations, such temporary buffers are written back to memory when evicted from the cache. If the buffer is small and a re-writing happens soon, it can be merged into the core caches. However, accelerators tend to create large buffer outputs, and it will be difficult to merge them into the core caches, e.g., since the caches will tend to evict these buffers soon, before they can observe a reuse. Unnecessarily writing back large temporary buffers results in increased traffic on memory channels and interconnects. That causes performance and power issues. Hence, increasing per core memory bandwidth can address these problems.

To this end, some embodiments provide techniques for increasing per core memory bandwidth by using forget store operations. An embodiment utilizes a special instruction which, when executed, forces the invalidation of the local buffer (also sometimes referred to herein as a scratchpad). This special instruction may mark the end of a given buffer. A processor core can then simply invalidate this buffer from its cache hierarchies, without writing its data back to memory (e.g., to the main memory or DRAM, such as memory 1120 of Fig. 11 and/or memory 1060 of Fig. 10). In at least one embodiment, the utilized instruction(s) follow the EVEX format (such as discussed with reference to Figs. 7A-7C). However, embodiments are not limited to the EVEX format, and any instruction format may be used to implement various embodiments.

Generally, large temporary buffers can be allocated very often in function bodies and need to be written back to memory. This causes wasteful write operations, wasting bandwidth as well as power.
One or more embodiments can simply invalidate these buffers from the cache hierarchy and avoid the need for writing the buffered data back to memory. This is especially useful in tightly coupled accelerator architectures where large buffers are allocated and need to be thrown away.

For example, considering the example of a decompression accelerator in the core (see, e.g., the discussion of Figs. 1-5), a decompression may operate as follows:

void *b = malloc(size_of_decompression);  // pointer to decompressed data
Decompress(x, b);   // decompress x and store it in b
Output = f(b);      // use some entries of b
Decompress(y, b);   // decompress y and store it in same buffer b
Output2 = f1(b);    // something else

Typical execution will perform stores for buffer b when it is evicted from the cache hierarchy. However, the same buffer b is going to be reused by the program later. So, there is no need to write the first copy of buffer b to memory. Avoiding this write helps performance as well as power. In an embodiment, the cores could smartly drop b as soon as it is known that b is not needed anymore, or that it is going to be completely overwritten. In at least one embodiment, a new instruction and/or supporting micro-architecture can achieve this.

Fig. 1 illustrates multiple versions of a Forget instruction, according to some embodiments. Fig. 2 illustrates a flow diagram of a method 200 to implement a Forget instruction, according to an embodiment.

Considering an example software that utilizes a decompression accelerator (e.g., such as discussed with reference to Figs. 3A et seq.), the accelerator (e.g., decompression engine 301 of Fig. 3A and/or 4) decompresses data and stores it in a temporary buffer (e.g., in a cache or other memory, including those discussed with reference to Figs. 3A and/or 4). Once this temporary buffer is used up, there is no reason to write it back to the main memory or cache hierarchy. For this, an indication may be received from the user that the buffer can now be invalidated in the core caches.

Referring to Fig. 1(A), the ISA can be in the form: Forget (start-address, size), where start-address is the beginning of the buffer and size is the number of bytes in the buffer. In an alternative embodiment, such as shown in Fig. 1(B), a mask may also be supplied to the Forget instruction: Forget (start-address, size, mask). Only cachelines with a set mask bit (or a cleared mask bit, depending on the implementation) are invalidated.

In an embodiment, the sample software/pseudo-code may be as follows:
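For illustration purposes only, one possible form of such pseudo-code is sketched below; the buffer and size names mirror the decompression example above, and the Forget call uses the form discussed with reference to Fig. 1(A):

void *b = malloc(size_of_decompression);
Decompress(x, b);                   // decompress x and store it in b
Output = f(b);                      // use some entries of b
Forget(b, size_of_decompression);   // buffer contents may now be treated as undefined
Decompress(y, b);                   // re-use buffer b for another offload job that writes to it
Output2 = f1(b);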
In the above example, the Forget instruction will cause invalidation of the buffer b, since the software guarantees that the buffer contents are expected to be undefined and therefore it will not read/use the contents before writing them again. However, the software expects to re-use the buffer with another offload job that will write to it (fully or partially).

Micro-Architecture

Referring to Fig. 2, operation 202 (e.g., the front end of a processor core such as 930 of Fig. 9B) determines whether a Forget instruction is observed. At operation 204, the Forget instruction will cause a look up of the core caches (e.g., Level 1 (L1) cache, Level 2 (L2) cache, or Last Level Cache (LLC)) for all the cachelines that are part of the mask (such as discussed with reference to Fig. 1(B)), or all cachelines that form the buffer based on the start-address and size as discussed with reference to Fig. 1(A). At operation 206, if any hit is observed in L1, L2, or LLC, the corresponding cacheline(s) are simply invalidated at operation 208. Otherwise, operation 210 drops any further operations with regards to the Forget instruction.

In an embodiment, instead of invalidation, the cacheline may alternatively be converted to an E (Exclusive, or clean) state, indicating it does not need to be written back to memory. The advantage of keeping the line in the E state (instead of the I or invalid state) is that any future re-allocation of the buffer will not need to perform costly Read-for-Ownership operations before performing a store operation to the cacheline. This can result in faster re-allocation of this buffer. The flip side is that it will occupy core cache space.

In some embodiments, to reduce cache pollution, the replacement policy of the core caches may be modified to make such "Forget" cachelines victim candidates (i.e., Least Recently Used (LRU)). In case cache pressure is seen (i.e., the cache is becoming too full), these Forget lines may be evicted. Since they are clean, they can be safely dropped to reduce pressure on interconnects/LLC, etc. L2 and LLC replacement policies may also be made aware of the presence of such cachelines. Furthermore, the Forget instruction can be extended to the LLC, and its effects may be similar to what happens in the core caches L1 and L2.

Alternatively, if the Forget instruction is running in conjunction with an accelerator (such as the decompression engine 301 of Figs. 3A and/or 4), then this instruction can be a hint to the accelerator to reclaim the scratchpad/buffer (such as scratchpad 424 of Fig. 4) it had allocated. This is because the accelerator already knows the buffer in the cache, so it is easy for it to invalidate it.

In an embodiment, the Forget instruction may be just a hint. In case a cacheline has already been evicted from the core caches, Forget will not check for it in the LLC. For example, if the buffer is stored in an L2 cache, an L2 miss will simply drop any further operations for the Forget instruction. Furthermore, the Forget instruction may be broken down into multiple store operations, each with the property of invalidating certain (e.g., 64B) cachelines.

Additional Details regarding Forget ISA

The Forget instruction in x86 architecture assembly may be implemented as: Forget RAX, RBX. The first register (RAX) holds the starting address of the buffer. The second register (RBX) holds the mask of cachelines to be invalidated. In another variation of the instruction, the size of the buffer to be invalidated can be supplied, e.g., in a separate register. In an embodiment, the Forget instruction will not modify EFLAGS. In x86 architecture, EFLAGS generally refers to a (e.g., 32 bit) register used to store Boolean values which are results of operations and/or the state of a processor.
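For illustration purposes only, and not by way of limitation, the following C sketch models the architectural effect of the masked Forget variant on the core caches, combining the invalidation flow of Fig. 2 with the optional E-state conversion discussed above. The helper names (cache_lookup, cache_invalidate, cache_set_exclusive) and the line_t type are hypothetical placeholders for the actual cache-control logic:

#include <stdint.h>
#include <stddef.h>

#define CL_SIZE 64   /* 64B cachelines, as discussed above */

typedef struct line line_t;                  /* hypothetical cacheline handle      */
extern line_t *cache_lookup(uintptr_t a);    /* hypothetical: L1/L2/LLC lookup     */
extern void cache_invalidate(line_t *l);     /* hypothetical: move the line to I   */
extern void cache_set_exclusive(line_t *l);  /* hypothetical: move the line to E   */

/* Software model of Forget(start-address, size, mask); the 64-bit mask covers
 * a buffer of up to 64 cachelines, one bit per cacheline. */
void forget_model(uintptr_t start, size_t size, uint64_t mask, int keep_exclusive)
{
    size_t num_lines = (size + CL_SIZE - 1) / CL_SIZE;
    for (size_t i = 0; i < num_lines && i < 64; i++) {
        if (!(mask & (1ULL << i)))       /* only cachelines with a set mask bit */
            continue;
        line_t *l = cache_lookup(start + i * CL_SIZE);
        if (l == NULL)                   /* miss: Forget is a hint, so drop it  */
            continue;
        if (keep_exclusive)
            cache_set_exclusive(l);      /* clean E state: no writeback, faster re-allocation */
        else
            cache_invalidate(l);         /* invalidate without writing data back to memory    */
    }
}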
Another embodiment provides an Application Programming Interface (API) for fine grained, low latency decompression within a processor core. In an embodiment, a decompression API receives an input handle to a data object. The data object includes compressed data and metadata. In an embodiment, the metadata remains uncompressed to assist in identifying the compressed portion for decompression. Decompression Engine (DE) circuitry decompresses the compressed data to generate uncompressed data. The DE circuitry decompresses the compressed data in response to invocation of a decompression instruction (or DISA) by the decompression API.

Further, at least one embodiment provides an instruction and/or micro-architecture support for decompression on a processor or CPU core. This ISA extension may also be referred to as Decompression ISA or "DISA" herein. One or more embodiments provide a hardware-software synergistic solution for low latency decompression at the hardware level, as part of an end-to-end compression/decompression solution.

Various embodiments provide bandwidth benefits to and from a processor core, thereby reducing pressure not just on DRAM but also on the interconnect coupled between various components of a processor, including one or more cores, memory, etc. An embodiment includes a decompression accelerator and architecturally enables the decompression accelerator in a processor core's Level 2 (L2) cache (which may also be interchangeably referred to as mid-level cache or "MLC"). At least one embodiment signals a processor core after decompressing every cacheline (e.g., 64B, where "B" refers to Byte or Bytes) of data. This is in contrast with some approaches which may wait for a complete chunk of data (such as a page or 4KB) to be decompressed before allowing consumption of or access to the decompressed data, thereby entailing a significant latency penalty.

Moreover, one or more embodiments also allow speculative decompression of data, permitting the large depth of out-of-order cores to absorb the expensive decompression latency, e.g., by allowing out-of-order decompression of data instead of sequential decompression. Generally, to improve performance, some processors utilize speculative processing (such processors are sometimes referred to as Out-Of-Order (OOO) processors), which executes portions of a program in parallel and sooner than a strictly sequential execution would. The speculation may or may not end up being correct. When it is correct, a program will execute in less time than when non-speculative processing is employed, thereby improving performance and reducing latency. Furthermore, a new instruction in the ISA is utilized to enable a core to communicate with the decompression accelerator in an embodiment.

By contrast, some implementations may rely on compression/decompression performed completely in software to enhance effective memory capacity. However, software-only decompression is slow and costly. Additionally, a hardware accelerator may be used for compression and decompression of data stored far away from a core. However, such hardware decompression accelerators operate on large chunks of data. This is generally done to amortize the large latency of communication, given the large distance that these accelerators sit away from the cores. Such coarse-granular decompression is not very useful or efficient for applications that need to work with many smaller objects.

On the other hand, the decompression accelerator proposed as part of this disclosure is located at or near the L2 cache of the cores (i.e., closer to the cores). This enables fast communication between the core and the accelerator through a dedicated instruction. The accelerator can thus be designed to signal/inform the core on completion of decompression of every cacheline (e.g., every 64B) without having to wait for the rest of the chunk to be decompressed. The accelerator may signal the core by using a dedicated signal, a dedicated bus for signaling, a packet with completion information, or changing a status bit in a designated register or location in cache (such as L1 cache or L2 cache), etc. This signaling may also convey the address/location of the decompressed cacheline in the (e.g., L2) cache or may include the decompressed data.
As a result, the core can make forward progress (enabling ILP) while decompression proceeds for subsequent cachelines of the larger block to be decompressed. Moreover, the decompression accelerator may be invoked speculatively. By invoking the accelerator speculatively, the latency of decompression can be hidden by the deep out-of-order windows of processors. These two approaches enable one or more embodiments to outperform the current state-of-the-art, allowing fine grained decompression very close to the core(s).

Further, some embodiments may be applied in computing systems that include one or more processors (e.g., where the one or more processors may include one or more processor cores), such as those discussed with reference to Figs. 1 et seq., including for example a desktop computer, a work station, a computer server, a server blade, or a mobile computing device. The mobile computing device may include a smartphone, tablet, UMPC (Ultra-Mobile Personal Computer), laptop computer, Ultrabook™ computing device, wearable devices (such as a smart watch, smart ring, smart bracelet, or smart glasses), etc.

For example, Fig. 3A illustrates a block diagram of a processor 300 with private cache levels and a shared last-level cache, which may be utilized in some embodiments. As shown, each core has a number of private cache levels (e.g., including L1 and Level 2 (L2) cache levels (the L2 cache may sometimes be referred to as Mid-Level Cache (MLC))) which may be kept coherent, and a shared last-level cache (LLC), which may be distributed amongst a plurality of cores. The on-chip network may facilitate communication amongst the cores, L2 caches, and/or distributed LLC. A cache may sometimes be designated with a dollar sign ($), such as shown in Fig. 3A. Generally, cache coherence is managed at cache block or cacheline granularity. Also, as shown in Fig. 3A, a core may include the L1 cache, whereas the L2 cache may straddle the boundary and be implemented as part of the core or outside the core (as indicated by the dashed boxes indicating the optional placement of the L2 cache). The LLC is located outside the core as shown in Fig. 3A and shared amongst a plurality of processor cores.

A decompression logic circuitry/engine 301 may be provided in various locations in the processor 300. In at least one embodiment, decompression logic 301 may be in a core, e.g., adjacent or near an L2 cache. However, embodiments are not limited to this, and decompression logic 301 may instead be outside the core, e.g., coupled to the on-chip network, the distributed LLC, or between the core and the on-chip network/distributed LLC.

In an embodiment, the x86 ISA (provided by Intel® Corporation) is expanded to include a special hardware-decompression instruction. When a programmer wants to read or make use of the actual data which has been compressed, decompression is performed. Decompression is triggered when the processor executes the decompression instruction (DISA), which brings the compressed data into the core's cache(s) and decompresses it using a hardware accelerator (i.e., decompression logic 301) that stores the decompressed data in the L2 cache (or another cache like the LLC, depending on the implementation and/or data size). The decompression operation is performed speculatively in an embodiment, and the processor/core continues to execute instructions that are not dependent on the decompressed data, thereby hiding the latency of decompression.
Once the data requested by the core (e.g., one or more cachelines) is available in a decompressed state, the core can proceed and not wait for the complete decompression operation of all cachelines or chunk(s) of data to finish.

Hence, some embodiments allow processors to scale up the bandwidth of DRAM and/or interconnect, agnostic of the DRAM and/or interconnect technology or vendor used, and provide overall performance gains for processors by alleviating memory and/or interconnect bottlenecks. By effectively increasing DRAM and/or interconnect bandwidth (e.g., due to compression) along with the proposed low latency decompression, customers can reduce the data center TCO (Total Cost of Ownership) for memory. Simulation results show up to 2X performance on bandwidth sensitive kernels that are representative of data center use case scenarios, for instance. To this end, some embodiments address the memory bottlenecks limiting processor performance. Using data-compression in a hardware-software synergistic manner, applications can achieve an effectively lower memory bandwidth requirement.

To compress the data, applications identify their target data-structures and compress them using available compression algorithms such as DEFLATE, as illustrated in the sketch below. As discussed herein, "Deflate" or "DEFLATE" generally refers to a lossless data compression file format used in conjunction with compression coding, e.g., in accordance with Lempel-Ziv-Storer-Szymanski (LZSS) and/or Huffman coding. Compression may be done in software or with an offload engine. Generally, to minimize memory bandwidth requirements, compression will be performed on frequently used data. Reading frequently used data in a compressed form will effectively reduce memory bandwidth requirements. However, decompression latency will now critically affect performance. Moreover, server products are also limited by the interconnect bandwidth, so it is important to transfer compressed data over the interconnect and perform low latency decompression as close to the core as possible in accordance with some embodiments.

To achieve this, one embodiment uses a dedicated hardware accelerator (e.g., decompression logic 301) close to the core and performs low latency decompression. Using micro-architecture support in the pipeline of the core and a dedicated decompression engine at the L2 cache, DISA enables fine-grained, cacheline level access to the data being decompressed.

As discussed below, there are three independent flows in DISA as follows:

1. ISA Extension - This section explains the instruction semantics of the hardware-decompression engine and the ISA extensions to access it in software.

2. Software Support - This section illustrates how the compression and decompression steps are included in the application code (e.g., user space).

3. Hardware/Micro-architecture Support - This section explains how the core pipeline is modified to handle the decompression and how the final decompressed data is delivered to the consumer-instructions in the user program space. This section also explains the micro-architecture flows that enable decompression using additional hardware in the form of a decompression engine that implements the decompression function associated with the compression algorithm.

One aim of compression in this scenario is to provide bandwidth savings all the way to the core. This saves not just memory bandwidth but also network/interface bandwidth.
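By way of example and not limitation, software compression of a target data structure using a DEFLATE implementation may be performed with the zlib library as follows; the buffer names and sizes are illustrative only:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>   /* zlib provides a DEFLATE-based lossless compressor */

int main(void)
{
    unsigned char src[4096];                      /* a target data structure (e.g., one 4KB page) */
    memset(src, 0x5a, sizeof(src));               /* stand-in for application data                */

    uLongf dst_len = compressBound(sizeof(src));  /* worst-case size of the compressed output     */
    unsigned char *dst = malloc(dst_len);
    if (dst == NULL || compress(dst, &dst_len, src, sizeof(src)) != Z_OK)
        return 1;

    /* dst/dst_len would then be held in an object such as COMPRESSED_DATA (discussed
     * below) so the compressed form can later be transferred and decompressed. */
    printf("compressed %zu bytes to %lu bytes\n", sizeof(src), (unsigned long)dst_len);
    free(dst);
    return 0;
}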
Decompression ISA Extension

Fig. 3B illustrates sample operands for a decompression instruction, according to an embodiment. Fig. 3C illustrates two sample decoded operations for a decompression instruction, according to an embodiment. A special hardware-decompression instruction is used, which is referred to herein as DISA for Decompression ISA henceforth. Its semantics are described as follows:

A. It can have at least four fields: 1. source (compressed) data location 302 (e.g., a virtual memory address); 2. source-data size 304 (the software API (Application Programming Interface) providing compression can also provide the compressed-output size); 3. destination (decompressed) data location 306 (e.g., a virtual memory address); and 4. destination data size 308 (stored in any of the available logical temporary registers).

B. It may have other variations, like using a consumer bitmap. For example, when considering compression at a page-level granularity, then for a memory page of 4KB and a cacheline size of 64 bytes in the core, this bitmap can be 64 bits long, signaling/indicating which cacheline indices are of interest for the consumer instructions/code after decompression completes (an illustrative computation of such a bitmap is sketched below). Alternatively, or in addition to the bitmap, a bit mask may be used to select cachelines for decompression and/or access. This approach could potentially improve cache space management, minimize evictions, etc.

C. After fetching and decoding by the processor's front-end (e.g., front end 930 of Fig. 9B), DISA is split into two fused micro-operations (uops) as shown in Fig. 3C. The first fused-uop 310 is a load operation which dispatches one or more loads for all the cachelines containing the compressed data to be decompressed. These load(s) access memory and fetch the required cachelines from the DRAM or main memory (such as memory 1060 of Fig. 10) to the core caches. The second fused-uop 312 is a store operation which may function as a macro-store that signals the decompression engine 301 to start and perform decompression. Unlike traditional stores, which directly go to the memory/DRAM to bring to-be-written cachelines into the core's caches, some embodiments use a macro-store "DISA store" which causes the decompression engine 301 to produce the uncompressed data into the core's cachelines, e.g., as further discussed with reference to Fig. 4. As shown in Fig. 3C, the load uop 310 may receive operands 302 and 304, while the store uop 312 may receive operands 306 and 308.

Additionally, in at least one embodiment, for ease of implementation, a four-operand instruction may be broken down into two or more instructions, as needed.

DISA Software Support

Initially, it is decided (e.g., by a programmer, designer, and/or user) which data objects are large enough to benefit substantially from compression. Compressibility, or the achievable compression ratio, may also be a factor that goes into this decision. The liveliness of the decompressed data, i.e., whether it needs to be decompressed and stored globally or is a temporary variable that is alive only as long as the function containing this data is alive on the program stack, may be another design choice. An embodiment is orthogonal and agnostic to these choices, simply operating on a target for decompression while following the previously described architectural semantics. Also, one or more APIs discussed herein may be stored in any type of storage device, such as those discussed with reference to Figs. 1 et seq., including for example a cache (such as an L1 cache, an L2 cache (like L2 426 of Fig. 4), an LLC, etc.).
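For illustration, the consumer bitmap of item B above may be computed in software roughly as follows; the function name is a hypothetical placeholder, and a 4KB page with 64-byte cachelines (64 bits, one per cacheline) is assumed:

#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096
#define CL_SIZE   64   /* 4096 / 64 = 64 cachelines, one bit each in a 64-bit bitmap */

/* Set the bits for every cacheline touched by the byte range [offset, offset + len)
 * within the page, so only those cachelines are of interest after decompression.  */
uint64_t consumer_bitmap(uint64_t bitmap, size_t offset, size_t len)
{
    if (len == 0 || offset >= PAGE_SIZE)
        return bitmap;
    size_t first = offset / CL_SIZE;
    size_t last  = (offset + len - 1) / CL_SIZE;
    for (size_t i = first; i <= last && i < PAGE_SIZE / CL_SIZE; i++)
        bitmap |= 1ULL << i;
    return bitmap;
}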
One embodiment provides a public API imported into the program code as a static library. It consists of a compression function which implements one or more compression algorithms (like DEFLATE) and, given bytes of data of any size, compresses them into a programmer-accessible object which contains the compressed data as well as other metadata used for later decompression. An example, which can serve as a template for the design of such an API and the corresponding invocation in the application of interest, is as follows:

// example of a large array of objects
my_struct *user_data_array = (my_struct *) malloc(sizeof(my_struct) * 1024 * 1024);
initialize_data(user_data_array);
// example of an API (system) call to compress data in software
COMPRESSED_DATA *compressed_data = compress_user_data(user_data_array, 1024 * 1024, sizeof(my_struct));

Here, "my_struct" is the type of the custom data-structure in the example application which is the target of compression, and "COMPRESSED_DATA" is a defined compressed-data structure/format that is recognized and used for decompression later on.

One embodiment provides handle(s) (sometimes referred to as pointer(s)) to the actual compressed data and information regarding how much of the compressed data needs to be decompressed, which the defined data-structure will hold. As discussed above, a bitmap and/or a bit mask may be used to select specific cachelines for decompression and/or access.

Moreover, when accessing the original data belonging to user_data_array later on in the program (e.g., in a read-only manner), another API function may be used as follows:

(my_struct *) decompress_user_data(COMPRESSED_DATA *);
// the decompress ISA or DISA will be called within this function (and potentially
// cached for serving multiple "structs" read from the same 4KB page)

This API decompression-function corresponds to the hardware counterpart of DISA. It takes as input a handle to the compressed-data object that was obtained before using the software API function. But instead of decompressing it in software like some software-based decompression routines, the DISA hardware-decompression instruction may be used. Since the compressed-data object also contains metadata related to compression, it will supply the three main arguments needed by the special ISA extended instruction, i.e., the virtual address location of the compressed data, the original (uncompressed) size of the data required, and the compressed data size obtained after compression. The final argument (parameter), which is the virtual address location of the decompressed data, can be created either explicitly and locally by the programmer or by the API function definition in the program space at the point of calling this function.
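While the exact layout of the compressed-data object is a design choice, the following sketch shows one possible form of the COMPRESSED_DATA structure and of a decompress_user_data() wrapper that supplies the DISA operands; the field names and the disa_decompress() intrinsic are hypothetical placeholders:

#include <stddef.h>

/* Hypothetical layout: a handle to the compressed bytes plus the metadata
 * needed later to supply the three main DISA arguments.                  */
typedef struct {
    void  *compressed_ptr;    /* virtual address of the compressed data   */
    size_t compressed_size;   /* compressed data size after compression   */
    size_t original_size;     /* original (uncompressed) size of the data */
} COMPRESSED_DATA;

/* Hypothetical intrinsic standing in for the DISA hardware-decompression
 * instruction (source location/size, destination location/size).        */
extern void disa_decompress(const void *src, size_t src_size, void *dst, size_t dst_size);

/* Sketch of the API decompression-function: dst may be created by the
 * programmer or by the API definition at the point of the call.       */
void *decompress_user_data(const COMPRESSED_DATA *cd, void *dst)
{
    disa_decompress(cd->compressed_ptr, cd->compressed_size, dst, cd->original_size);
    return dst;
}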
DISA Micro-Architecture Details

Fig. 4 illustrates a high level diagram of various components of a processor core 400, according to an embodiment. Fig. 5 illustrates a flow diagram of a method 500 to provide decompression closer to a processor core, according to an embodiment. In one or more embodiments, operations of the method 500 may be performed by one or more hardware components of Fig. 4, as further discussed below.

Referring to Fig. 4, the modified application code, using the above-mentioned API functions and working with compressed data in the program space, executes on the processor core 400 with hardware support to provide decompression purely in hardware and decoupled from both the software and the Operating System (OS). For the purpose of explanation, it is assumed that the targeted uncompressed data fits within the default page size (commonly 4KB today) in the DRAM or main memory (such as memory 1060 of Fig. 10). The application code consists of one or more load instructions which read the decompressed data after it has been decompressed and stored into the core's cache (e.g., L2 cache, L1 cache, and/or LLC).

Referring to Figs. 4 and 5, operation 502 detects the DISA instruction (e.g., by core 408). As discussed in the ISA Extension section above, the fetching and decoding of the DISA instruction will produce two fused uops in the OOO portion of the core (e.g., scheduled for execution by the OOO Scheduler 402): one fused uop for loading compressed data from the DRAM into the core's cache(s) in the Re-Order Buffer (ROB) 404 and the other for storing decompressed data back into the memory (cache/DRAM) for subsequent consumption. Operation 504 generates a macro load and a macro store operation for the DISA instruction, and operation 506 sends the DISA instruction (or even a signal indicating the decompression request) to the DE 301. Operations 504 and 506 may be performed by components of core 408 as further discussed herein. The following operations may be used to handle the DISA-macro-load and DISA-macro-store to achieve decompression in hardware 406.

The DISA-macro-load is dispatched from the OOO scheduler 402 when its sources, e.g., the compressed data memory location and compressed data memory size, are available. It can be dispatched normally from the core 408 to the uncore 410 and may be broken down into multiple loads (such as Load(1), Load(2), ..., Load(x) shown in ROB 404) during dispatch, since compressed data may span multiple cachelines depending on the compressed data size. Since the OS is not aware of the compressed data format and the data exists in the program space, the corresponding cachelines are brought in -- just as any data would be brought from the physical pages corresponding to the virtual memory location of the compressed data -- in cacheline chunks that the core can then store and process.

The DISA-macro-store may be allocated by the time the DISA-macro-load is fetching data from memory. This allocation timing can provide bandwidth savings for delivering performance gains. The DISA-macro-store proceeds to be dispatched to the uncore 410 from the Store Buffer (SB) 412 when all its sources, i.e., the memory address for storing the decompressed data and the decompressed size, are available. The DISA-macro-store 420, after being dispatched from SB 412, is trapped/kept by the Decompression Engine (DE) or logic 301 until decompression completes 414. Similarly dispatched consumer loads 422 stay in the DE 301, as identified by their matching SB identifier (ID) 414.

As shown in Fig. 4, decompression logic/engine 301 operates at the L2 cache (426) level and uses its cachelines as its temporary storage or scratchpad 424. Operation 508 of Fig. 5 allocates space in the (e.g., L2) cache for decompression purposes. For example, the DE 301 may reserve the requisite number of cachelines (e.g., depending on the decompressed data size) in L2 cache 426 and prevent them from being accessed, modified, or evicted (e.g., by marking them as uncacheable) until decompression logic/engine 301 is done decompressing the compressed data.
Operation 510 (e.g., performed by DE logic 301 and/or a cache controller or other logic coupled to the L2 cache 426) determines whether there is sufficient space for the decompression data; if eviction is required, operation 512 evicts one or more cachelines from the L2 cache (e.g., based on a Least Recently Used (LRU) algorithm or another algorithm). Otherwise, if no eviction is required, DE logic 301 performs decompression in the allocated scratchpad 424 in the L2 cache.

Once the decompressed cacheline load is completed, DE 301 matches the waiting load(s) and supplies the Write Back (WB) data at operation 516, e.g., for storage in memory 428. One reason for using the L2 cache is that it is significantly larger than L1 and hence can reserve space in its capacity while still supporting standard cacheline management operation(s) unrelated to decompression logic/engine 301. It can also potentially scale up DISA operation to support multiple decompressions happening concurrently in the decompression engine 301. However, embodiments are not limited to using the L2 cache for these purposes, and other cache levels (such as L1 and/or LLC) may be used depending on the implementation.

Furthermore, all the subsequent consumer loads 430 (e.g., in program order) that access the decompressed data region may be allocated in the OOO Scheduler 402 and Load Buffer (LB) 432 while the DISA-macro-store has not completed and is occupying an entry in the Store Buffer (SB) 412. The memory disambiguation logic (not shown) present in the core blocks these requests from being dispatched to the uncore until the DISA-macro-store completes and writes back 428. This blocking happens because of the virtual address region overlap detected by the disambiguation logic between the DISA-macro-store and these dependent loads 434. Other irrelevant/younger loads (which may not depend on decompressed data) will not be blocked and can proceed.

Moreover, the DISA-macro-store, when its sources are ready, updates its destination/writing virtual address region in the SB 412, which may be used by any memory-dependent younger load operations as mentioned above. Until a "ready" signal is received from the decompression logic/engine 301 (which has been given a connection to the SB), the DISA SB entry continues to block younger loads. It then transfers the data to the L2 cache (as it is marked uncacheable in L1), and decompression logic/engine 301 identifies the DISA-related store request and sets its state variables (tracking the DISA SB ID to segregate active DISAs) to initiate decompression. Using the decompressed size, it computes how many cachelines are required for writing back the decompressed data and evicts (e.g., using an LRU (Least Recently Used) algorithm) the required number of cachelines to reserve them exclusively for decompression logic/engine 301's output, as discussed with reference to operations 510 and 512. The L2 cache may also mark these as inaccessible by the core 408, and decompression logic/engine 301 may store the indices (e.g., sets/ways) of this cacheline mapping.

As decompression logic/engine 301 starts decompression, which may take multiple cycles depending on both the compression algorithm and the level of compression used, it issues an acknowledgement signal to the SB 412 to unblock the waiting loads by bypassing them in the disambiguation logic using the DISA SB ID as the exclusivity condition.
Thus, younger consumer loads are unblocked so that they can disambiguate and be issued.

Further, the consumer loads that match a DISA SB may carry that SB ID with them or otherwise be associated with the SB ID. In an embodiment, these consumer loads are labelled as uncacheable loads in the L1 cache and do not look up the L1 cache, directly reaching the L2 cache. Decompression logic/engine 301 detects/catches these loads and supplies them the lookup way in the MLC (using the source load address and the mapping from the decompressed address region to the reserved L2 cachelines that decompression logic/engine 301 records when it performs its reservation). The load then reads the decompressed data from the MLC. If that cacheline has not been written to yet, the load is blocked here and will only complete writeback when the decompression logic/engine 301 completes its process and writes back the data, with a signal for the load to proceed to its corresponding cacheline. Eventually, decompression logic/engine 301 will complete its active DISA, and by then all loads will have received the required data and will write back to their destination registers when sent back to the core. Decompression logic/engine 301 then finally sends a signal back to the corresponding SB ID and writes back its status as complete 428.

When the DISA-macro-store becomes the head of the SB 412 and is able to retire after having written back, it becomes senior. It is finally dispatched to the uncore 410 while its SB entry is deallocated. Decompression logic/engine 301 recognizes the senior status request of the store and un-reserves the reserved memory, making the corresponding cachelines "public for core" instead of "private for decompression logic/engine 301." For decompression operations that are performed speculatively, operation 518 determines whether the speculation was correct; if so, it commits and makes the data visible in the L2 cache; otherwise, the allocated scratchpad in the L2 cache is released/invalidated. Moreover, the DE logic 301 also writes back 428 the final decompressed data to the memory by issuing (e.g., multiple) senior store(s) to the DRAM or main memory (such as memory 1060 of Fig. 10). In an embodiment, once the DISA instruction is committed in the ROB 404, an atomic store is performed to write the data back, and the allocated space in the L2 cache is released. This delayed writeback to memory is another optimization, where the corresponding younger load(s) can directly obtain the decompressed data without having to go all the way to the core 408. It acts as a bypass at L2 for the concerned loads. In an embodiment, if there is an error during decompression, the scratchpad 424 will be discarded, and an OS fault may be issued to correct it. Additionally, while the load and store operations triggered by the DISA may appear as one atomic load and one atomic store, a series of load and store operations may be conducted as discussed above.
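For illustration purposes only, the scratchpad reservation of operations 508-512 described above may be modeled in software roughly as follows; the helper names (l2_has_free_line, l2_evict_lru, l2_reserve_line) are hypothetical placeholders for the L2 cache control logic:

#include <stddef.h>

#define CL_SIZE 64   /* cacheline size in bytes */

extern int  l2_has_free_line(void);   /* hypothetical: is an L2 line available?     */
extern void l2_evict_lru(void);       /* hypothetical: operation 512, evict via LRU */
extern void l2_reserve_line(void);    /* hypothetical: mark a line private to DE    */

/* Model of allocating the L2 scratchpad for a decompression job (operation 508):
 * reserve one cacheline per 64B of decompressed output, evicting victims as
 * needed (operations 510 and 512). */
void reserve_scratchpad(size_t decompressed_size)
{
    size_t lines_needed = (decompressed_size + CL_SIZE - 1) / CL_SIZE;
    for (size_t i = 0; i < lines_needed; i++) {
        if (!l2_has_free_line())
            l2_evict_lru();
        l2_reserve_line();
    }
    /* the DE may now decompress into the reserved lines */
}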
DISA Performance Summary

Fig. 6 shows sample evaluation results, according to an embodiment. The evaluation shown in Fig. 6 was performed relative to a hypothetical next generation OOO architecture, e.g., in a constrained memory bandwidth scenario. Two kernels were traced for the analysis: one mimicking a sample database run on servers, the other a proxy for a compute-bound kernel. As can be seen, a DISA engine sitting next to the MLC (20 cycles of communication latency) could potentially provide a significant boost to performance (over 2X IPC (Instructions Per Cycle) gain). Further, the performance of a similarly built decompression accelerator that sits much further away, e.g., on a mesh interconnect, suffers a much higher startup latency. Expectedly, the performance gains from such a distant accelerator are significantly muted. The results clearly demonstrate the potential advantages of reading compressed data over the interconnect and using low latency techniques to decompress it, as disclosed by one or more embodiments.

Instruction Sets

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2014; and see Intel® Advanced Vector Extensions Programming Reference, October 2014).

Exemplary Instruction Formats

Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

While embodiments will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, fewer, and/or different vector operand sizes (e.g., 256 byte vector operands) with more, fewer, or different data element widths (e.g., 128 bit (16 byte) data element widths).
FIG. 7A is a block diagram illustrating an exemplary instruction format according to embodiments. FIG. 7A shows an instruction format 700 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The instruction format 700 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions.

EVEX Prefix (Bytes 0-3) 702 - is encoded in a four-byte form.

Format Field 782 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 782 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment).

The second through fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.

REX field 705 (EVEX Byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX Byte 1, bit [7] - R), an EVEX.X bit field (EVEX Byte 1, bit [6] - X), and an EVEX.B bit field (EVEX Byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 710 - this is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or lower 16 of the extended 32 register set. In one embodiment, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but which does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 715 (EVEX byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

Data element width field 764 (EVEX byte 2, bit [7] - W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements). This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

EVEX.vvvv 720 (EVEX Byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form, and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, in which case the field is reserved and should contain 1111b. Thus, EVEX.vvvv field 720 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.
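To illustrate the inverted (1s complement) register encodings described above, the following sketch reconstructs 5-bit register specifiers from the EVEX bits; it is an illustrative model of the combining rules (R'Rrrr and V'vvvv), not a complete instruction decoder:

#include <stdint.h>

/* R'Rrrr: EVEX.R' and EVEX.R are stored inverted; rrr comes from MODRM.reg. */
unsigned decode_rrrr(unsigned evex_r_prime, unsigned evex_r, unsigned modrm_reg)
{
    return ((~evex_r_prime & 1u) << 4) | ((~evex_r & 1u) << 3) | (modrm_reg & 7u);
}

/* V'vvvv: EVEX.V' and EVEX.vvvv are stored inverted, so ZMM0 encodes vvvv as 1111b. */
unsigned decode_vvvv(unsigned evex_v_prime, unsigned evex_vvvv)
{
    return ((~evex_v_prime & 1u) << 4) | (~evex_vvvv & 0xFu);
}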
EVEX.U 768 Class field (EVEX byte 2, bit [2] - U) - If EVEX.U = 0, it indicates class A (supporting merging-writemasking) or EVEX.U0; if EVEX.U = 1, it indicates class B (supporting zeroing and merging-writemasking) or EVEX.U1.

Prefix encoding field 725 (EVEX byte 2, bits [1:0] - pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2 bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 753 (EVEX byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.writemask control, and EVEX.N; also illustrated with α) - its content distinguishes which one of the different augmentation operation types is to be performed.

Beta field 755 (EVEX byte 3, bits [6:4] - SSS; also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - distinguishes which of the operations of a specified type are to be performed.

REX' field 710 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Writemask field 771 (EVEX byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the writemask registers. In one embodiment, the specific value EVEX.kkk=000 has a special behavior implying no writemask is used for the particular instruction (this may be implemented in a variety of ways, including the use of a writemask hardwired to all ones or hardware that bypasses the masking hardware). When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value.
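The merging and zeroing writemask behaviors just described may be summarized, per destination element, by the following illustrative C model (the operation shown is an element-wise ADD, standing in for any base/augmentation operation; up to 64 elements are assumed):

#include <stdint.h>

/* Illustrative per-element writemask semantics: a set mask bit lets the element
 * be updated; a clear bit either preserves (merging) or clears (zeroing) it.   */
void masked_add(int64_t *dst, const int64_t *src1, const int64_t *src2,
                uint64_t mask, int zeroing, int num_elements)
{
    for (int i = 0; i < num_elements && i < 64; i++) {
        if (mask & (1ULL << i))
            dst[i] = src1[i] + src2[i];   /* element participates in the operation */
        else if (zeroing)
            dst[i] = 0;                   /* zeroing-masking: element is cleared   */
        /* else merging-masking: the old destination value is preserved */
    }
}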
A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the writemask field 771 allows for partial vector operations, including loads, stores, arithmetic, logical, etc. While embodiments are described in which the writemask field's 771 content selects one of a number of writemask registers that contains the writemask to be used (and thus the writemask field's 771 content indirectly identifies the masking to be performed), alternative embodiments instead or in addition allow the mask write field's 771 content to directly specify the masking to be performed.

Real Opcode Field 730 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M Field 740 (Byte 5) includes MOD field 742, register index field 744, and R/M field 746. The MOD field's 742 content distinguishes between memory access and non-memory access operations. The role of register index field 744 can be summarized in two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The content of register index field 744, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer sources and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).

The role of R/M field 746 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) Byte (Byte 6) - The scale field's 750 content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale ∗ index + base). SIB.xxx 754 and SIB.bbb 756 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 763A (Bytes 7-10) - when MOD field 742 contains 10, bytes 7-10 are the displacement field 763A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity. This may be used as part of memory address generation (e.g., for address generation that uses 2^scale ∗ index + base + displacement).

Displacement factor field 763B (Byte 7) - when MOD field 742 contains 01, byte 7 is the displacement factor field 763B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64 byte cachelines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 763B is a reinterpretation of disp8; when using displacement factor field 763B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8∗N. This reduces the average instruction length (a single byte used for the displacement but with a much greater range). Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 763B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 763B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8∗N. In other words, there are no changes in the encoding rules or encoding lengths, but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset).
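As a worked illustration of the disp8∗N reinterpretation just described, the following sketch scales the encoded 8-bit value by the memory operand access size N:

#include <stdint.h>

/* disp8*N: the encoded signed byte is multiplied by the memory access size N
 * to obtain the byte-wise effective displacement. */
int32_t disp8N(int8_t encoded_disp8, int32_t n)
{
    return (int32_t)encoded_disp8 * n;
}

/* Example: with 64-byte accesses (N = 64), an encoded value of -2 yields an
 * effective displacement of -128 bytes from a single byte of encoding. */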
Immediate field 772 allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support immediates, and it is not present in instructions that do not use an immediate.

Full Opcode Field

FIG. 7B is a block diagram illustrating the fields of the instruction format 700 that make up the full opcode field 774 according to one embodiment. Specifically, the full opcode field 774 includes the format field 782, the base operation field 743, and the data element width (W) field 764. The base operation field 743 includes the prefix encoding field 725, the opcode map field 715, and the real opcode field 730.

Register Index Field

FIG. 7C is a block diagram illustrating the fields of the format 700 that make up the register index field 745 according to one embodiment. Specifically, the register index field 745 includes the REX field 705, the REX' field 710, the MODR/M.reg field 744, the MODR/M.r/m field 746, the VVVV field 720, the xxx field 754, and the bbb field 756.

Augmentation Operation Field

FIG. 7D is a block diagram illustrating the fields of the instruction format 700 that make up an augmentation operation field according to one embodiment. When the class (U) field 768 contains 0, it signifies EVEX.U0 (class A 768A); when it contains 1, it signifies EVEX.U1 (class B 768B). When U=0 and the MOD field 742 contains 11 (signifying a no memory access operation), the alpha field 753 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 753A. When the rs field 753A contains a 1 (round 753A.1), the beta field 755 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the round control field 755A. The round control field 755A includes a one bit SAE field 796 and a two bit round operation field 798. When the rs field 753A contains a 0 (data transform 753A.2), the beta field 755 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three bit data transform field 755B.
When U=0 and the MOD field 742 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 753 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 753B, and the beta field 755 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three bit data manipulation field 755C.

When U=1, the alpha field 753 (EVEX byte 3, bit [7] - EH) is interpreted as the writemask control (Z) field 753C. When U=1 and the MOD field 742 contains 11 (signifying a no memory access operation), part of the beta field 755 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 757A; when it contains a 1 (round 757A.1), the rest of the beta field 755 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 759A, while when the RL field 757A contains a 0 (VSIZE 757A.2), the rest of the beta field 755 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 759B (EVEX byte 3, bits [6-5] - L1-0). When U=1 and the MOD field 742 contains 00, 01, or 10 (signifying a memory access operation), the beta field 755 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the vector length field 759B (EVEX byte 3, bits [6-5] - L1-0) and the broadcast field 757B (EVEX byte 3, bit [4] - B).

Exemplary Register Architecture

FIG. 8 is a block diagram of a register architecture 800 according to one embodiment. In the embodiment illustrated, there are 32 vector registers 810 that are 512 bits wide; these registers are referenced as ZMM0 through ZMM31. The lower order 256 bits of the lower 16 ZMM registers are overlaid on registers YMM0-15. The lower order 128 bits of the lower 16 ZMM registers (the lower order 128 bits of the YMM registers) are overlaid on registers XMM0-15. In other words, the vector length field 759B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 759B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the instruction format 700 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a ZMM/YMM/XMM register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed, depending on the embodiment.

Writemask registers 815 - in the embodiment illustrated, there are 8 writemask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the writemask registers 815 are 16 bits in size. In some embodiments, the vector mask register k0 cannot be used as a writemask; when the encoding that would normally indicate k0 is used for a writemask, it selects a hardwired writemask of 0xFFFF, effectively disabling writemasking for that instruction.

General-purpose registers 825 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.
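Referring back to the register overlay described above (XMM in the low 128 bits of YMM, and YMM in the low 256 bits of ZMM, for the lower 16 registers), the aliasing may be pictured with the following illustrative C union:

#include <stdint.h>

/* Illustrative model of one of the lower 16 vector registers: the same storage
 * is visible as a 512-bit ZMM, whose low 256 bits alias the YMM register and
 * whose low 128 bits alias the XMM register. */
typedef union {
    uint8_t zmm[64];   /* full 512-bit ZMM register          */
    uint8_t ymm[32];   /* lower order 256 bits (YMM overlay) */
    uint8_t xmm[16];   /* lower order 128 bits (XMM overlay) */
} vector_reg_t;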
Scalar floating point stack register file (x87 stack) 845, on which is aliased the MMX packed integer flat register file 850 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments may use wider or narrower registers. Additionally, alternative embodiments may use more, fewer, or different register files and registers.

Exemplary Core Architectures, Processors, and Computer Architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; and 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU (Central Processing Unit) including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architectures

FIG. 9A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments. FIG. 9B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments. The solid lined boxes in FIGS. 9A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.
9A, a processor pipeline 900 includes a fetch stage 902, a length decode stage 904, a decode stage 906, an allocation stage 908, a renaming stage 910, a scheduling (also known as a dispatch or issue) stage 912, a register read/memory read stage 914, an execute stage 916, a write back/memory write stage 918, an exception handling stage 922, and a commit stage 924.

FIG. 9B shows processor core 990 including a front end unit 930 coupled to an execution engine unit 950, and both are coupled to a memory unit 970. The core 990 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 990 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 930 includes a branch prediction unit 932 coupled to an instruction cache unit 934, which is coupled to an instruction translation lookaside buffer (TLB) 936, which is coupled to an instruction fetch unit 938, which is coupled to a decode unit 940. The decode unit 940 (or decoder) may decode instructions and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 940 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 990 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 940 or otherwise within the front end unit 930). The decode unit 940 is coupled to a rename/allocator unit 952 in the execution engine unit 950.

The execution engine unit 950 includes the rename/allocator unit 952 coupled to a retirement unit 954 and a set of one or more scheduler unit(s) 956. The scheduler unit(s) 956 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 956 is coupled to the physical register file(s) unit(s) 958. Each of the physical register file(s) units 958 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 958 comprises a vector registers unit, a writemask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 958 is overlapped by the retirement unit 954 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.).
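Each of the renaming arrangements enumerated above shares one idea: architectural register names are indirected through a map onto a larger pool of physical registers. The following C sketch illustrates only the register-map-and-pool variant; the table sizes, structure names, and helper functions are illustrative assumptions, not details taken from this description.

```c
#include <stdint.h>

#define NUM_ARCH_REGS 16   /* assumption: sixteen architectural GPRs    */
#define NUM_PHYS_REGS 64   /* assumption: size of the physical reg pool */

/* Map from architectural register index to physical register index. */
typedef struct {
    uint8_t map[NUM_ARCH_REGS];        /* current speculative mapping */
    uint8_t free_list[NUM_PHYS_REGS];  /* pool of free physical regs  */
    int     free_count;
} rename_table_t;

/* Rename the destination of an instruction: allocate a fresh physical
 * register from the pool and point the architectural name at it. */
static int rename_dest(rename_table_t *rt, int arch_reg)
{
    if (rt->free_count == 0)
        return -1;                     /* stall: no free registers */
    int phys = rt->free_list[--rt->free_count];
    rt->map[arch_reg] = (uint8_t)phys;
    return phys;
}

/* Source operands simply read the current mapping. */
static int rename_src(const rename_table_t *rt, int arch_reg)
{
    return rt->map[arch_reg];
}
```

On a mis-speculation, the previous mapping would be restored from checkpoint state (e.g., a reorder buffer or history buffer), which is why the old physical register cannot be returned to the free list until the renaming instruction retires.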
The retirement unit 954 and the physical register file(s) unit(s) 958 are coupled to the execution cluster(s) 960. The execution cluster(s) 960 includes a set of one or more execution units 962 and a set of one or more memory access units 964. The execution units 962 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 956, physical register file(s) unit(s) 958, and execution cluster(s) 960 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 964). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 964 is coupled to the memory unit 970, which includes a data TLB unit 972 coupled to a data cache unit 974 coupled to a level 2 (L2) cache unit 976. In one exemplary embodiment, the memory access units 964 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 972 in the memory unit 970. The instruction cache unit 934 is further coupled to the level 2 (L2) cache unit 976 in the memory unit 970. The L2 cache unit 976 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 900 as follows: 1) the instruction fetch unit 938 performs the fetch and length decoding stages 902 and 904; 2) the decode unit 940 performs the decode stage 906; 3) the rename/allocator unit 952 performs the allocation stage 908 and renaming stage 910; 4) the scheduler unit(s) 956 performs the schedule stage 912; 5) the physical register file(s) unit(s) 958 and the memory unit 970 perform the register read/memory read stage 914, and the execution cluster 960 performs the execute stage 916; 6) the memory unit 970 and the physical register file(s) unit(s) 958 perform the write back/memory write stage 918; 7) various units may be involved in the exception handling stage 922; and 8) the retirement unit 954 and the physical register file(s) unit(s) 958 perform the commit stage 924.

The core 990 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
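For quick reference, the stage-to-unit mapping of pipeline 900 given above can be restated as a C enumeration, with the performing unit(s) noted in comments; this is purely a condensed restatement of what the text already says, not additional disclosure.

```c
/* Pipeline 900 stages and the units the text associates with them. */
enum pipeline_stage {
    STAGE_FETCH         = 902,  /* instruction fetch unit 938           */
    STAGE_LENGTH_DECODE = 904,  /* instruction fetch unit 938           */
    STAGE_DECODE        = 906,  /* decode unit 940                      */
    STAGE_ALLOC         = 908,  /* rename/allocator unit 952            */
    STAGE_RENAME        = 910,  /* rename/allocator unit 952            */
    STAGE_SCHEDULE      = 912,  /* scheduler unit(s) 956                */
    STAGE_REG_MEM_READ  = 914,  /* reg file(s) 958 + memory unit 970    */
    STAGE_EXECUTE       = 916,  /* execution cluster 960                */
    STAGE_WRITEBACK     = 918,  /* memory unit 970 + reg file(s) 958    */
    STAGE_EXCEPTION     = 922,  /* various units                        */
    STAGE_COMMIT        = 924,  /* retirement unit 954 + reg file(s) 958 */
};
```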
In one embodiment, the core 990 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

FIG. 10 illustrates a block diagram of an SOC package in accordance with an embodiment. As illustrated in FIG. 10, SOC 1002 includes one or more Central Processing Unit (CPU) cores 1020, one or more Graphics Processor Unit (GPU) cores 1030, an Input/Output (I/O) interface 1040, and a memory controller 1042. Various components of the SOC package 1002 may be coupled to an interconnect or bus such as discussed herein with reference to the other figures. Also, the SOC package 1002 may include more or fewer components, such as those discussed herein with reference to the other figures. Further, each component of the SOC package 1002 may include one or more other components, e.g., as discussed with reference to the other figures herein. In one embodiment, SOC package 1002 (and its components) is provided on one or more Integrated Circuit (IC) die, e.g., which are packaged into a single semiconductor device.

As illustrated in FIG. 10, SOC package 1002 is coupled to a memory 1060 via the memory controller 1042. In an embodiment, the memory 1060 (or a portion of it) can be integrated on the SOC package 1002.

The I/O interface 1040 may be coupled to one or more I/O devices 1070, e.g., via an interconnect and/or bus such as discussed herein with reference to other figures. I/O device(s) 1070 may include one or more of a keyboard, a mouse, a touchpad, a display, an image/video capture device (such as a camera or camcorder/video recorder), a touch screen, a speaker, or the like.

FIG. 11 is a block diagram of a processing system 1100, according to an embodiment. In various embodiments the system 1100 includes one or more processors 1102 and one or more graphics processors 1108, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 1102 or processor cores 1107. In one embodiment, the system 1100 is a processing platform incorporated within a system-on-a-chip (SoC or SOC) integrated circuit for use in mobile, handheld, or embedded devices.

An embodiment of system 1100 can include, or be incorporated within, a server-based gaming platform or a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In some embodiments system 1100 is a mobile phone, smart phone, tablet computing device, or mobile Internet device. Data processing system 1100 can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In some embodiments, data processing system 1100 is a television or set top box device having one or more processors 1102 and a graphical interface generated by one or more graphics processors 1108.

In some embodiments, the one or more processors 1102 each include one or more processor cores 1107 to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores 1107 is configured to process a specific instruction set 1109. In some embodiments, instruction set 1109 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW).
Multiple processor cores 1107 may each process a different instruction set 1109, which may include instructions to facilitate the emulation of other instruction sets. Processor core 1107 may also include other processing devices, such as a Digital Signal Processor (DSP).

In some embodiments, the processor 1102 includes cache memory 1104. Depending on the architecture, the processor 1102 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 1102. In some embodiments, the processor 1102 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 1107 using known cache coherency techniques. A register file 1106 is additionally included in processor 1102, and may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 1102.

In some embodiments, processor 1102 is coupled to a processor bus 1110 to transmit communication signals such as address, data, or control signals between processor 1102 and other components in system 1100. In one embodiment the system 1100 uses an exemplary 'hub' system architecture, including a memory controller hub 1116 and an Input/Output (I/O) controller hub 1130. A memory controller hub 1116 facilitates communication between a memory device and other components of system 1100, while an I/O Controller Hub (ICH) 1130 provides connections to I/O devices via a local I/O bus. In one embodiment, the logic of the memory controller hub 1116 is integrated within the processor.

Memory device 1120 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, a phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment the memory device 1120 can operate as system memory for the system 1100, to store data 1122 and instructions 1121 for use when the one or more processors 1102 execute an application or process. Memory controller hub 1116 also couples with an optional external graphics processor 1112, which may communicate with the one or more graphics processors 1108 in processors 1102 to perform graphics and media operations.

In some embodiments, ICH 1130 enables peripherals to connect to memory device 1120 and processor 1102 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 1146, a firmware interface 1128, a wireless transceiver 1126 (e.g., Wi-Fi, Bluetooth), a data storage device 1124 (e.g., hard disk drive, flash memory, etc.), and a legacy I/O controller 1140 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. One or more Universal Serial Bus (USB) controllers 1142 connect input devices, such as keyboard and mouse 1144 combinations. A network controller 1134 may also couple to ICH 1130. In some embodiments, a high-performance network controller (not shown) couples to processor bus 1110. It will be appreciated that the system 1100 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used.
For example, the I/O controller hub 1130 may be integrated within the one or more processors 1102, or the memory controller hub 1116 and I/O controller hub 1130 may be integrated into a discrete external graphics processor, such as the external graphics processor 1112.

FIG. 12 is a block diagram of an embodiment of a processor 1200 having one or more processor cores 1202A to 1202N, an integrated memory controller 1214, and an integrated graphics processor 1208. Those elements of FIG. 12 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. Processor 1200 can include additional cores up to and including additional core 1202N, represented by the dashed lined boxes. Each of processor cores 1202A to 1202N includes one or more internal cache units 1204A to 1204N. In some embodiments each processor core also has access to one or more shared cache units 1206.

The internal cache units 1204A to 1204N and shared cache units 1206 represent a cache memory hierarchy within the processor 1200. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 1206 and 1204A to 1204N.

In some embodiments, processor 1200 may also include a set of one or more bus controller units 1216 and a system agent core 1210. The one or more bus controller units 1216 manage a set of peripheral buses, such as one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express). System agent core 1210 provides management functionality for the various processor components. In some embodiments, system agent core 1210 includes one or more integrated memory controllers 1214 to manage access to various external memory devices (not shown).

In some embodiments, one or more of the processor cores 1202A to 1202N include support for simultaneous multi-threading. In such an embodiment, the system agent core 1210 includes components for coordinating and operating cores 1202A to 1202N during multi-threaded processing. System agent core 1210 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores 1202A to 1202N and graphics processor 1208.

In some embodiments, processor 1200 additionally includes graphics processor 1208 to execute graphics processing operations. In some embodiments, the graphics processor 1208 couples with the set of shared cache units 1206 and the system agent core 1210, including the one or more integrated memory controllers 1214. In some embodiments, a display controller 1211 is coupled with the graphics processor 1208 to drive graphics processor output to one or more coupled displays. In some embodiments, display controller 1211 may be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 1208 or system agent core 1210.

In some embodiments, a ring based interconnect unit 1212 is used to couple the internal components of the processor 1200.
However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some embodiments, graphics processor 1208 couples with the ring interconnect 1212 via an I/O link 1213.

The exemplary I/O link 1213 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 1218, such as an eDRAM (or embedded DRAM) module. In some embodiments, each of the processor cores 1202A to 1202N and graphics processor 1208 use embedded memory modules 1218 as a shared Last Level Cache.

In some embodiments, processor cores 1202A to 1202N are homogenous cores executing the same instruction set architecture. In another embodiment, processor cores 1202A to 1202N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 1202A to 1202N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment processor cores 1202A to 1202N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. Additionally, processor 1200 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.

FIG. 13 is a block diagram of a graphics processor 1300, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores. In some embodiments, the graphics processor communicates via a memory mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. In some embodiments, graphics processor 1300 includes a memory interface 1314 to access memory. Memory interface 1314 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.

In some embodiments, graphics processor 1300 also includes a display controller 1302 to drive display output data to a display device 1320. Display controller 1302 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. In some embodiments, graphics processor 1300 includes a video codec engine 1306 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG and Motion JPEG (MJPEG).

In some embodiments, graphics processor 1300 includes a block image transfer (BLIT) engine 1304 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 3D graphics operations are performed using one or more components of graphics processing engine (GPE) 1310.
In some embodiments, graphics processing engine 1310 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.

In some embodiments, GPE 1310 includes a 3D pipeline 1312 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline 1312 includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/Media sub-system 1315. While 3D pipeline 1312 can be used to perform media operations, an embodiment of GPE 1310 also includes a media pipeline 1316 that is specifically used to perform media operations, such as video post-processing and image enhancement.

In some embodiments, media pipeline 1316 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of, video codec engine 1306. In some embodiments, media pipeline 1316 additionally includes a thread spawning unit to spawn threads for execution on 3D/Media sub-system 1315. The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/Media sub-system 1315.

In some embodiments, 3D/Media sub-system 1315 includes logic for executing threads spawned by 3D pipeline 1312 and media pipeline 1316. In one embodiment, the pipelines send thread execution requests to 3D/Media sub-system 1315, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, 3D/Media sub-system 1315 includes one or more internal caches for thread instructions and data. In some embodiments, the sub-system also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.

In the following description, numerous specific details are set forth to provide a more thorough understanding. However, it will be apparent to one of skill in the art that the embodiments described herein may be practiced without one or more of these specific details. In other instances, well-known features have not been described to avoid obscuring the details of the present embodiments.

The following examples pertain to further embodiments; a C sketch of the buffer-to-cacheline computation they recite appears at the end of this description. Example 1 includes an apparatus comprising: a cache to store a buffer; and execution circuitry to execute an instruction, the instruction to cause one or more cachelines in the cache to be marked based on a start address for the buffer and a size of the buffer, wherein a marked cacheline in the cache is to be prevented from being written back to memory. Example 2 includes the apparatus of example 1, wherein marking of the one or more cachelines comprises invalidating the one or more cachelines. Example 3 includes the apparatus of example 1, wherein marking of the one or more cachelines comprises modifying a state of the one or more cachelines to an Exclusive state. Example 4 includes the apparatus of example 1, wherein marking of the one or more cachelines comprises indicating the one or more cachelines as victim candidates to allow early eviction of the one or more cachelines.
Example 5 includes the apparatus of example 1, wherein the instruction is to cause a look up of the one or more cachelines in the cache based on a mask. Example 6 includes the apparatus of example 1, wherein an accelerator is to utilize the buffer as a scratchpad, wherein the instruction is to cause the accelerator to reclaim the scratchpad. Example 7 includes the apparatus of example 1, wherein the cache is a Level 2 (L2) cache, wherein the instruction is to cause a look up of the one or more cachelines in the L2 cache, and upon a miss in the L2 cache, no further operations associated with the instruction are to be performed. Example 8 includes the apparatus of example 1, further comprising decode circuitry to decode the instruction into a plurality of store operations, wherein each of the plurality of store operations is to invalidate a corresponding cacheline in the cache. Example 9 includes the apparatus of example 1, wherein the memory comprises a main memory or a dynamic random access memory. Example 10 includes the apparatus of example 1, wherein the cache comprises one or more of a level 1 cache, a level 2 cache, and a last level cache. Example 11 includes the apparatus of example 1, wherein a processor core comprises the execution circuitry and the cache. Example 12 includes the apparatus of example 11, wherein the processor core comprises a Graphics Processing Unit (GPU) core.

Example 13 includes one or more non-transitory computer-readable media comprising one or more instructions that when executed on at least one processor configure the at least one processor to perform one or more operations to: mark one or more cachelines in a cache in response to execution of an instruction based on a start address for a buffer and a size of the buffer, wherein a marked cacheline in the cache is to be prevented from being written back to memory. Example 14 includes the one or more computer-readable media of example 13, further comprising one or more instructions that when executed on the at least one processor configure the at least one processor to perform one or more operations to cause invalidating the one or more cachelines based on the marking of the one or more cachelines. Example 15 includes the one or more computer-readable media of example 13, wherein marking of the one or more cachelines comprises modifying a state of the one or more cachelines to an Exclusive state. Example 16 includes the one or more computer-readable media of example 13, wherein marking of the one or more cachelines comprises indicating the one or more cachelines as victim candidates to allow early eviction of the one or more cachelines. Example 17 includes the one or more computer-readable media of example 13, further comprising one or more instructions that when executed on the at least one processor configure the at least one processor to perform one or more operations to cause a look up of the one or more cachelines in the cache based on a mask. Example 18 includes the one or more computer-readable media of example 13, further comprising one or more instructions that when executed on the at least one processor configure the at least one processor to perform one or more operations to cause an accelerator to utilize the buffer as a scratchpad, wherein the instruction is to cause the accelerator to reclaim the scratchpad.
Example 19 includes the one or more computer-readable media of example 13, wherein the cache is a Level 2 (L2) cache, wherein the instruction is to cause a look up of the one or more cachelines in the L2 cache, and upon a miss in the L2 cache, no further operations associated with the instruction are to be performed. Example 20 includes the one or more computer-readable media of example 18, further comprising one or more instructions that when executed on the at least one processor configure the at least one processor to perform one or more operations to cause decoding of the instruction into a plurality of store operations, wherein each of the plurality of store operations is to invalidate a corresponding cacheline in the cache. Example 21 includes the one or more computer-readable media of example 13, wherein the memory comprises a main memory or a dynamic random access memory. Example 22 includes the one or more computer-readable media of example 13, wherein the cache comprises one or more of a level 1 cache, a level 2 cache, and a last level cache. Example 23 includes the one or more computer-readable media of example 13, wherein a processor core comprises the execution circuitry and the cache. Example 24 includes the one or more computer-readable media of example 23, wherein the processor core comprises a Graphics Processing Unit (GPU) core.

Example 25 includes an apparatus comprising means to perform a method as set forth in any preceding example. Example 26 includes machine-readable storage including machine-readable instructions that, when executed, implement a method or realize an apparatus as set forth in any preceding example.

In various embodiments, one or more operations discussed with reference to Figs. 1 et seq. may be performed by one or more components (interchangeably referred to herein as "logic") discussed with reference to any of the figures.

In various embodiments, the operations discussed herein, e.g., with reference to Figs. 1 et seq., may be implemented as hardware (e.g., logic circuitry), software, firmware, or combinations thereof, which may be provided as a computer program product, e.g., including one or more tangible (e.g., non-transitory) machine-readable or computer-readable media having stored thereon instructions (or software procedures) used to program a computer to perform a process discussed herein. The machine-readable medium may include a storage device such as those discussed with respect to the figures.

Additionally, such computer-readable media may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals provided in a carrier wave or other propagation medium via a communication link (e.g., a bus, a modem, or a network connection).

Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, and/or characteristic described in connection with the embodiment may be included in at least an implementation. The appearances of the phrase "in one embodiment" in various places in the specification may or may not be all referring to the same embodiment.

Also, in the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. In some embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other.
"Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements may not be in direct contact with each other, but may still cooperate or interact with each other.Thus, although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter. |
The invention relates to a system and method for securely booting UEFI firmware and UEFI-aware operating systems on a mobile Internet device. In some embodiments, the invention involves adding a capability for a platform owner or administrator to ensure that the firmware is only executed in an owner-authorized fashion, such as with signed components managed by a security processor. Embodiments may extend the Core Root of Trust for Measurement (CRTM), via use of a cryptographic unit coupled to the security processor in a mobile Internet device (MID) as a Root-of-Trust for Storage (RTS) Storage Root Key (SRK), into a unified extensible firmware interface (UEFI) Platform Initialization (PI) image authorization and boot manager. Other embodiments are described and claimed.
1. A system for secure booting on a mobile platform, comprising: a host processor configured to execute a host operating system and host applications; firmware for booting the host processor, the firmware configured to use one or more signing keys during boot, each signing key being associated with a software image to be loaded onto the platform during boot; and a security processor on the platform, the security processor communicatively coupled to a secure memory store that is inaccessible to the firmware and to other host processor applications, the security processor configured to manage the one or more signing keys to control image loading during boot.

2. The system of claim 1, wherein the secure memory store resides in non-volatile memory (NVM) storage coupled to the security processor.

3. The system of claim 1, wherein the security processor resides in a chipset coupled to a cryptographic core, the cryptographic core configured to assist in verifying digital signatures.

4. The system of claim 3, further comprising: a public key coupled to the chipset on the platform; and a certificate database stored in the secure memory store, wherein the certificate database includes a plurality of certificates, each certificate corresponding to one of a plurality of software images executable by the host processor, and wherein the security processor is configured to verify each software image to be loaded on the host processor against the corresponding certificate in the certificate database and a digital signature embedded in the software image, the verification using the public key coupled to the chipset.

5. The system of claim 4, further comprising: means for taking ownership of the mobile platform by a platform administrator; and means for enrolling credentials in the certificate database, wherein the credentials include at least one of a platform credential and a third-party credential.

6. The system of claim 4, wherein the software images are compatible with a Unified Extensible Firmware Interface (UEFI) architecture.

7. The system of claim 1, wherein the firmware will not load or launch a software image if the signing key associated with the software image is not validated.

8. The system of claim 7, wherein the validation failure is at least one of an expired certificate, a missing certificate, or a revoked certificate.

9. The system of claim 1, wherein the signing keys comprise at least one of a platform key, a protected-variable key, or a public key.

10. The system of claim 9, wherein the one or more signing keys form a hierarchy of signing keys in which a higher-level key protects a lower-level key.

11. The system of claim 10, wherein the platform key is higher in the hierarchy than the protected-variable keys, and the protected-variable keys are higher in the hierarchy than the public keys, each public key being associated with a software image to be loaded during boot.

12. The system of claim 1, wherein the system has wireless communication capabilities configured to allow a remote platform administrator to update a certificate database coupled to the security processor.

13. A method for secure booting on a mobile platform, comprising: initiating a secure boot of a host processor on the platform; determining, by a security processor on the platform, whether a boot module is digitally signed and authorized to be loaded on the host processor; when the boot module is digitally signed and authorized, loading and executing the boot module on the host processor, determining by the security processor whether each of a plurality of software images is authorized to be loaded after the boot module is loaded on the host processor, and loading one of the plurality of software images for execution on the host processor when that software image is authorized; and when the boot module is not digitally signed and authorized, performing at least one of having a platform administrator authorize the boot image or failing to boot the platform, and when one of the plurality of software images is not authorized, failing to load that software image on the host processor.

14. The method of claim 13, wherein the security processor has wireless communication capabilities, the method further comprising managing, by the security processor, credentials through wireless communication with a remote administrator having information about the credentials, the credentials being stored in a certificate database in non-volatile memory accessible to the security processor, the non-volatile memory being inaccessible to the host processor.

15. The method of claim 13, wherein determining whether the boot module is digitally signed and authorized to be loaded on the host processor further comprises: determining whether the boot module has a corresponding image credential in the certificate database; and determining whether the boot module's image credential verifies against the image credential in the certificate database.

16. The method of claim 15, wherein determining, by the security processor, whether the plurality of software images may be loaded after the boot module is authorized to be loaded on the host processor comprises: determining whether each of the software images has a corresponding image credential in the certificate database; and determining whether each software image's credential verifies against the image credential in the certificate database.

17. The method of claim 13, further comprising: verifying the digital signatures in the boot module and software images by a cryptographic core residing in the same chipset as the security processor.

18. An apparatus for secure booting on a mobile platform, comprising: means for initiating a secure boot of a host processor on the platform; means for determining, by a security processor on the platform, whether a boot module is digitally signed and authorized to be loaded on the host processor; means for loading and executing the boot module on the host processor when the boot module is digitally signed and authorized, means for determining by the security processor whether each of a plurality of software images is authorized to be loaded after the boot module is loaded on the host processor, and means for loading one of the plurality of software images for execution on the host processor when that software image is authorized; and means for performing at least one of having a platform administrator authorize the boot image or failing to boot the platform when the boot module is not digitally signed and authorized, and for failing to load one of the plurality of software images on the host processor when that software image is not authorized.

19. The apparatus of claim 18, wherein the security processor has wireless communication capabilities, further comprising means for managing, by the security processor, credentials in a certificate database through wireless communication with a remote administrator having information about the credentials, wherein the certificate database is stored in non-volatile memory accessible to the security processor, the non-volatile memory being inaccessible to the host processor.

20. The apparatus of claim 18, wherein the means for determining whether the boot module is digitally signed and authorized to be loaded on the host processor further comprises: means for determining whether the boot module has a corresponding image credential in the certificate database; and means for determining whether the boot module's image credential verifies against the image credential in the certificate database.

21. The apparatus of claim 20, wherein the means for determining, by the security processor, whether the plurality of software images may be loaded after the boot module is authorized further comprises: means for determining whether each of the software images has a corresponding image credential in the certificate database; and means for determining whether each software image's credential verifies against the image credential in the certificate database.

22. The apparatus of claim 18, further comprising: means for verifying the digital signatures in the boot module and software images by a cryptographic core residing in the same chipset as the security processor.
System and method for securely booting UEFI firmware and a UEFI-aware operating system on a mobile Internet device

Copyright Notice

This document contains copyrighted material. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all rights whatsoever under copyright.

Field of the Invention

Embodiments of the present invention generally relate to mobile computing platforms. More specifically, embodiments of the present invention add a capability for a platform owner or administrator to ensure that firmware is only executed in an owner-authorized manner, such as with signed components. Embodiments may extend the Core Root of Trust for Measurement (CRTM), by using a cryptographic coprocessor in a mobile device as the Root-of-Trust for Storage (RTS) Storage Root Key (SRK), into a Unified Extensible Firmware Interface (UEFI) Platform Initialization (PI) image authorization and boot manager.

Background Information

Various mechanisms exist for secure booting. The Unified Extensible Firmware Interface (UEFI) specification defines a new interface model between the operating system and platform firmware. The interface consists of data tables containing platform-specific information, plus boot and runtime service calls available to the operating system and its loader. Together they provide a standard environment for booting the operating system and running pre-boot applications. More information about UEFI can be found at the URL www*uefi*org/home on the public Internet. (Note that periods are replaced by asterisks in this document to prevent inadvertent hyperlinks.) The UEFI standard can be used to assist the secure boot of a platform.

Chapter 26 of UEFI Specification 2.1 describes the secure boot protocol. The defined protocol provides a specific device path to access common authentication information. This protocol can be used on any device handle to obtain information associated with a physical or logical device. Public keys and certificates can be retained in the firmware and checked against the digital signatures of third-party (U)EFI drivers and operating system (OS) loaders. Binding the public keys to the platform, however, remains a deployment issue: the security is only as good as the platform's secure storage of the public keys (i.e., the notorious "key management problem"). Revoking a public key or certificate at boot time is not possible, because the early boot environment cannot access the network and obtain a certificate revocation list (CRL) from a server. A fake loader could be inserted into the platform to bypass security. Therefore, this method of secure booting is still vulnerable to attack during boot.

Mobile devices, and more specifically mobile Internet devices (MIDs), have become common. There are various mechanisms for booting a mobile device, which may differ from the methods used to boot a desktop or laptop system. On desktop and server platforms, a Trusted Platform Module (TPM) component can be used to assist in secure booting. The TPM is a chip specified by the Trusted Computing Group (TCG), whose latest approved variant is version 1.2. When these types of platforms are combined with processor/chipset technologies such as AMD's Presidio with its SKINIT instruction, or Intel's Trusted Execution Technology (TXT) with its SENTER instruction, a measured launch can protect the firmware boot on these types of platforms.
However, MID processors do not support Trusted Execution Technology (TXT) or a TCG 1.2 TPM, and therefore require a "secure boot" of the firmware and a root of trust in the platform as part of the operating system (OS) bootstrap. The firmware boot on the MID needs to be protected before the operating system starts, for added security. This is especially true because high-value content such as music and other multimedia is available on MID systems and requires the stronger protection demanded by content providers.

Brief Description of the Drawings

The features and advantages of the present invention will become apparent from the following detailed description of the present invention and the accompanying drawings, in which:

FIG. 1 is a block diagram illustrating a hierarchy of signing keys for ensuring boot and system integrity in a system using signing techniques;

FIG. 2 is a diagram illustrating the flow when a platform owner takes ownership and generates a platform credential by means of a security processor, according to an embodiment of the present invention;

FIG. 3 is a flowchart illustrating a method for taking ownership and enrolling a platform credential, according to an embodiment of the present invention;

FIG. 4 illustrates exemplary C code for implementing an embodiment of the present invention;

FIG. 5 is a block diagram illustrating an exemplary structure of a certificate database, according to an embodiment of the present invention;

FIG. 6 is a flowchart illustrating a method for a platform owner to enroll a third-party authentication credential, according to an embodiment of the present invention;

FIG. 7 is a flowchart illustrating a method for a platform owner to enroll a digital signature, according to an embodiment of the present invention;

FIG. 8 is a flowchart illustrating an exemplary method for authorizing a UEFI executable, according to an embodiment of the present invention;

FIG. 9 is a block diagram illustrating a platform having a main processor element and a security processing chipset, according to an embodiment of the present invention; and

FIG. 10 is a block diagram of an exemplary cryptographic unit with an embedded security processor, according to an embodiment of the present invention.

Detailed Description

Embodiments of the present invention are systems and methods relating to mobile devices. For illustration, embodiments of the invention are described in relation to a mobile Internet device (MID). It should be understood, however, that embodiments of the present invention are also applicable to cell phones, portable MP3 players, personal digital assistants (PDAs), or other mobile devices that do not have Internet access. Embodiments of the invention add the ability for a platform owner or administrator to ensure that the firmware is only executed in an owner-authorized manner, such as with signed components. Embodiments can extend the Core Root of Trust for Measurement (CRTM), by using a cryptographic coprocessor in a mobile device as the Root-of-Trust for Storage (RTS) Storage Root Key (SRK), into a Unified Extensible Firmware Interface (UEFI) Platform Initialization (PI) image authorization and boot manager.

Reference in this specification to "one embodiment" or "an embodiment" of the invention means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention.
Thus, the appearances of the phrase "in one embodiment" in various places throughout the specification are not necessarily all referring to the same embodiment.

For purposes of illustration, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that embodiments of the present invention may be practiced without the specific details provided herein. Moreover, well-known features are omitted or simplified so as not to obscure the present invention. Various examples are given throughout this description; these are merely descriptions of specific embodiments of the invention, and the scope of the invention is not limited to the examples given.

Embodiments of the present invention employ an opaque "OEM boot module" and apply UEFI secure boot and policy-based dispatch on top of Platform Initialization (PI) firmware to implement a security solution that supports the original equipment manufacturers (OEMs) and original design manufacturers (ODMs) that sell MIDs. Specifically, the technique involves securely booting platform drivers, third-party option ROMs (O-ROMs), and OS loaders, such as Winload.efi (for Microsoft Windows) and eLilo.efi (for Linux); eLilo is the standard Linux boot loader for EFI-based PC hardware. These embodiments eliminate the danger of exposing the private key when the platform owner must sign a payload before invoking various authenticated services. These embodiments also complement Single Sign-On (SSO) scenarios and one-touch provisioning services. For example, the platform owner can use the same authorization data to take ownership in both the OS and pre-OS phases.

Embodiments of the invention use a security processor on the MID to manage the signing private key. This allows UEFI secure boot, with a security-processor policy engine, to be seamlessly integrated into the manageability and service provisioning infrastructure.

Referring to FIG. 9, a block diagram illustrating a platform 900 having a main, or host, processor element 910 and a security processing chipset 930 is shown. In an embodiment of the invention, the platform 900 has a main runtime environment 910, in which the main processor 911 requires secure OS booting to protect against malware. The processor 911 is communicatively coupled to the system memory 905. The security processor chipset unit 930 may have a security engine or processor 931, a system controller (ARC) 933 (which completes some platform initialization before loading the x86 firmware), and a dedicated read-only memory (ROM) 935. The ROM can be used to further protect code and data to ensure secure boot; because it is read-only, its code and data cannot be changed by malicious tampering. The flash memory 920 is coupled to the security processor and holds a boot software image 921 for booting the main processor 911. It is important to confirm that the boot software image to be loaded on the main processor 911 is authorized and free of malware.

In an embodiment, verified boot may be implemented using the security chipset 930. The security engine 931 is configured to implement the key and signature processes described below. Since the security processor 931 is a dedicated device and cannot be accessed by the main processor 911, the security engine is protected from tampering. The security engine is configured to validate the main processor's boot ROM before allowing the main processor to boot.
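The verified-boot sequence just described (the security engine 931 checks the boot image 921 held in flash 920 before the main processor 911 is allowed to run it) can be sketched in C as follows. The hash and signature primitives, buffer layout, and function names here are illustrative assumptions, not the actual firmware interface; the essential point is that the host is released only after the signature check against the chipset-held public key succeeds.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical primitives assumed to exist in the security engine. */
extern void sha256(const uint8_t *data, size_t len, uint8_t digest[32]);
extern bool rsa_verify(const uint8_t *pubkey, const uint8_t digest[32],
                       const uint8_t *signature, size_t sig_len);
extern void copy_to_host_sram(const uint8_t *image, size_t len);
extern void release_host_processor(void);

/* Validate the boot module held in flash, then let the host boot.
 * The public key is the OEM key stored in the chipset's NVM. */
bool verified_boot(const uint8_t *chipset_pubkey,
                   const uint8_t *boot_module, size_t module_len,
                   const uint8_t *signature, size_t sig_len)
{
    uint8_t digest[32];

    sha256(boot_module, module_len, digest);
    if (!rsa_verify(chipset_pubkey, digest, signature, sig_len))
        return false;                /* do not release the host */

    copy_to_host_sram(boot_module, module_len);
    release_host_processor();        /* host begins PEI/DXE */
    return true;
}
```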
In existing systems, such as Intel Core™ 2 Duo-based personal computers (PCs), the main processor simply passes boot control to whatever is in the flash part (the boot ROM) without validation. In some desktop, laptop, or other full-performance PCs, TXT technology can intervene to validate the boot ROM and subsequent processing. The platform enters a secure state later in the boot process; specifically, the TXT microcode launched by the SENTER instruction synchronizes all processors on the platform and allows the measured launch environment (MLE) to be independent of all software running before it, including option ROM software and the platform BIOS. However, TXT processing is not available on MIDs and other low-cost platforms. In an embodiment, a security processor may be added to the MID to validate the boot ROM, as a replacement for the TXT of other systems. In an embodiment with a MID security processor, however, only the OEM boot module can be verified in this way.

The OEM boot module 951 is essentially the UEFI firmware. The security processor 930 validates the OEM boot module 951. Once it is validated, the security processor 930 copies the OEM boot module 951 to the SRAM 905 of the main processor 911. The OEM boot module can include the pre-EFI initialization (PEI) and driver execution environment (DXE) phases. These phases are required to run before starting the operating system (OS). In some systems, once the OEM boot module has executed, the OS loader 953 is started from the PEI stage. The OS loader 953 starts the trusted OS 955. However, the trust in the OS is assumed in these systems, not actually verified. Beyond verifying only the OEM boot module 951, embodiments of the present invention allow the OS loader and other EFI modules to be validated as well. This provides the ability to store multiple signed OS instances and to verify signed applications.

It should be understood that in existing MID systems, the boot module, the OS loader, and the OS code may be stored as a single executable image. Implementing the UEFI architecture on the MID allows a separate image for each stage of boot and operating system loading. Embodiments of the present invention are built on the UEFI architecture, so each individual image has its own digital signature and can be independently verified. This also allows individual components to be updated or changed when needed.

In some embodiments, it may be desirable to provide the ability to boot an alternative operating system on the MID. For example, it is more convenient for a cellular phone user if the subscription can be changed between cellular carriers on the same smart phone, provided that multiple operating systems have been loaded onto the phone, or can be remotely loaded and properly authorized. In this case, the user simply requests a change to a new carrier, and the new carrier reboots the MID (smartphone) with its custom software, instead of the user being forced to purchase a new phone. It may also be desirable to boot different operating systems based on which applications run better on the device, even when staying on the same carrier. In any case, it is important that the new operating system be validated and authorized before it is booted.
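Because UEFI gives each boot stage and OS loader its own independently signed image, booting an alternative operating system reduces to verifying the selected loader image against its enrolled credential before dispatch. A minimal C sketch follows, assuming hypothetical cert_db_lookup() and verify_image() helpers backed by the security processor; these names do not appear in this description.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    const uint8_t *image;      /* UEFI OS loader image            */
    size_t         image_len;
    const char    *name;       /* e.g., a carrier-specific loader */
} os_image_t;

/* Hypothetical helpers: look up the image's certificate in the
 * security processor's certificate database, and verify the digital
 * signature embedded in the image against it. */
extern bool cert_db_lookup(const char *name, uint8_t cert_pubkey[256]);
extern bool verify_image(const uint8_t *image, size_t len,
                         const uint8_t cert_pubkey[256]);

/* Return the selected OS image only if it is authorized to boot. */
const os_image_t *select_os(const os_image_t *images, size_t count,
                            const char *requested)
{
    for (size_t i = 0; i < count; i++) {
        if (strcmp(images[i].name, requested) != 0)
            continue;
        uint8_t pubkey[256];
        if (!cert_db_lookup(images[i].name, pubkey))
            return NULL;   /* no credential: refuse to load */
        if (!verify_image(images[i].image, images[i].image_len, pubkey))
            return NULL;   /* bad signature: refuse to load */
        return &images[i];
    }
    return NULL;           /* requested OS not installed */
}
```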
It will be understood that, even though a mobile Internet device is referenced for illustrative purposes, other applications, such as camera capability or music storage and playback, may be installed on a mobile device without Internet access and the invention still practiced.

When the MID uses the UEFI architecture, an alternative operating system can use a UEFI OS loader; the UEFI firmware can communicate with the security processor and store the authenticated variables and certificates used to check signed EFI drivers and loaders in non-volatile memory. The authenticated variables may be stored in the security processor's non-volatile memory and/or in a secure area of the platform flash memory, signed by the security processor's RSA engine. The security processor guarantees that the driver execution environment (DXE) and pre-EFI initialization (PEI) code is correct and uncorrupted. The DXE phase can then use the capabilities of the security processor to manage the credentials and certificates for UEFI secure boot.

The security processor has its own memory store, which is not accessible by the main processor. Keys and certificates can be safely stored in the security processor's store without risk of being tampered with by malicious code executing on the main processor. In another embodiment, the keys and certificates may be stored in a secure area of the platform flash memory that is not accessible by the host processor; this partitioning is supported by the platform chipset and is accessible only by the security processor.

To verify the OEM boot module, the OEM typically stores a public key in the MID's chipset. The OEM boot module is signed and can be checked with this public key. When the security processor validates the signed module, the chipset releases the module to the host processor to begin booting.

What is needed is to maintain a secure multimedia stack 961 (i.e., media player software such as Helix from RealNetworks), a secure manageability stack 963, and verified signed applications 965 among the various applications to be executed on the MID; if the boot environment is not trusted, the platform cannot establish trust in these applications, up to and including the OS loader. In addition to the OEM boot module, embodiments of the invention ensure that these applications are verified. Thus, embodiments of the present invention ensure that all modules executing on the MID main processor have been verified and authenticated using key and signature techniques, where the keys are managed by the security processor.

Referring now to FIG. 10, a cryptographic unit with an embedded security processor according to an embodiment of the present invention is shown. The crypto unit 1000 has an embedded security processor 931. The security processor 931 is communicatively coupled to both the ROM 935 and the system RAM 937. The crypto unit 1000 may further include a security debug manager 1010. The security debug manager may be coupled to a Joint Test Action Group (JTAG) standard test access port 1011, with a boundary-scan architecture for testing printed circuit boards using boundary scan. The cryptographic core 1020 is generally fixed-function hardware and may be coupled to a ring oscillator 1021. The crypto unit 1000 has its own clock 1030 and power source 1040. The clock 1030 may be driven by, or synchronized with, an external clock 1031.
The cryptographic core 1020 can accelerate the modular arithmetic used by the RSA algorithm, or other asymmetric and symmetric cryptographic algorithms, for signing and verification. Because of the cost of performing these functions on x86 processors, fixed-function hardware is often used for cryptography in MID systems, where cost is measured in price, size, efficiency, and power requirements. The crypto unit 1000 can be powered or reset by the external power source 1041 and reset unit 1043. The crypto unit 1000 may have a non-volatile memory (NVM) manager 1050 to control the non-volatile memory (NVM) unit 1051.

The crypto unit 1000 may have a system interface 1070 with an advanced high-performance bus (AHB) master module and an advanced peripheral bus (APB) slave module. The AHB interconnects the intelligent security processor and the host, and can hold several outstanding commands. Some of these commands, such as cryptographic operations, can be sent to the APB to be processed in fixed-function hardware. Having fixed-function hardware handle cryptography is important because purpose-built cryptographic circuits are far more efficient, in millions of instructions per second (MIPS) per watt, than cryptography on a general-purpose processor.

In one embodiment, the certificates used to ensure that the UEFI boot is verified and authenticated may be stored in a non-volatile memory 1051 accessible to the security processor. Certificates, such as those that follow the X.509v2 standard, each store an n-tuple of information. This information may include the public key, the date/time range during which the certificate is valid, and a certificate signature from a trusted third party (TTP) such as Verisign or the OS vendor. In this way, the public key 1053 can be stored in the chipset NVM 1051 of the crypto unit 1000 to assist verification. The process used for authenticated boot is discussed further below.
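For illustration only, a certificate record of the kind just described might be laid out in the crypto unit's NVM as in the following C sketch. The member names track the n-tuple above (public key, validity window, TTP signature); the sizes and names are assumptions, not taken from any specification.

#include <stdint.h>
#include <time.h>

#define RSA2048_BYTES 256   /* assumed key/signature size */

/* Hypothetical layout for one certificate record in chipset NVM 1051. */
typedef struct {
    uint8_t public_key[RSA2048_BYTES];     /* e.g., public key 1053          */
    time_t  valid_from;                    /* start of validity window       */
    time_t  valid_until;                   /* end of validity window         */
    uint8_t ttp_signature[RSA2048_BYTES];  /* signature from the trusted     */
                                           /* third party (e.g., OS vendor)  */
} nvm_certificate_t;

/* A record is usable only inside its validity window. */
static int cert_time_valid(const nvm_certificate_t *c, time_t now)
{
    return now >= c->valid_from && now <= c->valid_until;
}

int main(void)
{
    nvm_certificate_t c = { .valid_from = 0, .valid_until = 2000000000 };
    return cert_time_valid(&c, time(NULL)) ? 0 : 1;
}

Because only the security processor can address this NVM, such records cannot be tampered with by code running on the main processor.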
In an embodiment, the certificate signing and keying techniques used are similar to those used in existing systems that use signing keys. However, the existing methods cannot be used directly on a MID main processor without the risk of tampering.

FIG. 1 illustrates a hierarchical structure of signature keys for ensuring boot and system integrity in a system currently using signature technology. The signature hierarchy is a tree structure with roots and leaves. Keys outlined with a dashed line may reside in write-protected storage. In an embodiment of the present invention, the protected storage can be accessed by the security device, but cannot be accessed by the main processor or the device operating system. In exemplary UEFI embodiments, PV-00, PV-01, PV-30, and PV-31 (101a-d) represent keys used to protect UEFI protected variables. These protected variables (PVs) point to signed UEFI loaders and drivers. KeK0pub, KeK1pub, KeK2pub, and KeK3pub (103a-d) represent the public keys stored in the cryptographic core. The UEFI firmware uses the public keys 103 to check the digital signatures embedded in UEFI drivers and loaders, sending a command to the cryptographic core to determine whether a signature is correct. Some keys may not correspond to protected variables. Each operating system loader typically has its own key. For example, a platform may have both a Windows loader and a Linux loader. Both need to be protected, and each loader will have its own public key. Each OS vendor will typically digitally sign its loader products.

A platform key (PK) is a key given to the platform by the platform owner, such as the company's information technology (IT) department. The platform uses the PK to encrypt/sign all other keys. For example, a key exchange key (KEK) 103 from an OS vendor or an independent hardware vendor (IHV) is encrypted with the PK 105. In other words, the platform firmware uses the PK to secure the KEKs.

The Platform Administrator 107 represents the administrator or IT professional of the system or platform. The administrator usually enables the key/signature/encryption functions and installs the PK 105. In some systems, the platform administrator 107 can remotely install and launch the boot function from a management console and send commands to the UEFI machine over the network, such as through Intel Active Management Technology (iAMT) networking. For cellular phone MID systems, the platform administrator can enable the keys through wireless cellular communication or through an Internet connection, using a trusted channel to the management console (such as TLS/SSL), or using other technologies such as OMA (OpenMobileAlliance.org) protocols/signatures.

A remote server 110 may hold public keys, such as a platform key 113a or an OS loader key 113b, along with certificates and a revocation list, in an Active Directory 111. The Active Directory 111 is the corporate registry; it holds information about the platforms it manages. A list of good/valid keys can be stored in the Active Directory 111. In other systems, a manageability engine (ME) or Intel Active Management Technology (iAMT) device accesses the Active Directory 111 on the remote server 110 to determine whether a key is valid or has been revoked. In the alternative, the ME may access other remote servers or networks, such as through the public Internet, to retrieve a list of good or revoked keys. In an embodiment of the invention, a security processor is used to manage keys and certificates. Although the list of certificates can be stored in non-volatile memory (NVM) that is accessible by the security processor and not accessible by the main processor, a certificate can be updated or revoked at runtime through Internet access in the same manner as described above, or the platform administrator can push an updated certificate set to the MID through the wireless communication path.

In an embodiment, the security processor is an active hardware chip on the MID. UEFI "Secure Boot" adds a root of trust for enforcement and verification (RTE/RTV) that makes validation mandatory. In effect, if the software state does not meet some integrity metric, such as a hash or digital signature in a whitelist, the RTE and "Secure Boot" can abort the boot. UEFI allows both hashes and signatures but advocates the latter, because the list of possible hashes is unbounded, which would be a management nightmare; public keys allow a layer of indirection that maps keys to a small number of trusted sources, thus mitigating the management issues associated with deployment.

The problems solved by the embodiments of the present invention include: (1) having a single policy mechanism to manage the certification of third-party UEFI drivers, applications, and OS loaders; and (2) authorizing third-party UEFI drivers, applications, and OS loaders to execute once the platform owner takes ownership of the system, whether before or within the OS.

The security processor allows a trust relationship between the platform owner, the platform firmware, and third parties (i.e., OSVs, OEMs, etc.). There are two types of credentials that can be used to describe these trust relationships.
First, platform credentials establish a trust relationship between the platform owner and the firmware. The platform owner creates the platform credential as a root credential containing an asymmetric key pair, which is used to take ownership and to register all other credentials. Second, third-party credentials establish a trust relationship between third-party vendors and the firmware. The platform owner can register credentials for trusted third-party vendors, which can then be used to authorize the execution of third-party executable programs. Such credentials may contain both a public key generated by the vendor and vendor-specific information.

Because the platform credential contains a public key and is used to register third-party credentials, the local machine requires explicit private-key operations (i.e., signing payloads). On a MID, these two issues are difficult to handle in the way a desktop system or other existing system would. Thus, with respect to the first problem, embodiments of the present invention provide an innovative method that uses the security processor as the root of trust for storage (RTS) to generate the platform credential and store the private key securely. For the latter problem, the embodiments use the security processor to perform the signing operation internally, which never exposes any private keys.

In one embodiment, two operating modes are defined from the perspective of the platform owner: a SETUP mode and a USER mode. Security policies are enforced in the latter mode. Specifically, in SETUP mode the machine is open to receive provisioned certificates, while in USER mode the UEFI firmware becomes a root of trust for verification (RTV) and will only invoke a UEFI OS loader or driver that is digitally signed with the key in a verification certificate installed on the MID device.

FIG. 2 illustrates the process of obtaining ownership by the platform owner and generating the platform credential by means of the security processor, in one exemplary implementation. SETUP mode 201 takes ownership and registers the platform credential, passing control to USER mode 203. USER mode 203 registers third-party credentials. USER mode 203 can also relinquish ownership back to SETUP mode 201.

This latter operation can occur when the MID system has been attacked by malware and the UEFI firmware will no longer boot the machine, leaving the machine functioning as little more than a doorstop. The natural response is for the device owner to send it back to the carrier/vendor. Back at the factory, a specific hardware strap/stimulus or some other mechanism can be used to move the machine from USER mode back to SETUP mode, to re-provision the credentials and/or software to be loaded onto the machine.
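The two modes and the FIG. 2 transitions amount to a small state machine, sketched below in C. The names are illustrative only; in a real device each transition would be gated by the security processor's ownership and password checks.

#include <stdio.h>

/* SETUP takes ownership and registers the platform credential, then hands
 * control to USER; USER may relinquish ownership back to SETUP (e.g., for
 * factory re-provisioning). */
typedef enum { MODE_SETUP, MODE_USER } owner_mode_t;

static owner_mode_t take_ownership(owner_mode_t m)
{
    /* 201 -> 203: platform credential registered, enter USER mode */
    return (m == MODE_SETUP) ? MODE_USER : m;
}

static owner_mode_t relinquish_ownership(owner_mode_t m)
{
    /* 203 -> 201: clear ownership, return to SETUP for re-provisioning */
    return (m == MODE_USER) ? MODE_SETUP : m;
}

int main(void)
{
    owner_mode_t m = MODE_SETUP;
    m = take_ownership(m);        /* platform credential registered        */
    /* ... USER mode: third-party credentials registered, policy enforced  */
    m = relinquish_ownership(m);  /* e.g., sent back to the carrier/vendor */
    printf("final mode: %s\n", m == MODE_SETUP ? "SETUP" : "USER");
    return 0;
}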
FIG. 3 is a flowchart illustrating a method of obtaining ownership and registering a platform credential according to an embodiment of the present invention. It illustrates one possible method of generating or updating the certificate database on the device. The platform administrator 310, which in the case of a cellular telephone may be the cellular telephone company, determines at block 301 whether ownership provisioning is necessary. If so, the administrator can assert physical presence at block 303. This is usually done during manufacturing, before the device leaves the administrator's control. In some embodiments, credentials may be registered using out-of-band presence rather than physical presence.

A SecProcForceClear command can be sent to the security processor to clear ownership. Ownership, along with the platform key (PK) secret, is then cleared at block 305, and the security processor enters SETUP mode. Once ownership has been cleared and a new owner has taken ownership, an administrative password may be set at block 307. A key pair may now be created as part of the platform credential at block 309. At block 311, a key pair 340 may be generated, comprising the platform public key (PKpub) and a cryptographic wrapping, ESRK(PKpri), of the platform private key (PKpri). The term ESRK(PKpri) refers to a cryptographic operation on the platform's private key. Encrypting a private key (PKpri) that never leaves the security processor solves a classic public-key cryptography problem: if someone obtains your private key, they own the machine. These operations on the private key are proxied by the security processor, so the x86 UEFI code never has to handle the private key itself. The key pair may be stored in the non-volatile storage 330. The SecProcCreateKey command may be executed in the security processor 320 to generate the key pair. Once the key pair is created, the security processor enters USER mode. Other credentials may be registered at block 313.

FIG. 4 illustrates exemplary C code for implementing an embodiment of the present invention. Each image must be authenticated before it is allowed to be loaded and launched. In embodiments, a UEFI image is typically a Portable Executable and Common Object File Format (PE/COFF) executable image. Each PE/COFF image has a section called the security directory. The security directory contains the digital signature of the image and the associated public key. The hash of the PE/COFF image and the associated public key can be passed to the security processor to validate the image. The security processor can retrieve the appropriate certificate from the certificate database and use the cryptographic hardware functions on the chipset to verify the image. Referring to FIG. 4, for each UEFI image, the function AuthenticateImageWithSecProc() 401 is executed and a determination 403 is made as to whether a security violation (EFI_SECURITY_VIOLATION) has been returned. If so, the next-image (NextImage()) 405 function is executed to authenticate the next image. If the boot or firmware image is authenticated, it is launched by executing the LaunchImage() function 407. Authenticating the image with the security processor (AuthenticateImageWithSecProc()) 401 includes determining whether the image credential is in the third-party certificate database 409 and whether the image credential has been verified by the third-party certificate database 411. If so, the function returns success 413. If either of these checks fails, the function returns a security violation (EFI_SECURITY_VIOLATION) 415.
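An outline of that FIG. 4 loop is reproduced below as compilable C. The function and status names (AuthenticateImageWithSecProc, EFI_SECURITY_VIOLATION, LaunchImage, and the NextImage step realized by the loop) come from the description above; the database checks are simulated with simple flags and the status value is illustrative, so this is a sketch of the control flow rather than the actual firmware listing.

#include <stdio.h>

#define EFI_SUCCESS            0
#define EFI_SECURITY_VIOLATION 26   /* illustrative status value */

typedef struct {
    const char *name;
    int in_cert_database;        /* block 409: credential present?  */
    int verified_by_database;    /* block 411: signature verifies?  */
} UefiImage;

/* Blocks 409/411: both checks must pass or a violation is returned. */
static int AuthenticateImageWithSecProc(const UefiImage *img)
{
    if (img->in_cert_database && img->verified_by_database)
        return EFI_SUCCESS;                 /* block 413 */
    return EFI_SECURITY_VIOLATION;          /* block 415 */
}

static void LaunchImage(const UefiImage *img)   /* block 407 */
{
    printf("launching %s\n", img->name);
}

int main(void)
{
    UefiImage images[] = {
        { "OsLoader.efi", 1, 1 },
        { "Rootkit.efi",  0, 0 },
    };
    /* For each image: authenticate, and on violation skip to the next
     * image (NextImage(), block 405); otherwise launch it. */
    for (unsigned i = 0; i < sizeof images / sizeof images[0]; i++) {
        if (AuthenticateImageWithSecProc(&images[i]) == EFI_SECURITY_VIOLATION)
            continue;
        LaunchImage(&images[i]);
    }
    return 0;
}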
These credentials can be stored as the UEFI certificate database (EFI_CERTIFICATE_DATABASE) type 440, which is kept in persistent, or non-volatile, storage. Embodiments of the invention can also be used to authorize network-loaded pre-boot execution environment (PXE) images. The security processor can use the provided NVM and authenticated access to secure the storage.

It should be understood that various operating systems and loaders can use different formats; that is, they are not always PE/COFF. The security processor can be configured to accept all allowed operating system formats. Regardless of the format, the image will contain a digital signature and public key that match a certificate database accessible by the security processor and not accessible by the main processor on the device.

An example of a data structure that can be used for the certificate database is also shown in FIG. 4. EFI_CERTIFICATE_DATABASE 440 can contain a database size, a certificate list count, and certificate list data. The certificate list (EFI_CERTIFICATE_LIST) data structure 450 may include a certificate list size, a certificate count, a certificate type, a certificate header size, a certificate header, and the certificates. The certificate data structure contains identifiers and data; the identifier may be a globally unique identifier (GUID). The certificate data (EFI_CERTIFICATE_DATA) 460 may be a structure with a GUID and a data field of any size.
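Rendered as C, the three structures described above might look like the following sketch. The member names track the fields listed for 440, 450, and 460, but the field widths, ordering, and the flexible-array layout are assumptions made for illustration.

#include <stdint.h>

typedef struct {
    uint8_t bytes[16];               /* globally unique identifier (GUID) */
} EFI_GUID;

typedef struct {
    EFI_GUID  CertId;                /* identifier for this certificate   */
    uint8_t   CertData[];            /* data field of any size            */
} EFI_CERTIFICATE_DATA;              /* 460 */

typedef struct {
    uint32_t  CertListSize;          /* total bytes in this list          */
    uint32_t  CertCount;             /* certificates in this list         */
    EFI_GUID  CertType;              /* kind of certificate/signature     */
    uint32_t  CertHeaderSize;        /* bytes of the per-list header      */
    /* certificate header, then CertCount EFI_CERTIFICATE_DATA entries    */
} EFI_CERTIFICATE_LIST;              /* 450 */

typedef struct {
    uint32_t  DatabaseSize;          /* total bytes, for NVM allocation   */
    uint32_t  CertListCount;         /* number of certificate lists       */
    /* CertListCount variable-length certificate lists follow             */
} EFI_CERTIFICATE_DATABASE;          /* 440 */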
FIG. 5 illustrates the structure of a UEFI certificate database 500 according to an embodiment of the present invention more visually. The general structure of the database is shown on the left, with a header 501 and three certificate lists 503a-c. A certificate list 510 is shown on the right, with a list size 511, a certificate count 513, a type 515, a header size 517, a certificate size 519, a certificate header 520, and individual certificates 530 with identifiers 521a-n and data 523a-n.

FIG. 6 is a flowchart illustrating a method for a platform owner to register a third-party authentication credential according to an embodiment of the present invention. This registration can be used to authorize, at once, the execution of an associated collection of third-party executables, such as a credential used to authenticate all UEFI drivers provided by the OEM. At block 601, the platform administrator 310 first checks whether a platform key (PK) has been generated. At block 603, a password challenge is typically required to authenticate the administrator. At block 605, the administrator authorizes the signing of the third-party credential. The signing operation executes the create-key (SecProcCreateKey), sign (SecProcSign), and unload-key (SecProcUnloadKey) functions in the security processor 320. The appropriate encrypted storage root key (ESRK) 640 operations are serviced from the non-volatile storage 330 by the security processor. The third-party credential is then registered at block 607, and signatures are registered in the database at 609; specifically, registering a signature means storing the certificate, along with its public key, in a tamper-resistant location, or saving a hash of the executable program for later use in image verification.

FIG. 7 is a flowchart illustrating a method for registering a signature by a platform owner according to an embodiment of the present invention. This registration can be used to authorize the execution of an executable program independently of other executable programs. At block 701, the platform administrator 310 starts registration of a signature. At block 703, the signature is verified using the Security_Arch_Protocol. If the verification is successful, as determined at block 705, a password challenge may be performed with the administrator at block 709. If the verification is unsuccessful, a determination is made at block 707 as to whether the signature should be added anyway, using a platform vendor-specific policy. If not, registration exits without completing at block 740. If the signature is to be added anyway, processing continues with the password challenge at block 709. Once the administrator has entered the correct password, the security processor 320 loads the key (SecProcLoadKey), signs (SecProcSign), and then unloads the key (SecProcUnloadKey) at block 711. Then, at block 713, the signature is registered in the NVM 330 by setting a variable (the SetVariable function). The process completes at block 760. This is the process of performing administrative actions to add additional signatures to the database; it happens when the device owner wishes to start a new operating system loader or application whose certificate was not registered during device manufacture.

FIG. 8 is a flowchart illustrating an exemplary method of authorizing a UEFI executable program according to an embodiment of the present invention. The MID is powered on or reset at block 801. The initial key store is in place at block 803; this may be an NVM/database provisioned at the factory. At block 805, as described above, the security processor determines whether UEFI validation has succeeded by checking the signature against the public key. If the validation fails, a determination is made at block 809 as to whether the UEFI executable program is nevertheless authorized. An authorized application is a signed application that has an associated verification public key on the platform and whose digital signature in the UEFI image passes the verification test. A validation action may also be performed later, for example at block 823: the image may not yet be in the database, but during OS runtime the OS may communicate with a remote authority to query the status of the image, or query the user to determine whether the user wishes to register/run the image next time.

If the UEFI executable is not authorized, the next boot option may be tried at block 813. In some cases, this boot option may amount to a completely failed device boot. In other cases, the boot option can boot a management-mode OS or some reduced-function OS. As an alternative to trying the next boot option at block 813, the boot may be postponed until the platform administrator adds the UEFI signature to the system configuration table at block 815.

When the validation is successful, the UEFI executable program may then be launched at block 807. When the validation fails but authorization is granted, the UEFI executable program signature may be saved to the database 830 at block 811 before the executable program is launched at block 807. Once the OS has been started at block 821, a determination may be made at block 823 as to whether an OS application is to be validated as a UEFI executable program. If not, the process ends at block 850. If so, the UEFI executable program signature database is updated at block 825 and the process ends at 850.
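The FIG. 8 decision flow compresses into a short, runnable C sketch. Only the control flow follows the description above; the validation and authorization checks themselves are simulated with flags, and all names are illustrative.

#include <stdio.h>

typedef struct {
    const char *name;
    int signature_matches_key;   /* block 805: validation result        */
    int authorized;              /* block 809: signed + key on platform */
} BootImage;

static void save_signature_to_db(const BootImage *b)  /* block 811 */
{
    printf("saving signature of %s to database 830\n", b->name);
}

static int try_boot(const BootImage *b)
{
    if (!b->signature_matches_key) {          /* validation failed        */
        if (!b->authorized)
            return -1;                        /* try next boot option 813 */
        save_signature_to_db(b);              /* authorized: remember it  */
    }
    printf("launching %s (block 807)\n", b->name);
    return 0;
}

int main(void)
{
    BootImage options[] = {
        { "PrimaryLoader.efi",  0, 0 },   /* fails both checks */
        { "FallbackLoader.efi", 1, 1 },   /* validates cleanly */
    };
    for (unsigned i = 0; i < sizeof options / sizeof options[0]; i++)
        if (try_boot(&options[i]) == 0)
            return 0;
    printf("no bootable option; defer to administrator (block 815)\n");
    return 1;
}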
Mobile Internet device architectures can deliberately avoid "PC compatibility" in order to target the cellular market more closely. In doing so, they omit certain in-band processor trusted-platform technologies, such as TXT GETSEC instructions like SENTER. Instead, a dedicated integrated security processor can be selected for the MID architecture, as described above. This leaves a gap in the ecosystem between the MID and the "OEM boot module" to fill; the gap exists because BIOS vendors are accustomed to building boot code for PC/AT platforms, and by departing from PC/AT, traditional BIOS will no longer work on a MID. The UEFI platform initialization (PI) code for early initialization, and the modular, platform-independent design of the UEFI interface for OS loading, are advantageous here.

The UEFI and PI code can be targeted at this non-traditional (i.e., non-PC/AT) platform. With UEFI, more form factors can be supported, and the security introduced by UEFI Secure Boot can work closely with the security processor to maintain manufacturer trust into the runtime environment. Specifically, the security processor can be taught to understand the signed UEFI PI firmware volume, and the UEFI implementation in DXE can use the security processor to store the certificates (along with the public keys used for image verification) and to authenticate (i.e., run a SHA-like one-way hash function and an RSA-like digital signature algorithm over) UEFI OS loaders and drivers.

Embodiments of the present invention use the security processor as the root of trust for storage (RTS) to perform key management, such as key generation and storage, and certain cryptographic operations, such as payload signing, without the danger of exposing the private key, as described above. In existing systems, the use of authentication variables requires some kind of in-band code to sign the AuthInfo (authentication information) field of the authentication variable, which is dangerous when there is no shielded location. The security processor provides a shielded location and allows authentication variables to be signed on the platform itself; this is in contrast to scenarios in which signing must occur on a remote signing server off the machine. Such off-machine signing is inconvenient because the mobile device must then synchronize with the remote server whenever an update or management action occurs.

Embodiments of the present invention also establish a credential hierarchy by deploying different credentials in a top-down manner, in other words, from the platform credential to third-party authentication credentials and third-party executable program credentials. This eliminates the dependency on a single credential and also distinguishes the issuers of the credentials. Moreover, embodiments of the present invention complement the single sign-on scenario. For example, the platform owner can use the same authorization data to take ownership both in the OS and in the pre-OS boot phases.

In addition, embodiments of the present invention using a security processor may prohibit the execution of unauthorized code, avoiding the damage caused by running malicious software. Because the Itanium platform and MIDs do not have TXT or LT-SX, the embodiments of the present invention can be used to validate pre-OS executable programs, including the OS loader, so that malware cannot use the pre-OS environment as an attack vector.

Security processors are hardware-based security engines used by MIDs, for example for digital rights management (DRM), trusted boot, and secure storage. The security processor also provides hardware acceleration for cryptographic functions (symmetric and PKI), hashing functions, and attestation. The security processor parses a DRM license/rights object (RO) (specifying, for example, how long a movie can be watched or how long a song can be played) and extracts a key for content decryption, never exposing the key to system memory. The security processor may also use the key extracted from the DRM license/RO file to decrypt the DRM content.

The techniques described herein are not limited to any particular hardware or software configuration; they may find applicability in any computing, consumer electronics, or processing environment.
These techniques can be implemented in hardware, software, or a combination of both.

For simulation, program code may represent hardware using a hardware description language or another functional description language, which essentially provides a model of how the designed hardware is expected to perform. Program code may be assembly or machine language, or data that may be compiled and/or interpreted. Furthermore, it is common in the art to speak of software, in one form or another, as taking an action or causing a result. Such expressions are merely a shorthand way of stating that execution of program code by a processing system causes a processor to perform an action or produce a result.

Each program may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. However, programs may be implemented in assembly or machine language, if desired. In either case, the language may be compiled or interpreted.

Program instructions may be used to cause a general-purpose or special-purpose processing system that is programmed with the instructions to perform the operations described herein. Alternatively, the operations may be performed by specific hardware components that contain hardwired logic for performing the operations, or by any combination of programmed computer components and custom hardware components. The methods described herein may be provided as a computer program product that may include a machine-accessible medium having stored thereon instructions that may be used to program a processing system or other electronic device to perform the methods.

For example, program code or instructions may be stored in volatile and/or non-volatile memory, such as storage devices and/or an associated machine-readable or machine-accessible medium, including solid-state memory, hard drives, floppy disks, optical storage, tapes, flash memory, memory sticks, digital video disks, digital versatile discs (DVDs), etc., as well as more exotic media such as machine-accessible biological state preserving storage. A machine-readable medium may include any mechanism for storing, transmitting, or receiving information in a form readable by a machine, and the medium may include a tangible medium through which electrical, optical, acoustical, or other forms of propagated signals or carrier waves encoding the program code may pass, such as an antenna, optical fiber, communication interface, etc. Program code may be transmitted in the form of packets, serial data, parallel data, propagated signals, etc., and may be used in a compressed or encrypted format.

Program code may be implemented in programs executing on programmable machines such as mobile or stationary computers, personal digital assistants, cellular phones and pagers, consumer electronics devices (including DVD players, personal video recorders, personal video players, satellite receivers, stereo receivers, cable TV receivers), and other electronic devices, each including a processor, volatile and/or non-volatile memory readable by the processor, at least one input device, and/or one or more output devices. Program code may be applied to the data entered using the input device to perform the described embodiments and to generate output information. The output information may be applied to one or more output devices.
Those skilled in the art will appreciate that embodiments of the disclosed subject matter can be practiced with various computer system configurations, including multiprocessor or multiple-core processing systems, minicomputers, and pervasive or miniature computers or processors that may be embedded into virtually any device. Embodiments of the disclosed subject matter can also be practiced in distributed computing environments where tasks, or portions of tasks, may be performed by remote processing devices that are linked through a communications network.

Although operations may be described as a sequential process, some of the operations may in fact be performed in parallel, concurrently, and/or in a distributed environment, with program code stored locally and/or remotely for access by single- or multi-processor machines. In addition, in some embodiments the order of operations may be rearranged without departing from the spirit of the disclosed subject matter. Program code may be used by, or in conjunction with, an embedded controller.

While the invention has been described with reference to exemplary embodiments, this description is not intended to be construed in a limiting sense. Various modifications of the exemplary embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art to which the invention pertains and are deemed to lie within the spirit and scope of the invention. |
Example low-delay complementary metal-oxide semiconductor (CMOS) to emitter-coupled logic (ECL) converters, methods and apparatus are disclosed. A disclosed example apparatus (200) comprises a reference level generator circuit (245) to generate first and second reference signals (240 and 241) and a bias signal (235) based on a CMOS supply voltage (215), a source follower circuit (255) to convert a CMOS input signal (205) to a single-ended ECL signal (250) based on the first and second reference signals (240 and 241), and an ECL buffer circuit (260) to convert the single-ended ECL signal to a differential ECL output signal (210) based on the bias signal and an ECL supply voltage (225). |
CLAIMS What Is Claimed Is: 1. A converter to convert a complementary metal-oxide semiconductor (CMOS) input signal to a differential emitter-coupled logic (ECL) output signal, the converter comprising: a reference level generator circuit comprising: first and second transistors connected in series between a first CMOS supply voltage and a second CMOS supply voltage, and using as their gate input signals a bias signal, the bias signal to have a first value substantially mid-way between the first and second CMOS supply voltages; and first and second components connected in series between the first and second transistors, the bias signal to be created at a first node where the first and second components are connected, the first component to create a first reference signal to have a second value a first voltage above the bias signal, the second component to create a second reference signal to have a third value a second voltage below the bias signal; a source follower circuit comprising third and fourth transistors connected in series between the first and second reference signals in a source follower topology to create a single-ended ECL signal, the single-ended ECL signal to be created at a second node where the third and fourth transistors are connected, the third and fourth transistors using as their gate input signals the CMOS input signal; and an ECL buffer circuit to generate the differential ECL output signal based on the single-ended ECL signal and the bias signal. 2. A CMOS to ECL converter as defined in Claim 1, wherein the ECL buffer comprises: a fifth transistor having the single-ended ECL signal as its gate input signal; a sixth transistor having the bias signal as its gate input signal, the fifth and sixth transistors configured in a differential switch topology; a first resistor connected to an ECL supply voltage to set a first voltage limit for a positive ECL output signal component of the ECL output signal; and a second resistor connected between the first resistor and the seventh transistor to set a second voltage limit for the positive ECL output signal. 3. An apparatus comprising: a reference level generator circuit to generate first and second reference signals and a bias signal based on a complementary metal-oxide semiconductor (CMOS) supply voltage; a source follower circuit to convert a CMOS input signal to a single-ended emitter-coupled logic (ECL) signal based on the first and second reference signals; and an ECL buffer circuit to convert the single-ended ECL signal to a differential ECL output signal based on the bias signal and an ECL supply voltage. 4. An apparatus as defined in Claim 3, wherein the first reference signal is to have a first value a first voltage above the bias signal, the second reference signal is to have a second value the first voltage below the bias signal, and the bias signal is to have a third value substantially mid-way between the CMOS voltage signal and a CMOS ground signal. 5. An apparatus as defined in Claim 3 or 4, wherein the reference level generator circuit comprises: first and second transistors to generate the bias signal substantially mid-way between the CMOS supply voltage and a second CMOS supply voltage; a first component to generate the first reference signal based on the bias signal; and a second component to generate the second reference signal based on the bias signal.
6. An apparatus as defined in Claim 3, wherein the source follower circuit comprises: a first transistor having the first reference signal as its power rail and the CMOS input signal as its gate input signal; and a second transistor having the second reference signal as its power rail and the CMOS input signal as its gate input signal, wherein outputs of the first and second transistors are electrically coupled to generate the single-ended ECL signal. 7. An apparatus as defined in Claim 3 or 6, wherein the source follower circuit comprises first and second transistors configured in a source follower topology. 8. An apparatus as defined in Claim 3, wherein the ECL buffer circuit comprises: a first transistor having the single-ended ECL signal as its gate input signal; and a second transistor having the bias signal as its gate input signal, wherein the first reference signal is to have a first value a first voltage above the bias signal, the second reference signal is to have a second value the first voltage below the bias signal, and the bias signal is to have a third value substantially mid-way between the CMOS voltage signal and a CMOS ground signal. 9. An apparatus as defined in Claim 8, wherein the ECL buffer circuit further comprises: a first resistor connected to the ECL supply voltage to determine a first voltage limit for a positive ECL output; and a second resistor connected between the first resistor and the first transistor to determine a second voltage limit for the positive ECL output. 10. A method comprising: generating a bias signal and first and second reference signals based on a complementary metal-oxide semiconductor (CMOS) supply voltage; converting a CMOS input signal to a single-ended emitter-coupled logic (ECL) signal based on the first and second reference signals; and buffering the single-ended ECL compatible signal to form a differential ECL signal based on the bias signal and an ECL supply voltage. 11. A method as defined in Claim 10, wherein the first reference signal is to have a first value a first voltage above the bias signal, the second reference signal is to have a second value the first voltage below the bias signal, and the bias signal is to have a value substantially mid-way between the CMOS voltage signal and a second CMOS voltage supply. 12. A method as defined in Claim 10 or 11, wherein the single-ended ECL signal is to follow the CMOS input signal and be limited by the first and second reference signals. |
LOW-DELAY COMPLEMENTARY METAL-OXIDE SEMICONDUCTOR (CMOS) TO EMITTER-COUPLED LOGIC (ECL) CONVERTERS, METHODS AND APPARATUS This disclosure relates generally to signal converters, and, more particularly, to low-delay complementary metal-oxide semiconductor (CMOS) to emitter-coupled logic (ECL) converters, methods and apparatus. BACKGROUND In some applications and/or circuits (e.g., those requiring high-speed logic) it is necessary and/or desirable to convert a rail-to-rail signal, such as that generated by complementary metal-oxide semiconductor (CMOS) logic, to a differential signal compatible with emitter-coupled logic (ECL). An example ECL differential signal uses a -1.75 volt (V) signal with respect to ground on a positive signal component and a -0.9 V signal with respect to ground on a negative signal component to represent a logic value of "0"; and the -0.9 V signal on the positive signal component and the -1.75 V signal on the negative signal component to represent a logic value of "1." BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a schematic diagram of an example conventional complementary metal-oxide semiconductor (CMOS) to emitter-coupled logic (ECL) converter. FIG. 2 is a schematic diagram of an example low-delay CMOS to ECL converter constructed in accordance with the teachings of the disclosure. FIG. 3 is a schematic diagram of example manners of implementing the example reference level generator, the example source follower and/or the example ECL buffer of FIG. 2. FIG. 4 is a diagram of an example circuit implementing the example CMOS to ECL converter of FIG. 3. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS Example low-delay complementary metal-oxide semiconductor (CMOS) to emitter-coupled logic (ECL) converters, methods and apparatus are disclosed. A disclosed example apparatus includes a reference level generator circuit to generate first and second reference signals and a bias signal based on a CMOS supply voltage, a source follower circuit to convert a CMOS input signal to a single-ended ECL signal based on the first and second reference signals, and an ECL buffer circuit to convert the single-ended ECL signal to a differential ECL output signal based on the bias signal and an ECL supply voltage. A disclosed example converter to convert a CMOS input signal to a differential ECL output signal includes a reference level generator circuit, a source follower and an ECL buffer. In one example, the reference level generator circuit includes first and second transistors connected in series between a first CMOS supply voltage and a second CMOS supply voltage. In such an arrangement, a bias signal having a first value substantially mid-way between the first and second CMOS supply voltages is used as the gate input signal to the first and second transistors. The example also includes first and second components connected in series between the first and second transistors. The bias signal is created at a first node where the first and second components are connected. The first component creates a first reference signal having a second value that is a first voltage above the bias signal, and the second component creates a second reference signal having a third value that is a second voltage below the bias signal.
In some examples, the source follower circuit includes third and fourth transistors connected in series between the first and second reference signals in a source follower topology to create a single-ended ECL signal at a second node where the third and fourth transistors are connected, the third and fourth transistors using the CMOS input signal as their gate input signals. The ECL buffer circuit thereby generates a differential ECL output signal based on the single-ended ECL signal and the bias signal. A disclosed example method includes generating a bias signal and first and second reference signals based on a CMOS supply voltage, converting a CMOS input signal to a single-ended ECL signal based on the first and second reference signals, and buffering the single-ended ECL compatible signal to form a differential ECL signal based on the bias signal and an ECL supply voltage. FIG. 1 is a schematic diagram of an example conventional CMOS to ECL converter 100. The example CMOS to ECL converter 100 of FIG. 1 converts a CMOS input signal 105 to a differential ECL output signal 110 via a CMOS inverter 115, and contains a delay stage 120 to equalize the delay in the two paths to a current mode logic (CML) buffer 125. The CML buffer 125 of FIG. 1 uses MOS input transistors, since its inputs are CMOS levels and, thus, are not compatible with a bipolar transistor based ECL buffer. FIG. 2 is a schematic diagram of an example low-delay CMOS to ECL converter 200. The example CMOS to ECL converter 200 of FIG. 2 converts a CMOS input signal 205 to a differential ECL output signal 210. By eliminating the example CMOS inverter 115 and the example delay stage 120 of FIG. 1 and replacing the MOS based CML buffer 125 with a bipolar based ECL buffer 260, the example CMOS to ECL converter 200 of FIG. 2 substantially reduces the delay introduced by the conversion process. For instance, the example CMOS to ECL converter circuits disclosed herein introduce an average delay of only 40 picoseconds (ps) across temperature and semiconductor process variations, as compared to a typical delay of 400 ps introduced by conventional CMOS to ECL converters (e.g., the example CMOS to ECL converter 100 of FIG. 1). The example CMOS input signal 205 of FIG. 2 is a rail-to-rail signal taking on the value of a first CMOS supply voltage (CMOS VDD) 215 or a second CMOS supply voltage (CMOS VSS) 220 (e.g., a ground signal). The CMOS input signal 205 may also take on values falling between the CMOS supply voltage 215 and the CMOS ground signal 220 when transitioning between values, for example, as occurs on a rising and/or falling edge of the CMOS input signal 205. The example differential ECL output signal 210 of FIG. 2 comprises a positive signal component 212 and a negative signal component 213; the voltage differential between them (i.e., the value of the positive signal component 212 minus the value of the negative signal component 213) represents digital logic bits. To provide voltage supplies and/or references, the example CMOS to ECL converter 200 of FIG. 2 includes the example CMOS supply voltage 215, the example CMOS ground 220, a first ECL supply voltage (ECL VDD) 225 and a second ECL supply signal (ECL VSS) 230. An example set of supply voltages comprises a CMOS VDD 215 of 3 V, a CMOS VSS 220 of 0 V, an ECL VDD 225 of 3 V and an ECL VSS 230 of 0 V. As described below in connection with FIG. 3, the ECL supply voltages 225 and 230 of FIG. 2 also determine, at least partially, the signal levels that may occur on the positive and negative signal components 212 and 213.
The example CMOS supply signals 215 and 220 and the example ECL supply signals 225 and 230 can be implemented by any number and/or type(s) of past, present and/or future voltage supply and/or ground signal source(s), device(s) and/or circuit(s). To generate a bias signal 235 and a pair of reference signals 240 and 241, the example CMOS to ECL converter 200 of FIG. 2 includes a reference level generator 245. The example reference level generator 245 of FIG. 2 receives the CMOS supply signal 215 and the CMOS ground signal 220, and generates the bias signal 235 to be substantially midway between the CMOS supply signal 215 and the CMOS ground signal 220. The example reference level generator 245 generates the reference signal 240 such that its voltage is substantially a diode drop voltage (e.g., 0.7 volts (V)) above the bias signal 235, and the reference signal 241 such that its voltage is substantially a diode drop voltage below the bias signal 235. For example, for a CMOS supply voltage 215 of 3 V, the bias signal 235 would have a voltage of approximately 1.5 V, the reference signal 240 would have a voltage of approximately 2.2 V, and the reference signal 241 would have a voltage of approximately 0.8 V. An example manner of implementing the example reference level generator 245 of FIG. 2 is described below in connection with FIG. 3. To convert the CMOS input signal 205 to a single-ended ECL signal 250, the example CMOS to ECL converter 200 of FIG. 2 includes a source follower 255. The example source follower 255 of FIG. 2 causes the single-ended ECL signal 250 to follow the CMOS input signal 205, but be bounded and/or limited by the example reference signals 240 and 241. That is, the single-ended ECL signal 250 represents an ECL-compatible version of the CMOS input signal 205, referenced to the bias signal 235. For example, for a CMOS logical "1" and a CMOS supply voltage 215 of 3 V, the output 250 of the source follower 255 would be 2.2 V. Conversely, the voltage of the output 250 for a logical "0" would be 0.8 V. An example manner of implementing the example source follower 255 of FIG. 2 is described below in connection with FIG. 3. To buffer the single-ended ECL signal 250, the example CMOS to ECL converter 200 of FIG. 2 includes an ECL buffer 260. The example ECL buffer 260 of FIG. 2 transforms the single-ended ECL signal 250 into the differential ECL output signal 210 based on the bias signal 235 and the ECL supply voltage 225. The example differential ECL output signal 210 of FIG. 2 follows the single-ended ECL signal 250, but the desired range for the values of the positive and negative signals 212 and 213 is determined by the ECL buffer 260 based on the ECL supply voltage 225 and a set of resistors (e.g., the example resistors R18, R19 and R20 of FIG. 3). An example manner of implementing the example ECL buffer 260 of FIG. 2 is described below in connection with FIG. 3. FIG. 3 illustrates an example manner of implementing the example reference level generator 245, the example source follower 255, the example ECL buffer 260 and/or, more generally, the example CMOS to ECL converter 200 of FIG. 2. To generate the example bias signal 235, which is substantially mid-way between the CMOS supply 215 and the CMOS ground 220, the example reference level generator 245 of FIG. 3 includes junction field-effect transistors (JFETs) MN1 and MP2.
The CMOS supply 215 is connected to the drain of MP2. To generate the reference signals 240 and 241, the example reference level generator 245 of FIG. 3 includes bipolar junction transistors (BJTs) Q1 and Q2. The example transistors MP2, Q2, Q1 and MN1 are connected in series in a voltage divider topology to generate the bias signal 235, as illustrated in FIG. 3. The example bias signal 235 of FIG. 3 is generated at the point 305 where the collector of example transistor Q1 is electrically coupled to the emitter of example transistor Q2, and this node 305 is connected to the gate input of both of the transistors MP2 and MN1. The example transistors Q1 and Q2 of FIG. 3 have their bases coupled to their collectors and, thus, are each configured in a diode topology. Accordingly, the example reference signal 240 is substantially a diode drop voltage (e.g., 0.7 V) above the bias signal 235, and the example reference signal 241 is substantially a diode drop voltage below the bias signal 235. Any number and/or type(s) of components could be used instead of, or in addition to, the example transistors Q1 and Q2 to create the example diode drop voltages of FIG. 3. For example, either or both of the transistors Q1 and Q2 could be replaced by diodes. The example bias signal 235 is substantially at the midpoint of the CMOS supplies 215 and 220. However, the bias signal 235 may alternatively be adjusted to be somewhat higher or lower as benefits the operation of the example converter 200 in a particular application, to account for process variability, temperature variation and/or absolute supply voltage levels. To form the single-ended ECL signal 250, the example source follower 255 of FIG. 3 includes MOS field-effect transistors (MOSFETs) MN0 and MP0. The example transistors MN0 and MP0 of FIG. 3 are connected in series between the reference signals 240 and 241 in a source follower topology. The gates of transistors MN0 and MP0 are coupled together at a node 310 to form an input for the CMOS input signal 205. The example single-ended ECL signal 250 of FIG. 3 is generated at a node 315 where the source of example transistor MN0 is electrically coupled to the source of example transistor MP0. When the CMOS input signal 205 has a logical high value (e.g., 3 V), the example transistor MN0 of FIG. 3 is turned on and the example transistor MP0 of FIG. 3 is turned off, such that the single-ended ECL signal 250 takes on the value of the reference signal 240 (e.g., 2.2 V). Likewise, when the CMOS input signal 205 has a logical low value (e.g., 0 V), the transistor MN0 is turned off and the transistor MP0 is turned on, such that the single-ended ECL signal 250 takes on the value of the reference signal 241 (e.g., 0.8 V). In this fashion, the example source follower 255 of FIG. 3 forms a single-ended ECL signal 250 that follows the CMOS input signal 205, is centered around the bias signal 235, and is bounded and/or limited by the reference signals 240 and 241. To buffer the single-ended ECL signal 250, the example ECL buffer 260 of FIG. 2 includes BJT transistors Q3 and Q4, and resistors R18, R19 and R20. The example transistors Q3 and Q4 of FIG. 3 are connected in a differential switch topology such that one, but not both, of the example transistors Q3 and Q4 is turned on at a given time. For example, when the single-ended ECL signal 250 is sufficiently greater than the bias signal 235 (e.g., by 200 millivolts (mV)), the transistor Q3 is turned on and the transistor Q4 is turned off.
The example resistors R18, R19 and R20 of FIG. 3 determine the allowable range of voltages for the positive and negative ECL signals 212 and 213. The example resistors R18, R19 and R20 are arranged in a voltage divider topology. In the illustrated example of FIG. 3, the resistance of the resistor R20 is selected to determine the largest voltage that the signals 212 and 213 can have. The example resistors R18 and R19 of FIG. 3 are selected to have the same resistance, and that resistance is selected to determine the difference between the largest voltage and the smallest voltage that the signals 212 and 213 can have. For example, when the example transistor Q3 of FIG. 3 is turned on, current flows through the left branch of the ECL buffer 260; the voltage drop across the resistor R20 determines the voltage of the positive signal 212, and the voltage drop across the series combination of resistors R20 and R18 determines the voltage of the negative signal 213. Likewise, when the example transistor Q4 is turned on, current flows through the right branch of the ECL buffer 260; the voltage drop across the resistor R20 determines the voltage of the negative signal 213, and the voltage drop across the series combination of resistors R20 and R19 determines the voltage of the positive signal 212. To control the amount of current that flows through the ECL buffer 260, the example circuit of FIG. 3 includes a current source CS, which may be of any type. The example current source CS and the resistors R18, R19 and R20 determine the voltage values taken by the positive and negative signals 212 and 213. For example, if the current source CS provides 100 microamps (uA) of current, the resistance of R20 is 2.5 kilohms (kΩ), and the resistances of R18 and R19 are 2.7 kΩ, then the largest voltage for the positive and negative signals 212 and 213 is 250 mV below the ECL supply voltage 225, and the smallest voltage for the signals 212 and 213 is 520 mV below the ECL supply voltage 225.
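The voltage levels quoted above are easy to check numerically. The short C program below recomputes the reference levels for a 3 V CMOS supply (bias mid-way between the rails, references one ~0.7 V diode drop away) and the ECL output drops for the stated resistor values; note that the 250 mV and 520 mV drops correspond to a 100 uA tail current (0.25 V / 2.5 kΩ), which matches the 100 uA current source described below in connection with FIG. 4. This is a numeric sanity check of the examples, not part of the disclosed circuit.

#include <stdio.h>

int main(void)
{
    /* Reference level generator 245: bias 235 mid-way between the CMOS
     * supplies; references 240/241 one diode drop above/below the bias. */
    const double cmos_vdd = 3.0, cmos_vss = 0.0, diode_drop = 0.7;
    double bias   = (cmos_vdd + cmos_vss) / 2.0;   /* expect 1.5 V */
    double ref_hi = bias + diode_drop;             /* expect 2.2 V */
    double ref_lo = bias - diode_drop;             /* expect 0.8 V */
    printf("bias=%.1f V  ref_hi=%.1f V  ref_lo=%.1f V\n", bias, ref_hi, ref_lo);

    /* ECL buffer 260: output levels set by the tail current through R20
     * alone (high level) or R20 plus R18/R19 (low level). */
    const double i_cs = 100e-6;   /* current source CS, amps */
    const double r20  = 2.5e3;    /* ohms                    */
    const double r18  = 2.7e3;    /* ohms (R19 matches R18)  */
    printf("V_high = ECLVDD - %.3f V\n", i_cs * r20);          /* 0.250 V */
    printf("V_low  = ECLVDD - %.3f V\n", i_cs * (r20 + r18));  /* 0.520 V */
    return 0;
}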
FIG. 4 is a schematic illustration of an example circuit 400 that implements and/or includes the example reference signal generator 245, the example source follower 255, the example ECL buffer 260 and/or, more generally, the example CMOS to ECL converter 200 of FIGS. 2 and/or 3. Portions of the example circuit 400 of FIG. 4 are identical to those discussed above in connection with FIG. 3 and, thus, the descriptions of those portions are not repeated here. Instead, identical elements are illustrated with identical reference numerals in FIGS. 3 and 4, and the interested reader is referred back to the descriptions presented above in connection with FIG. 3 for a complete description of those like-numbered elements. To enable and/or disable the example circuit 400 of FIG. 4, the example circuit 400 includes an enable input signal 405 and transistors MP1, MN2 and MN3. When the example enable input signal 405 is a logical "1", the transistors MP1, MN2 and MN3 are in an "on" state that enables the transistors MP2, MN1, Q1 and Q2 to behave as described above in connection with FIG. 3. However, when the enable signal 405 is a logical "0", the transistors MP2, MN1, Q1 and Q2 are biased into an "off" state and, thus, the operation of the example circuit 400 is effectively disabled. To control the bias of the example ECL buffer 260, the example circuit 400 of FIG. 4 includes a bias input signal 410. The example bias input signal 410 of FIG. 4 controls the amount of current generated by the example current source CS, and thus the voltage drops that occur across the resistors R18, R19 and R20. Thus, the voltage values taken by the positive and negative signal components 212 and 213 may be adjusted by controlling the value of the bias input signal 410. For example, a bias input signal 410 equal to the Vbe of Q17 plus 100 mV yields voltage values of (ECLVDD - 0.25 V) and (ECLVDD - 0.52 V) for the signal components 212 and 213. A bias input signal equal to the Vbe of Q17 plus 50 mV yields voltage values of (ECLVDD - 0.125 V) and (ECLVDD - 0.26 V) for the signal components 212 and 213. The example current source CS of FIG. 4 is configured in a simple bipolar current mirror topology and operates as follows. A bias voltage is present on the base of Q17 such that, for example, the current into the collector of Q17 is 100 uA. This current passes through the diode-connected p-channel MOSFET (PMOS) MP4, forming a simple MOS current mirror with MP3. The current out of the drain of MP3 has the same magnitude as that in the drain of MP4. The current from the drain of MP3 is fed into a diode-connected NPN transistor, which forms a simple current mirror with another NPN, which provides the current of 100 uA into the ECL buffer 260. Alternatively, the current of 100 uA into the ECL buffer 260 may be generated in any other way, depending on the specific needs of the application. For example, the current could be provided by a resistor connected to ECLVSS. While example manners of implementing a low-delay CMOS to ECL converter are illustrated in FIGS. 2, 3 and 4, a reference signal generator, a source follower and/or an ECL buffer may be implemented using any number and/or type(s) of alternative and/or additional logic, devices, components, circuits, modules, interfaces, etc. Further, the logic, devices, components, circuits, modules, elements, interfaces, etc. illustrated in FIGS. 2, 3 and/or 4 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. For example, the reference signal generator 245, the example source follower 255 and the example ECL buffer 260 may be implemented together within a single integrated circuit (IC) and/or with multiple ICs. Moreover, a CMOS to ECL converter may include additional logic, devices, components, circuits, interfaces and/or modules instead of, or in addition to, those illustrated in FIGS. 2, 3 and/or 4. Those skilled in the art will appreciate that many other embodiments and variations are also possible within the scope of the claimed invention. Embodiments having different combinations of one or more of the features or steps described in the context of example embodiments having all or just some of such features or steps are also intended to be covered hereby. |
An example includes an apparatus for transmitting Universal Serial Bus (USB) packets. The apparatus includes a transmitter adapter to receive a USB packet from a USB device. The transmitter adapter can further generate one or more alternate mode packets based on the USB packet. The transmitter adapter can also transmit the alternate mode packets via an alternate mode connection. |
1. An apparatus for transmitting a universal serial bus (USB) packet, comprising: a transmitter adapter to: receive a USB packet from a USB device; generate one or more alternate mode packets based on the USB packet; and transmit the alternate mode packets via an alternate mode connection. 2. The apparatus of claim 1, wherein the USB packet comprises a USB 3.x packet. 3. The apparatus of claim 1, wherein the alternate mode packet comprises a packet header, the packet header including a value indicating a layer type. 4. The apparatus of claim 1, wherein the alternate mode packet comprises a packet header, the packet header comprising a field for indicating a path between the upstream port and the downstream port. 5. The apparatus of claim 1, wherein the alternate mode packet comprises a packet to be transmitted periodically. 6. The apparatus of any combination of claims 1-5, wherein the alternate mode packet comprises a portion of the USB packet, wherein the USB packet comprises a data packet. 7. The apparatus of any combination of claims 1-5, wherein the alternate mode packet comprises a packet header, the packet header including a field for indicating a path of a virtual link between an upstream USB port and a downstream USB port. 8. The apparatus of any combination of claims 1-5, wherein the alternate mode packet comprises a packet header, the packet header comprising a length field for indicating a transaction length in bytes. 9. The apparatus of any combination of claims 1-5, wherein the alternate mode packet comprises a packet header, the packet header including fields to be used for synchronization and error checking. 10. The apparatus of any combination of claims 1-5, wherein the alternate mode connection comprises a USB Type-C cable. 11. An apparatus for receiving a USB packet through an alternate mode interface, comprising: a receiver adapter to: receive an alternate mode packet from a transmitter adapter via an alternate mode connection; recover a USB packet based on the alternate mode packet; and send the USB packet to a USB device. 12. The apparatus of claim 11, wherein the alternate mode connection comprises a USB Type-C cable. 13. The apparatus of claim 11, wherein the receiver adapter is to recover the USB packet based on the alternate mode packet by removing an alternate mode header from the alternate mode packet. 14. The apparatus of any combination of claims 11-13, wherein the receiver adapter is to recover the USB packet based on the alternate mode packet by combining two or more alternate mode packets. 15. The apparatus of any combination of claims 11-13, wherein the alternate mode packet comprises a packet header, the packet header including a value to be used by the receiver adapter to parse commands and data. 16. A method for transmitting a USB packet, comprising: receiving a USB packet from a first USB device at a first interface; generating an alternate mode packet based on the USB packet at the first interface; transmitting the alternate mode packet to a second interface via an alternate mode connection; restoring the USB packet based on the alternate mode packet via the second interface; and transmitting the restored USB packet to a second USB device via the second interface. 17. The method of claim 16, wherein the first interface comprises a transmitter adapter and the second interface comprises a receiver adapter. 18. The method of claim 16, wherein generating the alternate mode packet based on the USB packet comprises adding an
alternate mode packet header to the USB packet.19.The method of claim 16 wherein recovering the USB packet based on the alternate mode grouping comprises removing the alternate mode packet header from the alternating mode packet.20.The method of claim 16 wherein said USB packet is a USB 3.x packet.21.The method of any combination of claims 16-20, wherein the first USB device and the second USB device are USB 3.x devices.22.The method of any combination of claims 16-20, wherein generating the alternate mode packet based on the USB packet comprises segmenting the USB packet into an alternating mode packet.23.The method of any combination of claims 16-20, wherein restoring the USB packet comprises combining a plurality of the alternate mode packets to recover segmented USB packets.24.The method of any combination of claims 16-20, wherein the alternate mode connection is transparent to the first USB device and the second USB device.25.A method as in any combination of claims 16-20, comprising transmitting a second set of USB packets from the second USB device to the host via the second interface, the alternate mode connection, and the first interface The first USB device is described. |
Transmitting Universal Serial Bus (USB) Data over an Alternate Mode Connection
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of the filing date of U.S. Patent Application Serial No. 15/088,997, entitled "Transmitting Universal Serial Bus (USB) Data over Alternate Mode Connection," filed by Rozic et al. on April 1, 2016, and incorporated herein by reference.
BACKGROUND
Interconnect channels are used to connect electronic devices, such as USB devices, to computing devices. For example, USB devices can include a hard disk drive (HDD) connected via a long cable and a thumb drive connected via a short interconnect channel, as well as other devices and other lengths of interconnect channels. Thunderbolt™ is an interface that combines the Peripheral Component Interconnect Express (PCIe) and DisplayPort (DP) interfaces into a single serial signal and additionally provides DC power over one cable.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram showing an exemplary system that can tunnel USB through an alternate mode interface;
FIG. 2 is a block diagram showing an exemplary physical topology for tunneling USB through an alternate mode interface;
FIG. 3 is a block diagram of an exemplary logical topology for tunneling USB through an alternate mode interface;
FIG. 4 is a block diagram of an exemplary alternate mode packet header;
FIG. 5 is a block diagram showing an exemplary computing device that can tunnel USB through an alternate mode interface;
FIG. 6 is a flow chart showing a method for tunneling USB through an alternate mode interface; and
FIG. 7 is a block diagram showing a computer readable medium that stores code for tunneling USB through an alternate mode interface.
The same numbers are used throughout the disclosure and the figures to reference like components and features. Numbers in the 100 series refer to features originally found in FIG. 1; numbers in the 200 series refer to features originally found in FIG. 2; and so on.
DETAILED DESCRIPTION
As mentioned above, Thunderbolt™ is an interface that combines support for the PCI Express (PCIe) and DisplayPort (DP) interfaces into a single serial signal while providing DC power in one cable. Thunderbolt™ versions 1 and 2 use the Mini DisplayPort connector to connect to devices. The current Thunderbolt™ version 3 uses USB Type-C™ cables and connectors according to the USB Type-C™ Cable and Connector Specification Revision 1.1, released on April 3, 2015. In addition, Thunderbolt™ version 3 is implemented using an alternate mode of USB Type-C™. An alternate mode dedicates some of the physical wires in the USB Type-C™ cable to the direct device-to-host transmission of alternate data protocols. Specifically, four high-speed lanes, two sideband pins, and (for dock, detachable device, and permanent cable applications) two USB 2.0 pins and one configuration pin can be used for alternate mode transfers. The modes are configured over the configuration channel using vendor defined messages (VDMs).
To implement USB tunneling over Thunderbolt™ using a Type-C™ connection, the USB 2.0 and USB 3.1 transfer modes are to be supported. As used herein, tunneling refers to transmitting data according to one computer network protocol while encapsulated within another network protocol. For example, the USB 2.0 standard includes the low-speed (LS), full-speed (FS), and high-speed (HS) modes. The USB 3.x standard includes the SuperSpeed (SS) and SuperSpeed+ (SSP) modes, which can transfer data at up to 5 Gbit/s and 10 Gbit/s, respectively.
USB 2.0 uses a reserved set of signals within the Type-C™ connector that, once the USB Type-C™ alternate mode is enabled, allows USB 2.0 functionality to coexist with the alternate mode interface. Therefore, USB 2.0 functionality can coexist on Thunderbolt™ version 3 by using separate signaling, without any tunneling. However, the USB 3.0 specification released on November 17, 2008 and the USB 3.1 specification released on July 26, 2013 do not use any reserved signal sets in the Type-C™ connector, and therefore cannot currently coexist with the Thunderbolt™ alternate mode.
Accordingly, the present disclosure relates to techniques for tunneling USB 3.1 data over a Thunderbolt™ interface. In particular, the techniques described herein include systems, methods, and interface controllers for transmitting USB 3.x data over a Type-C™ connector by generating alternate-mode-compatible packets. For example, an alternate mode packet can be generated at a USB transmitter adapter, sent over a USB Type-C™ cable, and received at a USB receiver adapter. The USB receiver adapter can then generate a USB packet corresponding to the initially received USB packet based on the alternate mode packet. For example, the USB packet can be generated based on header information in the alternate mode packet. The techniques thus enable USB devices connected to the Thunderbolt™ domain to be connected to a single eXtensible Host Controller Interface (xHCI) controller. In addition, no software modifications are required because the USB receiver adapter outputs USB packets in the same form in which the USB transmitter adapter received them. Furthermore, instead of hot plugging an xHCI controller through the Thunderbolt™ domain, USB hubs and USB devices can be hot plugged more naturally. Various connection types can thus be realized through an alternate mode as enabled by the USB Type-C™ specification. Specifically, the USB Type-C™ specification enables signal pins to be reassigned for purposes other than USB2/USB3 data transfer. These reassignments are called alternate modes. Each USB Type-C™ port can support zero or more alternate modes. In an embodiment, an alternate mode may be a form of operation in which data is transmitted and received across pins and/or hardware, wherein the pins and/or hardware indicate a first protocol and the data is packetized, encoded/decoded, or otherwise transmitted according to a second protocol.
Referring now to FIG. 1, a block diagram of an exemplary system that can tunnel USB through an alternate mode interface, such as a Thunderbolt™ interface, is shown. The exemplary system is generally referred to by the reference numeral 100 and can be implemented using the computing device 500 of FIG. 5 below. For example, the exemplary system 100 can be implemented using the alternate mode interface 526 of the computing device 500 of FIG. 5 below.
The system 100 can include an xHCI port 102. For example, the xHCI port 102 can be a host controller for a universal serial bus (USB) that can interface with USB 1.x, 2.0, and 3.x compatible devices. The system 100 can also include a USB 2.0 hub 104 and a USB 3.x hub 106. The USB 2.0 hub 104 and the USB 3.x hub 106 include USB upstream ports 108 and 110, respectively. The USB 2.0 hub 104 may also include two USB downstream ports 112, 114. The USB 3.x hub 106 can include two USB downstream ports 116, 118. The system 100 can also include two alternate mode connections 120, 122. The alternate mode connection 120 connects two Type-C ports 124, 126.
The alternate mode connection 122 couples two Type-C ports 128, 130. Moreover, the system 100 can include a USB 3.x adapter 132 coupled to the xHCI port 102, and an alternate mode interface 134 coupled to the USB 3.x adapter 132. Depending on the direction of the data stream, the USB 3.x adapter 132 may be referred to as a USB transmitter adapter or a USB receiver adapter. The system can include another alternate mode interface 136 coupled to a USB adapter 138 and one or more other adapters 140. For example, the other adapters 140 may include PCIe adapters and DP adapters. The USB adapter 138 is also coupled to the USB upstream port 110 of the USB 3.x hub 106. Another USB adapter 142 is coupled to the USB downstream port 116 of the USB 3.x hub 106 and to an alternate mode interface 144. The alternate mode interface 144 is coupled to the USB Type-C port 128. Another alternate mode interface 146 is coupled to the Type-C port 130 and a USB adapter 148.
As shown in FIG. 1, the xHCI port 102 can receive 150 and send 152 USB 2.0 traffic via the Type-C connectors 124, 126 over connections 154, 156 using Type-C cables. As indicated by arrows 158, 160, the Type-C connector 124 can send USB 2.0 traffic directly to, and receive it directly from, the USB 2.0 hub 104. USB 2.0 traffic can be sent and received between the USB upstream port 108 and the USB downstream port 114 of the USB 2.0 hub, as indicated by arrows 162, 164. The USB downstream port 114 can be coupled to the USB Type-C connector 118 and can send and receive USB 2.0 traffic as indicated by arrows 166, 168. Then, as indicated by arrows 170, 172, USB 2.0 traffic can be sent or received between the Type-C connector 128 and the Type-C connector 130. For example, the USB 2.0 traffic can be sent via any suitable USB Type-C compatible cable. The USB 2.0 traffic can then be sent to and received from any number of USB 2.0 devices (not shown).
The xHCI port 102 can also send and receive USB 3.x data, such as USB 3.0 data or USB 3.1 data. For example, the xHCI port 102 can receive USB 3.x data packets from a USB device and forward the packets to the USB 3.x adapter 132. The USB 3.x adapter 132 can generate alternate mode packets based on the USB 3.x packets and send the alternate mode packets to the alternate mode interface 134 for transmission. In some examples, an alternate mode packet includes a packet header, the packet header including a value indicating a layer type. In some examples, the alternate mode packet can include a packet header that includes a field for indicating a path between an upstream port and a downstream port. For example, the field may indicate a path for a virtual link between an upstream USB port and a downstream USB port. In some examples, the alternate mode packet may be a packet to be transmitted periodically. In some examples, the alternate mode packets may each include a portion of a USB packet. For example, the USB packet can be a data packet. In some examples, the alternate mode packet can include a packet header that includes a length field for indicating the length of the transaction in bytes. In some examples, the alternate mode packet may include a packet header that includes fields to be used for synchronization and error checking.
Alternate mode packets can be sent and received between the Type-C connectors 124, 126, as indicated by arrows 174, 176 in FIG. 1. For example, the alternate mode packets can be sent and received via a USB Type-C cable. The alternate mode interface 136 can receive alternate mode packets and forward them to the USB 3.x adapter 138.
The USB 3.x adapter 138 can then generate a USB 3.x packet based on the received alternate mode packets. For example, the USB 3.x adapter 138 can join two or more alternate mode packets using header information included in the alternate mode packets. The generated USB packet can then be sent to the USB 3.x hub 106. USB 3.x packets can be sent and received between the USB upstream port 110 and the USB downstream port 116 of the USB 3.x hub 106, as indicated by arrows 178, 180. In some examples, USB packets can be sent to one or more USB 3.x devices (not shown) via the USB downstream port 118. In some examples, a USB 3.x packet can be sent to the USB adapter 142. The USB adapter 142 can generate alternate mode packets to be sent to the alternate mode interface 144 and transmitted between the Type-C connectors 128, 130 (as indicated by arrows 182, 184). For example, the alternate mode packets can be sent over a USB Type-C cable. The alternate mode interface 146 can receive the alternate mode packets from the Type-C port 130 and send them to the USB adapter 148. For example, the USB adapter 148 can be a USB receiver adapter. The USB adapter 148 can generate a USB packet based on the received alternate mode packets. For example, the USB adapter 148 can recover the USB packet by removing the alternate mode header from the alternate mode packet. In some examples, the USB receiver adapter 148 may recover USB packets by combining data from two or more alternate mode packets. In some examples, the alternate mode packet can include a packet header that includes values to be used by the USB adapter 148 to parse commands and data. Thus, a USB virtual link can be established between a downstream port and an upstream port using at least one alternate mode path.
The diagram of FIG. 1 is not intended to indicate that the exemplary system 100 is to include all of the components shown in FIG. 1. Rather, the exemplary system 100 can be implemented using fewer components, or additional components not shown in FIG. 1 (e.g., additional hubs, USB adapters, ports, connections, etc.).
FIG. 2 is a block diagram showing an exemplary physical topology for tunneling USB through an alternate mode interface, such as a Thunderbolt™ interface. The exemplary physical topology is generally referred to by the reference numeral 200 and can be implemented using the computing device 500 of FIG. 5 below. For example, the exemplary physical topology 200 can be implemented in the alternate mode interface 526 of the computing device 500 of FIG. 5 below. For example, the SoC 202 can be implemented in the computing device 500 of FIG. 5 below.
The physical topology 200 can include a system on chip (SoC) 202. The SoC 202 can include a plurality of xHCI ports 212 connected to an alternate mode switch 214 via connections 216, 218. For example, the alternate mode switch 214 can be a Thunderbolt™ switch. As shown by arrow 220, one of the xHCI host ports 212 is also shown as being connected to the USB endpoint 208. The alternate mode switch 214 is shown as being coupled to an alternate mode switch 224 of the alternate mode device 204, as indicated by arrow 222. For example, the alternate mode switches 214, 224 can be coupled via a USB Type-C cable. The alternate mode switch 224 is also shown as being coupled to the USB hub 226 of the alternate mode device 204 via connections 228, 230. The alternate mode switch 224 is also shown as being coupled to an alternate mode switch 234 of the alternate mode device 206, as indicated by arrow 232.
The alternate mode switch 234 is shown as being coupled to the USB hub 236 of the alternate mode device 206 via connections 238, 240. The USB hub 236 is shown coupled to the USB endpoint 210 via connection 242.
In the exemplary physical topology 200, one or more USB 3.x data streams may be sent to and received from the xHCI ports 212 through the alternate mode switch 214 via the connections 216, 218. The alternate mode switches 214, 224, 234 may each have a USB adapter for generating alternate mode packets to be sent over the connections 222, 232. The USB adapters can also generate, from alternate mode packets, USB packets to be sent to the USB hubs 226, 236 for delivery to any number of USB endpoints 210. Thus, an alternate mode switch can be used to add USB 3.x functionality while maintaining compatibility with USB endpoint devices 208, 210. The function of the USB adapter is discussed in more detail with respect to FIG. 5 below.
The diagram of FIG. 2 is not intended to indicate that the exemplary physical topology 200 is to include all of the components shown in FIG. 2. Rather, the exemplary physical topology 200 can be implemented using fewer components, or additional components not shown in FIG. 2 (e.g., additional ports, switches, hubs, endpoints, etc.).
FIG. 3 is a block diagram of an exemplary logical topology for tunneling USB through an alternate mode interface, such as a Thunderbolt™ interface. The exemplary logical topology is generally referred to by the reference numeral 300 and can be implemented using the computing device 500 of FIG. 5 below. For example, the exemplary logical topology 300 can correspond to the physical topology 200 of FIG. 2 and can be implemented in the alternate mode interface 526 of the computing device of FIG. 5 below.
The logical topology 300 can include a system on chip (SoC) 302 coupled to USB endpoints 308, 310 and alternate mode devices 304, 306. For example, the alternate mode devices 304, 306 can be Thunderbolt™ devices. The SoC 302 can include a plurality of xHCI host ports 312 that are connected to the USB endpoint 308 and the USB hub 314 of the alternate mode device 304, as indicated by arrows 316 and 318, respectively. For example, arrow 318 may represent an alternate mode connection using a USB Type-C connector. The USB hub 314 is also coupled to the USB hub 322, as indicated by arrow 320. Arrow 320 may also represent an alternate mode connection using a USB Type-C connector.
As shown in FIG. 3, the logical topology 300 appears as a series of USB hubs 314, 322 that connect the xHCI host port 312 and the USB endpoint 310. Although alternate mode packets are used during transmission through the connections 318 and 320, the logical topology 300 appears simply as a series of USB hubs connected via the two alternate mode connections 318 and 320.
The diagram of FIG. 3 is not intended to indicate that the exemplary logical topology 300 is to include all of the components shown in FIG. 3. Rather, the exemplary logical topology 300 can be implemented using fewer components, or additional components not shown in FIG. 3 (e.g., additional ports, USB hubs, USB endpoints, SoCs, etc.).
FIG. 4 is a block diagram of an exemplary alternate mode packet header. The exemplary packet header is generally referred to by the reference numeral 400 and can be used with the system 100 of FIG. 1 above. For example, the exemplary packet header 400 can be transmitted and received by the USB adapters 138, 142 of FIG. 1 above.
The packet header 400 includes a protocol defined field (PDF) 402, a HopID field 404, a length field 406, and a header error control/check (HEC) field 408. In some examples, the size of the PDF 402 may be 4 bits, the size of the HopID field 404 may be 12 bits, the size of the length field 406 may be 8 bits, and the size of the HEC field 408 may be 8 bits.
In the exemplary packet header 400, a USB receiver adapter can use the PDF field 402 to parse commands and data. For example, a USB transmitter adapter can encode the PDF 402 to indicate the type of layer that is transmitting information. For example, the value 0000b can be used to indicate the low frequency periodic signaling (LFPS) layer.
In some examples, an ordered set may be indicated in the PDF field 402 using the value 0001b. The ordered set value can be used to convey training sequence 1 (TS1), training sequence 2 (TS2), and start of data stream (SDS) ordered set (OS) information.
In some examples, the link layer may be indicated in the PDF 402 of the alternate mode packet header using the value 0010b. The link layer can be used to pass link command information. The link layer packet may include data covering the two DWORDs of a full 8-symbol link command.
In some examples, a value of 0011b may be used to indicate a protocol layer header, including a link management packet (LMP), a transaction packet (TP), an isochronous timestamp packet (ITP), or a data packet header (DPH). This protocol layer packet can be used to pass protocol layer headers. For example, the packet may include data of five or six DWORDs covering the complete structure of a protocol layer packet.
In some examples, three protocol defined field (PDF) 402 values may be used to define protocol layer data packets. For example, since a USB packet can be longer than an alternate mode packet, the USB packet can be segmented at the USB transmitter adapter and reassembled at the USB receiver adapter. For example, a value of 0100b may be used to indicate a start/single-segment data packet, a value of 0101b may indicate a protocol layer mid-segment data packet, and a value of 0110b may indicate a protocol layer end-segment data packet. In some examples, the values 0111b-1111b may be reserved for indicating additional optional layers or functions.
In some examples, the HopID field 404 can be used to indicate a path for each virtual link between an upstream USB port and a downstream USB port.
In some examples, the length field 406 can be used to indicate the length of a particular packet. For example, data packets can be variable length depending on the data being transmitted. Thus, the length field 406 can be used to indicate the length of a transaction in bytes.
In some examples, the HEC field 408 can be used for synchronization and for checking for errors in alternate mode packets. For example, the HEC field 408 can be used to verify the correctness of the header.
The diagram of FIG. 4 is not intended to indicate that the exemplary packet header 400 is to include all of the components shown in FIG. 4. Rather, the exemplary packet header 400 can be implemented using fewer components, or additional components not shown in FIG. 4 (e.g., additional fields, field values, etc.). For example, the particular structure of a packet header may depend on the protocol through which USB 3.x is tunneled.
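To make the field layout of FIG. 4 concrete, the following is a minimal C sketch of the 32-bit header. It is illustrative only: the struct and enum names, and the bit ordering within the word, are assumptions layered on the field sizes and PDF values described above, not a definitive on-wire encoding (C bit-field layout is itself implementation-defined).

    #include <stdint.h>

    /* PDF (protocol defined field) values from the FIG. 4 discussion. */
    enum pdf_value {
        PDF_LFPS        = 0x0,  /* 0000b: low frequency periodic signaling layer */
        PDF_ORDERED_SET = 0x1,  /* 0001b: TS1/TS2/SDS ordered set information */
        PDF_LINK_LAYER  = 0x2,  /* 0010b: link command information */
        PDF_PROTO_HDR   = 0x3,  /* 0011b: LMP/TP/ITP/DPH protocol layer header */
        PDF_DATA_START  = 0x4,  /* 0100b: start or single-segment data packet */
        PDF_DATA_MID    = 0x5,  /* 0101b: mid-segment data packet */
        PDF_DATA_END    = 0x6   /* 0110b: end-segment data packet; 0111b-1111b reserved */
    };

    /* 4 + 12 + 8 + 8 = 32 bits, matching the sizes given for FIG. 4. */
    struct alt_mode_header {
        uint32_t pdf    : 4;   /* layer type, one of enum pdf_value */
        uint32_t hop_id : 12;  /* path of a virtual link between upstream and downstream USB ports */
        uint32_t length : 8;   /* transaction length in bytes */
        uint32_t hec    : 8;   /* header error control, for synchronization and error checking */
    };

A receiver adapter would, for instance, dispatch on the pdf value to decide whether a packet carries a link command, a protocol layer header, or a segment of a data packet.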
FIG. 5 is a block diagram showing an exemplary computing device that can tunnel USB through an alternate mode interface. The computing device 500 can be, for example, a laptop computer, desktop computer, tablet computer, mobile device, or server, among others. The computing device 500 can include a central processing unit (CPU) 502 that is configured to execute stored instructions, as well as a memory device 504 that stores instructions executable by the CPU 502. The CPU 502 can be coupled to the memory device 504 by a bus 506. Additionally, the CPU 502 can be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations. Furthermore, the computing device 500 can include more than one CPU 502. The memory device 504 can include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory system. For example, the memory device 504 can include dynamic random access memory (DRAM).
The computing device 500 can also include a graphics processing unit (GPU) 508. As shown, the CPU 502 can be coupled to the GPU 508 via the bus 506. The GPU 508 can be configured to perform any number of graphics operations within the computing device 500. For example, the GPU 508 can be configured to render or manipulate graphics images, graphics frames, videos, or the like, to be displayed to a user of the computing device 500.
The memory device 504 can include a device driver 510 that is configured to execute instructions for device discovery. The device driver 510 can be software, an application program, application code, or the like.
The CPU 502 can also be connected via the bus 506 to an input/output (I/O) device interface 512 configured to connect the computing device 500 to one or more I/O devices 514. The I/O devices 514 can include, for example, a keyboard and a pointing device, wherein the pointing device can include a touchpad or a touchscreen, among others. The I/O devices 514 can be built-in components of the computing device 500, or can be devices that are externally connected to the computing device 500. In some examples, the memory 504 can be communicatively coupled to the I/O devices 514 through direct memory access (DMA).
The CPU 502 can also be linked via the bus 506 to a display interface 516 configured to connect the computing device 500 to a display device 518. The display device 518 can include a display screen that is a built-in component of the computing device 500. The display device 518 can also include a computer monitor, television, or projector, among others, that is internal to or externally connected to the computing device 500.
The computing device also includes a storage device 520. The storage device 520 is a physical memory such as a hard drive, an optical drive, a thumb drive, an array of drives, or any combination thereof. The storage device 520 can also include remote storage drives.
The computing device 500 can also include a network interface controller (NIC) 522. The NIC 522 can be configured to connect the computing device 500 through the bus 506 to a network 524. The network 524 can be a wide area network (WAN), a local area network (LAN), or the Internet. In some examples, the device can communicate with other devices through a wireless technology. For example, the device can communicate with other devices via a wireless local area network connection. In some examples, the device can connect and communicate with other devices via Bluetooth® or similar technologies.
The CPU 502 can also be linked via the bus 506 to an alternate mode interface 526 configured to connect the computing device 500 to any number of USB 3.1 devices 528.
For example, the USB devices 528 can include USB 2.0 devices as well as USB 3.1 devices. In some examples, the alternate mode interface 526 can be a Thunderbolt™ interface. The alternate mode interface 526 can also be configured to connect the computing device 500 to any number of display devices 530. For example, the alternate mode interface 526 can be connected to the USB device 528 and the display device 530 via any suitable connection (e.g., a Type-C USB connection).
The block diagram of FIG. 5 is not intended to indicate that the computing device 500 is to include all of the components shown in FIG. 5. Rather, the computing device 500 can include fewer components, or additional components not shown in FIG. 5, such as additional USB devices, additional display devices, and the like. The computing device 500 may include any number of additional components not shown in FIG. 5, depending on the details of the particular implementation. Furthermore, any of the functionalities of the CPU 502 may be partially, or entirely, implemented in hardware and/or in a processor.
FIG. 6 is a flow chart showing a method for tunneling USB through an alternate mode interface, such as a Thunderbolt™ interface. The exemplary method is generally referred to by the reference numeral 600 and can be implemented using the alternate mode interface 526 of FIG. 5 above.
At block 602, a first interface receives a USB packet from a first USB device. For example, the first interface can be a USB transmitter adapter.
At block 604, the first interface generates an alternate mode packet based on the USB packet. For example, the first interface can add an alternate mode packet header to the USB packet. In some examples, the USB packet can be segmented into alternate mode packets at the first interface.
At block 606, the first interface transmits the alternate mode packet to a second interface via an alternate mode connection. For example, the second interface can be a USB receiver adapter. In some examples, the alternate mode connection can be a USB Type-C cable.
At block 608, the second interface recovers the USB packet based on the received alternate mode packet. For example, the second interface may remove the alternate mode packet header from the alternate mode packet to recover the USB packet. In some examples, the second interface combines two or more alternate mode packets to recover a segmented USB packet.
At block 610, the second interface transmits the recovered USB packet to a second USB device. For example, the second USB device can detect the received USB packet as if the second USB device were connected to the first USB device via a USB hub. Thus, the alternate mode connection can be transparent to the first USB device and the second USB device.
In some examples, the second USB device can transmit USB packets to the first USB device in the same manner that the first USB device sends packets to the second USB device (as described above in blocks 602-610).
The process flow diagram is not intended to indicate that the blocks of the exemplary method 600 are to be executed in any particular order, or that all of the blocks are to be included in every case. Further, any number of additional blocks not shown may be included within the exemplary method 600, depending on the details of the specific implementation.
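As a rough illustration of blocks 602-610, the C sketch below segments a USB packet into alternate mode packets tagged start/mid/end on the transmitter side and strips headers and reassembles the bytes on the receiver side. The payload limit, the function names, and the send callback are invented for illustration, and the compact header here is a simplified stand-in for the FIG. 4 layout rather than any specific Thunderbolt™ encoding.

    #include <stdint.h>
    #include <string.h>

    enum { PDF_DATA_START = 0x4, PDF_DATA_MID = 0x5, PDF_DATA_END = 0x6 };

    #define ALT_PAYLOAD_MAX 64  /* assumed per-packet payload limit */

    struct alt_hdr { uint8_t pdf; uint16_t hop_id; uint8_t length; uint8_t hec; };
    struct alt_pkt { struct alt_hdr hdr; uint8_t payload[ALT_PAYLOAD_MAX]; };

    /* Blocks 602-606: segment one USB packet into alternate mode packets
     * and hand each one to the alternate mode connection. */
    static void tx_tunnel(const uint8_t *usb, size_t len, uint16_t hop_id,
                          void (*send)(const struct alt_pkt *))
    {
        size_t off = 0;
        while (off < len) {
            struct alt_pkt p = {0};
            size_t chunk = len - off > ALT_PAYLOAD_MAX ? ALT_PAYLOAD_MAX : len - off;
            /* The PDF marks the segment position so the receiver can reassemble. */
            p.hdr.pdf    = off == 0 ? PDF_DATA_START
                         : off + chunk == len ? PDF_DATA_END : PDF_DATA_MID;
            p.hdr.hop_id = hop_id;             /* virtual link between USB ports */
            p.hdr.length = (uint8_t)chunk;
            memcpy(p.payload, usb + off, chunk);
            send(&p);                          /* block 606: out over the Type-C link */
            off += chunk;
        }
    }

    /* Blocks 608-610: strip the header and append the payload; once the end
     * segment (or a single-segment packet) has arrived, the accumulated bytes
     * are the recovered USB packet to forward to the second USB device. */
    static size_t rx_recover(const struct alt_pkt *p, uint8_t *usb, size_t cur)
    {
        if (p->hdr.pdf == PDF_DATA_START)
            cur = 0;                           /* a new USB packet begins */
        memcpy(usb + cur, p->payload, p->hdr.length);
        return cur + p->hdr.length;
    }

Because the receiver emits exactly the bytes the transmitter was given, neither USB device observes the encapsulation, which is the transparency property the method relies on.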
FIG. 7 is a block diagram showing a computer readable medium 700 that stores code for tunneling USB through an alternate mode interface. The computer readable medium 700 can be accessed by a processor 702 over a computer bus 704. Furthermore, the computer readable medium 700 can include code configured to direct the processor 702 to perform the methods described herein. In some embodiments, the computer readable medium 700 can be a non-transitory computer readable medium. In some examples, the computer readable medium 700 can be a storage medium. However, in any case, the computer readable medium does not include transitory media such as carrier waves, signals, and the like.
The various software components discussed herein can be stored on one or more computer readable media 700, as indicated in FIG. 7. For example, a receiver module 706 can be configured to receive USB packets from a first USB device. For example, the USB device can be a USB 3.x device. The USB packet can be a data packet. A generator module 708 can generate alternate mode packets based on the USB packet. For example, the alternate mode packet can be a Thunderbolt™ packet. In some examples, the generator module 708 can add an alternate mode packet header to the USB packet. In some examples, the generator module 708 can parse commands and data via the headers of the alternate mode packets. In some examples, the generator module 708 can segment the USB packet into alternate mode packets. A transmitter module 710 can send the alternate mode packets to an interface coupled to a second USB device. For example, the second USB device can be a USB 3.x device. The interface can be an alternate mode interface. For example, the interface can be a Thunderbolt™ interface.
In some examples, the receiver module 706 can receive a second set of alternate mode packets from the interface. In some examples, the generator module 708 can generate USB packets based on the alternate mode packets. For example, the USB packets can be USB 3.x packets. In some examples, the generator module 708 can remove the alternate mode packet headers from the alternate mode packets. In some examples, the generator module 708 can combine multiple alternate mode packets to recover segmented USB packets. In some examples, the transmitter module 710 can send the recovered USB packets to the first USB device. The alternate mode packets are therefore transparent to the first USB device and the second USB device. For example, the first USB device and the second USB device may not detect the alternate mode packets.
The block diagram of FIG. 7 is not intended to indicate that the computer readable medium 700 is to include all of the components shown in FIG. 7. Further, the computer readable medium 700 may include any number of additional components not shown in FIG. 7, depending on the details of the specific implementation.
EXAMPLES
Example 1 is an apparatus for transmitting a universal serial bus (USB) packet. The apparatus includes a transmitter adapter to receive a USB packet from a USB device. The transmitter adapter is also to generate one or more alternate mode packets based on the USB packet. The transmitter adapter is further to send the alternate mode packets via an alternate mode connection.
Example 2 includes the apparatus of Example 1, with or without optional features.
In this example, the USB packet includes a USB 3.x packet.
Example 3 includes the apparatus of any of Examples 1 to 2, with or without optional features. In this example, the alternate mode packet includes a packet header, the packet header including a value indicating a layer type.
Example 4 includes the apparatus of any of Examples 1 to 3, with or without optional features. In this example, the alternate mode packet includes a packet header that includes a field for indicating a path between an upstream port and a downstream port.
Example 5 includes the apparatus of any of Examples 1 to 4, with or without optional features. In this example, the alternate mode packet includes a packet to be transmitted periodically.
Example 6 includes the apparatus of any of Examples 1 to 5, with or without optional features. In this example, the alternate mode packet includes a portion of a USB packet, wherein the USB packet includes a data packet.
Example 7 includes the apparatus of any of Examples 1 to 6, with or without optional features. In this example, the alternate mode packet includes a packet header that includes a field for indicating a path for a virtual link between an upstream USB port and a downstream USB port.
Example 8 includes the apparatus of any of Examples 1 to 7, with or without optional features. In this example, the alternate mode packet includes a packet header, the packet header including a length field for indicating the length of the transaction in bytes.
Example 9 includes the apparatus of any of Examples 1 to 8, with or without optional features. In this example, the alternate mode packet includes a packet header that includes fields to be used for synchronization and error checking.
Example 10 includes the apparatus of any of Examples 1 to 9, with or without optional features. In this example, the alternate mode connection includes a USB Type-C cable.
Example 11 is an apparatus for receiving a USB packet through an alternate mode interface. The apparatus includes a receiver adapter to: receive an alternate mode packet from a USB transmitter adapter via an alternate mode connection; recover a USB packet based on the alternate mode packet; and send the USB packet to a USB device.
Example 12 includes the apparatus of Example 11, with or without optional features. In this example, the alternate mode connection includes a USB Type-C cable.
Example 13 includes the apparatus of any of Examples 11 to 12, with or without optional features. In this example, the receiver adapter is to recover the USB packet based on the alternate mode packet by removing the alternate mode header from the alternate mode packet.
Example 14 includes the apparatus of any of Examples 11 to 13, with or without optional features. In this example, the receiver adapter is to recover USB packets based on the alternate mode packets by combining two or more alternate mode packets.
Example 15 includes the apparatus of any of Examples 11 to 14, with or without optional features.
In this example, the alternate mode packet includes a packet header that includes values to be used by the receiver adapter to parse commands and data.
Example 16 includes the apparatus of any of Examples 11 to 15, with or without optional features. In this example, the alternate mode packet includes a header that includes fields to be used for synchronization and error checking.
Example 17 includes the apparatus of any of Examples 11 to 16, with or without optional features. In this example, the USB packet includes a data packet.
Example 18 includes the apparatus of any of Examples 11 to 17, with or without optional features. In this example, the receiver adapter is further coupled to an eXtensible Host Controller Interface (xHCI) controller, wherein the xHCI controller is to send USB 3.x packets to the receiver adapter and send USB 2.0 packets directly to the alternate mode connection.
Example 19 includes the apparatus of any of Examples 11 to 18, with or without optional features. In this example, the receiver adapter includes a USB receiver adapter.
Example 20 includes the apparatus of any of Examples 11 to 19, with or without optional features. In this example, the apparatus includes an interface.
Example 21 is a method for transmitting a USB packet. The method includes: receiving a USB packet from a first USB device at a first interface; generating an alternate mode packet based on the USB packet at the first interface; transmitting the alternate mode packet to a second interface via an alternate mode connection; recovering the USB packet based on the alternate mode packet via the second interface; and transmitting the recovered USB packet to a second USB device via the second interface.
Example 22 includes the method of Example 21, with or without optional features. In this example, the first interface includes a USB transmitter adapter and the second interface includes a USB receiver adapter.
Example 23 includes the method of any of Examples 21 to 22, with or without optional features. In this example, generating the alternate mode packet based on the USB packet includes adding an alternate mode packet header to the USB packet.
Example 24 includes the method of any of Examples 21 to 23, with or without optional features. In this example, recovering the USB packet based on the alternate mode packet includes removing the alternate mode packet header from the alternate mode packet.
Example 25 includes the method of any of Examples 21 to 24, with or without optional features. In this example, the USB packet is a USB 3.x packet.
Example 26 includes the method of any of Examples 21 to 25, with or without optional features. In this example, the first USB device and the second USB device are USB 3.x devices.
Example 27 includes the method of any of Examples 21 to 26, with or without optional features. In this example, generating the alternate mode packet based on the USB packet includes segmenting the USB packet into alternate mode packets.
Example 28 includes the method of any of Examples 21 to 27, with or without optional features. In this example, recovering the USB packet includes combining a plurality of alternate mode packets to recover a segmented USB packet.
Example 29 includes the method of any of Examples 21 to 28, with or without optional features. In this example, the alternate mode connection is transparent to the first USB device and the second USB device.
Example 30 includes the method of any of Examples 21 to 29, with or without optional features.
In this example, the method includes transmitting a second set of USB packets from the second USB device to the first USB device via the second interface, the alternate mode connection, and the first interface.
Example 31 is a system for transmitting a universal serial bus (USB) packet. The system includes: means for receiving a USB packet from a USB device; means for generating one or more alternate mode packets based on the USB packet; and means for transmitting the alternate mode packets via an alternate mode connection.
Example 32 includes the system of Example 31, with or without optional features. In this example, the USB packet includes a USB 3.x packet.
Example 33 includes the system of any of Examples 31 to 32, with or without optional features. In this example, the alternate mode packet includes a packet header, the packet header including a value indicating a layer type.
Example 34 includes the system of any of Examples 31 to 33, with or without optional features. In this example, the alternate mode packet includes a packet header that includes a field for indicating a path between an upstream port and a downstream port.
Example 35 includes the system of any of Examples 31 to 34, with or without optional features. In this example, the alternate mode packet includes a packet to be transmitted periodically.
Example 36 includes the system of any of Examples 31 to 35, with or without optional features. In this example, the alternate mode packet includes a portion of a USB packet, wherein the USB packet includes a data packet.
Example 37 includes the system of any of Examples 31 to 36, with or without optional features. In this example, the alternate mode packet includes a packet header that includes a field for indicating a path for a virtual link between an upstream USB port and a downstream USB port.
Example 38 includes the system of any of Examples 31 to 37, with or without optional features. In this example, the alternate mode packet includes a packet header, the packet header including a length field for indicating the length of the transaction in bytes.
Example 39 includes the system of any of Examples 31 to 38, with or without optional features. In this example, the alternate mode packet includes a packet header that includes fields to be used for synchronization and error checking.
Example 40 includes the system of any of Examples 31 to 39, with or without optional features. In this example, the alternate mode connection includes a USB Type-C cable.
Example 41 is a system for receiving USB packets through an alternate mode interface. The system includes: means for receiving an alternate mode packet from a transmitter adapter via an alternate mode connection; means for recovering a USB packet based on the alternate mode packet; and means for sending the USB packet to a USB device.
Example 42 includes the system of Example 41, with or without optional features. In this example, the alternate mode connection includes a USB Type-C cable.
Example 43 includes the system of any of Examples 41 to 42, with or without optional features. In this example, the means for recovering the USB packet is to recover the USB packet based on the alternate mode packet by removing the alternate mode header from the alternate mode packet.
Example 44 includes the system of any of Examples 41 to 43, with or without optional features.
In this example, the means for recovering the USB packet is to recover USB packets based on the alternate mode packets by combining data from two or more alternate mode packets.
Example 45 includes the system of any of Examples 41 to 44, with or without optional features. In this example, the alternate mode packet includes a packet header that includes values to be used by the USB receiver adapter to parse commands and data.
Example 46 includes the system of any of Examples 41 to 45, with or without optional features. In this example, the alternate mode packet includes a header that includes fields to be used for synchronization and error checking.
Example 47 includes the system of any of Examples 41 to 46, with or without optional features. In this example, the USB packet includes a data packet.
Example 48 includes the system of any of Examples 41 to 47, with or without optional features. In this example, the means for receiving the alternate mode packet is coupled to an eXtensible Host Controller Interface (xHCI) controller, wherein the xHCI controller is to send USB 3.x packets to the receiver adapter and send USB 2.0 packets directly to the alternate mode connection.
Example 49 includes the system of any of Examples 41 to 48, with or without optional features. In this example, the means for recovering the USB packet includes a USB receiver adapter.
Example 50 includes the system of any of Examples 41 to 49, with or without optional features. In this example, the means for receiving the alternate mode packet includes an interface.
Example 51 is at least one computer readable medium for transmitting a USB packet, having instructions stored therein. The computer readable medium includes instructions that direct a processor to receive a USB packet from a first USB device. The computer readable medium also includes instructions that direct the processor to generate alternate mode packets based on the USB packet. The computer readable medium further includes instructions that direct the processor to send the alternate mode packets to an interface coupled to a second USB device.
Example 52 includes the computer readable medium of Example 51, with or without optional features. In this example, the computer readable medium includes instructions to receive a second set of alternate mode packets from the interface. The computer readable medium also includes instructions that direct the processor to generate USB packets based on the alternate mode packets. The computer readable medium further includes instructions that direct the processor to send the recovered USB packets to the first USB device.
Example 53 includes the computer readable medium of any of Examples 51 to 52, with or without optional features. In this example, the USB packet includes a data packet.
Example 54 includes the computer readable medium of any of Examples 51 to 53, with or without optional features. In this example, the computer readable medium includes instructions to add an alternate mode packet header to the USB packet.
Example 55 includes the computer readable medium of any of Examples 51 to 54, with or without optional features. In this example, the computer readable medium includes instructions to remove the alternate mode packet header from the alternate mode packet.
Example 56 includes the computer readable medium of any of Examples 51 to 55, with or without optional features.
In this example, the first USB device and the second USB device are USB 3.x devices.
Example 57 includes the computer readable medium of any of Examples 51 to 56, with or without optional features. In this example, the computer readable medium includes instructions to segment the USB packet into alternate mode packets.
Example 58 includes the computer readable medium of any of Examples 51 to 57, with or without optional features. In this example, the computer readable medium includes instructions to combine a plurality of alternate mode packets to recover segmented USB packets.
Example 59 includes the computer readable medium of any of Examples 51 to 58, with or without optional features. In this example, the alternate mode packets are transparent to the first USB device and the second USB device.
Example 60 includes the computer readable medium of any of Examples 51 to 59, with or without optional features. In this example, the computer readable medium includes instructions to parse commands and data via the headers of the alternate mode packets.
Not all components, features, structures, characteristics, etc. described and illustrated herein need be included in a particular aspect or aspects. If the specification states a component, feature, structure, or characteristic "may", "might", "can", or "could" be included, for example, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to "a" or "an" element, that does not mean there is only one of the element. If the specification or claims refer to "an additional" element, that does not preclude there being more than one of the additional element.
It is to be noted that, although some aspects have been described with reference to particular implementations, other implementations are possible according to some aspects. Additionally, the arrangement and/or order of circuit elements or other features illustrated in the drawings and/or described herein need not be arranged in the particular way illustrated and described. Many other arrangements are possible according to some aspects.
In each system shown in a figure, the elements in some cases may each have a same reference number or a different reference number to suggest that the elements represented could be different and/or similar. However, an element may be flexible enough to have different implementations and work with some or all of the systems shown or described herein. The various elements shown in the figures may be the same or different. Which one is referred to as a first element and which is called a second element is arbitrary.
It is to be understood that specifics in the aforementioned examples may be used anywhere in one or more aspects. For instance, all optional features of the computing device described above may also be implemented with respect to any of the methods or the computer readable media described herein. Furthermore, although flow diagrams and/or state diagrams may have been used herein to describe aspects, the techniques are not limited to those diagrams or to the corresponding descriptions herein. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described herein.
The present techniques are not restricted to the particular details set forth herein.
Indeed, those skilled in the art having the benefit of this disclosure will appreciate that many other variations from the foregoing description and drawings may be made within the scope of the present techniques. Accordingly, it is the appended claims, including any amendments thereto, that define the scope of the present techniques. |
A method, apparatus, and system to concurrently render independent images for display on one or more display devices. In an embodiment, a graphics-rendering engine concurrently renders independent images for display on multiple display devices. A graphics context manager stores in a first memory area and restores from the first memory area information describing a first rendering context associated with a first independent image. The graphics context manager stores in a second memory area and restores from the second memory area information describing a second rendering context associated with a second independent image. |
CLAIMS
What is claimed is:
1. An apparatus, comprising: a graphics-rendering engine to concurrently render two or more independent images for display on multiple display devices, the two or more independent images including a first independent image and a second independent image; and a graphics context manager to store in a first memory area and restore from the first memory area information describing a first rendering context associated with the first independent image, the graphics context manager to store in a second memory area and restore from the second memory area information describing a second rendering context associated with the second independent image.
2. The apparatus of claim 1, wherein the graphics context manager further comprises: a plurality of memory areas, each memory area to store a rendering context associated with the instructions from a particular graphics application, the plurality of memory areas including the first memory area and the second memory area; and a plurality of context identification registers including a first context identification register and a second context identification register, the first context identification register containing information to point to an address of the first memory area, the second context identification register containing information to point to an address of the second memory area.
3. The apparatus of claim 2, wherein the graphics context manager further comprises: a third register to track which memory area in the plurality of memory areas contains the rendering context information to be supplied to the graphics-rendering engine.
4. The apparatus of claim 1, wherein the first memory area is located on the same chip containing the graphics-rendering engine.
5. The apparatus of claim 2, wherein the first context identification register contains a field to assist in switching from the first rendering context associated with a two-dimensional image to the second rendering context associated with a three-dimensional image.
6. The apparatus of claim 2, wherein the first context identification register contains a field to assist in switching from the first rendering context associated with a texture-mapped image to the second rendering context associated with a non-texture-mapped image.
7. The apparatus of claim 2, further comprising: the first memory area to contain instructions for the two or more independent images in a first instruction stream.
8. The apparatus of claim 2, further comprising: the first memory area to contain instructions for one or more independent images in a first instruction stream, and the first memory area to contain instructions for one or more independent images in a second instruction stream.
9. The apparatus of claim 1, further comprising: one or more instruction transports to deliver instructions for the two or more independent images to the graphics-rendering engine, the one or more instruction transports including a first instruction transport.
10. The apparatus of claim 9, wherein each instruction transport is associated with a particular display device.
11. The apparatus of claim 9, wherein the first instruction transport comprises: an instruction memory area; a first register to define a start and an end to the instruction memory area; and a memory access engine to fetch and deliver the instructions from the instruction memory area to the graphics-rendering engine.
12. The apparatus of claim 9, wherein the first instruction transport further comprises: a third memory area to store an independent sequence of instructions that can be invoked from an instruction stream.
13. The apparatus of claim 1, further comprising: a time allocator to arbitrate the use of the graphics-rendering engine between the two or more independent images.
14. The apparatus of claim 13, wherein the time allocator comprises: a plurality of registers including a first register, the first register having a plurality of fields, a first field to determine whether the first register participates in an arbitration process to use the graphics-rendering engine, a second field to point to a memory location containing instructions from a first instruction stream.
15. The apparatus of claim 13, wherein the time allocator further comprises: a first module to establish a programmable elapsed period of time to use the graphics-rendering engine, the period of time being defined by a programmable number of unit time periods, where each unit time period is defined by a programmable number of real-time time quanta.
16. The apparatus of claim 14, wherein the time allocator further comprises: a first module to direct the graphics-rendering engine to process instructions associated with a first independent image, the instructions stored in a first memory area, the first memory area having an address defined by information contained within the plurality of the fields.
17. A method, comprising: concurrently rendering instructions associated with multiple independent images within a first instruction stream; storing in a first memory area information representing a first rendering context associated with a first independent image; restoring from a second memory area instructions representing a second rendering context associated with a second independent image; and switching a graphics-rendering engine from the first rendering context to the second rendering context.
18. The method of claim 17, further comprising: using a timing circuit to allocate the use of the graphics-rendering engine between instructions associated with a first graphics application and instructions associated with a second graphics application.
19. The method of claim 17, further comprising: including the first memory area and the second memory area in a plurality of memory areas; and using a volatile memory device to track which memory area in the plurality of memory areas contains the rendering context information to be supplied to the graphics-rendering engine.
20. The method of claim 17, further comprising: displaying the multiple independent images on a single display device.
21. A system, comprising: a central processing unit; and a graphics device, the central processing unit coupled to the graphics device, the graphics device containing a graphics-rendering engine to concurrently render two or more independent images for display on multiple display devices, and a graphics context manager to store in a first memory area and restore from the first memory area information describing a first rendering context associated with the first independent image, the graphics context manager to store in a second memory area and restore from the second memory area information describing a second rendering context associated with the second independent image.
22. The system of claim 21, wherein the graphics device further comprises: a time allocator to arbitrate the use of the graphics-rendering engine between the two or more independent images.
23. The system of claim 22, wherein the graphics device further comprises: an instruction transport to deliver instructions for the independent images to the graphics-rendering engine as controlled by the time allocator.

***** |
APPARATUS, METHOD AND SYSTEM WITH A GRAPHICS-RENDERING ENGINE HAVING A GRAPHICS CONTEXT MANAGER

FIELD OF THE INVENTION

[001] This invention generally relates to rendering multiple images. More particularly, this invention relates to rendering multiple images on one or more display devices.

BACKGROUND OF THE INVENTION

[002] Image rendering is the conversion of a high-level object-based description into a graphical image for display on some display device. For example, an act of image rendering occurs during the conversion of a mathematical model of a three-dimensional object or scene into a bitmap image. Another example of image rendering is converting an HTML document into an image for display on a computer monitor. Typically, a hardware device referred to as a graphics-rendering engine accelerates these graphics processing tasks.

[003] Multiple images may be commonly viewed on a computer monitor when surfing the Internet. For example, a web page and two banner ads superimposed over the web page may be displayed on a computer monitor when surfing the Internet. The graphics-rendering engine typically renders all of the instructions associated with the first image, such as the web page. After completing processing the instructions for the first image, the graphics-rendering engine starts processing the instructions associated with the second image, such as one of the banner ads. However, in general, the graphics-rendering engine must finish rendering the instructions associated with the first image before starting to process the instructions associated with the second image. Thus, if the graphics-rendering engine processes instructions faster than the graphics application program generates instructions, then the graphics-rendering engine remains idle during that period of time. Also, if the image instructions call for a real world event to occur prior to executing the next instruction, then the graphics-rendering engine remains idle during that period of time. Typically, a graphics-rendering engine services instruction streams sequentially. Thus, the instructions associated with the first instruction stream were processed before the graphics-rendering engine started processing instructions associated with a second instruction stream.

[004] Another example could be the rendering of two independent images in a three-dimensional environment. A single display screen displays a first window that contains the 3D image and a second window that contains the displayed image of a controlling 2D graphic user interface. As noted, in previous technologies, the instructions for the image in the first window were processed before the graphics-rendering engine started processing instructions for the image in the second window.

[005] Previous technologies have displayed multiple images on multiple devices. Typically, two or more graphics-rendering engines exist to process the instructions associated with the multiple images. Each graphics-rendering engine services a single display device. However, in practice, multiple graphics-rendering engines occupy more physical space, consume more power, and cost more to produce than a single graphics-rendering engine. Thus, reducing the number of graphics-rendering engines is beneficial. Moreover, previous technologies attempting to render different images on the same display screen with two or more graphics-rendering engines encountered grave arbitration conflicts.
[006] Each graphics-rendering engine is controlled via a set of rendering state variables. These state variables are known collectively as the rendering context. The rendering state variables control specific aspects of the graphics rendering process, such as object color, texture, texture application modes, etc.

[007] A specific rendering context exists with each image as that image is being rendered. Previous technologies use an inefficient method to set the rendering context associated with an image. The graphics driver program receives instructions from the application programs and sends the instruction streams containing the instructions, including the state variable settings currently associated with the image, to the graphics-rendering engine. The graphics-rendering engine processes these rendering context instructions prior to executing the other rendering instructions. When a graphics-rendering engine switches between processing instructions associated with a first image and instructions associated with a second image, the graphics application programs need to send the rendering context instructions and the graphics-rendering engine needs to process those rendering context instructions.

[008] Previously, the rendering context associated with a graphics-rendering engine was modified only via the software-generated instruction stream, and was not directly accessible from the host CPU. Changing from a first rendering context, such as the current rendering context, to a second rendering context, such as a new rendering context, therefore required the application software to generate instructions to specify the state variable settings for the second rendering context. Given that the first rendering context could not be read, application software was required to maintain a shadow copy of the first rendering context in order to restore that first rendering context at some later point.

BRIEF DESCRIPTION OF THE DRAWINGS

[009] The drawings refer to the invention in which:

figure 1 illustrates a block diagram of an embodiment of a graphics device that renders one or more images using a single graphics-rendering engine to display the one or more images on multiple display devices;

figure 2 illustrates a block diagram of an embodiment of a computer system containing a central processing unit, a cache, a memory, display devices, and a graphics device having an embodiment of an instruction transport and an embodiment of a graphics context manager;

figure 3 illustrates a block diagram of an embodiment of a ring buffer memory area;

figure 4 illustrates a block diagram of an embodiment of a time allocator to allocate the use of the graphics-rendering engine between each independent image being rendered;

figure 5 and figure 6 illustrate a flow diagram of an embodiment of a process for rendering multiple images on multiple display devices using a single graphics-rendering engine.

[0010] While the invention is subject to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will herein be described in detail. The invention should be understood to not be limited to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.

DETAILED DISCUSSION

[0011] In the following description, numerous specific details are set forth, such as examples of specific instructions, named components, connections, etc.
in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known components or methods have not been described in detail but rather are shown in block diagram form in order to avoid unnecessarily obscuring the present invention. Thus, the specific details set forth are merely exemplary. The specific details may be varied from and still be contemplated to be within the spirit and scope of the present invention. The term coupled is defined as meaning connected either directly or indirectly.

[0012] In general, a graphics-rendering engine concurrently renders independent images for display on multiple display devices. An instruction transport delivers instructions for the two or more independent images to the graphics-rendering engine. A time allocator arbitrates the concurrent use of the graphics-rendering engine between each independent image being rendered. A graphics context manager restores a rendering context associated with a first independent image from an established memory location to the graphics-rendering engine.

[0013] Figure 1 illustrates a block diagram of an embodiment of a graphics device that renders one or more images using a graphics-rendering engine to display the one or more images on multiple display devices. Referring to figure 1, the graphics device 100 contains a graphics-rendering engine 102, one or more instruction transports 104, a context manager 106, a time allocator 108, and one or more display devices, such as the first display device 110 and the second display device 112. In an embodiment, the graphics device 100 contains a single graphics-rendering engine 102.

[0014] The graphics-rendering engine 102 generates independent images to be displayed on either a single display device or multiple display devices. Thus, for example, two independent images may be displayed on the same display device, or the two independent images may each be displayed on separate display devices. The instructions for each independent image come from a separate instruction stream 114 or from a single instruction stream 114 containing instructions from multiple graphics application programs.

[0015] Each independent image may be concurrently rendered, as compared to prior art technology displaying a web page with banner ads through a browser application, or sequentially rendering a first instruction stream associated with a two-dimensional image and then rendering a second instruction stream associated with a three-dimensional image. Generally, the prior art technology completely renders the image instructions associated with the first image contained in the first window, such as the banner ad, and then completely renders the instructions for the second image contained in the second window, such as the web page. Typically, in the prior technology, the graphics-rendering engine does not concurrently operate on the instructions for each independent image.

[0016] The time allocator 108 arbitrates the use of the graphics-rendering engine 102 between each independent image being rendered. A graphics context manager 106 stores the context associated with each independent image being rendered in a memory device (not shown). Various graphics applications running on the processor, or running in a browser on the processor, insert image rendering instructions into the instruction stream 114.
An instruction transport 104 delivers the instructions from an instruction stream 114 to the graphics-rendering engine 102 for processing.

[0017] The graphics-rendering engine 102 works with the graphics context manager 106, time allocator 108, and one or more instruction transports 104 to make efficient use of the graphics-rendering engine 102. Each graphics application supplying instructions to the instruction stream 114 may be generating images and operating at different rates of speed. For example, a streaming live video application usually operates at a much faster image generation rate than a word processing application. The graphics-rendering engine 102 may concurrently render instructions associated with two or more images to minimize the time the graphics-rendering engine 102 remains idle. Also, in previous technologies, if the instruction for a first image called for a real world event to occur prior to executing the next instruction, then the graphics-rendering engine 102 remained idle during that period of time. However, the graphics-rendering engine 102 may concurrently render instructions from multiple images in order to reduce the idle time for the graphics-rendering engine 102.

[0018] The graphics-rendering engine 102 may save the current rendering context associated with a first image and load a new rendering context associated with a second image from an established memory location (not shown). In an embodiment, the established memory location used to store a rendering context may be referred to as a logical context (not shown). The graphics device 100, when required to switch rendering contexts, may (1) write the current rendering context from the rendering state variables into a first established memory location in memory, (2) read the new rendering context from a second established memory location in memory, and (3) load the rendering state variables with the information from the new rendering context. In an embodiment, an established memory location in the context manager 106 is associated with each graphics application that is generating an independent image. In an embodiment, a separate instruction transport 104 is associated with each display device 110,112 to store the independent set of image rendering instructions to be processed for that particular display device 110,112.

[0019] Figure 2 illustrates a block diagram of an embodiment of a computer system containing a central processing unit (CPU), a cache, a memory, display devices, and a graphics device having an embodiment of an instruction transport and an embodiment of a graphics context manager. The graphics device 200 contains multiple ring buffer registers 204,206, a ring buffer direct memory access engine (RB DMA ENG) 212, a graphics-rendering engine 214, and context identification registers (CID) 222,224. Multiple ring buffer memory areas 208,210, multiple established memory locations 216,218, 220 and multiple display devices 228,230 are associated with the graphics device 200. In an embodiment, an instruction transport includes multiple ring buffer registers 204,206, multiple ring buffer memory areas 208,210 and a direct memory access engine 212. In an embodiment, a context manager consists of context identification registers (CID) 222,224, an active context identification register (Active CID) 226, and multiple established memory locations 216,218, 220.

[0020] Figure 3 illustrates a block diagram of an embodiment of a ring buffer memory area.
As noted above, an embodiment of the instruction transport contains one or more ring buffer registers 310 and one or more ring buffer memory areas 300 through which software-generated instructions can be passed to the graphics-rendering engine (not shown). A ring buffer memory area 300 holds the actual image rendering instructions from a graphics application (not shown). The ring buffer register 310 defines the start and length of the ring buffer memory area 300, and includes two "offsets", a head 304 and a tail 302, into the ring buffer memory area 300. The tail offset 302 informs the graphics-rendering engine of the presence of valid instructions that must be executed. The head offset 304 is incremented by the graphics-rendering engine as those instructions are parsed and executed. Instructions can wrap around from the bottom of the ring buffer memory area 300 back to the top of the ring buffer memory area 300. In an embodiment, the ring buffer memory area 300 stores an instruction to point to the location of a batch buffer (not shown). The batch buffer contains a separate list of image rendering instructions that may be stored in a discrete memory area to provide extra instruction storage capacity. In an embodiment, the batch buffer stores an independent sequence of instructions that can be invoked from an instruction stream.

[0021] Referring back to figure 2, each ring buffer register 204,206 may have multiple fields within the register. The fields contained within an embodiment of a ring buffer register, such as the first ring buffer register 204, may be a ring buffer valid field (V) 232, a start address field (S) 234, a buffer length field (L) 235, a head offset field (H) 236, a head wrap count field (W) 233, a tail offset field (T) 237, an automatic report head enable field (R) 238, a time slice field (TS) 239, and other similar fields.

[0022] The ring buffer valid field 232 controls whether this particular ring buffer register is included in the arbitration process for sharing the graphics-rendering engine 214. The start address field 234 points to the start of a contiguous memory region comprising the ring buffer memory area 208,210. A ring buffer memory area 208,210 may be located in either the system memory 232 or a dedicated memory. The buffer length field 235 specifies the size in bytes of the allocated ring buffer memory area 208,210. In an embodiment, the ring buffer length field 235 defines the largest amount of data that can be submitted at any one time to a ring buffer memory area 208,210. In an embodiment, the ring buffer memory area 208,210 may contain image rendering instructions and pointers to one or more batch buffers 240, thereby making a virtually limitless memory area to contain instructions.

[0023] The head offset field 236 points to the memory offset, from the start address 234, of the next instruction that the graphics-rendering engine 214 will parse. For example, the head offset 236 may point to one memory unit past the last instruction parsed. The graphics-rendering engine 214 updates the head offset field 236 as instructions are parsed. Once the head offset 236 reaches the value of the tail offset 237, i.e. the offsets are equal, the graphics-rendering engine 214 considers the ring buffer memory area 208,210 empty and removes the corresponding ring buffer register 204,206 from the arbitration process for sharing the graphics-rendering engine 214 as long as that condition remains.
Thus, an indication exists that the instruction stream for that particular display device should be removed from the arbitration process. Also included in the ring buffer registers 204,206 is an automatic report head enable field that enables the head pointer value and the head wrap count field 233 to be written to cacheable memory for more efficient flow control algorithms. For example, flow control algorithms may poll the head offset 236 to ascertain progress.

[0024] The ring buffer memory area 208,210 may wrap instructions from the end of the memory area to the start of the memory area. The head wrap count field 233 is incremented by the graphics-rendering engine 214 every time the head offset 236 wraps around back to the start address 234 of the ring buffer memory area 208,210. In an embodiment, the head wrap count field 233 is included in the DWord written in the "report head" process. The graphics device 200 can use the head wrap count field 233 to track the instruction parsing progress as if the ring buffer memory area 208,210 had a "virtual" length much greater than the size of the actual physical buffer.

[0025] The tail offset field 237 points to a location in the ring buffer memory area 208,210 that is offset a specific distance from the start address 234. The tail offset field 237 may point to the next memory unit of instruction data that graphics application software can use to store additional image rendering instructions to be later executed. For example, the tail offset field 237 points one memory unit past the last instruction submitted to the graphics-rendering engine 214 for execution. The instructions submitted can wrap around from the end of the ring buffer memory area 208,210 back to the top, in which case the tail offset 237 written will be less than the previous value. The "empty" condition of a ring buffer memory area 208,210 may be defined as "head offset field 236 equals the tail offset field 237."

[0026] The automatic report head enable field 238 allows graphics application software or operating software to request to have the head offset field 236 and head wrap count field 233 contents written to a specific, CPU-snooped system memory location on a periodic basis. Auto-reports can be programmed to occur each time the head offset field 236 advances by a programmed amount. The auto-report mechanism allows software to use the head offset field 236 and head wrap count field 233 to determine the amount of free space in the ring buffer. Thus, the head offset field 236 may be periodically reported to the system memory to provide a fairly up-to-date head offset field 236 value automatically, without having to explicitly obtain a head pointer value via an instruction.

[0027] Each display device 228,230 may have a separate instruction transport associated with that individual display device. As illustrated in figure 2, the first ring buffer register 204 and the first ring buffer memory area 208 are associated with the first display device 228. The second ring buffer register 206 and the second ring buffer memory area 210 are associated with the second display device 230. Thus, in this example, the first ring buffer register 204 and first ring buffer memory area 208 provide the instructions for the rendering of the independent image to be displayed on the first display device 228. In an embodiment, the first ring buffer register 204 and first ring buffer memory area 208 may be associated with the second display device 230.
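To make the head/tail discipline of paragraphs [0023] through [0025] concrete, the following is a minimal sketch in C of the ring buffer behavior described above. The structure and function names are hypothetical, and the layout is a simplification of the S/L/H/T/W register fields, not the actual hardware definition.

#include <stdint.h>
#include <stdbool.h>

/* Simplified model of one ring buffer register and its memory area.
   Field names (start, length, head, tail, wrap_count) loosely mirror
   the S/L/H/T/W fields described above. */
struct ring_buffer {
    uint32_t *start;      /* start address of the instruction memory area */
    uint32_t  length;     /* size of the area, in 32-bit words */
    uint32_t  head;       /* offset of the next instruction to parse */
    uint32_t  tail;       /* offset one past the last submitted instruction */
    uint32_t  wrap_count; /* incremented each time head wraps to the start */
};

/* The "empty" condition: head offset equals tail offset. */
static bool rb_empty(const struct ring_buffer *rb)
{
    return rb->head == rb->tail;
}

/* Consume one instruction word; the parser advances the head offset,
   wrapping from the end of the area back to the start. */
static uint32_t rb_fetch(struct ring_buffer *rb)
{
    uint32_t word = rb->start[rb->head];
    if (++rb->head == rb->length) {   /* wrap around */
        rb->head = 0;
        rb->wrap_count++;             /* tracked for flow control */
    }
    return word;
}

The empty check is also the removal condition from arbitration: while rb_empty() holds, the corresponding register drops out of contention for the rendering engine.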
[0028] Multiple instruction transports allow different priorities to be assigned to each instruction transport. For example, lower priority instruction transports can be used for interruptible background rendering tasks. Likewise, a higher priority instruction transport can be used to service asynchronous events, such as video frame capture. Also, by allocating a first instruction transport to service one display device, such as the first display device 228, and a second instruction transport to service another display device, such as the second display device 230, the graphics device 200 can support separate instruction streams per display device. Further, the graphics device 200 can support separately controlled instruction streams per display device.

[0029] As noted above, each instruction transport may include a direct memory access engine 212. The direct memory access engine 212 fetches instructions from a particular instruction transport and delivers these instructions to the graphics-rendering engine 214.

[0030] The graphics-rendering engine 214 reads image instructions from the instruction transport via the direct memory access engine 212 and executes these image instructions. The graphics-rendering engine 214 detects the presence of instructions within the ring buffer memory areas 208,210 via the difference between the head offset field 236 and the tail offset field 237 in the ring buffer register 204,206. The graphics-rendering engine 214 interprets and decodes the common "Header" field of instructions in order to determine what information the instruction contains and therefore how to further execute the instruction. This interpretation and decoding of instructions is commonly referred to as parsing.

[0031] In an embodiment, the graphics-rendering engine 214 decodes specific instructions from the instruction stream 242 to find out what information the instruction contains (e.g., a state variable change 246 to apply or a primitive 248 to be rendered). The graphics-rendering engine 214 then executes the instruction accordingly. The execution of a state variable change instruction 246 causes a specific change to the current rendering context. The execution of a primitive instruction 248 causes modification of the appropriate image information in memory 256,258 (i.e., the image is rendered). The graphics-rendering engine 214 then stores the image information in memory locations corresponding to each display device 228,230, such as the first display image 256 and the second display image 258. In an embodiment, the information for the first display image 256 and the information for the second display image 258 are stored in a local memory dedicated to both the first display device 228 and the second display device 230. In an embodiment, the instructions for the first display image 256 and the instructions for the second display image 258 are stored in the system memory 232. The graphics-rendering engine 214 reads the rendered image information from memory and presents the rendered image information to the associated display device on a periodic basis. The display device, such as the first display device 228, then illustrates the actual images on a display based upon this information.

[0032] In an embodiment, the graphics applications supply instructions into the instruction stream 242. As noted, these instructions may be stored in a ring buffer memory area 208,210, which is usually associated with a particular display device 228,230.
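The fetch/decode/execute parsing step of paragraphs [0030] and [0031] might look roughly like the following, building on the ring_buffer sketch given earlier. The opcode values, header encoding, and handler names are hypothetical; the actual instruction format is not specified in this description.

/* Hypothetical opcode values; the real "Header" field encoding is not
   given in the text above. */
enum opcode { OP_STATE_CHANGE, OP_PRIMITIVE, OP_SET_CONTEXT, OP_BATCH_START };

/* Stub handlers standing in for the behaviors described above. */
static void apply_state_change(struct ring_buffer *rb) { (void)rb; /* change the current rendering context */ }
static void render_primitive(struct ring_buffer *rb)   { (void)rb; /* modify image information in memory */ }
static void switch_context(struct ring_buffer *rb)     { (void)rb; /* save current context, restore new one */ }
static void run_batch_buffer(struct ring_buffer *rb)   { (void)rb; /* independent instruction sequence */ }

/* Parse until the ring buffer is empty (head offset equals tail offset). */
static void parse_instructions(struct ring_buffer *rb)
{
    while (!rb_empty(rb)) {
        uint32_t header = rb_fetch(rb);
        switch ((enum opcode)(header >> 24)) {   /* decode the header field */
        case OP_STATE_CHANGE: apply_state_change(rb); break;
        case OP_PRIMITIVE:    render_primitive(rb);   break;
        case OP_SET_CONTEXT:  switch_context(rb);     break;
        case OP_BATCH_START:  run_batch_buffer(rb);   break;
        }
    }
}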
In an embodiment, some of the types of instructions found in the instruction stream 242 may be a state variable change 246, a primitive 248, and set context commands 250,252. A primitive instruction 248 directs the graphics-rendering engine 214 as to the shapes to draw and the location and dimensions to attribute to those shapes. The state variable change instruction 246 directs the graphics-rendering engine 214 to modify the current values of the set of rendering state variables stored in the hardware graphics context circuit 244 when rendering an image. In an embodiment, the set context command (Set CXT #) 250,252 may cause the graphics-rendering engine 214 to save the current rendering context to an established memory location, such as the first established memory location 216, and restore the new rendering context from a new established memory location, such as a second established memory location 218.

[0033] Each established memory location, such as the first established memory location 216, stores the rendering context of an image being rendered by the graphics-rendering engine 214. Likewise, each established memory location 216,218, 220 may store the settings of the rendering state variables to be employed when rendering the associated independent image. In an embodiment, the existence of multiple established memory locations 216,218, 220 allows the graphics-rendering engine 214 to keep track of the rendering context associated with each image being rendered. An embodiment of a context manager contains multiple established memory locations 216,218, 220 and context identification registers 222,224, 226 in order to manage the concurrent rendering of multiple images. An embodiment of a context manager coordinates with a graphics display controller circuit (GDC) 270 to support displaying images on multiple display devices 228,230 as well as displaying multiple images on the same display device, such as the first display device 228.

[0034] The settings of numerous hardware state variables in the hardware graphics context circuit 244 control the graphics operations, such as rendering, in the graphics device 200. The state variables may include global state variables and context state variables. Global state variables are common to all contexts (e.g., logical address mapping resources, etc.) and are therefore considered outside the scope of any specific rendering context. However, each rendering context associated with a specific graphics application does contain a separate set of context state variables. In an embodiment, these rendering contexts associated with a specific graphics application may be stored in established memory locations in active on-chip memory or in multiple established memory locations 216,218, 220 in system memory 232.

[0035] As noted, the multiple established memory locations 216,218, 220 support the graphics-rendering engine 214 by storing in a memory 232 and restoring from the memory 232 the rendering context associated with the independent image being rendered by the graphics-rendering engine. In an embodiment, a set context instruction from the instruction stream 242, such as set context-A0 250, directs the graphics-rendering engine 214 to send the current rendering context for the image being rendered to an established memory location, such as the first established memory location 216, for storage.
At the same time, the second established memory location 218, associated with the graphics application generating the second image, receives a signal from the graphics-rendering engine 214 to restore the rendering context associated with a second image being concurrently rendered by the graphics-rendering engine 214. In an embodiment, the addition of a context cache 260 located on the device reduces the memory bandwidth and time required to swap contexts.

[0036] The context manager also consists of context identification registers (CID) 222,224, and an active context identification register 226. Context identification registers 222,224 are associated with a particular ring buffer register 204,206 and thus a particular display image memory location 256,258.

[0037] In an embodiment, the active context identification register 226 tracks the context identification register 222,224 value contained within the currently active ring buffer register 204,206. The tracked context identification register, such as the first context identification register 222, establishes which particular established memory location 216,218, 220 is associated with the image currently being rendered by the graphics-rendering engine.

[0038] In an embodiment, each context identification register 222,224 contains an established memory location address and a set of context qualifier bits. The context qualifier bits control whether portions of the rendering context either do or do not have to be saved/restored upon a context switch. In an embodiment, each context identification register 222,224 implements context qualifier bits such as a "Texture Palette Save Disable" context qualifier bit and a "Texture Palette Restore Disable" context qualifier bit. In an embodiment, these context qualifier bits aid in the swapping of context between two-dimensional and three-dimensional images, where the three-dimensional images may require a current Texture Palette to be maintained (i.e., saved and restored as part of the rendering context) while the two-dimensional images may not.

[0039] Established memory locations 216,218, 220 are referenced via the established memory location address of the corresponding context identification register 222,224. The actual size of an established memory location 216,218, 220 is the amount of data stored/restored during a context switch and depends on whether the rendering context includes a texture palette. In an embodiment, a context identification register 222,224 may contain two additional registers to specify the respective established memory location 216,218, 220 size in memory 232. In an embodiment, a particular context identification register 222,224 is made the active register during the processing of a "set context" instruction 250,252 from the instruction stream 242 being stored in the corresponding ring buffer memory area 208,210. In an embodiment, the set context instruction 250,252 provides a new context identification value (local context address + palette save disable bits) to be loaded into the context identification register 222,224. The set context instruction 250,252 also contains a restore inhibit bit used to optionally inhibit the restoration of the new context. In an embodiment, the restore inhibit bit may be used during context initialization to avoid the loading of uninitialized context data from memory 232.

[0040] The active context identification register 226 contains the context identification values of the active ring buffer register, such as the first ring buffer register 204.
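A minimal sketch in C of how a set context instruction might drive these registers follows; the precise compare-and-switch behavior is elaborated in the next paragraphs. All types and helper names here are hypothetical, and the helpers are stubs standing in for the hardware save/restore paths, not the device's actual implementation.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical view of a context identification value: an established
   memory location address plus qualifier bits. */
struct context_id {
    uint32_t mem_addr;                /* established memory location address */
    bool     palette_save_disable;    /* "Texture Palette Save Disable" bit */
    bool     palette_restore_disable; /* "Texture Palette Restore Disable" bit */
};

static struct context_id active_cid;         /* models the Active CID register */
static bool active_cid_valid = false;

static void save_context_to(uint32_t addr, bool skip_palette)
{ (void)addr; (void)skip_palette; /* write rendering state variables to memory */ }

static void restore_context_from(uint32_t addr, bool skip_palette)
{ (void)addr; (void)skip_palette; /* load rendering state variables from memory */ }

/* Executed when a set context instruction is parsed. */
static void set_context(struct context_id new_cid, bool restore_inhibit)
{
    /* Compare established memory location addresses; switch only if they
       differ or the active register is uninitialized. */
    if (active_cid_valid && new_cid.mem_addr == active_cid.mem_addr)
        return;

    if (active_cid_valid)
        save_context_to(active_cid.mem_addr, active_cid.palette_save_disable);

    /* The restore inhibit bit avoids loading uninitialized context data
       during context initialization. */
    if (!restore_inhibit)
        restore_context_from(new_cid.mem_addr, new_cid.palette_restore_disable);

    active_cid = new_cid;   /* load the Active CID with the new value */
    active_cid_valid = true;
}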
As part of the execution of the set context instruction 250,252, the established memory location address fields from the active context identification register 226 and the set context instruction are compared. If they differ, or the active context identification register 226 is uninitialized, a context switch operation occurs.

[0041] In an embodiment, during the context switch operation, if a restore inhibit instruction field is not set, a context restore operation may be performed. Here, the address value for an established memory location, such as the first established memory location 216, is used to load the active context identification register 226. Note that the context qualifier fields of the instruction may further condition the restoration of portions of the rendering context. For example, the texture palette may or may not be restored.

[0042] The HW GFX CXT 244 causes the load of the new context from the appropriate established memory location, as well as the loading of the active context identification register with the value from the set context instruction 250,252. At this point, the corresponding ring buffer register 204,206 and ring buffer memory area 208,210 have switched the active context to the new established memory location 216,218, 220.

[0043] As noted previously, each graphics application may be generating image instructions at different rates of speed. Equally true is that each display device 228,230 may refresh the display and its associated image at different rates of speed. In an embodiment, the context manager and the instruction transport support the seamless switching between different instruction streams, switching between different display devices 228,230, and switching between rendering contexts associated with different graphics applications within the same instruction stream 242.

[0044] Figure 4 illustrates a block diagram of an embodiment of a time allocator to allocate the use of the graphics-rendering engine between each independent image being rendered. In an embodiment, the time allocator 400 contains an arbitration and switching module 410, a timer register 412, a unit register 414, a unit-time counter 416 and a time slice counter 418. In an embodiment, the time allocator 400 provides an elapsed time criterion and a fairness use criterion to allocate the use of the single graphics-rendering engine 411. In an embodiment, the time allocator 400 may allocate the use of the graphics-rendering engine 411 to render independent images among multiple display devices (not shown), multiple graphics application programs each having its own instruction stream 413, and multiple graphics application programs within a single instruction stream 413.

[0045] Each ring buffer register, such as the first ring buffer register 402 and the second ring buffer register 404, may be time-sliced, or the ring buffer register may be non-time-sliced, such as the third ring buffer register 406. As will be described later, each non-time-sliced register may be used for high-priority graphic images, such as live video, to temporarily monopolize the use of the graphics-rendering engine 411.

[0046] Each time-sliced ring buffer register 402,404 has associated with it a TIME-SLICE register 420,422 that specifies the desired duration of instruction execution to be performed before indicating that a switch to another time-sliced ring buffer should be checked.
In an embodiment, a time slice field 420,422 in the ring buffer register 402,404 exists to specify a percent of use of the graphics-rendering engine 411 that should be accorded to this particular ring buffer register 402,404. The time slice field 420,422 may also specify the minimum absolute time use of the graphics-rendering engine 411 that should be accorded to this ring buffer register 402,404. In an embodiment, the desired duration of instruction execution may be programmed in time units. In an embodiment, the driver software 424 may write these time unit values into each time slice field 420,422. Thus, the driver software 424 is able to control both the absolute and relative time devoted to each time-sliced ring buffer register 402,404. The CPU 440 accesses the driver software 424 from a memory, such as memory 442.

[0047] The unit register 414 provides a forward-compatible unit-time quantum to be used by the driver software 424. Establishing a unit-time quantum is important where the actual time reference of the device may vary between configurations and/or implementations. In an embodiment, the unit register 414 uses the core clock period of the graphics device 400 as the actual time reference. The unit register 414 may be programmed via the BIOS firmware 426 for the graphics device 400. The other time slice parameters may be defined relative to this unit-time quantum established by the unit register 414. Each unit-time quantum defined by the unit register 414 may be, for example, one unit-time equals fifty microseconds, or one unit-time equals forty clock cycles.

[0048] The unit register 414 also contains a time-slice enable bit (T) 428 to turn ring buffer time slicing on or off. In an embodiment, when the time-slice enable bit 428 of the unit register 414 is clear, fixed ring buffer priorities are in effect. In an embodiment, when the time-slice enable bit 428 is set, arbitration between the time-sliced ring buffer registers 402,404 is controlled via the time slice fields 420,422.

[0049] A timer register 412 implements the time slice timing control. When the time-slice enable bit 428 is set, the timer register 412 reads the value in units written into the time slice field 420,422 portion of each ring buffer register 402,404. In this mode, the activation or resumption of an instruction stream 413 supplying instructions to a specific ring buffer memory area, such as the first ring buffer memory area 430, causes the timer countdown field (TC) 434 to be initialized with the content value in the time slice register 420,422 portion of that specific ring buffer, such as the first ring buffer register 402. The timer countdown field 434 decrements every time-unit while the execution of the instructions from the ring buffer memory area continues.

[0050] The time slice counter 418 decrements the timer countdown field 434 every time unit. The unit-time counter 416 monitors and counts every core clock cycle. The unit-time counter 416 sends a signal to the time slice counter 418 to decrement the timer countdown field 434 based upon the established unit-time quantum defined by the unit register 414.

[0051] In an embodiment, if the following two conditions exist, then the graphics-rendering engine 411 receives an instruction from the arbitration and switching module 410 to stop rendering the instructions from a ring buffer memory area and start rendering instructions from another ring buffer memory area.
The two conditions are that the timer countdown field 434 has become zero and that pending instructions exist in the other ring buffer memory area. The graphics-rendering engine 411 then switches to executing instructions from the other ring buffer memory area, such as the second ring buffer memory area 432, which causes the timer countdown field 434 to be reinitialized with the contents of the time slice field 422 in the second ring buffer register 404. The switch occurs at the next instruction arbitration point.

[0052] However, if there are no pending instructions in the other ring buffer memory areas, such as the first ring buffer memory area 430, when the timer countdown field 434 becomes zero, then execution of the instructions in the current ring buffer memory area continues. In an embodiment, the execution of the instructions in the current ring buffer memory area continues indefinitely until the other ring buffer register communicates the presence of instructions. In an embodiment, a ring buffer register, such as the first ring buffer register 402, indicates the presence of instructions to execute when the value in the head offset field 415 differs from the value in the tail offset field 417. In an embodiment, the presence of the new instructions is communicated to the arbitration and switching module 410. The arbitration and switching module continues the execution of the instructions in the current ring buffer memory area for the value specified in the time slice field 420,422 and then switches to executing the new instructions.

[0053] The active context identification register communicates to the graphics-rendering engine 411, via the arbitration and switching module 410, the context identification register values of the active ring buffer register (not shown).

[0054] Several mechanisms can interrupt the arbitration process for use of the graphics-rendering engine 411 between two ring buffer registers having pending instructions stored in their respective ring buffer memory areas. As noted above, a non-time-sliced high-priority ring buffer, such as the third ring buffer register 406, may communicate to the arbitration and switching module 410 to suspend the timer countdown 434 and the rendering of instructions for the currently active time-sliced ring buffer register. This suspension is only temporary, until the graphics-rendering engine 411 finishes rendering the current instructions associated with the non-time-sliced ring buffers.

[0055] The instruction stream 413 from the graphics application software may contain instructions to temporarily interrupt the arbitrated use of the graphics-rendering engine 411. For example, a "load register" instruction 423 may interrupt the arbitrated use of the graphics-rendering engine 411 between two time-sliced ring buffer registers 402,404 having pending instructions stored in their respective ring buffer memory areas 430,432. The software can use the "load register" instruction 423 to clear the timer countdown field 434 and, thus, effectively make the active ring buffer register give up the remainder of its time slice period if pending instructions exist in another ring buffer memory area. For example, the "load register" instruction 423 may be used when the time for the instructions being executed is not anticipated to exceed either the specified percent of use or the absolute minimum time accorded to the ring buffer register 402,404.
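The basic countdown-and-switch arbitration of paragraphs [0051] and [0052] can be sketched roughly as follows, reusing the ring_buffer sketch above. The structure is hypothetical and simplified to two time-sliced streams; a "load register" interruption corresponds to simply clearing the countdown so the check below fires at the next arbitration point.

/* One time-sliced stream competing for the rendering engine. */
struct stream {
    struct ring_buffer *rb;   /* the stream's ring buffer memory area */
    uint32_t time_slice;      /* countdown initialization value, in time units */
};

/* Called at each instruction arbitration point.  Returns the stream the
   engine should execute next; switches only when the current slice has
   expired and the other stream has pending instructions. */
static struct stream *arbitrate(struct stream *current, struct stream *other,
                                uint32_t *timer_countdown)
{
    if (*timer_countdown != 0)
        return current;                    /* slice not yet expired */
    if (rb_empty(other->rb))
        return current;                    /* no pending work elsewhere */
    *timer_countdown = other->time_slice;  /* reinitialize the countdown */
    return other;                          /* switch to the other stream */
}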
[0056] In an embodiment, if the instructions associated with a first stream do not take up the entire time slice period, then the arbitration and switching module 410 automatically switches to another ring buffer memory area containing pending instructions. Also, for example, the "load register" instruction 423 may be used prior to an extremely time-consuming instruction or a non-interruptible sequence of instructions, to allow the pending instructions for a second application to be processed before the graphics-rendering engine 411 operates on this particular sequence of instructions. As noted, if there are no other ring buffer memory areas 430,432 with instructions ready to execute, the execution of instructions continues past the "load register" instruction 423. If another ring buffer memory area 430,432 does have instructions to execute, then after the execution of the other ring buffer's instructions, the graphics-rendering engine 411 immediately switches back to the original ring buffer's instructions without waiting through a timer countdown 434.

[0057] The instruction stream 413 may also contain a "wait for event" instruction 425. The "wait for event" instruction 425 may be used to pause execution of instructions from this particular instruction stream 413 until a certain condition exists or a certain event happens. If execution of a "wait for event" instruction 425 results in a pause, other time-sliced ring buffer registers 402,404 are allowed to have the graphics-rendering engine process their associated instructions, even before the remainder of the paused ring buffer's time slice period has expired. For example, a "wait for event" instruction 425 may be used to wait for a video capture event. The display device must use those instructions to display the image when going from the top vertical position on the display screen to the lower vertical position on the display screen. Thus, the graphics-rendering engine 411 has rendered all of the instructions for the complete image on the display screen and cannot render any more instructions for that display device until the transition period from the top vertical position to the lower vertical position expires. During the time the graphics device 400 is waiting for such an event to occur, a "wait for event" instruction 425 permits the graphics-rendering engine 411 to re-enable the processing of another time-sliced ring buffer memory area associated with a different display device while waiting for that asynchronous event to occur for the current display device. An asynchronous event is an event that does not occur at a regular interval, or is not coordinated in time, such as a video capture event. In an embodiment, the asynchronous event occurs either randomly or at an interval unrelated to the instruction stream execution. For example, a display device's vertical blank event, an asynchronous event, actually occurs at a regular interval in real world time (i.e., 60 Hz), but is asynchronous to the irregular service time associated with the instruction stream 413 execution.

[0058] Figure 5 and figure 6 illustrate a flow diagram of an embodiment of a process for rendering multiple images on multiple display devices using a single graphics-rendering engine. An instruction stream originates the process when the instruction stream carries instructions from one or more graphics applications to an instruction transport.
[0059] In block 505, a first ring buffer memory area defined by a ring buffer register receives instructions from multiple graphics application programs or via a single graphics application program. The location and size of the first ring buffer memory area may be defined by programmable content contained in a first ring buffer register. The instruction transport may contain one or more ring buffer memory areas or similar memory areas. The instruction transport may contain one or more ring buffer registers or similar devices.

[0060] In block 510, the driver stores the instructions representing the image in the first ring buffer memory area. In an embodiment, the tail offset field in the corresponding ring buffer register is changed by the driver to indicate the presence of these pending instructions contained in the first ring buffer memory area. The first ring buffer register communicates the presence of instructions to be executed to the graphics-rendering engine and the arbitration and switching module.

[0061] In block 515, the instruction transport uses a DMA engine to fetch the instructions from the first ring buffer memory area for the graphics-rendering engine. The arbitration and switching module sets the first ring buffer memory area as the memory area from which the graphics-rendering engine is processing instructions.

[0062] In block 520, the graphics context manager sets the current rendering context associated with the first ring buffer register.

[0063] In block 525, in an embodiment, if the first (current) image being processed by the graphics-rendering engine has a rendering context different from that of the second (next) image to be processed, then the following happens. The graphics context manager stores the rendering context associated with the first image and restores the context associated with the second image to the graphics-rendering engine. The graphics context manager stores and restores state variable values representing a rendering context associated with an image from a particular graphics application in a second memory area, such as an established memory location. The second memory area may be defined by programmable content contained in a second register, such as a context identification register.

[0064] In block 530, the graphics-rendering engine executes the instructions from the ring buffer memory area associated with a first display device, such as the first ring buffer memory area, and makes the appropriate modifications to the first image display memory area. Based upon the time allocator, the graphics-rendering engine may then start executing instructions from a second ring buffer memory area associated with a second display device. In an embodiment, the graphics-rendering engine may start executing instructions from a second graphics application contained within the same instruction stream supplying the first ring buffer memory area. Thus, the graphics-rendering engine may alternate between the processing of instructions associated with a first independent image and instructions associated with a second independent image. The graphics-rendering engine may switch between processing instructions from different ring buffer memory areas or between processing instructions from two different graphics applications within the same instruction stream.
Note, the graphics-rendering engine need not wait to completely process all of the instructions associated with the first independent image before starting to process instructions associated with the second independent image.

[0065] In block 535, the time allocator may load balance the use of the graphics-rendering engine between the instructions associated with the first independent image and the second independent image. In an embodiment, the time allocator may load balance the use of the graphics-rendering engine between the instructions associated with two or more independent images. In an embodiment, the time allocator balances the use of the graphics-rendering engine based upon a percentage of use determined for each image and an absolute minimum time of usage of the graphics-rendering engine determined for each image. The time allocator may also balance the use of the graphics-rendering engine between high priority images demanding immediate use of the graphics-rendering engine and images sharing the percentage of use and absolute minimum time use of the graphics-rendering engine.

[0066] In block 540, the time allocator may establish a time-unit quantum in the timing circuit compatible with devices operating at a different core frequency. Note, these blocks are not indicative of any set sequential order of performance. For example, block 540 may occur before block 505.

[0067] In block 545, the time allocator may yield the time designated for instructions associated with a first image to use the graphics-rendering engine over to instructions associated with a second image, via a software instruction from the graphics device driver.

[0068] In block 550, the time allocator may permit the graphics-rendering engine to process instructions associated with a second image while waiting for an image-rendering event to occur for a first image, via a software instruction from a graphics application.

[0069] In block 555, the graphics device concurrently displays images on one or more display devices.

[0070] In block 570, the graphics device continues this process started in block 505. |
Apparatuses, methods, and storage media for modifying augmented reality in response to user interaction are described. In one instance, the apparatus for modifying augmented reality may include a processor, a scene capture camera coupled with the processor to capture a physical scene, and an augmentation management module to be operated by the processor. The augmentation management module may obtain and analyze the physical scene, generate one or more virtual articles to augment a rendering of the physical scene based on a result of the analysis, track user interaction with the rendered augmented scene, and modify or complement the virtual articles in response to the tracked user interaction. Other embodiments may be described and claimed. |
1. An apparatus for providing augmented reality computations, comprising:
a processor;
a scene capture camera coupled with the processor to capture a physical scene; and
an augmentation management module, to be operated by the processor, to:
acquire and analyze the physical scene,
generate one or more virtual items based on a result of the analysis to augment a rendering of the physical scene,
track user interaction with the rendered augmented scene, and
modify or supplement the one or more virtual items in response to the tracked user interaction.

2. The apparatus of claim 1, further comprising: an interaction capture camera coupled with the processor to capture the user's interaction with the augmented scene and provide the augmentation management module with information about the captured interaction for tracking.

3. The apparatus of claim 2, wherein the apparatus is a selected one of a laptop computing device, a tablet computing device, a mobile computing device, or an all-in-one (AIO) computing device.

4. The apparatus of claim 1, wherein the augmentation management module to track user interaction with the rendered augmented scene comprises the augmentation management module to obtain an indication of user interaction with at least one of the one or more virtual items in the rendered augmented scene.

5. The apparatus of claim 4, wherein the augmentation management module to modify or supplement the rendered augmented scene comprises the augmentation management module to:
align the at least one of the one or more virtual items in the rendered augmented scene with the indicated user interaction with the at least one virtual item;
change, in response to the indication of the user interaction with the at least one of the one or more virtual items in the rendered augmented scene, a location of the at least one of the one or more virtual items in the rendered augmented scene; or
change, in response to the indication of the user interaction with the at least one of the one or more virtual items in the rendered augmented scene, the at least one of the one or more virtual items in the rendered augmented scene.

6. The apparatus of claim 4, wherein the indication of the user interaction includes at least a selected one of: a gesture, a change in a facial expression, a verbal command, a change in eye gaze, a change in stance, or a change in head posture.

7. The apparatus of claim 1, further comprising: a display device coupled with the processor to display the rendered augmented scene to the user.

8. The apparatus of claim 1, wherein the augmentation management module to augment the rendering of the physical scene with one or more virtual items comprises the augmentation management module to augment the rendering based at least in part on one or more markings associated with the physical scene.

9. The apparatus of any one of claims 1 to 8, wherein the augmentation management module is to modify or supplement the augmented scene substantially simultaneously with the tracking of the user interaction.

10. The apparatus of claim 2, wherein each of the cameras comprises a two-dimensional (2D) or three-dimensional (3D) camera to capture, in real time, depth data and color data respectively associated with the physical scene or the user interaction, wherein the color data includes red, green and blue (RGB) data.

11. The apparatus of claim 10,
wherein the apparatus is placed on a substantially horizontal surface to enable the capture of the physical scene and the user interaction.

12. A computer-implemented method for providing augmented reality computations, the method comprising:
augmenting, by a computing device, a rendering of a physical scene with one or more virtual items;
tracking, by the computing device, user interaction with the rendered augmented scene; and
modifying or supplementing, by the computing device, the one or more virtual items in the rendered augmented scene in response to the tracking of the user interaction.

13. The computer-implemented method of claim 12, further comprising:
analyzing, by the computing device, the physical scene; and
generating, by the computing device, the one or more virtual items based on the analyzed physical scene to augment the rendering of the physical scene.

14. The computer-implemented method of any one of claims 12 to 13, further comprising:
rendering, by the computing device, the augmented scene for display.

15. The computer-implemented method of claim 14, wherein tracking user interaction with the rendered augmented scene comprises:
obtaining, by the computing device, an indication of user interaction with at least one of the one or more virtual items in the rendered augmented scene, and
wherein modifying or supplementing the one or more virtual items further comprises:
aligning, by the computing device, the at least one of the one or more virtual items in the rendered augmented scene with the indicated user interaction, substantially concurrently with and in response to the obtaining of the indication of the user interaction with the at least one of the one or more virtual items.

16. The computer-implemented method of claim 15, wherein obtaining an indication of user interaction comprises detecting, by the computing device, at least a selected one of: a gesture, a change in a facial expression, a verbal command, a change in eye gaze, a change in stance, or a change in head posture.

17. One or more computer-readable media having instructions stored thereon for providing augmented reality computations, wherein the instructions, in response to execution by a computing device, provide the computing device with an augmentation management environment to:
augment a rendering of a physical scene with one or more virtual items;
track user interaction with the rendered augmented scene; and
modify or supplement the one or more virtual items in the rendered augmented scene in response to the tracking of the user interaction.

18. The computer-readable media of claim 17, wherein the computing device is further provided with the augmentation management environment to:
obtain information about the physical scene;
analyze the information about the physical scene; and
generate the one or more virtual items based on a result of the analysis of the physical scene to augment the rendering of the physical scene.

19. The computer-readable media of any one of claims 17 to 18, wherein the augmentation management environment to track user interaction with the rendered augmented scene comprises the augmentation management environment to:
obtain an indication of user interaction with at least one of the one or more virtual items in the rendered augmented scene.

20. The computer-readable media of claim 19, wherein the augmentation management environment to modify or supplement the one or more virtual items
20. The computer-readable media of claim 19, wherein, to modify or supplement the one or more items, the enhancement management environment is to align, substantially simultaneously with and in response to the obtaining of the indication of the interaction of the user with the at least one of the one or more virtual items, the at least one of the one or more virtual items with the indicated user interaction. |
AUGMENTATION MODIFICATION BASED ON USER INTERACTION WITH AUGMENTED REALITY SCENE

RELATED APPLICATIONS

This application claims the benefit of U.S. Application No. 14/667,302, entitled "AUGMENTATION MODIFICATION BASED ON USER INTERACTION WITH AUGMENTED REALITY SCENE," filed March 24, 2015.

TECHNICAL FIELD

The present disclosure relates to the field of augmented reality, and in particular to modifying augmented reality in response to user interaction.

BACKGROUND

The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.

Throughout their lives, people may engage both with the physical world they inhabit and with virtual worlds in which they may play, learn, work, and the like. Augmented reality and mixed reality applications create environments that blur the line between the virtual world and the real world. However, under the current state of the art, augmented reality may include augmenting a physical scene with virtual items that may not be responsive to a user's natural interaction modalities, such as facial expressions, gestures, voice commands, and the like.

BRIEF DESCRIPTION OF THE DRAWINGS

The embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. The embodiments are shown by way of example, and not by way of limitation, in the figures of the accompanying drawings.

FIG. 1 is a block diagram illustrating an example device for modifying augmented reality in response to user interaction, according to various embodiments of the present disclosure.

FIG. 2 illustrates an example computing environment according to various embodiments of the present disclosure.

FIG. 3 illustrates an example process for modifying augmented reality in response to user interaction, in accordance with some embodiments.

FIG. 4 illustrates an example computing device suitable for practicing various aspects of the present disclosure, in accordance with various embodiments.

FIG. 5 illustrates an example non-transitory computer-readable storage medium having instructions configured to practice all or selected ones of the operations associated with the processes described with reference to FIGS. 1-3.

DETAILED DESCRIPTION

In the following detailed description, reference is made to the accompanying drawings that form a part hereof, wherein like numerals designate like parts throughout, and in which are shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.

Described herein are computing devices, methods, and storage media for modifying an augmented reality based on user interaction with the rendered augmented reality. In one example, a device for modifying augmented reality may include a processor, a scene capture camera coupled with the processor to capture a physical scene, and an enhancement management module operated by the processor.
The enhancement management module may acquire and analyze the physical scene, generate one or more virtual items based on results of the analysis to enhance a rendering of the physical scene, track user interaction with the rendered enhanced scene, and modify or supplement the virtual items in response to the tracked user interaction.

Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiments. Various additional operations may be performed and/or described operations may be omitted in additional embodiments.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C).

The description may use the phrases "in an embodiment" or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.

As used herein, the terms "logic" and "module" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.

FIG. 1 is a block diagram illustrating an example device 100 for modifying augmented reality in response to user interaction, in accordance with various embodiments. As shown, the device 100 may include a processor 112, a memory 114, an enhancement management module 140, a display 134, a model library 138, a scene capture camera 102, and an interaction capture camera 104, communicatively coupled with each other. The device 100 may be, for example, a laptop computing device, a tablet computing device, a mobile computing device, or an all-in-one (AIO) computing device.

The scene capture camera 102 may be disposed in or with the device 100 to capture images/video of a physical scene. The physical scene may include one or more physical objects. In embodiments, the images captured by the camera 102 may include both color information and depth information. The interaction capture camera 104 may be disposed in or with the device 100 to capture images/video of the user's interaction with the enhanced physical scene rendered for the user, as will be described in more detail with reference to FIG. 2.

The cameras 102 and 104 may be peripherally attached to or integrated into the device 100. The cameras 102 and 104 may be communicatively coupled with the device 100 via a wired or wireless connection suitable for transmitting the data captured by the cameras 102 and 104. As indicated above, the cameras 102 and 104 may be configured to capture both depth information and color information.
For example, in some embodiments, the cameras 102 and 104 may incorporate depth sensors, such as an infrared emitter used in conjunction with an infrared camera, as well as two-dimensional (2D) image capture sensors, such as red, green, and blue (RGB) camera sensors. In general, the cameras 102 and 104 may have 2D or three-dimensional (3D) image capture capabilities and may be embodied as 3D cameras, depth cameras, or bifocal cameras, and/or be otherwise capable of generating a depth image, channel, or stream. The cameras 102 and 104 may include still cameras, video cameras, webcams, infrared (IR) cameras, or other devices capable of capturing video and/or images. At least the interaction capture camera 104 may include a user interface (e.g., a microphone) to capture voice commands directed at the augmented reality rendered for the user.

The device 100 may be configured to receive, from the cameras 102 and 104, captured images of the physical scene and of the user's interaction with the scene, and to provide the captured images to the enhancement management module 140.

The enhancement management module 140 may acquire and analyze the physical scene from the scene capture camera 102 and generate one or more virtual items based on results of the analysis to enhance the rendering of the physical scene. The enhancement management module 140 may also track user interaction with the rendered enhanced scene based on the images provided by the interaction capture camera 104, and modify or supplement the generated virtual items in response to the tracked user interaction. The operation of the enhancement management module 140 is described in more detail below.

The enhancement management module 140 may include one or more components (not shown) configured to combine the color information and depth information included in the images, to create 3D renderings of the images provided by the cameras 102 and 104. The enhancement management module 140 may further include a scene enhancement component 110 configured to analyze the physical scene images provided by the scene capture camera 102 and to generate one or more virtual items based on results of the analysis to enhance the rendering of the physical scene. The rendering of the physical scene on the display 134 may be provided, for example, by the enhancement management module 140 or by a corresponding management component of the operating system of the device 100.

The scene enhancement component 110 may include an object recognition and analysis module 122 to identify predefined physical objects contained in the images. This may be done, for example, by feature extraction and comparison of the extracted features with features of predefined physical objects. In embodiments, the predefined physical objects may be contained in a predefined physical object database.

In some embodiments, the scene enhancement component 110 may be configured to enable a user of the device 100 to define a new physical object. This may be done by capturing one or more images of the physical object via the camera 102. The object recognition and analysis module 122 may be configured to extract features associated with the physical object from the one or more images and to generate feature data sufficient to identify the physical object in an image of a physical scene.
These features may be stored in a physical object repository (e.g., in the memory 114) for identification of the physical object in future physical scenes.

Some of the identified objects may include tags, labels, or other indicia of the virtual items that may pertain to the enhancement of the identified objects in the physical scene. The object recognition and analysis module 122 may be configured to generate one or more virtual items and to augment the 3D rendering of the identified physical objects with the one or more virtual items for output on the display 134. For example, the object recognition and analysis module 122 may be configured to identify a key color or other identifier in the indicia associated with an object, to trigger, in response to the identification of the color or other identifier, the generation of one or more virtual items associated with the object. For example, if the physical object includes a tree, the tree may be enhanced, for example, by the addition of one or more virtual birds, squirrels, nuts, fruits, and the like, according to the identifier associated with the tree and identified by the object recognition and analysis module 122.

In some embodiments, the scene enhancement component 110 may be configured to dynamically track the location of a physical object in the rendered scene, to determine movement of the physical object, and to cause the generated virtual items to act according to the movement. This may be considered a context-based enhancement, where the context may be based on the movement of physical objects.

The described physical scene enhancement techniques are provided for illustration purposes and should not be read as limiting the present disclosure. It will be appreciated that different augmented reality approaches may be applied to the physical scene captured by the scene capture camera 102.

In embodiments, the enhancement management module 140 may be configured to track, in real time or near real time, user interactions with the enhanced physical scene rendered on the display 134, based on the images of the user interaction captured and provided by the interaction capture camera 104 (and, in some embodiments, based on user interaction information provided by sensors distributed in or around the device 100 for the purpose of tracking user interaction, as described below).

User interaction with the enhanced rendered scene may take different forms. For example, indications of user interaction may include, but are not limited to, gestures, changes in the user's facial expression, verbal commands issued by the user, changes in the user's eye gaze, changes in the user's stance, changes in the user's head pose, or combinations thereof. The user interaction may include various types of interaction with the one or more virtual items provided by the scene enhancement component 110 for the enhanced scene rendered on the display 134.

The interaction capture camera 104 may be configured to capture user interactions with the rendered enhanced scene and to provide the captured information to a user interaction tracking component 120. In embodiments, the interaction capture camera 104 may be placed in or around the device 100 so as to face the user of the device 100, in order to capture changes in the user's facial expressions, gestures and/or poses, eye gaze, and the like.

In some embodiments, in addition to or in the alternative to the interaction capture camera 104, the device 100 may include a plurality of sensors 136 to track indications of user interaction with the rendered enhanced scene.
The sensors 136 may include proximity sensors, inertial sensors, optical sensors, light sensors, audio sensors, temperature sensors, thermistors, motion sensors, vibration sensors, microphones, cameras, and/or other types of sensors.

The sensors 136 may be distributed across the device 100 in a number of different ways. For example, some sensors (e.g., a microphone to capture audio associated with the user's voice commands) may reside in the interaction capture camera 104, while other sensors may be embedded around the body of the device 100. Some sensors, such as motion sensors (e.g., accelerometers, gyroscopes, and the like), may be placed in or around objects in the scene captured by the scene capture camera 102, to detect changes in position and velocity associated with the objects, and the like.

The sensors 136 may include a recording device to record, for example, content associated with the user interaction, such as images of the user interaction or voice commands. The recording device may be embodied as any external peripheral or integrated device.

The enhancement management module 140 may further include a user interaction tracking component 120 configured to track, in real time or near real time, user interactions with the enhanced physical scene, based on the user interaction information provided by the interaction capture camera 104 and/or the sensors 136.

The user interaction tracking component 120 may include a processing component 150 configured to receive and pre-process (e.g., digitize and timestamp) the interaction capture data provided by the camera 104 and/or the sensors 136, and to provide the pre-processed data for further processing as described below.

The user interaction tracking component 120 may include a voice recognition component 124 configured to identify voice commands, provided by the user, that are associated with particular virtual items in the rendered enhanced scene. In embodiments, the voice recognition component 124 may include a voice-matching capability for multiple users of the device 100 who may be eligible to provide voice commands associated with the virtual items.

The user interaction tracking component 120 may include a facial expression tracking component 115 configured to track the user's facial expressions (e.g., mouth or eye movements), detect changes in facial expression, record those changes, and relate changes in the user's facial expression to particular virtual items in the rendered enhanced scene. For example, the facial expression tracking component 115 may analyze the user's facial expressions and enable manipulation of the virtual items in the enhanced scene in response to changes in facial expression and/or audio narration provided by the user via voice commands. The facial expression tracking component 115 may also be configured to track the user's gaze and provide eye-tracking information about the user's gaze on a virtual item.

The user interaction tracking component 120 may further include a gesture tracking component 116 to track gestures provided by the user with respect to particular virtual items in the rendered enhanced scene.
Gestures, either alone or in combination with other indications of user interaction, such as voice commands, may serve as indications in response to which the virtual items in the enhanced scene are manipulated.

The enhancement management module 140 may include an enhancement modification component 130 (shown as part of the scene enhancement component 110 for illustration purposes) configured to modify or supplement one or more virtual items provided by the scene enhancement component 110, in response to the tracked user interaction. For example, the enhancement modification component 130 may be configured to align a virtual item in the rendered enhanced scene with the indicated user interaction with that virtual item, as detected by the user interaction tracking component 120.

In another example, the enhancement modification component 130 may be configured to change the location of a virtual item in the rendered enhanced scene, in response to an indication, provided by the user interaction tracking component 120, of user interaction with the virtual item.

In another example, the enhancement modification component 130 may be configured to change (e.g., change the size, color, etc., of) a virtual item in the rendered enhanced scene, in response to an indication of user interaction with the virtual item in the rendered enhanced scene.

To facilitate enhancement modification, the device 100 may include a model library 138 configured as a repository of modifications to the enhancement of virtual items based on detected interaction indications. For example, the model library 138 may include rules configured to determine, for example heuristically, a modification or addition to a virtual item based on an associated indication of user interaction with the virtual item. For example, the model library 138 may store an index of gestures, voice commands, or facial expressions of a particular nature, together with the modifications or additions to the associated virtual items for the respective indication types.

It is to be understood that, in some embodiments, any or all of the components shown (such as the cameras 102 and 104 and/or the sensors 136) may be separate from, and remote to, the device 100, while being communicatively coupled with the device. In general, some or all of the functionality of the device 100, such as processing power and/or memory capacity, may be used by or shared with the enhancement management module 140. In addition, at least some components of the enhancement management module 140 (e.g., the model library 138) may be accessible by the device 100 (e.g., communicatively coupled with the device 100) without necessarily residing on the device 100. One or more of the described components may be distributed across the device 100 and/or reside on a cloud computing arrangement that hosts those components. Additionally, in some embodiments, one or more of the illustrated components may be incorporated in, or otherwise form a portion of, another component. For example, in some embodiments, the memory 114, or portions thereof, may be incorporated in the processor 112. It will be understood that the enhancement management module 140 may comprise hardware, software (e.g., stored in the memory 114), or a combination thereof.

FIG. 2 illustrates an example computing environment 200 according to various embodiments of the present disclosure. At least some components of the computing environment 200 may correspond to components of the device 100 of FIG. 1.
The computing environment 200 may include the device 100, such as a laptop computing device, which may include the display 134 and the enhancement management module 140. The computing environment 200 may further include the scene capture camera 102 and the interaction capture camera 104 coupled with the computing device 100. Although depicted here as integrated into the computing device 100, in some embodiments the cameras 102 and 104 may be peripherally attached to the computing device 100.

As shown, the scene capture camera 102 substantially faces the physical scene in front of the computing device 100, to enable capture 204 (indicated by dashed lines) of a physical scene 206. As shown, the physical scene includes an object (a cup) 208. The interaction capture camera 104 faces a user 210, to enable capture 212 (indicated by dotted lines) of the user's interaction with an enhanced scene 214 rendered on the display 134. The computing device 100 may be placed in a fixed position (e.g., on a surface 202) to enable the capture of the physical scene 206 with the object 208, the capture of user interaction with the enhanced scene 214, and the enhancement of the scene 214 described herein.

In operation, the scene capture camera 102 may capture 204 the physical scene 206 with the object 208 and provide the captured scene to the enhancement management module 140 for processing and enhancement as described with reference to FIG. 1. The enhancement management module 140 may analyze the captured scene 206, identify the object 208, and provide an enhancement of the captured scene 214. For example, the enhancement management module 140 may generate one or more related virtual items to enhance the identified object 208. In the example shown in FIG. 2, the object 208 may be augmented with virtual items, such as a flower 216 and a butterfly 218.

At substantially the same time as the operation of the scene capture camera 102, the interaction capture camera 104 may capture 212 the user's interaction with the enhanced scene 214 rendered on the display 134 and provide the captured images to the enhancement management module 140 for processing and analysis as described with reference to FIG. 1. As described with reference to FIG. 1, the user interaction may be captured by the plurality of sensors 136 (not shown in FIG. 2) in addition to the interaction capture camera 104. However, for simplicity of description, it will be assumed that the camera 104 operates in cooperation with the sensors to capture the user interaction, and the capture of the user interaction will be described with respect to the interaction capture camera 104.

For example, the interaction capture camera 104 may capture an image of the user 210 and provide the captured image to the enhancement management module 140 for analysis. The analysis may include identifying, and gathering information about, the personal modalities of the user 210 (e.g., gestures, position relative to the camera 104, head pose, facial expression, eye gaze, stance, etc.). The camera 104 may then capture the user's personal modalities, continuously or periodically, in real time or near real time, and provide the captured information to the enhancement management module 140 for tracking and detecting indications of the user's interaction with the enhanced rendered scene 214.

As mentioned above, the user 210 may interact with the enhanced rendered scene 214 in various ways.
For example, the user 210 may use gestures, voice commands, facial expressions, eye movements, or a combination of voice commands, gestures, and facial expressions.

For example, a user gesture may be associated with a voice command (e.g., captured via the recording device described with reference to FIG. 1). The user 210 may point at an object in the scene 214 and provide an audio command to take a particular type of action with respect to a particular virtual item.

In the example depicted in FIG. 2, the user 210 may manipulate the virtual item 218 (the butterfly), such as by trying to catch the butterfly with her hand 220 or trying to move the butterfly with the movement of her eyes 222.

The enhancement management module 140 may modify or supplement the virtual items 216, 218 in the rendered enhanced scene 214 in response to the detected user interaction (e.g., trying to catch the butterfly). For example, the enhancement management module 140 may align the virtual item (e.g., the butterfly 218) in the rendered enhanced scene 214 with the indicated user interaction (e.g., trying to catch the butterfly 218). In other words, in the rendered enhanced scene 214, the enhancement management module 140 may cause the butterfly 218 to virtually move into (e.g., "enter") the user's hand 220.

In another example, the enhancement management module 140 may change the position of the butterfly in the scene 214 (e.g., virtually move the butterfly 218 "away" from the user's hand 220) in response to the user's hand 220 trying to catch the butterfly 218.

In another example, the enhancement management module 140 may change a virtual item in response to a detected indication of user interaction with the virtual item. For example, the enhancement management module 140 may cause the flower 216 to virtually open in response to a swipe of the user's hand 220 or another indication of user interaction.

In another example, the enhancement management module 140 may align a virtual item in response to a change in the viewpoint of the user 210 (e.g., via a changed head pose).

The enhanced scene 214, with the described manipulations performed on the virtual items, may be re-rendered for the user 210 on the display 134.

FIG. 3 illustrates an example process for modifying augmented reality in response to user interaction, in accordance with some embodiments. The process 300 may be performed, for example, by the device 100 (e.g., a computing device) configured with the enhancement management module 140 described with reference to FIGS. 1 and 2. (A minimal code sketch of this process appears at the end of this description.)

The process 300 may begin at block 302 and include obtaining and analyzing information about the physical scene and about the characteristics of the user interacting with the augmented reality. As described with reference to FIG. 2, the scene capture camera may capture the physical scene and provide the captured scene to the enhancement management module for processing and enhancement. The enhancement management module may analyze the captured scene and identify the objects in the scene, to provide an enhancement of the captured scene.

Substantially simultaneously, the interaction capture camera may capture images of the user of the computing device and provide the captured images to the enhancement management module for analysis.
The analysis may include identifying, and gathering information about, the user's personal modalities (e.g., gestures, position relative to the camera 104, head pose, facial expression, eye gaze, stance, etc.).

At block 304, the process 300 may include generating one or more virtual items, based on the analysis of the physical scene, to enhance the rendering of the physical scene that may be displayed to the user. More specifically, one or more related virtual items may be generated to enhance the identified objects in the rendered enhanced scene.

At block 306, the process 300 may include enhancing the rendering of the physical scene with the virtual items generated at block 304.

At block 308, the process 300 may include tracking user interaction with the rendered enhanced scene. For example, the interaction capture camera may capture the user's personal modalities, continuously or periodically, in real time or near real time, and provide the captured information to the enhancement management module for tracking and detecting indications of user interaction with the enhanced rendered scene.

At block 310, the process 300 may include modifying or supplementing one or more items in the rendered enhanced scene in response to the tracking of the user interaction. For example, if an indication of user interaction with a virtual item in the rendered enhanced scene is detected, the enhanced scene may be modified by aligning the virtual item in the rendered enhanced scene with the indicated user interaction. In another example, the enhanced scene may be modified by changing the location of the virtual item in the rendered enhanced scene in response to the detected indication of user interaction. In another example, the enhanced scene may be modified by changing the virtual item in the rendered enhanced scene in response to the indication of user interaction.

At decision block 312, the process 300 may include determining whether the user's session with the computing device has ended. If the session has not ended, the process 300 may return to block 308. Otherwise, the process 300 may end.

It should be understood that the actions described with reference to FIG. 3 may not necessarily occur in the described order. For example, the actions corresponding to block 308 may occur substantially simultaneously with the actions corresponding to block 310.

FIG. 4 illustrates an example computing device 400 suitable for practicing aspects of the present disclosure, in accordance with various embodiments. As shown, the computing device 400 may include one or more processors or processor cores 402 and system memory 404. For the purposes of this application (including the claims), the terms "processor" and "processor core" may be considered synonymous, unless the context clearly requires otherwise. The processor 402 may include any type of processor, such as a central processing unit (CPU), a microprocessor, and the like. The processor 402 may be implemented as an integrated circuit having multiple cores, such as a multi-core microprocessor. The computing device 400 may include mass storage devices 406, such as a magnetic disk, a hard disk drive, volatile memory (e.g., DRAM), a compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), and so forth. In general, the system memory 404 and/or the mass storage devices 406 may be any type of temporary and/or permanent storage, including, but not limited to, volatile and non-volatile memory, optical storage, magnetic storage, and/or solid-state mass storage devices, and the like.
Volatile memory may include, but is not limited to, static and/or dynamic random access memory. Non-volatile memory may include, but is not limited to, electrically erasable programmable read-only memory, phase change memory, resistive memory, and the like.

The computing device 400 may further include input/output (I/O) devices 408, such as the display 134, a keyboard, cursor control, a remote control, a gaming controller, an image capture device, and so forth, and a communication interface 410, such as network interface cards, modems, infrared receivers, radio receivers (e.g., Bluetooth), and so forth. As shown, the I/O devices 408 may further include the cameras 102 and 104 and the sensors 136.

The communication interface 410 may include communication chips (not shown) that may be configured to operate the device 400 (or 100) in accordance with a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or Long-Term Evolution (LTE) network. The communication chips may also be configured to operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chips may be configured to operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), and any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. In other embodiments, the communication interface 410 may operate in accordance with other wireless protocols.

The above-described computing device 400 elements may be coupled to each other via a system bus 412, which may represent one or more buses. In the case of multiple buses, they may be bridged by one or more bus bridges (not shown). Each of these elements may perform its conventional functions known in the art. In particular, the system memory 404 and the mass storage devices 406 may be employed to store a working copy and a permanent copy of the programming instructions that enable the operations associated with the device 100 (e.g., the operations associated with the enhancement management module 140, as described with reference to FIGS. 1-3), generally shown as computational logic 422. The computational logic 422 may be implemented by assembler instructions supported by the processor(s) 402 or by high-level languages that may be compiled into such instructions.

A permanent copy of the programming instructions may be placed into the mass storage devices 406 in the factory or in the field through, for example, a distribution medium (not shown), such as a compact disc (CD), or through the communication interface 410 (e.g., from a distribution server (not shown)).

FIG. 5 illustrates an example non-transitory computer-readable storage medium 502 having instructions configured to practice all or selected ones of the operations associated with the processes described above. As illustrated, the non-transitory computer-readable storage medium 502 may include a number of programming instructions 504 (e.g., including the enhancement management module 140). The programming instructions 504 may be configured to enable a device (e.g., the computing device 400), in response to execution of the programming instructions, to perform one or more operations of the processes described with reference to FIGS. 1-3.
In alternate embodiments, the programming instructions 504 may instead be disposed on multiple non-transitory computer-readable storage media 502. In still other embodiments, the programming instructions 504 may be encoded in transitory computer-readable signals.

Referring back to FIG. 4, the number, capability, and/or capacity of the elements 408, 410, 412 may vary depending on whether the computing device 400 is used as a stationary computing device, such as a set-top box or desktop computer, or as a mobile computing device, such as a tablet computing device, laptop computer, game console, or smartphone. Their constitutions are otherwise known, and accordingly will not be further described.

At least one of the processors 402 may be packaged together with memory having the computational logic 422 configured to practice aspects of the embodiments described with reference to FIGS. 1-4. For example, the computational logic 422 may be configured to include or access the enhancement management module 140, such as the user interaction tracking component 120 described with reference to FIG. 1. For one embodiment, at least one of the processors 402 may be packaged together with memory having the computational logic 422 configured to practice aspects of the process 300 of FIG. 3, to form a System in Package (SiP) or a System on Chip (SoC).

In various implementations, the computing device 400 may comprise a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet computer, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 400 may be any other electronic device that processes data.

The following paragraphs describe examples of various embodiments.

Example 1 is an apparatus for providing augmented reality computations, comprising: a processor; a scene capture camera coupled with the processor to capture a physical scene; and an enhancement management module to be operated by the processor to: acquire and analyze the physical scene; generate one or more virtual items based on a result of the analysis to enhance a rendering of the physical scene; track user interaction with the rendered enhanced scene; and modify or supplement the one or more virtual items in response to the tracked user interaction.

Example 2 may include the subject matter of Example 1, further comprising an interaction capture camera coupled with the processor to capture the user interaction with the enhanced scene and to provide the captured interaction information to the enhancement management module for tracking.

Example 3 may include the subject matter of Example 2, wherein the apparatus is a selected one of a laptop computing device, a tablet computing device, a mobile computing device, or an all-in-one (AIO) computing device.

Example 4 may include the subject matter of Example 1, wherein, to track user interaction with the rendered enhanced scene, the enhancement management module is to obtain an indication of user interaction with at least one of the one or more virtual items in the rendered enhanced scene.

Example 5 may include the subject matter of Example 4, wherein, to modify or supplement the rendered enhanced scene, the enhancement management module is to:
align the at least one of the one or more virtual items in the rendered enhanced scene with the indicated user interaction with the at least one of the one or more virtual items in the rendered enhanced scene; change, in response to the indication of user interaction with the at least one of the one or more virtual items in the rendered enhanced scene, a location of the at least one of the one or more virtual items in the rendered enhanced scene; or change, in response to the indication of user interaction with the at least one of the one or more virtual items in the rendered enhanced scene, the at least one of the one or more virtual items in the rendered enhanced scene.

Example 6 may include the subject matter of Example 4, wherein the indication of user interaction includes at least a selected one of: a gesture, a change in facial expression, a verbal command, a change in eye gaze, a change in stance, or a change in head pose.

Example 7 may include the subject matter of Example 1, further comprising a display device coupled with the processor to display the rendered enhanced scene to the user.

Example 8 may include the subject matter of Example 1, wherein, to enhance the rendering of the physical scene with one or more virtual items, the enhancement management module is to determine the enhancement based at least in part on one or more markings associated with the physical scene.

Example 9 may include the subject matter of any of Examples 1 to 8, wherein the enhancement management module is to modify or supplement the enhanced scene substantially simultaneously with the tracking of the user interaction.

Example 10 may include the subject matter of Example 2, wherein each of the cameras includes a two-dimensional (2D) or three-dimensional (3D) camera to capture real-time depth data and color data associated with the physical scene or the user interaction, respectively, wherein the color data includes red, green, and blue (RGB) data.

Example 11 may include the subject matter of Example 10, wherein the apparatus is placed on a substantially horizontal surface to enable the capture of the physical scene and of the user interaction.

Example 12 is a computer-implemented method for providing augmented reality computations, comprising: enhancing, by a computing device, a rendering of a physical scene with one or more virtual items; tracking, by the computing device, user interaction with the rendered enhanced scene; and modifying or supplementing, by the computing device, the one or more virtual items in the rendered enhanced scene in response to the tracking of the user interaction.

Example 13 may include the subject matter of Example 12, further comprising: acquiring and analyzing, by the computing device, the physical scene; and generating, by the computing device, the one or more virtual items based on the analysis of the physical scene to enhance the rendering of the physical scene.

Example 14 may include the subject matter of any of Examples 12 to 13, further comprising rendering, by the computing device, the enhanced scene for display.

Example 15 may include the subject matter of Example 14, wherein tracking user interaction with the rendered enhanced scene includes obtaining, by the computing device, an indication of user interaction with at least one of the one or more virtual items in the rendered enhanced scene,
and wherein modifying or supplementing the one or more items further comprises: aligning, by the computing device, substantially simultaneously with and in response to the obtaining of the indication of user interaction with the at least one of the one or more virtual items in the rendered enhanced scene, the at least one of the one or more virtual items in the rendered enhanced scene with the indicated user interaction.

Example 16 may include the subject matter of Example 15, wherein obtaining an indication of user interaction includes detecting, by the computing device, at least a selected one of: a gesture, a change in facial expression, a verbal command, a change in eye gaze, a change in stance, or a change in head pose.

Example 17 is one or more computer-readable media having instructions stored thereon for providing augmented reality computations, wherein the instructions, in response to execution by a computing device, provide the computing device with an enhancement management environment to: enhance a rendering of a physical scene with one or more virtual items; track user interaction with the rendered enhanced scene; and modify or supplement the one or more items in the rendered enhanced scene in response to the tracking of the user interaction.

Example 18 may include the subject matter of Example 17, wherein the enhancement management environment is further to: obtain information about the physical scene; analyze the information about the physical scene; and generate the one or more virtual items based on a result of the analysis of the physical scene to enhance the rendering of the physical scene.

Example 19 may include the subject matter of any of Examples 17 to 18, wherein, to track user interaction with the rendered enhanced scene, the enhancement management environment is to obtain an indication of user interaction with at least one of the one or more virtual items in the rendered enhanced scene.

Example 20 may include the subject matter of Example 19, wherein, to modify or supplement the one or more items, the enhancement management environment is to align, substantially simultaneously with and in response to the obtaining of the indication of user interaction with the at least one of the one or more virtual items, the at least one of the one or more virtual items with the indicated user interaction.

Example 21 is an apparatus for providing augmented reality computations, comprising: means for enhancing a rendering of a physical scene with one or more virtual items; means for tracking user interaction with the rendered enhanced scene; and means for modifying or supplementing the one or more items in the rendered enhanced scene in response to the tracking of the user interaction.

Example 22 may include the subject matter of Example 21, further comprising: means for obtaining information about the physical scene; means for analyzing the information about the physical scene; and means for generating the one or more virtual items, based on the analysis of the physical scene, to enhance the rendering of the physical scene.

Example 23 may include the subject matter of any of Examples 21 to 22, wherein the means for
tracking user interaction with the rendered enhanced scene includes means for obtaining an indication of user interaction with at least one of the one or more virtual items in the rendered enhanced scene.

Example 24 may include the subject matter of Example 23, wherein the means for modifying or supplementing the one or more items in the rendered enhanced scene in response to the tracking of the user interaction includes: means for aligning, substantially simultaneously with and in response to the obtaining of the indication of user interaction with the at least one of the one or more virtual items, the at least one of the one or more virtual items with the indicated user interaction.

Computer-readable media (including non-transitory computer-readable media), methods, apparatuses, systems, and devices for performing the above-described techniques are illustrative examples of the embodiments disclosed herein. Additionally, other devices in the above-described interactions may be configured to perform various disclosed techniques.

Although certain embodiments have been illustrated and described herein for purposes of description, a wide variety of alternate and/or equivalent embodiments or implementations calculated to achieve the same purposes may be substituted for the embodiments shown and described without departing from the scope of the present disclosure. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that the embodiments described herein be limited only by the claims.
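For concreteness, the overall flow of the process 300 of FIG. 3 may be sketched in software roughly as follows. This is a minimal illustrative sketch only, not the claimed implementation: all names in it (VirtualItem, process_300, and so forth) are hypothetical and do not appear in the disclosure, the recognition and tracking steps are reduced to stubs, and the cameras, display, and session are modeled as plain callables.

    # Minimal, illustrative sketch of process 300 (FIG. 3). All names are
    # hypothetical; the actual enhancement management module is described above.
    from dataclasses import dataclass

    @dataclass
    class VirtualItem:
        name: str
        position: tuple            # (x, y) location in the rendered scene
        state: str = "idle"        # e.g., "idle", "aligned", "changed"

    @dataclass
    class Interaction:
        kind: str                  # e.g., "gesture", "voice", "gaze"
        target: str                # name of the targeted virtual item
        position: tuple            # where the interaction occurred

    def analyze_scene(scene_image):
        """Block 302: identify objects in the captured physical scene.
        A real implementation would run object recognition (module 122)."""
        return ["cup"]             # pretend the scene contains object 208

    def generate_virtual_items(objects):
        """Block 304: generate virtual items related to identified objects."""
        augmentations = {"cup": [VirtualItem("flower", (0, 0)),
                                 VirtualItem("butterfly", (1, 1))]}
        items = []
        for obj in objects:
            items.extend(augmentations.get(obj, []))
        return items

    def modify_items(items, interaction):
        """Block 310: align, move, or change the targeted virtual item."""
        for item in items:
            if item.name == interaction.target:
                if interaction.kind == "gesture":        # e.g., a reaching hand
                    item.position = interaction.position  # align with the hand
                    item.state = "aligned"
                elif interaction.kind == "voice":         # e.g., "open"
                    item.state = "changed"                # the flower opens
        return items

    def process_300(capture_scene, track_interaction, render, session_active):
        objects = analyze_scene(capture_scene())          # block 302
        items = generate_virtual_items(objects)           # block 304
        render(objects, items)                            # block 306
        while session_active():                           # decision block 312
            interaction = track_interaction()             # block 308
            if interaction is not None:
                items = modify_items(items, interaction)  # block 310
                render(objects, items)                    # re-render scene 214
        return items

    # Tiny demonstration with stubbed capture/track/render callables:
    if __name__ == "__main__":
        events = iter([Interaction("gesture", "butterfly", (2, 3)), None])
        remaining = [2]                                   # two loop iterations
        def session_active():
            remaining[0] -= 1
            return remaining[0] >= 0
        items = process_300(lambda: "scene-image",
                            lambda: next(events),
                            lambda objs, its: None,
                            session_active)
        assert items[1].state == "aligned" and items[1].position == (2, 3)

The loop structure mirrors blocks 302-312: scene analysis and enhancement happen once up front, while tracking (block 308) and modification (block 310) repeat until the session ends (block 312).
|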
A method for autocalibrating a plurality of phase-delayed clock signal edges within a reference clock period includes measuring delay spacing between the plurality of clock signal edges, calculating a programmed delay spacing, calculating ideal signal edges from the programmed delay spacing, and adjusting the clock signal edges to match the respective ideal signal edges. A plurality of calibrated clock signal edges is produced that is selectively available to a user. |
What is claimed is:
1. A method of autocalibrating a plurality of phase-delayed clock signal edges, comprising:measuring delay spacings between said plurality of clock signal edges within a reference clock period;calculating desired delay spacings from said delay spacings;calculating ideal signal edges from said desired delay spacings; andadjusting said clock signal edges to match said respective ideal signal edges;wherein said plurality of clock signal edges are selectively available.
2. The method of claim 1, further comprising:measuring a wrap-around delay spacing between the last and first signal edges of said plurality of clock signal edges to reduce error in said calculation of desired delay spacing.
3. The method of claim 1, wherein said desired delay spacings are calculated by:calculating an average delay spacing so that the calibrated clock signal edges form an approximately linear time reference.
4. The method of claim 1, wherein, for each successive pair of clock signal edges in said plurality of clock signal edges, said delay spacings are measured by:comparing the first and second clock signal edges to determine which arrives first.
5. The method of claim 1, wherein, for each successive pair of clock signal edges in said plurality of clock signal edges, said delay spacings are measured by:switching first and second clock signal edges of said plurality of clock signal edges to target and delay signal paths, respectively; andcomparing the phases of said first and second clock signal edges.
6. The method of claim 1, wherein, for each successive pair of clock signal edges in said plurality of clock signal edges, said delay spacings are measured by:delaying a first clock signal edge of said plurality of clock signal edges by one period with a one period delay circuit; andcomparing the phase of said first clock signal edge to the phase of a second clock signal edge of said plurality of clock signal edges.
7. The method of claim 1, wherein said delay spacings are measured by:delaying a first clock signal edge of said plurality of clock signal edges to determine said delay spacing.
8. The method of claim 1, wherein, for each successive pair of clock signal edges in said plurality of clock signal edges, said delay spacings are measured by:adjusting a first clock signal edge to match a second clock signal edge, each of said first and second clock signal edges of said plurality of clock signal edges; anddetermining said delay spacings from said adjustment.
9. The method of claim 1, wherein, for each successive pair of clock signal edges in said plurality of clock signal edges, said delay spacings are measured by:incrementing a calibration control register to induce a change in delay of a first clock edge to match a delay of a second clock edge, said first and second clock edges of said plurality of clock signal edges; andtaking the resulting value of the calibration control register as the delay spacing measurement.
10. The method of claim 1, wherein, for each successive pair of clock signal edges in said plurality of clock signal edges, said delay spacings are measured by:decrementing a calibration control register to induce a change in delay of a first clock edge to match a delay of a second clock edge, said first and second clock edges of said plurality of clock signal edges; andtaking the resulting value of the calibration control register as the delay spacing.
11. The method of claim 1, further comprising:calculating error delays between said clock signal edges and respective next ideal signal edges to enable said adjusting of said clock signal edges based on said calculated error delays.
12. The method of claim 11, further comprising:saving said error delays for subsequent retrieval. |
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates to reference timing circuits and, more particularly, to circuits for creating a linear time reference.

2. Description of the Related Art

Electrical circuits often require access to precise timing information for proper operation. In the automatic test equipment (ATE) industry, it is desirable to create a linear time reference that is capable of producing timing edges at predetermined intervals within one period of a reference clock. The timing edges are used by a pattern generator to create a sequence of data codes for drivers used to create a number of different edges (high, low, open) for a device under test (DUT).

One method to accomplish a linear programmed delay step over one full clock period P is to use an ideal voltage ramp to compare to a digital-to-analog converter (DAC) output. The comparison would switch from low to high or from high to low when the ramp voltage exceeds a programmed DAC output. A different delay may be chosen by programming the DAC to output a different voltage level for comparison with the ideal voltage ramp. One example implementation of this method is illustrated in U.S. Pat. No. 6,242,959. In this implementation, a ramp comparator circuit and a DAC having a programmable delay are used to drive a one-shot circuit in a programmable delay circuit (PDC). Unfortunately, creating the highly linear ramp is difficult. Also, implementations using an ideal voltage ramp may have refire limitations that require a settling period after reset.

A need continues to exist, therefore, for a linear time reference that has fast refire.

SUMMARY OF THE INVENTION

A method for autocalibrating a plurality of phase-delayed clock signal edges within a reference clock period includes, in one embodiment of the invention, measuring delay spacing between the plurality of clock signal edges, calculating a programmed delay spacing, calculating ideal signal edges from the programmed delay spacing, and adjusting the clock signal edges to match the respective ideal signal edges. This produces a plurality of calibrated clock signal edges that can be either highly linear or of a predetermined spacing, with fast refire and selective availability to a user.

An apparatus is described for measuring the time delay between adjacent clock edges that includes, in one embodiment of the invention, target and delay signal paths and a variable delay module in said delay signal path. The delay module has a delay bias input that is operable to delay a first clock signal through the delay module in response to receiving an input voltage so that, when first and second clock signals are introduced to the target and delay signal paths, respectively, the input voltage corresponds to the time delay between the first and second clock signals.

BRIEF DESCRIPTION OF THE DRAWINGS

The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.

FIG. 1 is a timing diagram that illustrates, in one embodiment of the invention, delay spacing between clock signal edges and ideal signal edges.

FIG. 2 is a flow diagram for adjusting a plurality of clock signal edges to match respective ideal clock signal edges.

FIG. 3 is a flow diagram of one embodiment of the invention for measuring delay spacing between clock signal edges for the method illustrated in FIG. 2.
FIG. 4 is a flow diagram of one embodiment of the invention for adjusting clock edges for the method illustrated in FIG. 2.

FIGS. 5a-5d are timing diagrams illustrating sequential adjustment of clock signal edges to match the ideal clock signal edges.

FIG. 6 is a block diagram of one embodiment of the invention that has a calibration edge circuit in an autocalibration circuit to compare clock signal edges.

FIG. 7 is a block diagram of one embodiment of the variable delay cell illustrated in FIG. 6.

FIG. 8 is a block diagram and schematic of one embodiment of an impedance string in the calibration edge circuit illustrated in FIG. 6.

FIG. 9 is a block diagram of a timing vernier circuit coupled to the autocalibration circuit illustrated in FIG. 6.

FIG. 10 is a block diagram of one embodiment of a timing vernier in the timing vernier module illustrated in FIG. 9.

DETAILED DESCRIPTION OF THE INVENTION

A system and method for autocalibrating a plurality of phase-delayed clock signal edges within a reference clock period into a plurality of either nominally equally spaced clock signal edges or clock signal edges that have a predetermined distribution includes measuring delay spacing between sequential clock signal edges, calculating a predetermined delay spacing from said delay spacing, calculating ideal signal edges from said predetermined delay spacing, and adjusting the clock signal edges to match the ideal signal edges so that the plurality of clock signal edges are calibrated and selectively available to a user.

An apparatus for dividing a reference clock period into a plurality of nominally equally spaced clock signal edges includes first and second signal paths with a variable delay cell in the second signal path that has a delay bias input. The delay bias input is operable to delay a first clock signal through the variable delay cell in response to an input voltage so that, when first and second clock signals are introduced to the first and second signal paths, respectively, the measured delay between the first and second clock signals is approximately zero.

In one embodiment, a multi-phase clock generator has a plurality of timing outputs to provide a respective plurality of delayed clock signal edges ("clock edges") within one period P of a reference clock. FIG. 1 illustrates the clock edges in relation to calculated ideal signal edges ("Ideal Edges") that have an ideal time delay between them ("AVE"), as calculated by the reference clock period P divided by the number of clocks N. A naming convention follows to facilitate description of the autocalibration of the clock edges. As illustrated in FIG. 1, the measured time delay between clock signal edges n-1 and n ("ClockEdge[n-1]" and "ClockEdge[n]", respectively) is Meas_Dly[n-1]. Similarly, the measured delay between ClockEdge[n] and the next adjacent clock signal edge, ClockEdge[n+1], is Meas_Dly[n]. Thus, Meas_Dly[n-1] and Meas_Dly[n] represent time delays between actual clock edges. The time delays between ClockEdge[n-1] and ClockEdge[n] and the ideal signal edges n and n+1 (IdealEdge[n] and IdealEdge[n+1], respectively) are Err_Dly[n-1] and Err_Dly[n], respectively. Clock signal edge 1 ("ClockEdge[1]") and clock signal edge 29 ("ClockEdge[29]") are also illustrated to facilitate description of the methods and systems that follow. Although thirty clock-signal edges are illustrated, the number of edges is arbitrary and at the convenience of the designer of the system.
To obtain similar time delays between clock edges but with fewer of them, a faster reference clock may be used. Also, fewer or more edges can be provided with proportionally fewer or more timing outputs provided by the multi-phase clock generator.

FIG. 2 is a flow diagram of a method to adjust the clock signal edges illustrated in FIG. 1. In a system designed for thirty (30) clocks ("Clk[0-29]") generating thirty (30) clock-signal edges within a single period P, a counter is initialized (block 200) and the delay spacing between each adjacent clock signal edge is measured (Meas_Dly<0:29>) (block 205) (see FIG. 3). The wraparound delay spacing between ClockEdge[29] and ClockEdge[0] is also measured (block 210) to complete the measurement of thirty intervals. A predetermined delay spacing is calculated from the measured delay spacing, preferably the average of all delay spacing measurements AVE (block 215), and the value is used to calculate delay locations for the ideal edges (block 220). Alternatively, a different delay spacing, such as a bell-shaped, sinusoidal or logarithmic delay spacing, can be used to calculate delay locations for the ideal edges. If the average of all delay spacing measurements AVE is used, the error delay Err_Dly[n] between each clock edge and its associated next ideal edge is calculated (block 225) according to equation 1:

AVE - Meas_Dly[n-1] + Err_Dly[n-1] = Err_Dly[n]   (1)

Each value of the error delay between respective clock and ideal edges is saved in either calibration edge registers or other memory locations (block 230) for later comparison to uncalibrated clock edges. Starting with ClockEdge[29] and continuing down to ClockEdge[0], each respective error delay value Err_Dly[29:0] is used to adjust the actual clock signal edges ClockEdge[29:0] to match the ideal signal edges IdealEdge[29:0] (block 235) (see FIG. 4). The calibration register values can be normalized to reduce non-linearities that may be induced by use of the outer ranges of calibration register values (block 240). The counter is incremented (block 245) and, if five iterations have not yet been completed, the process repeats (block 200) to reduce further non-idealities. Otherwise, the process is stopped (block 250). Although five iterations are illustrated, further iterations would produce a more linear division of the reference clock period and fewer iterations would result in less linearity.
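For illustration only, the arithmetic of blocks 215-225 can be sketched in a few lines of Python; the function name, the list-based storage and the choice of zero as the starting value Err_Dly[0] are assumptions of this sketch rather than details taken from the flow diagram:

```python
def ideal_edge_errors(meas_dly):
    """Sketch of FIG. 2, blocks 215-225: average the measured delay
    spacings and derive each edge's error delay via equation (1).

    meas_dly[n] holds Meas_Dly[n], the measured spacing between
    ClockEdge[n] and ClockEdge[n+1], including the wraparound
    interval between ClockEdge[29] and ClockEdge[0]."""
    ave = sum(meas_dly) / len(meas_dly)     # block 215
    err_dly = [0.0] * len(meas_dly)         # Err_Dly[0] assumed to start at zero
    for n in range(1, len(meas_dly)):
        # Equation (1): AVE - Meas_Dly[n-1] + Err_Dly[n-1] = Err_Dly[n]
        err_dly[n] = ave - meas_dly[n - 1] + err_dly[n - 1]
    return ave, err_dly
```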
FIG. 3 illustrates one embodiment of a method to measure delay spacing between clock signal edges as illustrated in FIG. 2. In a system designed to accept one clock edge at a time from a vernier timing generator, ClockEdge[n-1] is switched (block 305) to a one period delay circuit in a delay path at the beginning of the reference clock period (T=0) for a delay of one period (block 310). ClockEdge[n] is switched to a target path at T=1P and ClockEdge[n-1] is introduced to a calibration edge circuit (blocks 315, 320) to enable a further variable delay. The two clock signal edges, ClockEdge[n] and ClockEdge[n-1], are compared (block 325), preferably with a phase detector. The results of several comparisons are accumulated (block 330) to determine if one edge is in front of the other in time. If the result of the accumulation indicates that ClockEdge[n-1] is after ClockEdge[n] (block 335), the delay of ClockEdge[n-1] is decreased to move it closer to ClockEdge[n] (block 340) by increasing an input bias of the calibration edge circuit (the calibration edge circuit's delay is inversely related to its input bias). Preferably, its associated calibration edge register is incremented to enable switching of the input bias to a higher input voltage. If the accumulation indicates that ClockEdge[n-1] is not after ClockEdge[n] but that the edges do not approximately match (block 345), the delay of ClockEdge[n-1] is increased (block 350) by decreasing the input bias. Preferably, its associated calibration edge register is decremented to enable switching of the input bias to a lower voltage. The method is repeated with ClockEdge[n-1] and ClockEdge[n] switched to the delay and target paths, respectively, at times T=0 (block 305) and T=1P, respectively, to compare them using the phase detector (blocks 305-330). When the accumulated result (block 330) indicates that the edges arrive at the phase comparator at approximately the same time (block 345), the resulting value of the associated calibration register is used as a relative measurement of delay spacing Meas_Dly[n-1] between ClockEdge[n-1] and ClockEdge[n]. The method then continues with a comparison of ClockEdge[n] and ClockEdge[n+1] to find Meas_Dly[n] and with comparison of all other adjacent clock edges within the one period reference clock signal (block 355) so that all delay spacing measurements are stored in each associated calibration edge register. The delay spacing measurements are then returned to the method of FIG. 2 (block 360) for calculation of the average delay spacing (AVE) (see block 215).

FIG. 4 is a flow diagram that illustrates one embodiment for adjusting the clock signal edges to match the calculated ideal signal edges, as illustrated in FIG. 2. Although the flow chart illustrates the process for adjusting thirty clock edges, any number of clocks may be used depending on the number of clock edges desired by the designer of the system. With the calibration edge registers previously set for each value of Err_Dly[29:0] (see block 230 in FIG. 2), ClockEdge[28] is switched to the delay path at T=0 (block 405) to be delayed one period (block 410). ClockEdge[29] is switched to the target path and ClockEdge[28] is introduced to the calibration edge circuit, each at T=1P (blocks 415, 420). ClockEdge[28] and ClockEdge[29] are introduced to a phase detector (block 425) to determine which is first in time and the result is accumulated (block 430).

If the result of the accumulation indicates that ClockEdge[29] is before ClockEdge[28] in time (block 435), then ClockEdge[29] is delayed by decrementing its associated vernier calibration register to decrease its delay bias input (block 440). If the accumulated result does not indicate that ClockEdge[29] is before ClockEdge[28] (block 435) and the edges do not match (block 445), then the delay for ClockEdge[29] is reduced (block 450) by incrementing its associated vernier calibration register to increase its delay bias input, and the process is repeated to accumulate a new result (blocks 405-430). Otherwise, if the result of the accumulation of the phase detector indicates the edges approximately match (block 445), then the method continues with the next lower clock signal pair (block 455) so that ClockEdge[28] and ClockEdge[27] are compared (blocks 405-430) and, sequentially, each other sequential pair until ClockEdge[0] and ClockEdge[1] match (blocks 405-445) and operation returns (block 460) to the method illustrated in FIG. 2. The preceding description assumes that an increase or decrease in delay bias input results in decreased or increased delay, respectively. In another embodiment, an increase or decrease in delay bias input would result in an increased or decreased delay, respectively.
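For illustration, the compare-accumulate-adjust loop that the methods of FIGS. 3 and 4 share can be sketched in Python under the first convention. The accumulation count, the match threshold and the function names are assumptions of this sketch; only the inverse relation between register value (bias) and delay is taken from the description:

```python
def converge(register, phase_compare, samples=16, threshold=4):
    """Sketch of the loop of blocks 325-350 (FIG. 3) and 425-450 (FIG. 4).

    phase_compare(register) models the phase detector: it returns +1 when
    the edge in the delay path arrives after the edge in the target path,
    and -1 when it arrives before.  Incrementing the register selects a
    higher bias voltage and therefore a shorter delay; decrementing it
    selects a lower bias voltage and a longer delay."""
    while True:
        total = sum(phase_compare(register) for _ in range(samples))  # accumulate
        if abs(total) <= threshold:
            # Edges approximately match; the register value serves as the
            # relative delay measurement (or the completed adjustment).
            return register
        if total > 0:
            register += 1   # delayed edge arrives late: shorten its delay
        else:
            register -= 1   # delayed edge arrives early: lengthen its delay
```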
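The ordering of these adjustments can be expressed compactly; the helper adjust_pair below is a hypothetical stand-in for the hardware steps of FIG. 4, so this is a sketch of the sequencing only:

```python
def adjust_all_edges(err_dly, adjust_pair, n_edges=30):
    """Sketch of the FIGS. 5a-5d sequence: for each pair, delay the
    reference edge [n-1] by Err_Dly[n] so it lands on IdealEdge[n],
    then trim edge [n] onto the delayed reference."""
    for n in range(n_edges - 1, 0, -1):     # ClockEdge[29] down to ClockEdge[1]
        adjust_pair(reference=n - 1, target=n, error=err_dly[n])
```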
FIG. 6 is a block diagram of, in one embodiment, an autocalibration circuit that is operable to adjust clock edges from a multi-phase clock generator to match respective calculated ideal clock edges. The multi-phase clock generator 605 drives thirty (30) phase-shifted clock signals on respective signal lines Clk0-Clk29 to a 30:1 MUX ("M1"). Each of the signal lines Clk0-Clk29 can be provided with a variable signal delay using respective variable bias cells FD0-FD29, with the delay bias of each cell controlled by a respective register in vernier calibration registers 610. A calibration sequencer 615 enables M1 to introduce sequential clock edges to switch SW1. As illustrated, SW1 is operable to switch between target and delay signal paths (620, 625) for comparison of clock edges on adjacent signal lines Clk0-Clk29. Preferably, one period delay and calibration edge circuits (630, 635) are provided in the delay signal path 625. The one period delay circuit 630 is operable to delay an introduced clock signal edge by one clock period. The calibration edge circuit 635 is operable to provide a variable delay, preferably up to a one period variable delay. Alternatively, the one period delay and calibration edge circuits (630, 635) can be combined into one variable delay module to delay a clock signal edge between one and two periods of the reference clock.

The calibration edge circuit 635 includes the variable delay cell 640, a second MUX ("M2"), a delay bias input 645, an impedance string 650 and a plurality of impedance lines 655. More particularly, the variable delay cell 640 accepts a bias voltage from M2 through the delay bias input 645. M2 is operable to select from a predetermined plurality of voltages for use by the variable delay cell 640. M2 is either coupled to the impedance string 650 through the plurality of impedance lines 655, as illustrated, or to another source of variable voltages. If a resistor string is used as the impedance string 650, it is coupled between high and low reference voltages Vref MAX and Vref MIN to provide linearly spaced source voltages to M2. Through appropriate choice of resistor string 650 taps, control of M2 allows predetermined delays of a clock signal edge introduced to the calibration edge circuit 635 from M1. A phase detector 660 is selectively coupled at its inverting input to the target signal path 620 and to a vernier edge input terminal Vin through switch SW2. The phase detector's 660 non-inverting input is coupled to the output of the variable delay cell 640 to compare delay timing of clock signal edges between the target and delay signal paths (620, 625). The result of the comparison, in the form of a high ("HIGH") or low ("LOW") voltage on its output, is presented to a calibration control logic and increment/decrement circuit 665. During operation, the calibration sequencer 615 enables M1 to introduce ClockEdge[1] and ClockEdge[0] from Clk1 and Clk0, respectively, to the delay and target signal paths (625, 620), respectively, for eventual comparison at phase detector 660.

Calibration edge registers 670 are coupled to M2 for switching control of bias input voltages selectively provided to the variable delay cell 640. Terminals Vin and Cout are coupled to SW2 and the calibration control logic and increment/decrement circuit 665, respectively, to enable subsequent calibration of externally generated clock edges.

When the autocalibration circuit 600 is used to measure delay spacing between adjacent signal edges ClockEdge[n] and ClockEdge[n-1] (block 205), a voltage HIGH signal at the output of the phase detector 660 would indicate that ClockEdge[n-1] arrives after ClockEdge[n] through the delay and target signal paths (625, 620), respectively. In this case, the calibration control logic and increment/decrement circuit 665 increments the associated calibration edge register 670, switching M2 to a higher bias voltage at the delay bias input 645 to decrease the delay of ClockEdge[n-1]. If, however, a LOW signal is indicated at the output of the phase detector 660, then the calibration edge register 670 would be decremented by the calibration control logic and increment/decrement circuit 665 to select a lower voltage at the delay bias input 645 and thereby increase the delay. The calibration control logic and increment/decrement circuit 665 accumulates a plurality of results from the phase detector 660 to determine when clock signal edges on the target and delay signal paths (620, 625) are approximately equal. When they are approximately equal, the resulting numerical value of the associated calibration edge register 670 is the measurement of delay spacing between the examined clock signal edges (Meas_Dly<n:n-1>). An averaging circuit 675 can be used by the calibration control logic and increment/decrement circuit 665 to reduce measurement errors when determining if the clock signal edges are approximately in phase.

When adjusting the clock edges to match the ideal delay spacing, the clock edge introduced to the delay path 625 is delayed by a predetermined amount by the calibration edge registers 670, and the vernier calibration registers 610 are incremented or decremented as required for the clock signal edges in the target and delay paths (620, 625) to approximately match.

A vernier-edge input terminal Vin and a calibration-output terminal Cout are also provided with the autocalibration circuit 600, with the terminal Cout coupled to an output of the calibration control logic and increment/decrement circuit 665 to allow calibration of externally provided timing verniers.

FIG. 7 is a schematic of, in one embodiment, the variable delay cell illustrated in FIG. 6.
Although FIG. 6 illustrates a single-ended circuit for simplicity, a differential solution can be readily implemented and is utilized in FIG. 7 to better describe one embodiment of the delay cell. For example, input terminals VIN and VIP are coupled to gates of transistors MN1 and MN2, respectively, which form a differential amplifier. Transistor pairs MP1/MP2 and MP3/MP4 form a load for transistors MN1 and MN2, respectively. More particularly, MP1 is connected as a current source to MN1 to pull the voltage at node V1 up to a voltage VDD for reduced values of current coming out of MN1. MP2 is coupled to transistor MN1 as a voltage limiter at node V1 so that, for large values of current coming out of transistor MN1, MP2 limits how low the voltage at node V1 can drop. Transistor MN6 is coupled to output terminal VON, with transistor MN4 providing its current to transistor MN6. Together, MN6 and MN4 form a source follower buffer. Similarly, transistor MN7 is coupled to output terminal VOP as a source follower buffer, with transistor MN5 providing its current to transistor MN7. Transistor MN3 is coupled to MN1 and MN2 as a current source for each. Transistors MPref and MNref are coupled to terminals VCP and VBN, respectively, to provide biasing for transistors MP1-MP4. By varying the voltage at terminal VBN, a varying signal delay is implemented between differential input terminals VIP/VIN and output terminals VOP/VON.

FIG. 8 is a combined graph and schematic diagram of a resistor string implementation of the impedance string 650 and impedance lines 655. The plurality of impedance lines 655 are coupled to a plurality of bias taps 800 distributed along the length of the impedance string 650. The bias taps 800 supply the bias voltage levels to M2 for supplying the delay bias input 645 with selectable voltage levels. They are either grouped into successive subsets of taps or individually connected to respective impedance lines. As indicated in FIG. 8, the relationship between bias voltage and the resulting delay of the variable delay cell 640 is nonlinear and inverse. In this illustration, the bias taps 800 are equally spaced along the impedance string 650, resulting in a nonlinear sequence of variable delay cell 640 delays. If linear delay increments are desired, the bias taps 800 could be spaced at unequal increments along the string to compensate for the nonlinearity of the voltage-delay curve. The bias taps 800 are distributed along a respective section of the resistor length, and include parallel switches or, alternatively, a switch tree which connects to the taps.
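As a numerical illustration of the equal-spacing case, the snippet below computes linearly spaced tap voltages and passes them through a stand-in voltage-to-delay model. The 1/V curve and the reference voltage values are invented for the example; FIG. 8 does not give the actual curve:

```python
def tap_voltages(vref_min, vref_max, n_taps):
    """Equally spaced taps along a resistor string between Vref MIN
    and Vref MAX, as in the FIG. 8 illustration."""
    step = (vref_max - vref_min) / (n_taps - 1)
    return [vref_min + k * step for k in range(n_taps)]

volts = tap_voltages(0.4, 1.2, 9)       # illustrative values only
delays = [1.0 / v for v in volts]       # stand-in inverse, nonlinear delay model
increments = [d0 - d1 for d0, d1 in zip(delays, delays[1:])]
# `increments` shrinks from tap to tap: equal voltage steps yield unequal
# delay steps, which is why unequal tap spacing would be needed to obtain
# linear delay increments.
```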
FIG. 9 illustrates a system for using the autocalibration circuit 600 to calibrate a plurality of timing verniers 0-7 in a timing vernier circuit 900. The timing vernier circuit 900 is coupled to the autocalibration circuit 600 at vernier edge input and calibration output terminals Vin and Cout. Each of the timing verniers 0-7 receives differently phase-delayed clock signals from the multiphase clock generator 605 through timing control lines 905. An output terminal Vnout on each timing vernier 0-7 is coupled to terminal Vin through a third MUX ("M3") to provide the autocalibration circuit 600 with its respective clock signal edge for calibration. Each timing vernier also has an input terminal CN coupled to Cout through a fourth MUX ("M4") to receive feedback from the calibration control logic and increment/decrement circuit 665 in the form of a timing register update. Each timing vernier N in the timing vernier module 900 is coupled to respective timing vernier module output terminals TV1-TV7. Subsequent to the autocalibration methods illustrated in FIGS. 1-4 and 5a-5d, the autocalibration circuit 600 is operable to compare the thirty uncalibrated clock signal edges from each of timing verniers 0-7 to the calibrated clock edges ClockEdge[0:29].

FIG. 10 is a block diagram of, in one embodiment, a timing vernier N for use as each of the timing verniers 0-7 illustrated in FIG. 9. User logic 1000 is coupled to a timing vernier 30:1 MUX ("MT1") to selectively switch one of a plurality of clock edges to output terminal TNout. MT1 is coupled to input terminals CG0-CG29 to receive uncalibrated clock edges from the multi-phase clock generator 605 illustrated in FIG. 9. Timing vernier registers 1005 are coupled to respective variable delay cells VD0-VD29 to enable a selective delay of each respective clock edge prior to switching to terminal TNout through MT1. Preferably, each of VD0-VD29 includes a MUX that is operable to select from a plurality of voltages based on input from the timing vernier registers 1005. Terminal CN is coupled to the timing vernier registers 1005 to receive the timing register update signal from the autocalibration circuit 600 (FIG. 6).

While several illustrative embodiments of the invention have been shown and described, numerous variations and alternate embodiments will occur to those skilled in the art. Such variations and alternate embodiments are contemplated, and can be made without departing from the spirit and scope of the invention as defined in the appended claims.
Apparatus, computer-readable storage medium, and method associated with orienting a display image are described. In embodiments, a computing device may include a display to render the display image and a display orientation module coupled with the display. In embodiments the display orientation module may receive audio input from a user of the computing device and determine a position of the user relative to the display, based on the audio input. In embodiments, the display orientation module may further either orient the display image in accordance with the position of the user or output a result of the determination for use to orient the display image in accordance with the position of the user. Other embodiments may be described and/or claimed. |
CLAIMS

What is claimed is:

1. A computing device for computing, including orienting a display image, comprising: a display to render the display image; and a display orientation module coupled with the display to: receive audio input from a user of the computing device; determine a position of the user relative to the display, based on the audio input; and either orient the display image in accordance with the position of the user or output a result of the determination for use to orient the display image in accordance with the position of the user.

2. The computing device of claim 1, further comprising a microphone array coupled with the display orientation module, the microphone array including a plurality of microphones to individually capture respective audio streams, wherein the audio input from the user includes individual audio streams captured by the plurality of microphones of the microphone array.

3. The computing device of claim 2, wherein the microphone array is disposed on the computing device in an L shaped configuration.

4. The computing device of claim 2, wherein the display orientation module is to further analyze the individual audio streams of the audio input to determine the position of the user relative to the display.

5. The computing device of claim 4, wherein to analyze the individual audio streams includes at least one of: a determination of a delay, relative to each other, of the individual audio streams of the audio input; or a determination of a difference in amplitude, relative to each other, of the individual audio streams of the audio input.

6. The computing device of any one of claims 1-5, wherein the audio input from the user includes a voice command given by the user and the display orientation module is to determine the position of the user relative to the display and either output the result of the determination to enable the display image to be oriented or orient the display image, in response to detection of the voice command.

7. The computing device of any one of claims 1-5, wherein the display orientation module is to: further determine when the position of the user is in an ambiguous zone with respect to the display; and on determination that the position of the user is in an ambiguous zone with respect to the display, initiate an ambiguous position timer, wherein the ambiguous position timer is to execute for a predetermined period of time.

8. The computing device of claim 7, wherein the display orientation module is to: receive additional audio input from the user; determine a new position of the user relative to the display, based on the additional audio input; and either orient the display image to a display image orientation adjacent to the previous orientation or output a result of the determination for use to orient the display image to a display image orientation adjacent to the previous orientation, when both the new position of the user is an ambiguous zone with respect to the display and an ambiguous position timer is executing.
9. One or more computer-readable media having instructions stored thereon which, when executed by a computing device, provide the computing device with a display orientation module to: receive audio input from a user, the audio input captured by a microphone array of the computing device; determine a position of the user, relative to a display of the computing device, based on the audio input; and either output a result of the determination for use to orient the display image in accordance with the position of the user or orient the display image in accordance with the position of the user.

10. The computer-readable media of claim 9, wherein the microphone array is comprised of a plurality of microphones to individually capture respective audio streams, wherein the audio input from the user includes individual audio streams captured by the plurality of microphones of the microphone array.

11. The computer-readable media of claim 10, wherein the plurality of microphones of the microphone array are disposed on the computing device in an L shaped configuration.

12. The computer-readable media of claim 10, wherein the display orientation module is to further analyze the individual audio streams of the audio input to determine the position of the user relative to the display.

13. The computer-readable media of claim 12, wherein to analyze the individual audio streams includes at least one of: a determination of a delay, relative to each other, of the individual audio streams of the audio input; or a determination of a difference in amplitude, relative to each other, of the individual audio streams of the audio input.

14. The computer-readable media of any one of claims 9-13, wherein the audio input from the user includes a voice command given by the user and the display orientation module is to determine the position of the user relative to the display and either output the result of the determination to enable the display image to be oriented or orient the display image, in response to detection of the voice command.

15. A computer-implemented method for computing, including orienting a display image, comprising: receiving, by a display orientation module of a computing device, audio input from a user, the audio input containing a voice command and the audio input captured by a microphone array; determining, by the display orientation module, in response to detection of the voice command in the audio input, a position of the user, relative to a display of the computing device, based on the audio input; and either orienting, by the display orientation module, the display image in accordance with the position of the user or outputting, by the display orientation module, a result of the determination to enable a display image to be rendered on the display in an orientation in accordance with the position of the user.

16. The computer-implemented method of claim 15, wherein the microphone array includes a plurality of microphones to individually capture respective audio streams and wherein the audio input from the user includes individual audio streams captured by the plurality of microphones of the microphone array.

17. The computer-implemented method of claim 16, wherein the microphone array is disposed on the computing device in an L shaped configuration.

18. The computer-implemented method of claim 16, further comprising analyzing, by the display orientation module, the individual audio streams of the audio input to determine the position of the user relative to the display.
19. The computer-implemented method of claim 18, wherein analyzing the individual audio streams includes at least one of: determining a delay, relative to each other, of the individual audio streams of the audio input; or determining a difference in amplitude, relative to each other, of the individual audio streams of the audio input.

20. The computer-implemented method of any one of claims 15-19, wherein the audio input from the user includes a voice command given by the user and further comprising, determining, by the display orientation module, in response to detecting the voice command, the position of the user relative to the display and either orienting, by the display orientation module, the display image or outputting, by the display orientation module, the result of the determination to enable the display image to be oriented.

21. The computer-implemented method of any one of claims 15-19, further comprising: determining, by the display orientation module, when the position of the user is in an ambiguous zone with respect to the display; and on determination that the position of the user is in an ambiguous zone with respect to the display, initiating, by the display orientation module, an ambiguous position timer, wherein the ambiguous position timer is to execute for a predetermined period of time.

22. The computer-implemented method of claim 21, further comprising: receiving, by the display orientation module, additional audio input from the user; determining, by the display orientation module, a new position of the user relative to the display, based on the additional audio input; and either orienting, by the display orientation module, the display image to a display image orientation adjacent to the previous orientation or outputting, by the display orientation module, a result of the determination for use to orient the display image to a display image orientation adjacent to the previous orientation, when both the new position of the user is an ambiguous zone with respect to the display and an ambiguous position timer is executing.

23. An apparatus for computing, including orienting a display image, comprising: displaying means for rendering the display image; and display orientation means for receiving audio input from a user of the apparatus; determining a position of the user relative to the display means, based on the audio input; and either orienting the display image in accordance with the position of the user or outputting a result of the determination for use to orient the display image in accordance with the position of the user.

24. The apparatus of claim 23, further comprising means for individually capturing a plurality of audio streams, wherein the audio input from the user comprises the audio streams individually captured.

25. The apparatus of claim 24, further comprising means for analyzing the individual audio streams of the audio input to determine the position of the user relative to the display.
Orientation of Display Rendering on a Display based on Position of User

CROSS REFERENCE TO RELATED APPLICATIONS

The present application claims priority to Indian Patent Application No. 2658/DEL/2013, filed September 9, 2013, entitled "ORIENTATION OF DISPLAY RENDERING ON A DISPLAY BASED ON POSITION OF USER."

TECHNICAL FIELD

Embodiments of the present disclosure are related to the field of data processing, and in particular, to display image orientation based on position of a user.

BACKGROUND

The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.

Computer display technology is continually advancing, making it possible to manufacture thinner and lighter displays, such as, for example, liquid crystal displays (LCDs) or organic light emitting diode (OLED) displays. Because of these advances, displays are becoming more prevalent in all manner of computing devices and are now able to be placed in locations and devices that would have been impractical with traditional cathode ray tube (CRT) displays. As a result, users are interacting with these displays in new settings and situations. To be able to interact with the displays, any image rendered on the display may need to be oriented with respect to the user. With current display technology, however, a user must physically interact with a display or must manually adjust software settings to adjust the display image orientation of the display.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 depicts an illustrative environment in which some embodiments of the present disclosure may be practiced.

FIG. 2 depicts an illustrative microphone array according to some embodiments of the present disclosure.

FIG. 3 depicts a representation of an illustrative placement of a microphone array disposed on a display with corresponding zones associated with display image orientations.

FIG. 4 depicts an illustrative graph representing the treatment of voice commands received in ambiguous and unambiguous zones.

FIG. 5 depicts an illustrative computing device according to some embodiments of the present disclosure.

FIG. 6 depicts an illustrative process flow according to some embodiments of the present disclosure.

FIG. 7 depicts an illustrative system according to some embodiments of the present disclosure.

DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS

A method, storage medium, and computing device for display image orientation are described. In embodiments, the computing device may include a display to render a display image and a display orientation module coupled with the display. In embodiments, the display orientation module may be configured to receive audio input from a user of the computing device. The display orientation module may then determine a position of the user relative to the display, based on the audio input. The display orientation module may then either orient the display image in accordance with the position of the user or output a result of the determination for use to orient the display image in accordance with the position of the user.
In the following detailed description, reference is made to the accompanying drawings which form a part hereof, wherein like numerals designate like parts throughout, and in which is shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.

Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The description may use the phrases "in an embodiment," or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.

FIG. 1 depicts an illustrative environment in which some embodiments of the present disclosure may be practiced. As depicted by 108a, user 106 may approach a computing device, such as kiosk 104. Kiosk 104 may be comprised of a processor (not shown) and one or more peripherals, including but not limited to, a display 100 and a microphone array 106. Display 100 may be configured to render a display image 102 in a number of different display image orientations. For example, as depicted in 108a, the display image has been rendered in a landscape orientation prior to the user approaching kiosk 104. Upon arriving at kiosk 104, or while approaching kiosk 104, user 106 may issue audio input directed towards kiosk 104, such as a voice command. In embodiments, the voice command may be a specific voice command associated with display image orientation, such as, for example, "rotate." In other embodiments, the voice command may be a generic command directed at functionality other than the display image orientation, such as, for example, a command to open an application or perform an internet search. Upon receiving the voice command, regardless of whether the voice command is specific or generic, kiosk 104 may be configured to utilize microphone array 106 to determine a location of the user with respect to display 100 based upon the direction from which the voice command was given. Once the location of the user has been determined by kiosk 104, the display image orientation may be automatically adjusted based upon the determined location of the user, as depicted by arrow 110.
As seen in 108b, the display image orientation has been adjusted so that display image 102 is now rendered in a portrait orientation, thus allowing the user to have the display oriented in the user's direction without any manual adjustment of the display.

FIG. 2 depicts an illustrative microphone array 106 according to some embodiments of the present disclosure. In embodiments, microphone array 106 may include a number of individual microphones each configured to capture individual audio streams for processing. As depicted here, microphone array 106 may be comprised of four individual microphones M1-M4. It will be appreciated, however, that the four microphones depicted here are merely meant to be illustrative and that microphone array 106 may include any number of microphones whose audio streams may be sufficient to determine the location of a user. For example, if kiosk 104 of FIG. 1 were placed in a corner of a room such that it could only be approached from two directions, two microphones may be sufficient to determine a location of the user with enough accuracy to orient the display. In some embodiments, a higher level of accuracy in determining the location of the user may be desired and a greater number of microphones may be utilized to achieve the higher level of accuracy. As depicted in FIG. 2, in some embodiments, microphone array 106 may be disposed in an L shaped orientation; however, this disclosure is not intended to be limited to such embodiments. Any disposition or orientation of microphones in a microphone array whose audio streams may be sufficient to determine the location of a user is contemplated by this disclosure. For example, if kiosk 104 of FIG. 1 were located in such a way that it could only be approached from two opposite sides, a linear two-microphone array may be sufficient to determine the user's location. In embodiments where a more precise location may be necessary, other orientations may be chosen for the microphone array.

The audio streams from microphones M1-M4 of microphone array 106 may be utilized to determine the location of the user by analyzing a time delay and/or amplitude difference with respect to one another. For example, consider audio wave-fronts 202-206. The position of the user may be determined by establishing, through an analysis of the audio streams captured by the individual microphones M1-M4, when each microphone captures audio wave-fronts 202-206. As depicted here, audio wave-fronts 202-206 arrive at microphone M1 first, followed by microphone M2, then microphone M4 and finally microphone M3. When analyzed by a computing device, such as kiosk 104 or computing device 500 of FIG. 5 below, the order and delay with which the audio wave-fronts reach the individual microphones may indicate the direction from which the sound originated, and thus may be used to determine the user's location with respect to the microphone array. In other embodiments, amplitude may be utilized in addition to, or in place of, a time delay. For instance, the microphone reached first by audio wave-fronts 202-206 may record the highest amplitude, while each microphone reached thereafter may record a lower amplitude. A measure of amplitude from each individual microphone may therefore be utilized in some embodiments to determine the sequence in which an audio wave-front reaches the microphone array, and thus may indicate the position of a user in a manner similar to that described above with respect to the time delay.
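By way of illustration only, the time-delay analysis described above can be sketched with a simple far-field model for an L shaped array: one microphone at the corner as a reference and one microphone a known distance away along each arm. The function name, the use of NumPy and the far-field assumption are assumptions of this sketch, not details taken from the disclosure:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second, approximate at room temperature

def estimate_bearing(tau_x, tau_y, spacing):
    """Estimate the azimuth (degrees) of a sound source in the plane of an
    L shaped array.  tau_x and tau_y are arrival-time differences
    t_corner - t_arm for the microphones on the x and y arms, in seconds
    (positive when the arm microphone hears the wave-front before the
    corner microphone); spacing is the arm length in meters.

    Far-field model: the difference along each arm equals
    (spacing / c) * cos(angle between the source direction and that arm)."""
    cos_x = np.clip(SPEED_OF_SOUND * tau_x / spacing, -1.0, 1.0)
    cos_y = np.clip(SPEED_OF_SOUND * tau_y / spacing, -1.0, 1.0)
    # The two direction cosines identify the source azimuth in the plane.
    return float(np.degrees(np.arctan2(cos_y, cos_x)))
```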
While depicted here as only being implemented in two dimensions, it will be appreciated that a three dimensional microphone array may be utilized in some embodiments. Such a microphone array could be utilized to determine a user's position in a three dimensional space. A user's position in a three dimensional space could be utilized in embodiments having a display capable of rendering an image in three dimensions. In embodiments where a three dimensional display may be combined with a three dimensional microphone array, it will be appreciated that the three dimensional display image may be oriented in all three dimensions based upon the user's position. In addition, in embodiments where a three dimensional microphone array may be utilized with a display capable of rendering in two dimensions, the display itself could be adjusted in the third dimension while the display image is adjusted in the other two dimensions. For example, the display itself may be raised, lowered, turned, and/or tilted based upon the user's position as determined when utilizing a three dimensional microphone array.

FIG. 3 depicts a representation of an illustrative placement of a microphone array 304 disposed on display 302 with corresponding zones associated with display image orientations. Each of the four display image orientations depicted may be comprised of one unambiguous zone and two ambiguous zones. For example, display image orientation 90 is comprised of zones 2, 3 and 4, with zone 3 being the unambiguous zone and zones 2 and 4 being ambiguous zones. As used herein, an ambiguous zone is a zone in which it may be difficult to determine the exact side of the display at which the user is located, while an unambiguous zone is a zone in which the exact side of the display at which the user is located may be more clearly determined. The handling of commands received from ambiguous zones and unambiguous zones is discussed in greater detail below in reference to FIG. 4. As depicted herein, the zones may have a nexus at microphone array 304. It will be appreciated that the placement of the microphone array may determine the location of the ambiguous zones and the unambiguous zones. Therefore, in some embodiments, the microphone array may be placed in other locations relative to the display to reduce the impact of ambiguous zones. It will further be appreciated that more than one microphone array may be utilized in an effort to reduce the impact of the ambiguous zones. Any such placement of microphone array 304 or integration of one or more additional microphone arrays is specifically contemplated by this disclosure.
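For illustration, such a zone-to-orientation assignment can be sketched as data. The twelve-zone numbering below is an assumption chosen to be consistent with the zones named in this description and in the FIG. 4 example that follows (zones 2, 3 and 4 for orientation 90; zone 7 defaulting to orientation 0; zone 12 unambiguous for orientation 180, flanked by ambiguous zones 11 and 1); the actual assignment depends on where the array sits on the display:

```python
def build_zone_table(order=(90, 0, 270, 180)):
    """Build {zone: (orientation, is_ambiguous)} for a hypothetical
    twelve-zone layout in which each orientation owns three zones:
    an unambiguous middle zone flanked by two ambiguous zones."""
    table = {}
    for group, orientation in enumerate(order):
        first = 3 * group + 2                     # orientation 90 owns zones 2, 3, 4
        for offset in range(3):
            zone = (first + offset - 1) % 12 + 1  # wrap zone 13 back to zone 1
            table[zone] = (orientation, offset != 1)
    return table

ZONES = build_zone_table()
# e.g. ZONES[3] == (90, False): unambiguous zone for orientation 90
#      ZONES[7] == (0, True):   ambiguous zone defaulting to orientation 0
```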
In some embodiments, a computing device may be configured to treat a voice command from an ambiguous zone differently than a voice command received from an unambiguous zone. In some embodiments, when a voice command is received by a computing device, such as, for example, kiosk 104 of FIG. 1, the computing device may be configured to determine if the voice command originated from an ambiguous zone. If the voice command originated from an ambiguous zone, the computing device may be configured to select a default display image orientation corresponding to that ambiguous zone. After the default display image orientation is selected, the computing device may be configured to adjust the display image orientation if another voice command is received from an ambiguous zone within a predetermined period of time. If another voice command is received from an ambiguous zone within the predetermined period of time, the computing device may be configured to adjust the display image orientation to a different display image orientation. This embodiment may be based on an assumption that if a first voice command is received from an ambiguous zone and a second voice command is received from an ambiguous zone in quick succession to the first voice command, the default selected display orientation is incorrect and an adjustment may be necessary. In some embodiments, the different display image orientation may be adjacent to the previously selected display image orientation.

FIG. 4 depicts an illustrative graph representing the treatment of voice commands received in ambiguous and unambiguous zones, such as the zones depicted in FIG. 3, above. The horizontal axis represents time; the upper portion of the graph represents ambiguous position time windows, while the lower portion represents display image orientation. At time T1, voice command 1 may be received by a computing device. In this example, the computing device may determine that voice command 1 originated in ambiguous zone 7 of FIG. 3. As a result, the default display image orientation would be display image orientation 0, as represented by box 406. Because voice command 1 was determined to be from an ambiguous zone, the computing device may be configured to initiate ambiguous position time window 402. At time T2, voice command 2 may also be received from an ambiguous zone by the computing device. In addition, as depicted herein, time T2 is within ambiguous position time window 402. As a result, upon receiving voice command 2, the computing device may be configured to adjust the display image orientation to a display image orientation adjacent to the ambiguous zone, as indicated by the transition from box 406, with a display image orientation of 0, to box 408, with a display image orientation of 270. At time T3, voice command 3 may be received by the computing device. Voice command 3 is again received from an ambiguous zone, as indicated by the initiation of ambiguous position time window 404 by the computing device. Because voice command 3 was received from an ambiguous zone and the default display image orientation is 180, as indicated by box 410, it may have been received from either of zones 1 or 11 as depicted in FIG. 3. As depicted here, no other voice command is received from an ambiguous zone within ambiguous position time window 404 and therefore no further change may be necessary to the display image orientation. At time T4, voice command 4 may be received by the computing device. As depicted here, voice command 4 may not be received from an ambiguous zone because no ambiguous position time window is initiated by the computing device. Furthermore, because the display image orientation remains at 180 in box 410, it may be determined from the graph that voice command 4 is received from zone 12 of FIG. 3.
FIG. 5 depicts an illustrative configuration of a computing device 500 according to some embodiments of the disclosure. Computing device 500 may be any type of computing device including a portable computing device, such as a smart phone, tablet, ultrabook, ebook, laptop computer, etc., or a stationary computing device, such as a desktop computer or kiosk computing device, such as kiosk 104 of FIG. 1. It will be appreciated that the computing devices mentioned above are merely examples that are meant to be illustrative. This disclosure is equally applicable regardless of the computing device's form.

Computing device 500 may comprise processor(s) 502, display 504, microphone array 506, storage 508 containing display orientation module 510, and other input/output (I/O) devices 512. Processor(s) 502, display 504, microphone array 506, storage 508 and other input/output (I/O) devices 512 may all be coupled together utilizing system bus 514. Processor(s) 502 may be comprised of a single processor or multiple processors. In multiple processor embodiments, the multiple processors may be of the same type, i.e., homogeneous, or may be of differing types, i.e., heterogeneous, and may include any type of single or multi-core processors. This disclosure is equally applicable regardless of type and/or number of processors.

Display 504 may be any type of display including, but not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), or an organic light emitting diode (OLED). Display 504 may be incorporated into computing device 500 or may be peripherally connected to computing device 500 through any type of wired and/or wireless connection. This disclosure is equally applicable regardless of the type of display.

In embodiments, storage 508 may be any type of computer-readable storage medium or any combination of differing types of computer-readable storage media. Storage 508 may include volatile and non-volatile/persistent storage. Volatile storage may include, e.g., dynamic random access memory (DRAM). Non-volatile/persistent storage may include, but is not limited to, a solid state drive (SSD), a magnetic or optical disk hard drive, flash memory, or any multiple or combination thereof.

In embodiments, display orientation module 510 may be implemented as software, firmware, or any combination thereof. In some embodiments, display orientation module 510 may comprise one or more instructions that, when executed by processor(s) 502, cause computing device 500 to perform one or more operations of the process described in reference to FIG. 6, below, or any other processes described herein.

FIG. 6 depicts an illustrative process flow 600 according to some embodiments of the present disclosure. The process may begin at block 602 where a voice command is received from a user. Upon receiving the voice command, the user's position may be determined at block 604. As discussed above, the user's position may be determined via an analysis of audio streams captured by a microphone array, such as that depicted in FIGS. 1-3, 5 and 7. At block 606, a determination may be made as to whether the user is in an ambiguous zone or not. If the user's position is determined to be in an ambiguous zone, the processing may go to block 608 where a determination is made as to whether an ambiguous position timer is running, such as that described in reference to FIG. 4 above. If an ambiguous position timer is running, then the process may proceed to block 614 where the display image orientation may be adjusted.
In some embodiments, the adjustment at block 614 may be to a display image orientation adjacent to the current display image orientation and corresponding to an ambiguous zone adjacent to the previously determined ambiguous zone. Once the display image orientation is adjusted, the process may end at block 616.

Returning to block 608, if an ambiguous position timer is not running, the process may continue to block 610 where such a timer may be initiated. Once the ambiguous position timer is initiated, the process may proceed to block 612 where a determination is made as to whether the display image is currently oriented in the user's direction. If the display image is currently oriented in the user's direction, the process may move on to block 616 where the process ends. If the display image is not currently oriented in the user's direction, the process may proceed to block 614 where the display image orientation may be adjusted in relation to the user's position. After the display image orientation is adjusted based upon the user's position, the process may proceed to block 616 where the process may end.

Going back to block 606, if the user's position is not determined to be in an ambiguous zone, then the process may proceed to block 612 where a determination is made as to whether the display image is currently oriented in the user's direction. If the display image is currently oriented in the user's direction, the process may move on to block 616 where the process ends. If the display image is not currently oriented in the user's direction, the process may proceed to block 614 where the display image orientation may be adjusted in relation to the user's position. After the display image orientation is adjusted based upon the user's position, the process may proceed to block 616 where the process may end.

In embodiments, process 600 may be implemented in hardware and/or software. In hardware embodiments, process 600 may be implemented in application specific integrated circuits (ASICs), or programmable circuits, such as Field Programmable Gate Arrays, programmed with logic to practice process 600. In a hardware/software implementation, process 600 may be implemented with software modules configured to be operated by the underlying processor. The software modules may be implemented in the native instructions of the underlying processor(s), or in higher level languages with compiler support to compile the high level instructions into the native instructions of the underlying processor(s).
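In such a software implementation, the decision logic of blocks 602-616 might look like the sketch below. The class name, the timer duration and the stepping direction for the "adjacent" orientation are assumptions (the -90 degree step merely reproduces the 0-to-270 transition of the FIG. 4 example); the zone table is the hypothetical one sketched earlier:

```python
import time

AMBIGUOUS_WINDOW_S = 2.0  # the "predetermined period of time"; value assumed

class DisplayOrientationModule:
    """Software sketch of process flow 600 (FIG. 6)."""

    def __init__(self, zone_table):
        self.zone_table = zone_table
        self.orientation = 0      # current display image orientation, degrees
        self.timer_start = None   # ambiguous position timer, if running

    def timer_running(self):      # block 608
        return (self.timer_start is not None and
                time.monotonic() - self.timer_start < AMBIGUOUS_WINDOW_S)

    def on_voice_command(self, zone):             # blocks 602-604: zone resolved
        orientation, ambiguous = self.zone_table[zone]
        if ambiguous:                             # block 606
            if self.timer_running():
                # Second ambiguous command inside the window: assume the
                # default pick was wrong and step to an adjacent orientation
                # (direction assumed; FIG. 4 shows 0 -> 270).
                self.orientation = (self.orientation - 90) % 360   # block 614
                return
            self.timer_start = time.monotonic()   # block 610
        if self.orientation != orientation:        # block 612
            self.orientation = orientation         # block 614
```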
In some embodiments, not pictured, a voice command may not be necessary to monitor the user's location. For instance, the user may be able to issue a command, either by voice or manually, or modify a hardware or software setting, such that the user's position is continuously calculated based upon audio input received from the user. In such embodiments, the user could walk around the display and have the display image continuously oriented based upon the user's position.

FIG. 7 depicts a system 702 according to some embodiments of the present disclosure. In embodiments, system 702 may be comprised of display orientation sensors 714, display orientation module 716, and Operating System (OS) 718, all coupled with one another. Display orientation sensors 714 may include a microphone array 704, such as, for example, the microphone arrays discussed in reference to FIGS. 1-3 and 5, above. Display orientation sensors 714 may also include optional sensors such as camera 706, display bezel sensor 708, passive infra-red sensor 710 and touch sensor 712. These optional sensors may be utilized, in some embodiments, to determine a display image orientation when no audio input is received, or to determine an orientation of the display with respect to the user to aid in determining a display image orientation with respect to the user's position relative to the display. Display orientation module 716 may be configured to determine an appropriate display image orientation based upon one or more of the display orientation sensors 714. As discussed above, display orientation module 716 may be configured to determine a position of a user by analyzing audio streams captured by microphone array 704 and may take the position of the user into account when determining an appropriate display image orientation. Once an appropriate display image orientation is determined by the display orientation module 716, the determination may be passed to the OS display API 720 to cause a display (not pictured) attached to system 702 to render an image in the determined appropriate display image orientation. In other embodiments, the display orientation module 716 may be configured to adjust the display image orientation directly (not depicted here).

For the purposes of this description, a computer-usable or computer-readable medium can be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable storage medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk - read only memory (CD-ROM), compact disk - read/write (CD-R/W) and DVD.

Embodiments of the disclosure can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. In various embodiments, software may include, but is not limited to, firmware, resident software, microcode, and the like. Furthermore, the disclosure can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.

Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a wide variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described, without departing from the scope of the embodiments of the disclosure. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that the embodiments of the disclosure be limited only by the claims and the equivalents thereof.
EXAMPLES

Some non-limiting examples are:

Example 1 is a computing device for computing, including orienting a display image, comprising: a display to render the display image; and a display orientation module coupled with the display to: receive audio input from a user of the computing device; determine a position of the user relative to the display, based on the audio input; and either orient the display image in accordance with the position of the user or output a result of the determination for use to orient the display image in accordance with the position of the user.

Example 2 may include the subject matter of Example 1, further comprising a microphone array coupled with the display orientation module, the microphone array including a plurality of microphones to individually capture respective audio streams, wherein the audio input from the user includes individual audio streams captured by the plurality of microphones of the microphone array.

Example 3 may include the subject matter of Example 2, wherein the microphone array is disposed on the computing device in an L shaped configuration.

Example 4 may include the subject matter of Example 2, wherein the display orientation module is to further analyze the individual audio streams of the audio input to determine the position of the user relative to the display.

Example 5 may include the subject matter of Example 4, wherein to analyze the individual audio streams includes at least one of: a determination of a delay, relative to each other, of the individual audio streams of the audio input; or a determination of a difference in amplitude, relative to each other, of the individual audio streams of the audio input.

Example 6 may include the subject matter of any one of Examples 1-5, wherein the audio input from the user includes a voice command given by the user and the display orientation module is to determine the position of the user relative to the display and either output the result of the determination to enable the display image to be oriented or orient the display image, in response to detection of the voice command.

Example 7 may include the subject matter of any one of Examples 1-5, wherein the display orientation module is to: further determine when the position of the user is in an ambiguous zone with respect to the display; and on determination that the position of the user is in an ambiguous zone with respect to the display, initiate an ambiguous position timer, wherein the ambiguous position timer is to execute for a predetermined period of time.

Example 8 may include the subject matter of Example 7, wherein the display orientation module is to: receive additional audio input from the user; determine a new position of the user relative to the display, based on the additional audio input; and either orient the display image to a display image orientation adjacent to the previous orientation or output a result of the determination for use to orient the display image to a display image orientation adjacent to the previous orientation, when both the new position of the user is an ambiguous zone with respect to the display and an ambiguous position timer is executing.
Example 9 is one or more computer-readable media having instructions stored thereon which, when executed by a computing device, provide the computing device with a display orientation module to: receive audio input from a user, the audio input captured by a microphone array of the computing device; determine a position of the user, relative to a display of the computing device, based on the audio input; and either output a result of the determination for use to orient the display image in accordance with the position of the user or orient the display image in accordance with the position of the user. Example 10 may include the subject matter of Example 9, wherein the microphone array is comprised of a plurality of microphones to individually capture respective audio streams, wherein the audio input from the user includes individual audio streams captured by the plurality of microphones of the microphone array. Example 11 may include the subject matter of Example 10, wherein the plurality of microphones of the microphone array are disposed on the computing device in an L shaped configuration. Example 12 may include the subject matter of Example 10, wherein the display orientation module is to further analyze the individual audio streams of the audio input to determine the position of the user relative to the display. Example 13 may include the subject matter of Example 12, wherein to analyze the individual audio streams includes at least one of: a determination of a delay, relative to each other, of the individual audio streams of the audio input; or a determination of a difference in amplitude, relative to each other, of the individual audio streams of the audio input. Example 14 may include the subject matter of any one of Examples 9-13, wherein the audio input from the user includes a voice command given by the user and the display orientation module is to determine the position of the user relative to the display and either output the result of the determination to enable the display image to be oriented or orient the display image, in response to detection of the voice command. Example 15 is a computer-implemented method for computing, including orienting a display image, comprising: receiving, by a display orientation module of a computing device, audio input from a user, the audio input containing a voice command and the audio input captured by a microphone array; determining, by the display orientation module, in response to detection of the voice command in the audio input, a position of the user, relative to a display of the computing device, based on the audio input; and either orienting, by the display orientation module, the display image in accordance with the position of the user or outputting, by the display orientation module, a result of the determination to enable a display image to be rendered on the display in an orientation in accordance with the position of the user. Example 16 may include the subject matter of Example 15, wherein the microphone array includes a plurality of microphones to individually capture respective audio streams and wherein the audio input from the user includes individual audio streams captured by the plurality of microphones of the microphone array. Example 17 may include the subject matter of Example 16, wherein the microphone array is disposed on the computing device in an L shaped configuration.
Example 18 may include the subject matter of Example 16, further comprising analyzing, by the display orientation module, the individual audio streams of the audio input to determine the position of the user relative to the display. Example 19 may include the subject matter of Example 18, wherein analyzing the individual audio streams includes at least one of: determining a delay, relative to each other, of the individual audio streams of the audio input; or determining a difference in amplitude, relative to each other, of the individual audio streams of the audio input. Example 20 may include the subject matter of any one of Examples 15-19, wherein the audio input from the user includes a voice command given by the user, and further comprising determining, by the display orientation module, in response to detecting the voice command, the position of the user relative to the display and either orienting, by the display orientation module, the display image or outputting, by the display orientation module, the result of the determination to enable the display image to be oriented. Example 21 may include the subject matter of any one of Examples 15-19, further comprising: determining, by the display orientation module, when the position of the user is in an ambiguous zone with respect to the display; and on determination that the position of the user is in an ambiguous zone with respect to the display, initiating, by the display orientation module, an ambiguous position timer, wherein the ambiguous position timer is to execute for a predetermined period of time. Example 22 may include the subject matter of Example 21, further comprising: receiving, by the display orientation module, additional audio input from the user; determining, by the display orientation module, a new position of the user relative to the display, based on the additional audio input; and either orienting, by the display orientation module, the display image to a display image orientation adjacent to the previous orientation or outputting, by the display orientation module, a result of the determination for use to orient the display image to a display image orientation adjacent to the previous orientation, when both the new position of the user is in an ambiguous zone with respect to the display and an ambiguous position timer is executing. Example 23 is an apparatus for computing, including orienting a display image, comprising: displaying means for rendering the display image; and display orientation means for receiving audio input from a user of the apparatus; determining a position of the user relative to the displaying means, based on the audio input; and either orienting the display image in accordance with the position of the user or outputting a result of the determination for use to orient the display image in accordance with the position of the user. Example 24 may include the subject matter of Example 23, further comprising means for individually capturing a plurality of audio streams, wherein the audio input from the user comprises the audio streams individually captured. Example 25 may include the subject matter of Example 24, further comprising means for analyzing the individual audio streams of the audio input to determine the position of the user relative to the display.
Example 26 may include the subject matter of Example 25, wherein the means for analyzing the individual audio streams further comprise means for: determining a delay, relative to each other, of the individual audio streams of the audio input; or determining a difference in amplitude, relative to each other, of the individual audio streams of the audio input. Example 27 may include the subject matter of any one of Examples 23-26, wherein the audio input from the user includes a voice command given by the user and the display orientation means further comprise means for: determining, in response to detecting the voice command, the position of the user relative to the display and either orienting the display image or outputting the result of the determination to enable the display image to be oriented. Example 28 may include the subject matter of any one of Examples 23-26, further comprising means for: determining when the position of the user is in an ambiguous zone with respect to the display; and initiating, on determining that the position of the user is in an ambiguous zone with respect to the display, an ambiguous position timer, wherein the ambiguous position timer is to execute for a predetermined period of time. Example 29 may include the subject matter of Example 28, further comprising means for: receiving additional audio input from the user; determining a new position of the user relative to the display, based on the additional audio input; and either orienting the display image to a display image orientation adjacent to the previous orientation or outputting a result of the determination for use to orient the display image to a display image orientation adjacent to the previous orientation, when both the new position of the user is in an ambiguous zone with respect to the display and an ambiguous position timer is executing. Example 30 is one or more computer-readable media having instructions stored thereon which, when executed by a computing device, cause the computing device to perform the method of any one of Examples 15-22. Example 31 is an apparatus comprising means for performing the method of any one of Examples 15-22.
In a semiconductor memory device, a die architecture is provided that arranges memory arrays into a long, narrow configuration. Bond pads may then be placed along a long side of a correspondingly shaped die. As a result, this architecture is compatible with short lead frame "fingers" for use with wide data buses as part of high-speed, multiple-bank memory integrated circuits.
What is claimed is: 1. A memory device, comprising: a plurality of memory banks, wherein each memory bank of said plurality of memory banks comprises a plurality of sub-arrays, and wherein said each memory bank defines a discontinuous area on said memory device; and a plurality of sub-array groups, wherein each group of said plurality of sub-array groups comprises a particular sub-array from said each memory bank, and wherein said each group defines a continuous area on said memory device, and wherein said each group of said plurality of sub-array groups extends along two dimensions. 2. The memory device of claim 1, wherein said each group of said plurality of sub-array groups is configured to communicate with a respective bond pad. 3. The memory device of claim 2, wherein said each memory bank of said plurality of memory banks further comprises sense amplifier circuitry and row decoder circuitry. 4. The memory device of claim 3, wherein said each group of said plurality of sub-array groups further comprises sense amplifier circuitry and row decoder circuitry. |
TECHNICAL FIELD This invention relates generally to semiconductor devices. In particular, this invention relates to die architecture for semiconductor memory devices configured to execute high speed applications, such as those performed in synchronous dynamic random access memory devices. BACKGROUND OF THE INVENTION Assembling an integrated circuit package often involves attaching a die to a lead frame. As an additional part of assembly, bond wires are used to electrically connect the conductive leads of the lead frame to the die's bond pads. The die/lead frame assembly is then encased in a housing with the outer ends of the conductive leads remaining exposed in order to allow electrical communication with external circuitry. The die's architecture may represent one of many circuitry configurations, such as a Dynamic Random Access Memory (DRAM) circuit or, more specifically, a synchronous DRAM (SDRAM) circuit. The high speed synchronous operations associated with SDRAM circuitry often involve communication with an external device such as a data bus. Occasionally, the data bus may be relatively wide in comparison to the standard width of prior art SDRAM dies. The width of the data bus, in turn, requires an appropriate number of conductive leads positioned to accommodate the bus. Further, the position of the conductive leads and their spacing limitations require a certain amount of die space for bond pad connection. However, the prior art does not provide a die having one particular region that can provide enough bond pads to accommodate all of the conductive leads. Rather, the architecture of the die as found in prior art allows for bond pads to be located in different areas of the die. Consequently, conductive leads of different lengths are needed to connect the bond pads to the relatively wide data bus. These differing lengths slow the operations of the SDRAM, or any semiconductor device for that matter, as it takes longer for signals to travel through the longer conductive leads. Thus, if synchronized signals are desired, the speed of the device is limited by the speed of signal propagation through the longest conductive lead. The longer leads also have a greater inductance associated with them, thereby further slowing signal propagation. Moreover, the inductance in the longer conductive leads is different from the inductance associated with the relatively short conductive leads. This imbalance in inductance makes synchronizing the signals even more difficult. Thus, it would benefit the art to have a die configuration that provides bond pads in a common location such that all of the conductive leads of the lead frame could be the same length. It would further benefit the art if the die configuration allowed uniformly short conductive leads. Indeed, this desire is mentioned in U.S. Pat. No. 5,408,129, by Farmwald, et al., which discloses a high-speed bus as well as memory devices that are adapted to use the bus. Specifically, Farmwald '129 discloses a narrow multiplexed bus, as demonstrated by Farmwald's preferred embodiment, wherein the bus comprises only nine bus lines. Accordingly, Farmwald's narrow bus allows for a relatively low number of bond pads on the die of a memory device. Farmwald '129 concludes that it would be preferable to place the small number of bond pads on one edge of each die, as that would allow for short conductive leads. Farmwald '129 at col. 18, ln. 37-43. However, it is possible to do so under Farmwald '129 only because the "pin count . . .
can be kept quite small" due to the narrow architecture of the bus. Id. at ln. 17-18. Contrary to the teachings in Farmwald '129, it would be advantageous at times to accommodate a relatively wide bus requiring a large number of pins. It would therefore be additionally advantageous to provide a die capable of providing the correspondingly large number of bond pads on one side of the die. SUMMARY OF THE INVENTION Accordingly, the present invention provides die architectures allowing for the relocation of the die's bond pads. One embodiment of this invention arranges for all of the die's bond pads to be located on one side of the die, with the corresponding memory banks arranged accordingly. In a preferred embodiment, the length of the die side having the bond pads is extended relative to prior architectures and the memory arrays are shaped to follow along the extended side. Consequently, the perpendicular sides contiguous to the extended side may be shortened. This architecture has the advantage of allowing the die to cooperate with a lead frame having conductive leads of the same length, thereby balancing inductance and aiding in the ability to synchronize signals. This architecture also has the advantage of allowing the conductive leads to be relatively short, which further increases the operational speed of the die's circuitry and decreases inductance. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 depicts the architecture of an SDRAM chip as found in the prior art. FIG. 2 illustrates an SDRAM chip within a lead frame as found in the prior art. FIGS. 3a and 3b portray a first exemplary embodiment of the present invention. FIG. 4 represents an embodiment of the present invention in cooperation with a lead frame. FIGS. 5a and 5b demonstrate a second exemplary embodiment of the present invention. FIGS. 5c and 5d illustrate a third exemplary embodiment of the present invention. FIGS. 6a and 6b depict a fourth exemplary embodiment of the present invention. FIGS. 6c and 6d depict a fifth exemplary embodiment of the present invention. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT FIG. 1 depicts the architecture of an SDRAM 20 as it exists in the prior art. The SDRAM 20 is fabricated on a die 22 and includes sixteen memory banks B0 through B15. The shape of each bank is determined by the number and arrangement of component sub-arrays. In this prior art example, each bank comprises a row of sixteen sub-arrays. Bank B0, for example, comprises sub-arrays 000 through 015. Similarly, bank B1 comprises sub-arrays 100 through 115. For purposes of explaining the current invention, it is understood that each bank is analogously numbered, ending with sub-arrays 1500 through 1515 comprising memory bank B15. Each sub-array contains a number of memory bit components and accompanying n/p channel sense amplifier circuitry 26 as well as row decoder circuitry 28. The banks B0-B15 are also serviced by a first 64× DC sense amp 30 and a second 64× DC sense amp 32. It should be noted that the size and number of DC sense amps can vary based on the compression rate desired. Column decoder circuitry 34 is located next to the DC sense amps 30 and 32; and a column select line 36 extends from the column decoder circuitry 34 through all of the memory banks B0-B15. Logic circuitry is located in a region 38 on the other side of the DC sense amps 30 and 32 relative to the memory banks B0-B15. Bond pads 40 are placed on the perimeter of the die 22 to allow easy access.
For purposes of this application, the term "bond pad" includes any conductive surface configured to permit temporary or permanent electrical communication with a circuit or node. Further, it should be noted that there exists a series of bond pads, defined here as access pads, wherein each access pad of the series is coupled to one sub-array of each bank, thereby allowing electrical signals to access those sub-arrays. For example, access pad 40A is defined to be coupled to sub-arrays 000, 100, 200, 300, 400, and so on through 1500. Access pad 40G is coupled to sub-arrays 006 through 1506. Access pad 40P, in turn, is defined to be coupled to sub-arrays 015 through 1515. Accordingly, there are thirteen other access pads, each associated with a corresponding column comprising one sub-array from every bank. In order to keep connective circuitry to a minimum, these sixteen access pads are located near their respective sub-arrays. It should be noted that, in FIG. 1, the group of sub-arrays 000 through 1500 is highlighted in bold for purposes of indicating the common association those sub-arrays have with a particular access pad (such as 40A, for these sub-arrays). Groups 006-1506 and 015-1515 are similarly highlighted. Other bond pads 40, representing additional input and output terminals for communicating with the die 22, are placed in the remaining available spaces on the die 22, which may include more than one side of the die 22. Packaging of the die 22 may be influenced by the fact that the internal circuitry of the die 22 will be interacting with a data bus. Specifically, as seen in FIG. 2, the die 22 can be placed within a lead frame wherein the conductive leads 48, 50 extend from the die 22 and eventually orient in one direction in anticipation of connecting to the data bus. In FIG. 2, bond pads 40 that are on the die's near side 42 (the side that will be closest to the external device) require only relatively short conductive leads 48. However, bond pads 40 along the sides 44, 46 contiguous to the near side 42 require longer conductive leads 50. Assuming that the signal propagation rate through the conductive leads 48, 50 is generally the same, the longer conductive leads 50 will take a longer time to transmit any signals. Moreover, inductance of the longer conductive leads 50 will be greater than inductance of the shorter conductive leads 48. FIGS. 3a and 3b illustrate one embodiment of the current invention that solves these problems. In this embodiment, the memory banks are separated into discontiguous portions. Despite placing portions of the banks in separate locations, the columnar arrangement of sub-arrays, one from each bank, is retained, and the columns are rotated ninety degrees relative to the configuration addressed above. Thus, rather than being parallel to the contiguous sides 44 and 46, the columns are now parallel to the near side 42. For example, the sixteen sub-arrays associated with access pad 40A (000 through 1500) extended along contiguous side 44 in the prior art die depicted in FIG. 1. Again, this group of sub-arrays commonly coupled to access pad 40A is highlighted to show the new orientation of the sub-arrays and of the group in general. In FIG. 3a, this group of sub-arrays now extends along the near side 42. While this group of sub-arrays 000 through 1500 is still relatively near contiguous side 44, this is not necessary for purposes of the current invention; this group could occupy any of the columnar positions depicted in FIG. 3a.
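As a purely illustrative rendering of the access-pad bookkeeping described above, the short sketch below enumerates the (bank, sub-array) pairs coupled to a given access pad under the convention that access pad k couples to sub-array k of each of the sixteen banks. The function and label names are assumptions made for the example, not structures from the embodiments.

NUM_BANKS = 16  # banks B0 through B15

def subarrays_for_access_pad(pad_index):
    """Return the (bank, sub-array) pairs coupled to one access pad.

    Access pad k is coupled to sub-array k of every bank; pad 0 thus
    couples to sub-arrays 000, 100, 200, ..., 1500 in the text's numbering.
    """
    return [(bank, pad_index) for bank in range(NUM_BANKS)]

def label(bank, subarray):
    """Render the text's sub-array numbering, e.g. (15, 0) -> '1500'."""
    return f"{bank}{subarray:02d}"

# Access pad 40A (index 0): ['000', '100', '200', ..., '1500']
print([label(b, s) for b, s in subarrays_for_access_pad(0)])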
Regardless of the particular position of the columns, it is preferred that their respective access pad remain relatively close by. Moreover, given this new configuration, each sub-array is now oriented perpendicular to the near side 42 of the die 22. Further, it should be noted that, while the arrangements of sub-arrays in FIG. 3a might be described as "rows" given the ninety degree rotation, the arrangements are referred to as "columns" or "columnar positions" for purposes of demonstrating the continuity with portions of the die architecture in FIG. 1. As an example of this continuity, the row decoder circuitry 28 and column decoder circuitry are also rotated ninety degrees and, therefore, retain their orientation relative to each sub-array. Column decoder devices in this embodiment include a first modified column decoder circuit 60 interposed between a 700 series of sub-arrays (700 to 703) and an 800 series of sub-arrays (800-803). In addition, a first modified column select line 62 extends from the first modified column decoder circuit 60 through sub-arrays 700 to 000. Similarly, a second modified column select line 64 extends from the first modified column decoder circuit 60 through sub-arrays 800 to 1500. This embodiment also includes three other similarly configured modified column decoder circuits 66, 61, and 67, each with their own modified column select lines 68 and 70, 63 and 65, and 69 and 71, respectively. Moreover, instead of two 64× DC sense amps 30 and 32, this embodiment of the present invention uses four 32× DC sense amps 52, 54, 56, and 58. However, as in the prior art, the size and number of DC sense amps merely affect data compression and no one DC sense amp configuration is required for any embodiment of the current invention. In this exemplary embodiment, the columns are further arranged in groups of four. In doing so, this embodiment partially retains some of the bank continuity found in the prior art. For example, the sub-array sequence 000, 001, 002, and 003 of Bank 0 remains contiguous. The Bank 0 sequence continues in the next four rotated columns with sub-arrays 004, 005, 006, and 007 remaining next to each other. These intervals of bank continuity apply to the other memory banks as well and aid in minimizing the complexity of row decoder and column decoder circuitry. Arranging the columns in groups of four also means that certain columns will be further away from the near side 42 than other columns. As a result, there may be unassociated sub-arrays between a column and its access pad. For example, connective circuitry (not shown) coupling column 003-1503 to access pad 40D will probably pass by sub-arrays within columns 002-1502, 001-1501, and 000-1500. Additionally, this arrangement of rotated columns allows for altering the dimensions of the die 22. Not only can the near side 42 be extended to a length commensurate with the data bus, but the contiguous sides 44 and 46 may also be shortened. Moreover, extending the near side 42 provides chip space for the bond pads 40 that had been along the contiguous sides 44, 46 in the prior architecture. FIG. 4 demonstrates the result of this architecture: when the die 22 is attached to a lead frame 76 having conductive leads on only one side, the die's formation accommodates short conductive leads 78 of uniform length.
Packaging the die 22 with this lead frame 76, in turn, allows for fast operation of the die 22 in conjunction with a device having a relatively large number of data terminals, such as a wide data bus. Other embodiments of the present invention can lead to the same packaging advantages. The exemplary embodiment in FIGS. 5a and 5b, for instance, demonstrates that, although the sub-arrays are rotated ninety degrees as in FIGS. 3a and 3b, it is not necessary to retain the columnar arrangement of the previous embodiment. Instead of the 16×1 columns, the sub-arrays in FIGS. 5a and 5b have been grouped into 4×4 associations. As demonstrated in the previous embodiment, there is a repetition of the sub-array pattern at regular intervals. In the embodiment shown in FIGS. 5a and 5b, sequential sub-arrays of a particular bank are separated by sub-arrays of other banks. Sub-arrays 000 and 001 of Bank 0, for example, are separated by sub-arrays 400, 800, and 1200. As further demonstrated in the previous embodiment, it is still preferred to configure the access pads near their respective grouping. Nevertheless, because the associated sub-arrays in FIGS. 3a and 3b extend along one dimension and include one sub-array from every bank, there is more sharing of row decoder circuitry 28 as well as column select circuitry 62, 64, 68, 70, 63, 65, 69, and 71 in that embodiment than in the more fragmented sub-array groupings depicted in FIGS. 5a and 5b. Accordingly, the embodiment in FIGS. 3a and 3b is the more preferred embodiment of the two. FIGS. 5c and 5d represent an alternate configuration of 4×4 associations. There are also alternative embodiments that do not involve rotating the orientation of the sub-arrays, as demonstrated in FIGS. 6a and 6b. Whereas there are sixteen rows of sub-arrays extending back from the near side 42 of the die 22 in FIG. 1, the die 22 in FIGS. 6a and 6b has a memory configuration only eight sub-arrays "deep." Further, the sub-arrays are gathered into 8×2 groupings, again with one sub-array from every bank in each group and with each group associated with a particular access pad. Moreover, each group is oriented perpendicular to the near side 42 of die 22. Group 90 has been defined to contain sub-arrays 000 through 1500, group 92 contains sub-arrays 001 through 1501, and group 94 contains sub-arrays 002 through 1502. While no particular order of groups is required, it is noteworthy in this embodiment that the sub-arrays 800 through 1500 in group 90 are next to sub-arrays 801 through 1501 in group 92. In effect, groups 90 and 92 could be considered "mirror images" of each other. This mirror image configuration is useful in compressing data for test modes and in maximizing the opportunity to share row decoder circuitry 28. It can further be seen in FIGS. 6a and 6b that group 94 is a mirror image of group 92, wherein sub-arrays 002 through 702 are respectively contiguous to sub-arrays 001 through 701. While these mirror image configurations are preferable in a die architecture having 8×2 sub-array groupings, they are not necessary to realize the current invention. As in other embodiments, this one has a die shape capable of including bond pads in a configuration that accommodates communication with an external device, with a memory arrangement generally conforming to the die shape. The embodiment in FIGS. 6a and 6b also benefits from four 32× DC sense amps 80, 81, 82, and 83.
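To make the mirror-image arrangement concrete, the index arithmetic below generates hypothetical 8×2 groupings in which banks 0-7 fill one column, banks 8-15 fill the other, and every other group is flipped so that like columns abut and row decoder circuitry can be shared, as in groups 90, 92, and 94 above. The function and its parameters are illustrative assumptions, not the layout tooling of the embodiments.

def group_layout(group_index, depth=8, num_banks=16):
    """Lay out one 8x2 sub-array group (one sub-array from each bank).

    Returns two columns of (bank, sub-array) pairs; odd-numbered groups are
    mirrored so that, e.g., sub-arrays 800-1500 of group 0 sit next to
    sub-arrays 801-1501 of group 1.
    """
    columns = [list(range(0, depth)), list(range(depth, num_banks))]
    if group_index % 2 == 1:
        columns.reverse()  # mirror image of the neighboring group
    return [[(bank, group_index) for bank in column] for column in columns]

# Group 0's right column holds banks 8-15; group 1's left column holds the
# same banks, so like columns abut, matching groups 90 and 92 above.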
Further, there are two column decoder circuits 84 and 85, each associated with respective column select lines 86 and 87. Unlike the previous embodiments, however, each sub-array is oriented parallel to the near side 42 of the die 22. FIGS. 6c and 6d represent an alternate configuration of 8×2 associations or groupings of sub-arrays. One of ordinary skill can appreciate that, although specific embodiments of this invention have been described for purposes of illustration, various modifications can be made without departing from the spirit and scope of the invention. For example, embodiments of die architecture covered by this invention need not be restricted to placing bond pads on only one side of a die. It may be desirable in certain applications to use a lead frame having conductive leads facing two or more sides of a die. Die architectures included within the scope of this invention could locate the die's bond pads to allow for conductive leads of a uniform length and, more specifically, a uniformly short length on all relevant sides. In addition, the dimensions of the memory banks could be adapted to conform to a particular die's requirements. If, for example, the number of bond pads and the conductive lead pitch limitations require a die side even longer than the near side 42 in FIGS. 5a and 5b, the 4×4 banks of rotated sub-arrays can be replaced with an embodiment having a series of rotated sub-arrays grouped into 2×8 banks. Accordingly, the invention is not limited except as stated in the claims.
The invention relates to managing encryption keys per logical block on a persistent memory device. A command to perform a data operation at a memory device is received. The command includes an encryption key tag. A first key table is accessed from a local memory. The first key table includes a first set of key entries corresponding to a first set of encryption keys. The first key table is searched to determine whether it contains an entry corresponding to the encryption key tag. A second key table is accessed from RAM based on a determination that the first key table does not include an entry corresponding to the tag. The second key table includes a second set of key entries corresponding to a second set of encryption keys. A key entry corresponding to the encryption key tag is identified from the second key table. The key entry includes an encryption key corresponding to the encryption key tag. The command is processed using the encryption key.
1. A system comprising: a memory device; and a processing device coupled to the memory device, the processing device configured to perform operations comprising: receiving a command to perform a data operation at the memory device, the command including an encryption key tag; accessing a first key table from local storage, the first key table including a first set of key entries corresponding to a first set of encryption keys; determining whether the first key table contains an entry corresponding to the encryption key tag; accessing, from random access memory (RAM), a second key table including a second set of key entries corresponding to a second set of encryption keys based on determining that the first key table does not contain an entry corresponding to the tag; identifying a key entry corresponding to the encryption key tag from the second set of key entries, the key entry including an encryption key corresponding to the encryption key tag; and processing the command using the encryption key. 2. The system of claim 1, wherein: the command includes a command to write data to the memory device; and the processing of the command includes encrypting the data using the encryption key. 3. The system of claim 1, wherein: the command includes a command to read data from the memory device; and the processing of the command includes decrypting encrypted data read from the memory device using the encryption key. 4. The system of claim 3, wherein the operations further comprise: reading the encrypted data and a key identifier from the memory device; and determining that the key identifier read from the memory device matches a key identifier contained in the key entry. 5. The system of claim 1, wherein: the command is a first command to perform a first data operation; the encryption key tag is a first encryption key tag; the encryption key is a first encryption key; and the operations further comprise: receiving a second command to perform a second data operation at the memory device, the second command including a second encryption key tag. 6. The system of claim 5, wherein the operations further comprise: determining that the first key table contains a key entry corresponding to the second encryption key tag, the key entry corresponding to the second encryption key tag including a second encryption key; and processing the second command using the second encryption key. 7. The system of claim 6, wherein: the second command includes a command to read data from the memory device; and the operations further comprise: reading encrypted data and a key identifier from the memory device; and determining that the key identifier read from the memory device matches a key identifier contained in the key entry. 8. The system of claim 5, wherein the operations further comprise: returning an error in response to the second command based on determining that the first and second key tables do not contain a key entry corresponding to the second encryption key tag. 9. The system of claim 5, wherein: the second command includes a command to read data from the memory device; and the operations further comprise: reading encrypted data and a key identifier from the memory device; determining that the first key table contains a key entry corresponding to the second encryption key tag, the key entry corresponding to the second encryption key tag including a second encryption key; and returning an error in response to the second command based on determining that the key identifier read from the memory device does not match a key identifier contained in the key entry corresponding to the second encryption key tag. 10. The system of claim 5, wherein the operations further comprise: determining that the first key table does not contain a key entry corresponding to the second encryption key tag; identifying a key entry corresponding to the second encryption key tag from the second key table; and replacing an existing key entry in the first key table with the key entry from the second key table corresponding to the second encryption key tag. 11. A method comprising: receiving, at a processing device, a command to perform a data operation at a memory device, the command including an encryption key tag; accessing a first key table from a local memory of the processing device, the first key table including a first set of key entries corresponding to a first set of encryption keys; searching, by the processing device, the first key table to determine whether the first key table contains an entry corresponding to the encryption key tag; accessing, from random access memory (RAM), a second key table including a second set of key entries in response to determining that the first key table does not contain an entry corresponding to the tag; identifying a key entry corresponding to the encryption key tag from the second set of key entries, the key entry including an encryption key corresponding to the encryption key tag; and processing, by the processing device, the command using the encryption key. 12. The method of claim 11, wherein: the command includes a command to write data to the memory device; and the processing of the command includes encrypting the data using the encryption key. 13. The method of claim 11, wherein: the command includes a command to read data from the memory device; and the processing of the command includes decrypting encrypted data read from the memory device using the encryption key. 14. The method of claim 13, further comprising: reading the encrypted data and a key identifier from the memory device; and determining that the key identifier read from the memory device matches a key identifier contained in the key entry. 15. The method of claim 14, wherein: the command is a first command to perform a first data operation; the encryption key tag is a first encryption key tag; the encryption key is a first encryption key; and the method further comprises: receiving a second command to perform a second data operation at the memory device, the second command including a second encryption key tag. 16. The method of claim 15, further comprising: determining that the first key table contains a key entry corresponding to the second encryption key tag, the key entry corresponding to the second encryption key tag including a second encryption key; and processing the second command using the second encryption key corresponding to the key entry in the first key table. 17. The method of claim 16, wherein: the second command includes a command to read data from the memory device; and the method further comprises: reading encrypted data and a key identifier from the memory device; and determining that the key identifier read from the memory device matches a key identifier contained in the key entry. 18. The method of claim 15, further comprising: returning an error in response to the second command based on determining that the first and second key tables do not contain a key entry corresponding to the second encryption key tag. 19. The method of claim 15, wherein: the second command includes a command to read data from the memory device; and the method further comprises: reading encrypted data and a key identifier from the memory device; determining that the first key table contains a key entry corresponding to the second encryption key tag, the key entry corresponding to the second encryption key tag including a second encryption key; and returning an error in response to the second command based on determining that the key identifier read from the memory device does not match a key identifier contained in the key entry corresponding to the second encryption key tag. 20. A computer-readable storage medium comprising instructions that, when executed by a processing device, configure the processing device to perform operations comprising: receiving a command to perform a data operation at a memory device, the command including an encryption key tag, the data operation including a read operation or a write operation; accessing a first key table from local storage, the first key table including a first set of key entries corresponding to a first set of encryption keys; determining that the first key table does not contain an entry corresponding to the encryption key tag; accessing, from random access memory (RAM), a second key table including a second set of key entries corresponding to a second set of encryption keys based on determining that the first key table does not contain an entry corresponding to the tag; identifying a key entry corresponding to the encryption key tag from the second set of key entries, the key entry including an encryption key corresponding to the encryption key tag; and processing the command using the encryption key, the processing of the command including using the encryption key to encrypt or decrypt data.
MANAGING ENCRYPTION KEYS PER LOGICAL BLOCK ON A PERSISTENT MEMORY DEVICE TECHNICAL FIELD Embodiments of the present disclosure relate generally to memory subsystems and, more particularly, to managing encryption keys per logical block on persistent memory devices in the memory subsystem. BACKGROUND The memory subsystem may include one or more memory devices that store data. The memory devices can be, for example, non-volatile memory devices and volatile memory devices. In general, a host system may utilize a memory subsystem to store data at and retrieve data from a memory device. SUMMARY OF THE INVENTION In one aspect, the present application provides a system comprising: a memory device; and a processing device coupled to the memory device, the processing device configured to perform operations comprising: receiving a command to perform a data operation at the memory device, the command including an encryption key tag; accessing a first key table from local storage, the first key table including a first set of key entries corresponding to a first set of encryption keys; determining whether the first key table contains an entry corresponding to the encryption key tag; accessing, from random access memory (RAM), based on determining that the first key table does not contain an entry corresponding to the tag, a second key table that includes a second set of key entries corresponding to a second set of encryption keys; identifying, from the second set of key entries, a key entry corresponding to the encryption key tag, the key entry including an encryption key corresponding to the encryption key tag; and processing the command using the encryption key. In another aspect, the present application provides a method comprising: receiving, at a processing device, a command to perform a data operation at a memory device, the command including an encryption key tag; accessing a first key table from a local memory of the processing device, the first key table including a first set of key entries corresponding to a first set of encryption keys; searching, by the processing device, the first key table to determine whether the first key table contains an entry corresponding to the encryption key tag; accessing, from random access memory (RAM), in response to determining that the first key table does not contain an entry corresponding to the tag, a second key table including a second set of key entries; identifying, from the second set of key entries, a key entry corresponding to the encryption key tag, the key entry including an encryption key corresponding to the encryption key tag; and processing, by the processing device, the command using the encryption key. In yet another aspect, the present application provides a computer-readable storage medium comprising instructions that, when executed by a processing device, configure the processing device to perform operations comprising: receiving a command to perform a data operation at a memory device, the command including an encryption key tag, the data operation including a read operation or a write operation; accessing a first key table from local storage, the first key table including a first set of key entries corresponding to a first set of encryption keys; determining that the first key table does not contain an entry corresponding to the encryption key tag; accessing, from random access memory (RAM), based on determining that the first key table does not contain an entry corresponding to the tag, a second key table comprising a second set of key entries corresponding to a second set of encryption keys; identifying, from the second set of key entries, a key entry corresponding to the encryption key tag, the key entry including an encryption key corresponding to the encryption key tag; and processing the command using the encryption key, the processing of the command including using the encryption key to encrypt or decrypt data. BRIEF DESCRIPTION OF THE DRAWINGS The present disclosure will be more fully understood from the detailed description given below and from the accompanying drawings of various embodiments of the present disclosure. FIG. 1 illustrates an example computing environment that includes a memory subsystem in accordance with some embodiments of the present disclosure. FIG. 2 is a block diagram illustrating the operation of a memory subsystem to perform key injection in accordance with some embodiments. FIG. 3 is a block diagram illustrating the operation of a memory subsystem to perform a write operation in accordance with some embodiments of the present disclosure. FIGS. 4A and 4B are block diagrams illustrating the operation of a memory subsystem to perform a read operation in accordance with some embodiments of the present disclosure. FIG. 5 is a block diagram illustrating an example key cache used by a memory subsystem to manage encryption keys, according to some embodiments. FIG. 6 is a flowchart illustrating an example method for key injection in a memory subsystem in accordance with some embodiments of the present disclosure. FIGS. 7, 8, 9A, and 9B are flowcharts illustrating example methods for managing encryption keys during data operations, in accordance with some embodiments of the present disclosure. FIG. 10 is a block diagram of an example computer system in which embodiments of the present disclosure may operate. DETAILED DESCRIPTION Aspects of the present disclosure relate to managing encryption keys per logical block on persistent memory devices in a memory subsystem. The memory subsystem may be a storage device, a memory module, or a mix of storage devices and memory modules. Examples of memory devices and memory modules are described below in conjunction with FIG. 1. In general, a host system may utilize a memory subsystem that includes one or more components, such as memory devices that store data. The host system can provide data for storage at the memory subsystem, and can request data to be retrieved from the memory subsystem. The memory device may be a non-volatile memory device. One example of a non-volatile memory device is a NAND memory device. Other examples of non-volatile memory devices are described below in conjunction with FIG. 1. Some memory devices, such as NAND memory devices, include arrays of memory cells (e.g., flash cells) used to store data. Each cell includes a transistor, and within each cell, data is stored as the threshold voltage of the transistor. Memory cells in these devices can be grouped into pages, which can refer to logical units of the memory device used to store data. For example, memory cells in a NAND memory device are connected horizontally to word lines at their control gates to form pages. In the case of some types of memory devices (e.g., NAND), pages are grouped to form blocks (also referred to herein as "memory blocks"). Data operations may be performed by the memory subsystem. Data operations may be host-initiated operations.
For example, the host system may initiate data operations (e.g., write, read, erase, etc.) on the memory subsystem. The host system may send access requests (e.g., write commands, read commands) to the memory subsystem to store data on and read data from memory devices on the memory subsystem. Current storage methods add additional information called metadata to user data. This metadata is stored in persistent storage of the memory device along with the user data. The metadata is retrieved when the host system requests the user data. Currently, metadata is often used to add protection information to user data that allows the memory subsystem to determine if the user data has been corrupted or if the correct data is being returned. Data encryption boundaries on memory devices are becoming increasingly granular. Initially, an entire memory device was encrypted using a single encryption key. This was followed by technologies such as the Trusted Computing Group (TCG) storage specifications, which allow a large number of encryption bands to be established on a device, each with an individual encryption key. Current technology and initiatives now allow each logical block on a memory device to have its own key. This smaller and smaller granularity of encryption is being driven by initiatives such as the EU General Data Protection Regulation (GDPR)'s "right to be forgotten," containerization of applications in cloud storage services where data must be securely partitioned, and many other programs. This new method of encrypting data on memory devices creates challenges in key management. For example, with these modern methods of encrypting data, identifying the key used to encrypt the data, to ensure that the correct key is used to decrypt the data, is a challenge. Furthermore, since data encryption (write operations) and decryption (read operations) are part of the main data path of the memory device and will have a significant impact on the performance of the device, the keys need to be quickly accessible. Aspects of the present disclosure address encryption key management on a per logical block basis by maintaining a key table that tracks encryption keys within a memory subsystem using key tags and key identifiers. More specifically, the key table maintained by the memory subsystem includes a set of key entries, and each key entry includes an encryption key and a key identifier associated with the encryption key. The key table is indexed by key tags, and the key tags are used by the memory subsystem to perform fast lookups of encryption keys. Each key identifier is a globally unique identifier for the corresponding encryption key. The globally unique key identifier of the key used to encrypt the user data can be added to the metadata, where it can be used to ensure that the correct key is used to decrypt the user data. Furthermore, to provide fast access to a large number of keys, the memory subsystem further utilizes a key cache for storing a large number of keys that can be accessed quickly. Key corruption can occur through a variety of mechanisms, including transient errors and firmware coding errors. While the memory subsystem may not be able to determine whether the wrong key is being used for write operations, it is possible for the memory subsystem to determine whether the wrong key is being used to read data.
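As a rough, hypothetical sketch of the key-table bookkeeping just described (not the controller's firmware), the structure below pairs each encryption key with its globally unique key identifier and indexes the entries by key tag so that lookups stay on the fast path; all class, field, and method names are supplied for the example.

from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class KeyEntry:
    key_id: bytes          # globally unique identifier, stored with the data
    encryption_key: bytes  # the per-logical-block encryption key itself

class KeyTable:
    """A key table indexed by key tag for fast encryption-key lookup."""

    def __init__(self) -> None:
        self._entries: Dict[int, KeyEntry] = {}

    def inject(self, key_tag: int, entry: KeyEntry) -> None:
        """Insert a key entry at the index defined by its key tag."""
        self._entries[key_tag] = entry

    def lookup(self, key_tag: int) -> Optional[KeyEntry]:
        """Return the entry for a key tag, or None if the tag is absent."""
        return self._entries.get(key_tag)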
To this end, a key identifier is stored with the encrypted data, and the key identifier is checked when the data is read back to determine whether it matches the key identifier of the key being used to decrypt the data. By utilizing the key table in the manner described herein, the memory subsystem enables each logical block on the memory device to have its own encryption. Utilization of a key cache further enables the memory subsystem to maintain a large number of keys and access them quickly without significantly impacting device performance. FIG. 1 illustrates an example computing system 100 that includes a memory subsystem 110 in accordance with some embodiments of the present disclosure. Memory subsystem 110 may include media such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination of these. Memory subsystem 110 may be a storage device, a memory module, or a mix of storage devices and memory modules. Examples of storage devices include solid state drives (SSDs), flash drives, universal serial bus (USB) flash drives, embedded multimedia controller (eMMC) drives, universal flash storage (UFS) drives, secure digital (SD) cards, and hard disk drives (HDDs). Examples of memory modules include dual inline memory modules (DIMMs), small outline DIMMs (SO-DIMMs), and various types of non-volatile dual inline memory modules (NVDIMMs). Computing system 100 may be a computing device, such as a desktop computer, laptop computer, web server, mobile device, vehicle (e.g., airplane, drone, train, car, or other means of transportation), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or any such computing device that includes memory and a processing device. Computing system 100 may include multiple host systems coupled to one or more memory subsystems 110. In some embodiments, host system 120 is coupled to different types of memory subsystems 110. FIG. 1 shows an example host system 120 coupled to one memory subsystem 110. As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which may be an indirect communication connection or a direct communication connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, and the like. Each host system 120 may include a processor chipset and a software stack executed by the processor chipset. A processor chipset may include one or more cores, one or more caches, a memory controller (e.g., an NVDIMM controller), and a storage protocol controller (e.g., a Peripheral Component Interconnect Express (PCIe) controller or serial Advanced Technology Attachment (SATA) controller). The host system 120 may use the memory subsystem 110, for example, to write data to and read data from the memory subsystem 110. Host system 120 may be coupled to memory subsystem 110 via a host interface. Examples of host interfaces include, but are not limited to, a SATA interface, a PCIe interface, a USB interface, Fibre Channel, Serial Attached SCSI (SAS), Small Computer System Interface (SCSI), a Double Data Rate (DDR) memory bus, a DIMM interface (e.g., a DIMM socket supporting Double Data Rate (DDR)), Open NAND Flash Interface (ONFI), Double Data Rate (DDR), Low Power Double Data Rate (LPDDR), or any other interface. A host interface may be used to transfer data between host system 120 and memory subsystem 110.
When memory subsystem 110 is coupled with host system 120 through a PCIe interface, host system 120 may further utilize an NVM Express (NVMe) interface to access components (e.g., memory device 130). The host interface may provide an interface for transferring control, address, data, and other signals between memory subsystem 110 and host system 120. FIG. 1 shows a memory subsystem 110 as an example. In general, host system 120 may access multiple memory subsystems via the same communication connection, multiple individual communication connections, and/or a combination of communication connections. The memory devices 130, 140 may comprise any combination of different types of non-volatile memory devices and/or volatile memory devices. Volatile memory devices (e.g., memory device 140) may be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Some examples of non-volatile memory devices (e.g., memory device 130) include NAND-type flash memory and write-in-place memory, such as three-dimensional (3D) cross-point memory devices, which are cross-point arrays of non-volatile memory cells. Cross-point arrays of non-volatile memory can perform bit storage based on changes in bulk resistance, in conjunction with stackable cross-grid data access arrays. In addition, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, in which a non-volatile memory cell can be programmed without the memory cell being previously erased. NAND-type flash memory includes, for example, two-dimensional NAND (2D NAND) and 3D NAND. Each of memory devices 130 may include one or more arrays of memory cells. One type of memory cell, such as a single level cell (SLC), can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple-level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs), can store multiple bits per cell. In some embodiments, each of memory devices 130 may include one or more arrays of memory cells, e.g., SLC, MLC, TLC, QLC, or any combination of these. In some embodiments, a particular memory device may include an SLC portion, an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells. The memory cells of memory device 130 may be grouped into pages, which may refer to logical units of the memory device used to store data. For example, memory cells in a NAND memory device are connected horizontally to word lines at their control gates to form pages. In the case of some types of memory (e.g., NAND), pages may be grouped to form blocks.
Additionally, word lines within a memory device may be organized into multiple word line groups, each of which includes one or more word lines, but each word line group includes fewer word lines than are included in a block. Although non-volatile memory components such as NAND-type flash memory (e.g., 2D NAND, 3D NAND) and 3D cross-point arrays of non-volatile memory cells are described, memory device 130 may be based on any other type of non-volatile memory, such as read only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random access memory (FeTRAM), ferroelectric random access memory (FeRAM), magnetic random access memory (MRAM), spin transfer torque (STT)-MRAM, conductive bridge RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), NOR flash memory, and electrically erasable programmable read-only memory (EEPROM). Memory subsystem controller 115 (or, for simplicity, controller 115) may communicate with memory device 130 to perform operations such as reading data, writing data, or erasing data at memory device 130, among other such operations. The memory subsystem controller 115 may include hardware, such as one or more integrated circuits and/or discrete components, buffer memory, or a combination thereof. The hardware may include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. Memory subsystem controller 115 may be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. Memory subsystem controller 115 may include a processor 117 (processing device) configured to execute instructions stored in local memory 119. In the example shown, the local memory 119 of the memory subsystem controller 115 includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communications between memory subsystem 110 and host system 120. In some embodiments, local memory 119 may include memory registers that store memory pointers, retrieved data, and the like. Local memory 119 may also contain ROM for storing microcode. Although the example memory subsystem 110 in FIG. 1 has been shown as including the memory subsystem controller 115, in another embodiment of the present disclosure, the memory subsystem 110 does not include the memory subsystem controller 115 and may instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem). In general, memory subsystem controller 115 may receive commands or operations from host system 120 and may convert the commands or operations into instructions or appropriate commands to achieve the desired accesses to memory device 130 and/or memory device 140. The memory subsystem controller 115 may be responsible for other operations, such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, cache operations, and address translations between a logical address (e.g., a logical block address (LBA) or namespace) and a physical address (e.g., a physical block address) associated with the memory devices 130. Memory subsystem controller 115 may further include host interface circuitry to communicate with host system 120 via a physical host interface.
Host interface circuitry may convert commands received from host system 120 into command instructions to access memory device 130 and/or memory device 140, as well as convert responses associated with memory device 130 and/or memory device 140 into information for host system 120.

In some embodiments, memory device 130 includes a local media controller 135 that operates in conjunction with memory subsystem controller 115 to perform operations on one or more memory cells of memory device 130.

The memory subsystem 110 also includes a key management component 113, which is responsible for managing encryption keys on a per-block basis. As an example, when a command for a data operation is received by memory subsystem 110, key management component 113 identifies, based on a key tag included with the command, an encryption key to be used in performing a cryptographic operation that facilitates the data operation. For write operations, the identified encryption key is used to encrypt data written to one of the memory devices 130 or 140, and for read operations, the identified encryption key is used to decrypt encrypted data read from one of the memory devices 130 or 140. An encryption key may be specifically associated with a block or other logical unit to or from which data is written or read. The key management component 113 utilizes a key cache for storing a large number of keys that can be accessed quickly. Additional details regarding the operation of the multi-level key cache and the key management component 113 are described below.

In some embodiments, memory subsystem controller 115 includes at least a portion of key management component 113. For example, memory subsystem controller 115 may include a processor 117 (processing device) configured to execute instructions stored in local memory 119 for performing the operations described herein. In some embodiments, at least a portion of key management component 113 is part of host system 120, an application, or an operating system.

FIG. 2 is a block diagram illustrating the operation of memory subsystem controller 115 in performing key injection, in accordance with some embodiments. As shown, host system 120 encrypts encryption key 200 and generates a key injection command 202 that includes the encrypted encryption key 200, a key tag 204, a key identifier 205, and information about how the encryption key 200 was encrypted. Host system 120 provides the key injection command 202 to memory subsystem controller 115.

Upon receipt of the key injection command 202, the key management component 113 decrypts the encryption key 200 and injects a new key entry for the encryption key 200 into key table 206. Key table 206 may include a set of key entries indexed by key tag, and each key entry includes an encryption key and a key identifier. Accordingly, the new key entry contains the encryption key 200 and the key identifier 205 included in the key injection command 202. The key entry for encryption key 200 is inserted into key table 206 at the index defined by key tag 204.
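The injection flow just described can be condensed into a short sketch. This is a simplified model, not the controller's actual firmware: the `decrypt_key` callable and the dictionary-backed table are hypothetical stand-ins for the key-unwrapping and storage machinery of key management component 113.

```python
# Sketch of key injection: the table is indexed by key tag, and each
# entry pairs the (decrypted) encryption key with its key identifier.
class KeyTable:
    def __init__(self):
        self.entries = {}  # key tag -> (key identifier, encryption key)

    def inject(self, key_tag, key_id, encrypted_key, decrypt_key):
        # The key arrives encrypted; the controller decrypts it before
        # inserting the new entry at the index defined by the key tag.
        plain_key = decrypt_key(encrypted_key)
        self.entries[key_tag] = (key_id, plain_key)

# Hypothetical usage: decrypt_key would wrap whatever key-unwrap routine
# the command's "how the key was encrypted" metadata selects.
table = KeyTable()
table.inject(key_tag=7, key_id=0x2A, encrypted_key=b"...",
             decrypt_key=lambda blob: b"unwrapped-key")
```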
FIG. 3 is a block diagram illustrating the operation of memory subsystem 110 in performing write operations, in accordance with some embodiments of the present disclosure. As shown, host system 120 provides a command to memory subsystem controller 115 that includes data 300 and a key tag 302 associated with an encryption key. In response to receiving the command, key management component 113 of memory subsystem controller 115 searches key table 304 to identify the key entry corresponding to key tag 302. The key entry in key table 304 that matches key tag 302 contains key identifier 306 and encryption key 308. Encryption component 310 of key management component 113 encrypts data 300 using the encryption key 308 from the matching entry in key table 304, and memory subsystem controller 115 stores encrypted data 312 along with key identifier 306 in memory device 130.

FIGS. 4A and 4B are block diagrams illustrating the operation of memory subsystem 110 in performing read operations, in accordance with some embodiments of the present disclosure. As shown in FIG. 4A, host system 120 provides a command to memory subsystem controller 115 to read data from memory device 130. The command contains a key tag 400 associated with an encryption key. In response to the command, memory subsystem controller 115 reads encrypted data 402 and the corresponding key identifier 404 from memory device 130.

The key management component 113 searches key table 406 to identify the key entry corresponding to the key tag 400 contained in the read command. The key entry in key table 406 that matches key tag 400 contains key identifier 408 and encryption key 410. Because key corruption can occur through a variety of mechanisms, including transient errors and firmware coding errors, the key identifier 404 is stored with encrypted data 402 so that, when encrypted data 402 is read back, key management component 113 can determine whether it matches the key identifier 408 of the encryption key that is to be used to decrypt encrypted data 402. Accordingly, key management component 113 performs a key identifier check 412 to determine whether key identifier 408 in key table 406 matches key identifier 404 stored with encrypted data 402. If key identifier 404 does not match key identifier 408, key management component 113 returns an error message to host system 120. If the key identifiers 404, 408 match, the decryption component 414 of key management component 113 decrypts the encrypted data 402 using the encryption key 410.

As shown in FIG. 4B, in some embodiments, encrypted data 402 and key identifier 404 may also be protected by an error correction code (ECC) 416. According to these embodiments, an ECC check 418 is performed prior to decryption of encrypted data 402. If the ECC check 418 fails, memory subsystem controller 115 returns an error message to host system 120. If the ECC check 418 passes, encrypted data 402 is decrypted by decryption component 414 using encryption key 410, as described above.
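A minimal sketch of the write and read-back behavior described for FIGS. 3 through 4B, under the assumption that the cipher, decipher, and ECC routines are supplied elsewhere (they are placeholders here). The point it illustrates is the ordering: the key identifier travels with the ciphertext, the ECC check runs first, the identifier comparison runs next, and decryption happens only after both pass.

```python
# Sketch: media records pair the ciphertext with the key identifier used
# to produce it, so key-table corruption can be detected on read-back.
def write(media, addr, data, key_id, key, encrypt):
    media[addr] = (encrypt(key, data), key_id)   # store key id with data

def read(media, addr, table_key_id, key, decrypt, ecc_ok):
    ciphertext, stored_key_id = media[addr]
    if not ecc_ok(ciphertext, stored_key_id):    # ECC check comes first
        raise IOError("ECC check failed")
    if stored_key_id != table_key_id:            # then key identifier check
        raise ValueError("key identifier mismatch")
    return decrypt(key, ciphertext)              # only then decrypt
```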
Referring to FIG. 5, an example key table and key cache used by key management component 113 of memory subsystem controller 115 to manage encryption keys are shown, according to some embodiments. As shown, key management component 113 may utilize two key tables. The first key table, hardware key table 500, contains a first set of key entries with n entries. Each key entry in hardware key table 500 contains an encryption key, a key tag associated with the encryption key, and a unique identifier for the encryption key. Hardware key table 500 may be stored in local memory 119 of memory subsystem controller 115 to provide extremely fast access for key management component 113.

The second key table, RAM key table 550, includes a second set of key entries with k entries. Like hardware key table 500, each entry in RAM key table 550 contains an encryption key, a key tag associated with the encryption key, and a unique identifier for the encryption key. RAM key table 550 is a fast key cache that is substantially larger than hardware key table 500 (e.g., k > n) but takes longer to access. Additionally, when a key requested by host system 120 is not in hardware key table 500, key management component 113 transfers that key from RAM key table 550 into hardware key table 500 for processing data operations. The RAM key table 550 may be implemented in fast RAM (e.g., with low access time) close to the memory subsystem controller 115.

FIG. 6 is a flow diagram illustrating an example method 600 for key injection in a memory subsystem, in accordance with some embodiments of the present disclosure. The method 600 may be performed by processing logic that may include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuits, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, method 600 is performed by key management component 113 of FIG. 1. Although the processes are shown in a particular order or sequence, unless otherwise specified, the order of the processes may be modified. Accordingly, the illustrated embodiments are to be understood as examples only, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. Furthermore, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible.

At operation 605, the processing device receives a key injection command. The key injection command contains an encryption key, a key identifier, and a key tag. The key injection command may be received from a host system (e.g., host system 120).

At operation 610, in response to receiving the key injection command, the processing device accesses the RAM key table from RAM. The RAM key table contains a set of key entries, and each key entry contains an encryption key, a key identifier, and a key tag. At operation 615, based on the RAM key table having space available for at least one new entry, the processing device inserts a new key entry into the RAM key table. The new key entry contains the encryption key, the key identifier, and the key tag included in the key injection command.

If the RAM key table is full, the processing device selects an existing key entry in the RAM key table to replace (at operation 620), and at operation 625 the processing device replaces the existing key entry with the new key entry. As an example, the processing device may select the existing key entry to replace based on how recently the corresponding encryption key was used (e.g., the least recently used entry).
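The two-table arrangement of FIG. 5 and the injection flow of FIG. 6 suggest a small cache model. The sketch below uses Python's OrderedDict for the least-recently-used bookkeeping; the capacity and eviction policy shown are illustrative assumptions, not values taken from the source.

```python
from collections import OrderedDict

# Sketch: RAM key table with LRU replacement for key injection (FIG. 6).
class RamKeyTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # key tag -> (key id, key), LRU first

    def inject(self, key_tag, key_id, key):
        if key_tag in self.entries:
            self.entries.pop(key_tag)          # overwrite an existing tag
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict least recently used
        self.entries[key_tag] = (key_id, key)  # insert the new key entry

    def lookup(self, key_tag):
        entry = self.entries.pop(key_tag, None)
        if entry is not None:
            self.entries[key_tag] = entry      # refresh recency on a hit
        return entry

ram_table = RamKeyTable(capacity=4)            # capacity k is illustrative
ram_table.inject(key_tag=1, key_id=0x10, key=b"k1")
```

A hardware key table analogous to table 500 could be modeled the same way with a smaller capacity n < k.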
FIG. 7 is a flow diagram illustrating an example method 700 for managing encryption keys during data operations, in accordance with some embodiments of the present disclosure. Method 700 may be performed by processing logic that may include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuits, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, method 700 is performed by key management component 113 of FIG. 1. Although the processes are shown in a particular order or sequence, unless otherwise specified, the order of the processes may be modified. Accordingly, the illustrated embodiments are to be understood as examples only, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. Furthermore, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible.

At operation 705, the processing device receives a command to perform a data operation at a memory device (e.g., memory device 130). The command may be a command to read data from the memory device (a read command) or a command to write data to the memory device (a write command). The command contains a key tag associated with the encryption key to be used in processing the command. The command is received from a host system (e.g., host system 120).

At operation 710, the processing device uses the key tag included in the command to identify the encryption key from a key table maintained by the processing device, and at operation 715 the processing device processes the command using the encryption key. For example, the processing device may use the encryption key to encrypt data before it is written to the memory device or to decrypt data read from the memory device. If the processing device is unable to identify the encryption key using the key tag, the processing device returns an error in response to the command.

As shown in FIG. 8, method 700 may, according to some embodiments, include operations 805, 810, 815, 820, 825, 830, 835, and 840. According to these embodiments, operation 805 may be performed as part of operation 705, in which the processing device receives a command to perform a data operation. At operation 805, the processing device receives a command to write data to a memory device (e.g., memory device 130). As noted above, the command contains a key tag associated with an encryption key.

According to these embodiments, operations 810, 815, 820, and 825 may be performed as part of operation 710, in which the processing device identifies the encryption key using the key tag.

At operation 810, the processing device accesses a first key table (e.g., hardware key table 500) from local memory (e.g., local memory 119). The first key table includes a first set of key entries corresponding to a first set of encryption keys. Each key entry in the first set of key entries includes an encryption key, an identifier for the encryption key, and a tag associated with the encryption key.

At operation 815, the processing device searches the first key table to determine whether the first key table contains a key entry corresponding to the key tag included in the write command. At operation 820, based on determining that the first key table does not contain an entry corresponding to the key tag, the processing device accesses a second key table (e.g., RAM key table 550) from RAM. The second key table contains a second set of key entries corresponding to a second set of encryption keys.
As with the first set of key entries, each key entry in the second set of key entries contains an encryption key, an identifier for the encryption key, and a tag associated with the encryption key.

At operation 825, the processing device searches the second key table to determine whether the second key table contains a key entry corresponding to the key tag included in the write command.

According to these embodiments, any one of operations 830, 835, and 840 may be performed as part of operation 715, in which the processing device processes the command. Based on determining, at operation 825, that the second key table does not contain an entry corresponding to the key tag, the processing device returns an error in response to the write command at operation 830.

Based on identifying (at operation 825) a key entry in the second key table that matches the key tag, the processing device, at operation 835, encrypts the data using the encryption key corresponding to the matching entry in the second key table.

Based on identifying (at operation 815) a key entry in the first key table that matches the key tag, the processing device, at operation 840, encrypts the data using the encryption key corresponding to the matching entry in the first key table.
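The write-path precedence of FIG. 8 (search the small hardware table first, fall back to the RAM table, and return an error only when both miss) can be sketched as follows; the tables here are plain dictionaries mapping a key tag to a (key identifier, key) pair, and `encrypt` is a placeholder.

```python
# Sketch: write path of FIG. 8. hw_table and ram_table map
# key tag -> (key id, key); encrypt is a placeholder cipher routine.
def handle_write(hw_table, ram_table, key_tag, data, encrypt):
    entry = hw_table.get(key_tag)          # operation 815: search HW table
    if entry is None:
        entry = ram_table.get(key_tag)     # operations 820/825: RAM table
    if entry is None:
        return ("error", None)             # operation 830: no matching key
    key_id, key = entry
    return ("ok", (encrypt(key, data), key_id))  # operations 835/840
```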
As shown in FIGS. 9A and 9B, method 700 may include operations 905, 910, 915, 920, 925, 930, 935, 940, 945, 950, 955, 960, and 965. According to these embodiments: operation 905 may be performed as part of operation 705, in which the processing device receives a command to perform a data operation; operations 915, 920, 925, 940, 945, and 955 may be performed as part of operation 710, in which the processing device identifies the encryption key using the key tag included in the command; and any of operations 930, 935, 960, or 965 may be performed as part of operation 715, in which the processing device processes the command.

At operation 905, the processing device receives a command to read data from a memory device (e.g., memory device 130). The command contains a key tag associated with an encryption key. At operation 910, in response to the command, the processing device reads the encrypted data and the corresponding key identifier from the memory device.

At operation 915, the processing device accesses a first key table (e.g., hardware key table 500) from local memory (e.g., local memory 119). The first key table includes a first set of key entries corresponding to a first set of encryption keys. Each key entry in the first set of key entries includes an encryption key, an identifier for the encryption key, and a tag associated with the encryption key.

At operation 920, the processing device searches the first key table to determine whether the first key table contains a key entry corresponding to the key tag included in the read command. At operation 925, based on identifying a key entry in the first key table that matches the key tag, the processing device determines whether the key identifier stored with the encrypted data matches the key identifier contained in the key entry. If the key identifiers do not match, the processing device returns an error in response to the command at operation 930. If the key identifiers match, the processing device decrypts the data at operation 935 using the encryption key corresponding to the matching entry in the first key table.

As shown in FIG. 9B, at operation 940, based on determining that the first key table does not contain an entry corresponding to the key tag, the processing device accesses a second key table (e.g., RAM key table 550) from RAM. The second key table contains a second set of key entries corresponding to a second set of encryption keys. As with the first set of key entries, each key entry in the second set of key entries contains an encryption key, an identifier for the encryption key, and a tag associated with the encryption key.

At operation 945, the processing device searches the second key table to determine whether the second key table contains a key entry corresponding to the key tag included in the read command. At operation 950, based on identifying a key entry in the second key table that matches the key tag, the processing device replaces an existing key entry in the first key table with the key entry identified from the second key table. As an example, the processing device may select the existing key entry to replace based on how recently the corresponding encryption key was used (e.g., the least recently used entry).

At operation 955, the processing device determines whether the key identifier stored with the encrypted data matches the key identifier contained in the key entry. If the key identifiers match, the processing device decrypts the encrypted data at operation 960 using the encryption key corresponding to the key entry identified from the second key table. If the key identifiers do not match, or if the second key table does not contain a key entry matching the key tag contained in the command, the processing device returns an error in response to the command at operation 965.
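The read path of FIGS. 9A and 9B adds two wrinkles to the same lookup: a hit in the RAM table is promoted into the hardware table (displacing a victim such as the least recently used entry, chosen by the caller in this sketch), and the key identifier stored with the data must match before decryption. A sketch under those assumptions:

```python
# Sketch: read path of FIGS. 9A/9B. Tables map key tag -> (key id, key).
def handle_read(hw_table, ram_table, key_tag, stored, decrypt, evict_tag):
    ciphertext, stored_key_id = stored       # operation 910: data + key id
    entry = hw_table.get(key_tag)            # operation 920
    if entry is None:
        entry = ram_table.get(key_tag)       # operation 945
        if entry is None:
            return ("error", None)           # operation 965: no key entry
        hw_table.pop(evict_tag, None)        # operation 950: promote entry,
        hw_table[key_tag] = entry            # displacing an LRU victim
    key_id, key = entry
    if key_id != stored_key_id:              # operations 925/955: id check
        return ("error", None)               # operations 930/965
    return ("ok", decrypt(key, ciphertext))  # operations 935/960
```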
In view of the above disclosure, various examples are set forth below. It should be noted that one or more features of the examples, employed independently or in combination, are to be considered within the disclosure of the present application.

Example 1 is a system comprising: a memory device; and a processing device coupled to the memory device, the processing device configured to perform operations comprising: receiving a command to perform an operation on data at the memory device, the command including an encryption key tag; accessing a first key table from local memory, the first key table including a first set of key entries corresponding to a first set of encryption keys; determining whether the first key table contains an entry corresponding to the encryption key tag; based on determining that the first key table does not contain an entry corresponding to the tag, accessing, from random access memory (RAM), a second key table that includes a second set of key entries corresponding to a second set of encryption keys; identifying, from the second set of key entries, a key entry corresponding to the encryption key tag, the key entry including an encryption key corresponding to the encryption key tag; and processing the command using the encryption key.

Example 2 includes the system of Example 1, wherein: the command comprises a command to write data to the memory device; and the processing of the command comprises encrypting the data using the encryption key.

Example 3 includes the system of any one or more of Examples 1 or 2, wherein: the command comprises a command to read data from the memory device; and the processing of the command comprises using the encryption key to decrypt encrypted data read from the memory device.

Example 4 includes the system of any one or more of Examples 1-3, wherein the operations further comprise: reading encrypted data and a key identifier from the memory device; and determining that the key identifier read from the memory device matches the key identifier contained in the key entry.

Example 5 includes the system of any one or more of Examples 1-4, wherein: the command is a first command to perform a first data operation; the encryption key tag is a first encryption key tag; the encryption key is a first encryption key; and the operations further comprise: receiving a second command to perform a second data operation at the memory device, the second command including a second encryption key tag.

Example 6 includes the system of any one or more of Examples 1-5, wherein the operations further comprise: determining that the first key table contains a key entry corresponding to the second encryption key tag, the key entry corresponding to the second encryption key tag including a second encryption key; and processing the second command using the second encryption key.

Example 7 includes the system of any one or more of Examples 1-6, wherein: the second command comprises a command to read data from the memory device; and the operations further comprise: reading encrypted data and a key identifier from the memory device; and determining that the key identifier read from the memory device matches the key identifier contained in the key entry.

Example 8 includes the system of any one or more of Examples 1-7, wherein the operations further comprise: based on determining that the first key table and the second key table do not contain a key entry corresponding to the second encryption key tag, returning an error in response to the second command.
Example 9 includes the system of any one or more of Examples 1-8, wherein: the second command comprises a command to read data from the memory device; and the operations further comprise: reading encrypted data and a key identifier from the memory device; determining that the first key table contains a key entry corresponding to the second encryption key tag, the key entry including a second encryption key; and based on determining that the key identifier read from the memory device does not match the key identifier contained in the key entry corresponding to the second encryption key tag, returning an error in response to the second command.

Example 10 includes the system of any one or more of Examples 1-9, wherein the operations further comprise: determining that the first key table does not contain a key entry corresponding to the second encryption key tag; identifying, from the second key table, a key entry corresponding to the second encryption key tag; and replacing an existing key entry in the first key table with the key entry from the second key table corresponding to the second encryption key tag.

Example 11 is a method comprising: receiving, at a processing device, a command to perform a data operation at a memory device, the command including an encryption key tag; accessing a first key table from a local memory of the processing device, the first key table including a first set of key entries corresponding to a first set of encryption keys; searching, by the processing device, the first key table to determine whether the first key table contains an entry corresponding to the encryption key tag; in response to determining that the first key table does not contain an entry corresponding to the tag, accessing, from random access memory (RAM), a second key table that includes a second set of key entries; identifying, from the second set of key entries, a key entry corresponding to the encryption key tag, the key entry comprising an encryption key corresponding to the encryption key tag; and processing, by the processing device, the command using the encryption key.

Example 12 includes the method of Example 11, wherein: the command comprises a command to write data to the memory device; and the processing of the command comprises encrypting the data using the encryption key.

Example 13 includes the method of any one or more of Examples 11 or 12, wherein: the command comprises a command to read data from the memory device; and the processing of the command comprises using the encryption key to decrypt encrypted data read from the memory device.

Example 14 includes the method of any one or more of Examples 11-13, further comprising: reading encrypted data and a key identifier from the memory device; and determining that the key identifier read from the memory device matches the key identifier contained in the key entry.

Example 15 includes the method of any one or more of Examples 11-14, wherein: the command is a first command to perform a first data operation; the encryption key tag is a first encryption key tag; the encryption key is a first encryption key; and the method further comprises: receiving a second command to perform a second data operation at the memory device, the second command including a second encryption key tag.
Example 16 includes the method of any one or more of Examples 11-15, further comprising: determining that the first key table contains a key entry corresponding to the second encryption key tag, the key entry including a second encryption key; and processing the second command using the second encryption key corresponding to the key entry in the first key table.

Example 17 includes the method of any one or more of Examples 11-16, wherein: the second command comprises a command to read data from the memory device; and the method further comprises: reading encrypted data and a key identifier from the memory device; and determining that the key identifier read from the memory device matches the key identifier contained in the key entry.

Example 18 includes the method of any one or more of Examples 11-17, further comprising: based on determining that the first key table and the second key table do not contain a key entry corresponding to the second encryption key tag, returning an error in response to the second command.

Example 19 includes the method of any one or more of Examples 11-18, wherein: the second command comprises a command to read data from the memory device; and the method further comprises: reading encrypted data and a key identifier from the memory device; determining that the first key table contains a key entry corresponding to the second encryption key tag, the key entry including a second encryption key; and in response to determining that the key identifier read from the memory device does not match the key identifier contained in the key entry corresponding to the second encryption key tag, returning an error in response to the second command.

Example 20 is a computer-readable storage medium comprising instructions that, when executed by a processing device, configure the processing device to perform operations comprising: receiving a command to perform a data operation at a memory device, the command including an encryption key tag, the data operation comprising a read operation or a write operation; accessing a first key table from local memory, the first key table including a first set of key entries corresponding to a first set of encryption keys; determining that the first key table does not contain an entry corresponding to the encryption key tag; based on determining that the first key table does not contain an entry corresponding to the tag, accessing, from random access memory (RAM), a second key table that includes a second set of key entries corresponding to a second set of encryption keys; identifying, from the second set of key entries, a key entry corresponding to the encryption key tag, the key entry including an encryption key corresponding to the encryption key tag; and processing the command using the encryption key, the processing of the command comprising using the encryption key to encrypt or decrypt data.

FIG. 10 illustrates an example machine in the form of a computer system 1000 within which a set of instructions 1026 can be executed for causing the machine to perform any one or more of the methodologies discussed herein. In some embodiments, computer system 1000 may correspond to a host system (e.g., host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., memory subsystem 110 of FIG. 1), or may be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to key management component 113 of FIG. 1).
In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 1000 includes a processing device 1002, a main memory 1004 (e.g., ROM, flash memory, DRAM such as SDRAM or RDRAM, etc.), a static memory 1006 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 1018, which communicate with each other via a bus 1030.

Processing device 1002 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device 1002 may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. Processing device 1002 may also be one or more special-purpose processing devices such as an ASIC, an FPGA, a digital signal processor (DSP), a network processor, or the like. The processing device 1002 is configured to execute instructions 1026 for performing the operations and steps discussed herein. Computer system 1000 may further include a network interface device 1008 to communicate over a network 1020.

The data storage system 1018 may include a machine-readable storage medium 1024 (also known as a computer-readable medium) on which is stored one or more sets of instructions 1026 or software embodying any one or more of the methodologies or functions described herein. Instructions 1026 may also reside, completely or at least partially, within main memory 1004 and/or within processing device 1002 during execution thereof by computer system 1000, the main memory 1004 and the processing device 1002 also constituting machine-readable storage media. Machine-readable storage medium 1024, data storage system 1018, and/or main memory 1004 may correspond to memory subsystem 110 of FIG. 1.

In one embodiment, instructions 1026 include instructions to implement functionality corresponding to a security component (e.g., key management component 113 of FIG. 1). While machine-readable storage medium 1024 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions 1026. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure.
Accordingly, the term "machine-readable storage medium" shall be considered to include, but not be limited to, solid-state memory, optical media, and magnetic media.Some portions of the previous detailed description have been presented with respect to algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here and generally thought of as a self-consistent series of operations that produce a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure may refer to the acts and processes of a computer system or similar electronic computing device that manipulate and transform data represented as registers and physical (electronic) quantities within the memory of the computer system into computer system memory or registers or other Such information stores other data of physical quantities within the system.The present disclosure also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such computer programs may be stored in computer-readable storage media, each coupled to a computer system bus, such as, but not limited to, any type of disk, ROM, RAM, EPROM, EEPROM, magnetic or optical cards, or any type of medium suitable for storing electronic instructions.The algorithms and displays presented herein are not inherently related to any particular computer or other device. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the described methods. The structure of a variety of these systems will be presented as set forth in the description below. Additionally, the present disclosure is not described with reference to any particular programming language. It should be appreciated that various programming languages may be used to implement the teachings of the present disclosure described herein.The present disclosure may be provided as a computer program product or software, which may include a machine-readable medium having stored thereon instructions that may be used to program a computer system (or other electronic device) to perform processes in accordance with the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (eg, a computer). 
In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine-readable (e.g., computer-readable) storage medium such as ROM, RAM, magnetic disk storage media, optical storage media, flash memory components, and the like.

In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications may be made to the disclosure without departing from the broader scope of embodiments of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
An additional memory bank (214) is added to a block of regular banks (206) in a memory to reduce dynamic power consumption of the memory. The additional bank is accessed by a set of bit lines (216) that is substantially shorter than corresponding bit lines (210) extending through all of the regular memory banks. Memory read (308) and write (306) operations, which are addressed to one of the regular banks (206), are deliberately redirected to the additional bank (214) having the short bit lines (216). Tracking circuitry (220, 222) identifies the regular bank that was addressed for each location in the additional bank. Data is moved from the additional bank to a regular bank (334) only when a new write operation (306) does not match (330) the bank of the previous write operation. Dynamic power is reduced because locality of reference causes access to the additional bank (214) without having to access a regular bank (206) for most memory read and write operations. |
CLAIMS

What is claimed is:

1. A memory apparatus, comprising: a plurality of regular memory banks; a plurality of word lines coupled to each of the regular banks; a plurality of regular bit lines coupled to each of the regular banks; a sacrificial memory bank in addition to the regular banks, the sacrificial bank coupled to the plurality of word lines; a plurality of sacrificial bank bit lines shorter than the regular bit lines and coupled to the sacrificial bank; and bank selection circuitry coupled to the plurality of regular memory banks and to the sacrificial bank, the bank selection circuitry configured to direct a memory operation to the sacrificial memory bank when a bank addressed in the memory operation is unset or matches a bank accessed by a previous memory operation for a corresponding word line.

2. The memory apparatus of claim 1, further comprising: tracking circuitry coupled to the plurality of regular memory banks and to the sacrificial bank, the tracking circuitry configured to store tracker bits indicating one of the regular banks for each of the word lines, the bank selection circuitry coupled to the tracking circuitry and configured to direct memory operations to the sacrificial bank or one of the regular banks in response to the tracker bits.

3. The memory apparatus of claim 2, in which the tracker bits identify one of the regular banks corresponding to a previous memory write trace received by the memory apparatus for each word line.

4. The memory apparatus of claim 2, in which the bank selection circuitry includes comparison circuitry configured to indicate a bank match when a bank addressed in a memory operation matches the bank indicated by the tracker bits for a corresponding word line.

5. The memory apparatus of claim 2, further comprising: a multiplexer coupled to the bank selection circuitry and configured to select one of the regular bit lines for each of the memory operations directed by the bank selection circuitry to one of the regular banks and to select one of the sacrificial bank bit lines for each of the memory operations directed by the bank selection circuitry to the sacrificial bank.

6. The memory apparatus of claim 1, integrated in at least one of a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and a fixed location data unit.

7. A method for reducing energy consumed during memory access operations, comprising: receiving data along with a memory write address identifying a selected memory bank for the data within a block of regular memory banks and identifying a selected word line of a plurality of word lines within the memory bank; determining whether the selected memory bank matches a previous bank addressed in a previous write operation for the selected word line; and writing the data to a sacrificial bank.

8. The method of claim 7, in which writing the data comprises: writing previous data from the sacrificial bank to the previous bank when the selected bank does not match the previous bank, and subsequently performing a low power write of the data to the sacrificial bank.

9. The method of claim 8, further comprising updating tracker bits for the selected word line to identify the selected memory bank as a target memory bank for the data.

10. The method of claim 7, in which writing the data comprises overwriting data in the sacrificial bank when the selected memory bank matches the previous bank.
11. The method of claim 7, further comprising: receiving a memory read address identifying a selected read memory bank in the block of regular memory banks and identifying a selected read word line of the plurality of word lines; determining whether the selected read bank matches the previous bank addressed in the previous write operation for the selected read word line; and performing a low power read from the selected read word line in the sacrificial bank when the selected read bank matches the previous bank.

12. The method of claim 7, further comprising: receiving a memory read address identifying a selected read memory bank in the block of regular memory banks and identifying a selected read word line of the plurality of word lines; determining whether the selected read bank matches the previous bank addressed in the previous write operation for the selected read word line; and performing a regular power read from the selected read word line in the selected regular memory bank when the selected read bank does not match the previous bank.

13. The method of claim 7, in which determining whether the selected memory bank matches comprises reading sacrificial bank tracker bits for the selected word line.

14. The method of claim 7, further comprising integrating the sacrificial memory bank into at least one of a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and a fixed location data unit.

15. An apparatus for reducing energy consumed during memory access operations, comprising: means for receiving data along with a memory write address identifying a selected memory bank for the data within a block of regular memory banks and identifying a selected word line of a plurality of word lines within the memory bank; means for determining whether the selected memory bank matches a previous bank addressed in a previous write operation for the selected word line; and means for writing the data to a sacrificial bank.

16. The apparatus of claim 15, in which the means for writing the data comprises: means for writing previous data from the sacrificial bank to the previous bank when the selected bank does not match the previous bank, and for subsequently performing a low power write of the data to the sacrificial bank.

17. The apparatus of claim 16, further comprising means for updating tracker bits for the selected word line to identify the selected memory bank as a target memory bank for the data.

18. The apparatus of claim 15, integrated in at least one of a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and a fixed location data unit.

19. A method for reducing energy consumed during memory access operations, comprising the steps of: receiving data along with a memory write address identifying a selected memory bank for the data within a block of regular memory banks and identifying a selected word line of a plurality of word lines within the memory bank; determining whether the selected memory bank matches a previous bank addressed in a previous write operation for the selected word line; and writing the data to a sacrificial bank.
20. The method of claim 19, further comprising a step of integrating the sacrificial memory bank into at least one of a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and a fixed location data unit. |
ENERGY EFFICIENT MEMORY WITH RECONFIGURABLE DECODING

CROSS REFERENCE TO RELATED APPLICATION

[0001] The present application claims the benefit of U.S. provisional patent application no. 61/542861 to H. RAO, filed on October 4, 2011.

TECHNICAL FIELD

[0001] The present disclosure relates to electronic memory operation and, more specifically, to systems and methods for reducing power consumption in memory operation.

BACKGROUND

[0002] Power consumption is a concern in electronic memory operations. Power consumption falls into two categories, namely, stand-by power and dynamic power. In the stand-by or quiescent mode, the memory uses the least power because neither read operations nor write operations are occurring. Dynamic power consumption occurs during switching when memory is accessed for reads and/or writes.

[0003] Memory power consumption can be reduced by limiting the switching frequency and/or reducing the line capacitance because:

P = C * V^2 * f * A

where P is dynamic power; C is line capacitance; V is the voltage applied to the line being operated; f is the frequency of memory access; and A is the activity factor, i.e., the number of switches as a system cycles through reads and writes.

[0004] Often, memory power consumption is managed by dividing the memory into banks and then enabling only one bank at a time. One reason for creating banks is to reduce the amount of capacitance and to reduce switching activity, which in turn reduces dynamic power. Frequency normally is not subject to control because it is desirable to operate the memory at high frequencies. Reducing the operating voltage is a very powerful technique for reducing dynamic power because a "cubic" effect results, with a concomitant decrease in frequency. Reducing voltage, however, impacts performance. Other techniques for reducing dynamic power have included limiting the swing of a signal and reducing switching events for each cycle.

SUMMARY

[0005] A sacrificial memory bank is added to a block of regular banks in a memory to reduce dynamic power consumption of the memory. The sacrificial bank is accessed by a set of bit lines that is substantially shorter than corresponding bit lines extending through all of the regular memory banks. Memory read and write operations, which are addressed to one of the regular banks, are deliberately redirected to the sacrificial bank having the short bit lines. This avoids using the longer bitlines to access the regular banks unless a conflict exists in the sacrificial bank. Tracking circuitry identifies the regular bank that was addressed for each location in the sacrificial bank. Data is moved from the sacrificial bank to a regular bank only when a new write operation does not match the bank of the previous write operation. Dynamic power is substantially reduced because locality of reference causes access to the sacrificial bank, without having to access a regular bank (with longer bit lines), for most memory read and write operations.

[0006] A memory apparatus according to one aspect of the present disclosure includes a set of regular memory banks, a set of word lines coupled to each of the regular banks, a set of regular bit lines coupled to each of the regular banks, and a sacrificial memory bank in addition to the regular banks. The sacrificial bank is also coupled to the set of word lines. A set of sacrificial bank bit lines, which are shorter than the regular bit lines, is coupled to the sacrificial bank. Bank selection circuitry is coupled to the regular memory banks and to the sacrificial bank.
The bank selection circuitry is configured to direct a memory operation to the sacrificial memory bank when a bank addressed in the memory operation is unset or matches a bank accessed by a previous memory operation for a corresponding word line.

[0007] Another aspect of the present disclosure includes a method for reducing energy consumed during memory access operations. The method includes receiving data along with a memory write address identifying a selected memory bank for the data within a block of regular memory banks and identifying a selected word line within the memory bank. The method further includes determining whether the selected memory bank matches a previous bank addressed in a previous write operation for the selected word line, and writing the data to a sacrificial bank.

[0008] In yet another aspect, an apparatus for reducing energy consumed during memory access operations has means for receiving data along with a memory write address identifying a selected memory bank for the data within a block of regular memory banks and identifying a selected word line of a plurality of word lines within the memory bank. The apparatus also has means for determining whether the selected memory bank matches a previous bank addressed in a previous write operation for the selected word line. The apparatus further includes means for writing the data to a sacrificial bank.

[0009] The foregoing has outlined rather broadly the features and technical advantages of the present disclosure in order that the detailed description that follows may be better understood. Additional features and advantages will be described hereinafter which form the subject of the claims. It should be appreciated by those skilled in the art that the conception and specific embodiments disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the disclosure as set forth in the appended claims. The novel features which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[00010] For a more complete understanding of the present disclosure, reference is now made to the following descriptions taken in conjunction with the accompanying drawings.

[00011] FIGURE 1 is a block diagram illustrating a prior art generic memory.

[00012] FIGURE 2 is a block diagram illustrating a memory architecture according to an aspect of the present disclosure.

[00013] FIGURE 3 is a process flow diagram illustrating a method for managing memory traffic to implement an efficient memory according to an aspect of the present disclosure.

[00014] FIGURE 4 is a diagram illustrating the use of tracker bits to redirect memory write operations according to an aspect of the present disclosure.

[00015] FIGURE 5 is a circuit diagram showing one circuit configuration for managing memory traffic according to an aspect of the present disclosure.
[00016] FIGURE 6 is a process flow diagram illustrating a method for reducing energy consumed in memory access operations according to an aspect of the present disclosure.

[00017] FIGURE 7 is a block diagram showing an exemplary wireless communication system in which aspects of the disclosure may be advantageously employed.

[00018] FIGURE 8 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a memory according to one aspect of the present disclosure.

DETAILED DESCRIPTION

[00019] FIGURE 1 illustrates a prior art generic memory 100. The memory 100 can be, for example, an SRAM, DRAM, MRAM, or other memory type, and includes a pre-decoder 102 coupled to a decoder 104, which is coupled to a group of substantially identical memory banks 106. Input/output drivers (I/O drivers) 108 are coupled to the pre-decoder 102. The I/O drivers 108 are also coupled to the banks 106 via long bitlines 110 that extend from the I/O drivers 108 through all of the banks 106. A group of word lines 112 is replicated in each of the banks 106. When a program trace, which represents either a memory read operation or a memory write operation for a memory address in the group of banks, is received at the I/O drivers 108, the memory address indicated by the trace is decoded by the pre-decoder 102 to generate a bank identifier (B) and a word line identifier (WL). The Bank/WL combination corresponds to the memory location being accessed. The I/O drivers 108 energize the word line WL in bank B from the pre-decoded address. Data can then be read from or written to a memory cell at the intersection of a bitline 110 and a word line 112 in one of the banks.

[00020] Memory architectures are usually configured using groups of banks to facilitate efficient addressing schemes and to save power by allowing only one bank in a group to be accessed at a time. This allows the other banks to be inactive or to operate at low power levels when they are not being accessed. A bank that is being accessed consumes dynamic power to decode the memory address, energize a word line, energize a bit line, and/or sense data. However, even banks that are not being accessed consume leakage power whenever the memory power supply is on.

[00021] Dynamic power in a memory is generally dominated by bitline switching power, which is proportional to bitline capacitance and switching frequency. Therefore, as memory speeds increase, dynamic power is becoming a more significant component of memory power consumption. The bitline capacitance of a typical memory is quite large because the bitlines extending through all of the banks in a group are relatively long. Even in memory architectures that include hierarchical bit lines, in which unused sections of a bitline can be switched off, all of the banks are still connected together with long common bitlines that increase dynamic power usage. When a particular bank is being addressed, some of the power is consumed due to the bitline loading caused by all of the other banks also connected to the long bit line.

[00022] One aspect of the present disclosure provides an improved memory architecture with an additional bank configured to substantially reduce dynamic power of the memory. The additional bank, referred to as a "sacrificial bank," can be accessed by a set of bit lines that is substantially shorter than corresponding bit lines extending through all of the regular memory banks in the block. Memory read and write operations, which are addressed to one of the regular banks, are deliberately redirected to the sacrificial bank with the short bit lines. This avoids using the longer bitlines to access the regular bank unless a conflict exists in the sacrificial bank, in which a new write operation does not match the bank of the previous write operation. Such conflicts occur infrequently because, according to locality of reference principles, most write operations match the bank of the previous write operation.

[00023] Referring to FIGURE 2, the memory 200 includes a pre-decoder 202 coupled to a decoder 204, which is coupled to a group of substantially identical regular banks 206. Input/output drivers (I/O drivers) 208 are coupled to the pre-decoder 202. The I/O drivers 208 are also coupled to the banks 206 via long bitlines 210 that extend from the I/O drivers 208 through all of the regular banks 206. A group of word lines 212 is replicated in each of the banks 206. The memory 200 also includes an additional bank 214 that is substantially identical to a regular bank 206.
Memory read and write operations, which are addressed to one of the regular banks, are deliberately redirected to the sacrificial bank with the short bit lines. This avoids using longer bitlines to access the regular bank unless a conflict exits in the sacrificial bank in which a new write operation does not match the bank of the previous write operation. Such conflicts occurs infrequently because, according to locality of reference principals, most write operations match the bank of the previous write operation. [00023] Referring to FIGURE 2, the memory 200 includes a pre-decoder 202 coupled to a decoder 204 which is coupled to a group of substantially identical regular banks 206. Input/output drivers (I/O drivers) 208 are coupled to the pre-decoder 202. The I/O drivers 208 are also coupled to the banks 206 via long bitlines 210 that extend from the I/O drivers 208 through all of the regular banks 206. A group of word lines 212 is replicated in each of the banks 206. The memory 200 also includes an additional bank 214 that is substantially identical to a regular bank 206. The group of word lines212 is also replicated in the additional bank 214, the replicated word lines 212' will be referred to as "sacrificial word lines.". The additional bank 214 is referred to as a "sacrificial bank" because it involves the sacrifice of some additional space and a small amount of leakage power to save a large amount of dynamic power. [00024] The sacrificial bank 214 is coupled to the I/O drivers 208 and is not coupled to the long bitlines 210. Instead, a different set of bit lines 216 that are physically much shorter than the long bitlines 210 are coupled between the I/O drivers 208 and the sacrificial bank 214. Tracker logic circuitry 220 is coupled to the pre- decoder 202, the regular banks 206 and the sacrificial bank 214. The tracker logic circuitry 220 is coupled to a bank of tracker bits 222 which can store a tracker bit for each regular bank address (Bank/WL). For each word line 212, only one tracker bit may be set at a time to indicate the bank address for the previous write operation on that word line 212. [00025] When a program trace for a memory address in the group of banks is received at the I/O drivers 208, the memory address is decoded from the trace by the pre-decoder 202 to generate a bank identifier and a word line identifier (WL) in which the Bank/WL combination correspond to the memory location being accessed. Memory read and write operations may access either the sacrificial bank 214 or one of the regular banks 206 depending on the state of a tracker bit for the Bank/WL address of the memory operation. A multiplexer 218 selects one of the shorter bit lines 216 when the sacrificial bank 214 is being accessed or a corresponding one of the longer bit lines 210 when one of the regular banks 206 is being accessed. Because the regular banks 206 and the sacrificial bank 214 share the same address decoding by the pre-decoder 202, memory access to a sacrificial bank may be performed on the same memory access cycle as a regular bank access. [00026] The amount of power that can be saved by accessing the sacrificial bank 214 instead of one of the regular bank 206 depends on the difference in length between the short bit lines 216 and the longer bit lines 210 which depends on the number of regular banks 206 in a group for each sacrificial bank 214. 
For example, if the group includes four regular banks 206 as shown in FIGURE 2, the shorter bitlines are 1/4 of the length of the longer bit lines, so memory operations accessing the sacrificial bank 214 use only about 1/4 of the power that would be used for a memory operation that accesses one of the regular banks 206. [00027] Aspects of the present disclosure provide a method of using a sacrificial bank 214 to substantially reduce the dynamic power consumption of a memory 200. The method includes redirecting memory operations to the sacrificial bank 214 rather than to the regular bank 206 indicated by the pre-decoded address of the memory operation. The regular banks 206 are accessed only when a memory operation addresses a different regular bank 206 than the previous memory operation for the same word line. The method takes advantage of the principle of locality of reference, which recognizes that a vast majority of memory operations are directed to the same bank as a previous operation for the same word line. [00028] According to aspects of the present disclosure, traffic to the sacrificial bank may be substantially increased or maximized to reduce power consumption. Program traces are tracked and address decoding is reconfigured so that either the sacrificial bank 214 or a regular bank 206 is accessed depending on the tracking information. Redirecting memory operations to the sacrificial bank saves energy by limiting regular bank access, limiting the global bit line switching associated with regular bank access, and limiting the use of longer word lines 212. [00029] A method for managing memory traffic according to aspects of the present disclosure by tracking program traces is described with reference to FIGURE 3. The method 300 starts in block 302. A global reset may be performed in block 304 in which an array of tracker bits is cleared. The global reset may occur upon start up or at other times according to control of higher level program policies, for example. [00030] When a trace comes into the memory block, the method determines whether the trace is a write operation in block 306 or a read operation in block 308. The trace includes an encoded memory address for the data being written or read. If the trace is a read operation, the encoded memory address of the trace is pre-decoded in block 310. The pre-decoding converts the encoded memory address to generate identification of a bank and a word line where data is to be read from. [00031] Upon identifying the bank and word line of the incoming trace, a set of tracker bits is read in block 312 to determine if the bank and word line (Bank/WL) combination is already represented in the sacrificial bank. The tracker bits store the previous cycle's address (Bank/WL) and are compared with the incoming Bank/WL in block 314. If the incoming Bank/WL matches a Bank/WL read from the tracker bits, then a bank hit (BNK Hit) is indicated. The bank hit means that the incoming Bank/WL is already represented in the sacrificial bank. [00032] If the comparison in block 314 results in a bank hit, then the data may be read from the sacrificial bank in block 316, thereby saving energy. If the comparison in block 314 does not result in a bank hit, then the data to be read is not stored in the sacrificial bank and therefore is retrieved from one of the regular banks in block 318, i.e., using the longer bit line. [00033] If the trace is a write operation, the encoded memory address of the trace is pre-decoded in block 320.
The pre-decoding converts the encoded memory address to generate identification of a bank and a word line where data is intended to be written. [00034] Upon identifying the bank and word line of the incoming trace, a set of tracker bits is read in block 322 to determine if the bank and word line (Bank/WL) combination is already represented in the sacrificial bank. [00035] In block 324, the method determines whether the tracker bits are in an unset state to determine whether the incoming trace is the very first cycle after a global reset. If the tracker bits are in an unset state, which indicates that nothing was written and nothing was read in a previous cycle, then the sacrificial bank is free to be written. [00036] The tracker bits are updated in block 326 to indicate data will be written for the Bank/WL to the sacrificial bank for the incoming write operation. In block 328, data is written to the sacrificial bank at the location indicated by the word line of the incoming trace address. The tracker bits store the bank identifier portion of the Bank/WL decoded from the incoming address to record which regular bank the data would have been written to if it had not been diverted to the sacrificial bank. In other words, each word line in the sacrificial bank substitutes for a word line having the same identifier in one of the regular banks. The word line decoder for the sacrificial bank is the same as the word line decoder of the regular banks, so the sacrificial bank does not distinguish which bank's data it is storing. [00037] Because the sacrificial bank is a same-size replication of any one of the regular banks, the tracker bits only store the bank identifying portion of the Bank/WL (address). The word line portion in the sacrificial bank is the same as the word line portion in the regular bank in accordance with the incoming trace address. If the sacrificial bank is implemented with only four regular banks as shown in FIGURE 2, for example, only four bits are used to track the bank for each word line. [00038] If it is determined in block 324 that the tracker bits are not unset, then the tracker bits are compared with the incoming trace's bank identifier in block 330. If the incoming trace's bank identifier matches the bank identified by the tracker bits for the incoming trace's word line, then a bank hit (BNK Hit) is indicated. The bank hit means that the incoming Bank/WL is already represented in the sacrificial bank. In other words, the most recent operation involving the word line of the incoming trace also involved the same bank, i.e., the same address. Therefore, the incoming data was intended to overwrite the data in the sacrificial bank. In block 332, the data of the incoming trace can overwrite the previous information in the sacrificial bank Bank/WL location and the tracker bits can remain unchanged. [00039] If the incoming trace's bank identifier does not match the bank identified by the tracker bits for the incoming trace's word line, then a bank hit (BNK Hit) is not indicated. This means that there is some data already stored in the word line of the sacrificial bank to which the incoming trace is directed, but the stored information was originally addressed to a different bank than the one to which the incoming trace is directed. Because the new data and the old data in the sacrificial bank correspond to two different banks, the incoming data was not intended to overwrite the data stored in the sacrificial bank.
In this case, in block 334, the old data is first moved, i.e., read from the sacrificial bank and written to the regular bank to which it was originally directed as indicated by the tracker bits. Then the tracker bits are updated in block 336 to identify the bank to which the incoming data is addressed. In block 338, the incoming data can then be safely written to the incoming word line in the sacrificial bank. [00040] These operations performed in a single memory cycle are worth the few extra steps because of the significant power savings that result from leveraging locality of reference. Because of locality of reference, there is a very high probability that subsequent memory operations for a word line will be directed to the same bank as the previous operation for that word line. For example, the trace is likely to almost immediately go back and read what it has just written. Thus, the energy spent setting and checking tracker bits and periodically swapping out data to a regular bank is offset by increasing/maximizing traffic to the sacrificial bank. This substantially reduces the active energy that would be spent activating long bit lines in the regular banks for every memory access operation. [00041] FIGURE 4 illustrates the use of a tracker bit for each word line to indicate the regular bank to which the data stored in the sacrificial bank was originally addressed, and shows the movement of data between the sacrificial bank and the regular bank using the process diagrammed in FIGURE 3. [00042] In a first state 402, after a global reset, no tracker bits are set, no data is stored in the sacrificial bank (SB) and no data is stored in any of the regular banks (B0, B1, B2, B3). [00043] In a second state 404, after a write operation addressed to regular bank B0, WL0, data for the write operation is stored in WL0 of the sacrificial bank rather than in regular bank B0. No data is stored in any of the regular banks. The tracker bit for WL0 is set to indicate B0 as the bank to which the data in WL0 of the sacrificial bank was addressed. [00044] In a third state 406, after a write operation to B0, WL1, data for the write operation is stored in WL1 of the sacrificial bank rather than in regular bank B0. The tracker bit for WL1 is set to indicate B0 as the regular bank to which the data in WL1 of the sacrificial bank was addressed. [00045] In a fourth state 408, after a write operation to B1, WL2, data for the write operation is stored in WL2 of the sacrificial bank rather than in regular bank B1. The tracker bit for WL2 is set to indicate B1 as the regular bank to which the data in WL2 of the sacrificial bank was addressed. [00046] In a fifth state 410, after a write operation to B3, WL3, data for the write operation is stored in WL3 of the sacrificial bank rather than in regular bank B3. The tracker bit for WL3 is set to indicate B3 as the regular bank to which the data in WL3 of the sacrificial bank was addressed. [00047] In a sixth state 412, a write operation is addressed to B2, WL1, which is a different bank than the bank address of the previous data stored in WL1 of the sacrificial bank. In other words, the incoming data to WL1 was not intended to overwrite the data previously stored in WL1 of the sacrificial bank. In this case, the previously stored data is read from WL1 of the sacrificial bank and written to WL1 of regular bank B0 to which it was originally addressed as determined by reading the tracker bit for WL1.
The new data is then stored in WL1 of the sacrificial bank and the tracker bit for WL1 is moved to indicate that the data now stored in WL1 of the sacrificial bank was addressed to regular bank B2. [00048] In a seventh state 414, after another write operation to B1, WL2, data for the write operation is stored in WL2 of the sacrificial bank rather than in regular bank B1. Because the tracker bit for WL2 was already set to indicate B1 as the regular bank to which the previous data in WL2 of the sacrificial bank had been addressed, the new data for WL2 overwrites the previous data for WL2 in the sacrificial bank. In this case there is no need to preserve the previous data stored in WL2 of the sacrificial bank, because the new data was intended to overwrite the previous data for the same address, B1, WL2. [00049] In an eighth state 416, a write operation is addressed to B3, WL2, which is a different bank than the bank address of the previous data stored in WL2 of the sacrificial bank. The incoming data to WL2 was not intended to overwrite the data previously stored in WL2 of the sacrificial bank. In this case, the previously stored data is read from WL2 of the sacrificial bank and written to WL2 of regular bank B1 to which it was originally addressed as determined by reading the tracker bit for WL2. The new data is then stored in WL2 of the sacrificial bank and the tracker bit for WL2 is moved to indicate that the data now stored in WL2 of the sacrificial bank was addressed to regular bank B3. [00050] In a ninth state 418, a read operation is addressed to B0, WL0. Because the tracker bit for WL0 indicates that the data stored in WL0 of the sacrificial bank was addressed to the same regular bank, i.e., B0, the data is read from WL0 of the sacrificial bank. [00051] In a tenth state 420, a read operation is addressed to B0, WL1. Because the tracker bit for WL1 indicates that the data stored in WL1 of the sacrificial bank was addressed to B2, which does not match the address of the read operation, data is read from WL1 of the regular bank B0 rather than from the sacrificial bank. [00052] A memory apparatus for implementing a sacrificial bank according to aspects of the present disclosure is described with reference to the schematic circuit diagram shown in FIGURE 5. The memory apparatus includes address lines 501 coupled to an address flip flop 502, control lines 503 coupled to a control flip flop 504, and data lines 505 connected to a data flip flop 506; these flip flops capture information from an incoming trace. A chip select line 507 is coupled to a chip select flip flop 508. A chip select signal (CS) on the chip select line 507 enables operation of the memory apparatus when a memory operation is directed to the address space of the memory apparatus. [00053] Information into the memory apparatus on the address lines 501, control lines 503, and data lines 505 is captured by flip flops 502, 504, 506 upon a rising edge of a clock signal. An address predecoder 510 is coupled to the address flip flop 502. The address predecoder 510 pre-decodes address information to generate a bank identifier (B) 512 and a word line identifier (WL) 514. A tracker bank 516 is coupled to the address predecoder 510 to receive and store the predecoded bank identifier (B). A tracker bit flip flop 518 is coupled to the tracker bank 516.
Tracker logic circuitry coupled to the input side of the tracker bank 516 and to the clock input of the tracker bit flip flop 518 receives the word line identifier (WL) from the address predecoder 510. The tracker logic circuitry is configured to provide a read tracker bit (RTB) signal to the tracker bank 516. The RTB signal causes the tracker bank 516 to output previously stored tracker bits (Q) to the tracker bit flip flop 518. The previously stored tracker bits identify the bank that was addressed upon the previous write operation for the word line addressed in the incoming trace. The tracker logic circuitry is also configured to provide a clocking signal to the tracker bit flip flop 518 after providing the RTB signal to the tracker bank 516. The clocking signal causes the tracker bit flip flop 518 to output the previously stored tracker bits for the word line addressed in the incoming trace. [00054] The tracker logic circuitry is also coupled to the control flip flop 504. A control signal from the control flip flop indicates whether the incoming trace is a read operation (R) or a write operation (W). If the control signal indicates the incoming trace is a write operation (W), the tracker logic circuitry provides an update tracker bit (UTB) signal to the tracker bank 516 after providing the clocking signal. The UTB signal causes the tracker bank 516 to receive the bank identifier (B) and to store tracker bits corresponding to the bank identifier (B) of the incoming trace. [00055] Bank match circuitry compares the tracker bits stored in the tracker bit flip flop 518 with the bank identifier (B) of the incoming trace from the address predecoder 510. The bank match circuitry is configured to generate a bank match indicator (M) if the tracker bits stored in the tracker bit flip flop 518 match the bank identifier (B) of the incoming trace. Additional logic circuitry generates a set indicator (S) if the tracker bits stored in the tracker bit flip flop 518 are set. [00056] Combinational logic circuitry coupled to a sacrificial bank 522 and a regular bank 524 receives the set indicator (S), the match indicator (M) and a control signal (R) or (W) from the control flip flop 504. The combinational logic circuitry causes data (WD) from the data flip flop 506 to be written to the sacrificial bank 522 if a write operation is indicated (W) and the set indicator (S) is low indicating a not-set tracker bit. This circumstance corresponds to a Write SB operation as shown in block 328 in FIGURE 3. [00057] The combinational logic circuitry also causes data (WD) to be written to the sacrificial bank 522 if a write operation is indicated (W), the set indicator (S) indicates a set tracker bit, and the match indicator (M) indicates a match between the bank identifier (B) and tracker bit for the incoming word line. This circumstance corresponds to an overwrite sacrificial bank operation as shown in block 332 of FIGURE 3. [00058] The combinational logic circuitry also causes data (QSB) to be read from the sacrificial bank 522 and written to the regular bank 524 if a write operation is indicated (W), the set indicator (S) indicates a set tracker bit, and the match indicator (M) is low indicating no match between the bank identifier (B) and tracker bit for the incoming word line. In this circumstance, the combinational logic circuitry generates a signal for a write back related read operation (WB_R) and a sacrificial bank read signal (R_SB).
In response to the sacrificial bank read signal (R_SB), the sacrificial bank 522 provides the data to be read (QSB) along with a ready signal (SB_RDY) to the combinational logic circuitry. In response to the SB_RDY signal and the WB_R signal, the combinational logic circuitry generates a write back write signal (WB_W). In response to the WB_W signal, the regular bank 524 stores the data (QSB) from the sacrificial bank 522. This circumstance corresponds to the write-back related read operation and regular bank write operation shown in block 334 of FIGURE 3. [00059] The combinational logic circuitry also responds to the WB_W signal by generating a sacrificial bank write signal (W_SB). This causes data from the data flip flop 506 to be written to the sacrificial bank 522 when data from the sacrificial bank (QSB) is written to the regular bank 524. This corresponds to the write operation shown in block 338 of FIGURE 3. [00060] The combinational logic circuitry also causes data to be read from the sacrificial bank 522 if a read operation is indicated (R), the set indicator (S) indicates a tracker bit is set, and the match indicator (M) indicates a match between the bank identifier (B) and tracker bit for the incoming word line. In this circumstance, the combinational logic circuitry generates the sacrificial bank read signal (R_SB). In response to the sacrificial bank read signal (R_SB), the sacrificial bank 522 provides the data to be read (QSB) along with a sacrificial bank ready signal (SB_RDY) to the combinational logic circuitry. This circumstance corresponds to a read sacrificial bank operation as shown in block 316 of FIGURE 3. [00061] The combinational logic circuitry causes data to be read from the regular bank 524 if a read operation is indicated (R), the set indicator (S) indicates a tracker bit is set, and the match indicator (M) indicates no match between the bank identifier (B) and tracker bit for the incoming word line. In this circumstance, the combinational logic circuitry generates a regular bank read signal (RB_R). In response to the regular bank read signal (RB_R), the regular bank 524 provides data (QRB) to be read along with a regular bank ready signal (RB_RDY). This circumstance corresponds to a read regular bank operation as shown in block 318 of FIGURE 3. [00062] Multiplexor circuitry 526 is coupled to the sacrificial bank 522 and to the regular bank 524. The multiplexor circuitry 526 receives the data (QSB) from the sacrificial bank 522 and the data (QRB) from the regular bank 524. The multiplexor circuitry 526 is coupled to the combinational logic circuitry and configured to receive the read sacrificial bank (R_SB) signal as its control input. In response to the R_SB signal, the multiplexor circuitry 526 outputs the data (QSB) from the sacrificial bank 522. When the R_SB signal is not provided, the multiplexor circuitry 526 outputs the data (QRB) from the regular bank 524. [00063] An output flip flop 528 is coupled to the output of the multiplexor circuitry 526. A delay element 527 is coupled to the combinational logic circuitry and to the output flip flop 528. In response to the R_SB signal, the delay element provides a control signal to the output flip flop 528 which causes the output flip flop 528 to store the multiplexor output after allowing the multiplexor output to settle. Data stored in the output flip flop provides the output of the memory apparatus (Dout).
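The access policy enforced by this circuitry can be summarized in a short behavioral model. The sketch below is illustrative only: the class name, method names, and the use of Python dictionaries in place of SRAM arrays and flip flops are assumptions made for the example, not details of the disclosed apparatus.

    class SacrificialBankMemory:
        """Behavioral model of the tracker-directed access policy of
        FIGURES 3 and 5; not a description of the hardware itself."""

        def __init__(self, num_banks=4):
            self.regular = [dict() for _ in range(num_banks)]  # banks B0..Bn-1
            self.sacrificial = {}  # one bank-sized spare on short bit lines (SB)
            self.tracker = {}      # WL -> bank id of the data currently in SB

        def write(self, bank, wl, data):
            owner = self.tracker.get(wl)     # tracker bits / set indicator (S)
            if owner is None:                # tracker unset: SB free to write
                self.tracker[wl] = bank      # update tracker bits (block 326)
                self.sacrificial[wl] = data  # write SB (block 328)
            elif owner == bank:              # bank hit (M): same Bank/WL
                self.sacrificial[wl] = data  # overwrite SB in place (block 332)
            else:                            # miss: write back, then write SB
                self.regular[owner][wl] = self.sacrificial[wl]  # block 334
                self.tracker[wl] = bank                         # block 336
                self.sacrificial[wl] = data                     # block 338

        def read(self, bank, wl):
            if self.tracker.get(wl) == bank:   # bank hit: use short bit lines
                return self.sacrificial[wl]    # read SB (block 316)
            return self.regular[bank].get(wl)  # read RB over the long bit
                                               # lines (block 318)

As in paragraph [00037], only the bank identifying portion of the address is recorded per word line; the word line portion is implicit in the position of the tracker entry.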
[00064] A method for reducing energy consumed in memory access operations according to an aspect of the present disclosure is described with reference to FIGURE 6. In block 602, data along with a memory write address are received. The memory address identifies a selected memory bank for the data within a block of regular memory banks and identifies a selected word line within the memory bank. At block 604, it is determined whether the selected memory bank matches a previous bank addressed in a previous write operation for the selected word line. At block 606, the data is written to a sacrificial bank. [00065] Aspects of the present disclosure include an apparatus for reducing energy consumed during memory access operations. The apparatus includes means for receiving data along with a memory write address identifying a selected memory bank for the data within a block of regular memory banks and identifying a selected word line of a plurality of word lines within the memory bank. The apparatus also includes means for determining whether the selected memory bank matches a previous bank addressed in a previous write operation for the selected word line and means for writing the data to a sacrificial bank. Referring to FIGURE 5, the means for receiving data along with a memory write address may be address lines 501 and data lines 505, for example. The means for identifying a selected word line may be the address predecoder 510. The means for determining whether the selected memory bank matches a previous bank addressed in a previous write operation for the selected word line may be the bank match circuitry shown in FIGURE 5, for example. [00066] According to aspects of the present disclosure, the means for writing the data may include means for writing previous data from the sacrificial bank to the previous bank when the selected bank does not match the previous bank, and subsequently performing the low power write to the sacrificial bank. The means for writing the data may include the combinational logic circuitry shown in FIGURE 5, for example. The apparatus may also include means for updating tracker bits for the selected word line to identify the selected memory bank as a target memory bank for the data. The means for updating tracker bits may include the tracker logic circuitry shown in FIGURE 5, for example. In another configuration, the aforementioned means may be any module or any apparatus configured to perform the functions recited by the aforementioned means. Although specific means have been set forth, it will be appreciated by those skilled in the art that not all of the disclosed means are required to practice the disclosed configurations. Moreover, certain well known means have not been described, to maintain focus on the disclosure. [00067] FIGURE 7 shows an exemplary wireless communication system 700 in which an aspect of the disclosure may be advantageously employed. For purposes of illustration, FIGURE 7 shows three remote units 720, 730, and 750 and two base stations 740. It will be recognized that typical wireless communication systems may have many more remote units and base stations. Remote units 720, 730, and 750 include improved reconfigurable decoding 725A, 725B, and 725C, respectively, which are aspects of the disclosure as discussed further below. FIGURE 7 shows forward link signals 780 from the base stations 740 to the remote units 720, 730, and 750 and reverse link signals 790 from the remote units 720, 730, and 750 to the base stations 740.
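As a consistency check, driving the behavioral model sketched above through the write and read sequence of FIGURE 4 reproduces the ten states described in paragraphs [00042] through [00051]; the data values "d0" through "d6" are invented placeholders.

    m = SacrificialBankMemory()
    m.write(0, 0, "d0")  # state 404: SB[WL0] = d0, tracker WL0 -> B0
    m.write(0, 1, "d1")  # state 406: SB[WL1] = d1, tracker WL1 -> B0
    m.write(1, 2, "d2")  # state 408: SB[WL2] = d2, tracker WL2 -> B1
    m.write(3, 3, "d3")  # state 410: SB[WL3] = d3, tracker WL3 -> B3
    m.write(2, 1, "d4")  # state 412: d1 written back to B0, tracker WL1 -> B2
    m.write(1, 2, "d5")  # state 414: bank hit, SB[WL2] overwritten in place
    m.write(3, 2, "d6")  # state 416: d5 written back to B1, tracker WL2 -> B3
    print(m.read(0, 0))  # state 418: bank hit, prints d0 from the sacrificial bank
    print(m.read(0, 1))  # state 420: miss (tracker says B2), prints d1 from B0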
[00068] In FIGURE 7, remote unit 720 is shown as a mobile telephone, remote unit 730 is shown as a portable computer, and remote unit 750 is shown as a fixed location remote unit in a wireless local loop system. For example, the remote units may be cell phones, hand-held personal communication systems (PCS) units, portable data units such as personal data assistants, or fixed location data units such as meter reading equipment. Although FIGURE 7 illustrates remote units according to the teachings of the disclosure, the disclosure is not limited to these exemplary illustrated units. The disclosure may be suitably employed in any device which includes improved reconfigurable decoding. [00069] FIGURE 8 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a semiconductor component, such as the memory disclosed above. A design workstation 800 includes a hard disk 801 containing operating system software, support files, and design software such as Cadence or OrCAD. The design workstation 800 also includes a display 802 to facilitate design of a circuit 810 or a semiconductor component 812 such as discussed above. A storage medium 804 is provided for tangibly storing the circuit design 810 or the semiconductor component 812. The circuit design 810 or the semiconductor component 812 may be stored on the storage medium 804 in a file format such as GDSII or GERBER. The storage medium 804 may be a CD-ROM, DVD, hard disk, flash memory, or other appropriate device. Furthermore, the design workstation 800 includes a drive apparatus 803 for accepting input from or writing output to the storage medium 804. [00070] Data recorded on the storage medium 804 may specify logic circuit configurations, pattern data for photolithography masks, or mask pattern data for serial write tools such as electron beam lithography. The data may further include logic verification data such as timing diagrams or net circuits associated with logic simulations. Providing data on the storage medium 804 facilitates the design of the circuit design 810 or the semiconductor component 812 by decreasing the number of processes for designing semiconductor wafers. [00071] Although specific circuitry has been set forth, it will be appreciated by those skilled in the art that not all of the disclosed circuitry is required to practice the disclosure. Moreover, certain well known circuits have not been described, to maintain focus on the disclosure. Similarly, although the description refers to logical "0" and logical "1" in certain locations, one skilled in the art appreciates that the logical values can be switched, with the remainder of the circuit adjusted accordingly, without affecting operation of the present disclosure. [00072] Although the present disclosure and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the disclosure as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification.
As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present disclosure. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps. [00073] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
Methods and apparatus for continuation passing in a virtual machine (VM). A method is provided for operating a virtual machine to provide continuation passing in a wireless device. The virtual machine comprises a stack memory. The method comprises encountering a context-creating trigger, constructing, in response to the trigger, a continuation block that comprises a stack fragment derived from the stack memory, encountering an evaluation instruction, and storing the stack fragment from the continuation block on the stack memory in response to the evaluation instruction.
CLAIMS 1. A method for operating a virtual machine to provide continuation passing in a wireless device, wherein the virtual machine comprises a stack memory, and the method comprises: encountering a context-creating trigger; constructing a continuation block in response to the trigger, wherein the continuation block comprises a stack fragment derived from the stack memory; encountering an evaluation instruction; and storing the stack fragment from the continuation block on the stack memory in response to the evaluation instruction. 2. The method of claim 1, wherein the context-creating trigger comprises a selected program instruction. 3. The method of claim 1, wherein the context-creating trigger comprises a program marker associated with a program instruction. 4. The method of claim 1, further comprising storing the continuation block in a memory. 5. The method of claim 1, further comprising jumping to selected program code to evaluate the continuation. 6. A virtual machine for use in a wireless device having an embedded processor, the virtual machine comprising: a stack memory that comprises logic to store and retrieve information; logic to encounter a context-creating trigger; logic to construct a continuation block in response to the trigger, wherein the continuation block comprises a stack fragment derived from the stack memory; logic to encounter an evaluation instruction; and logic to store the stack fragment from the continuation block on the stack memory in response to the evaluation instruction. 7. The virtual machine of claim 6, wherein the context-creating trigger comprises a context evaluation instruction. 8. The virtual machine of claim 6, wherein the context-creating trigger comprises a program marker associated with a program instruction. 9. The virtual machine of claim 6, further comprising logic to store the continuation block in a memory. 10. The virtual machine of claim 6, further comprising logic to jump to selected program code to evaluate the continuation. 11. A computer readable media comprising program instructions that when executed by processing logic provides a VM that performs continuation passing, wherein the virtual machine comprises a stack memory, and the computer readable media comprises: program instructions for encountering a context-creating trigger; program instructions for constructing a continuation block in response to the trigger, wherein the continuation block comprises a stack fragment derived from the stack memory; program instructions for encountering an evaluation instruction; and program instructions for storing the stack fragment from the continuation block on the stack memory in response to the evaluation instruction. 12. A virtual machine for use in a wireless device having an embedded processor, the virtual machine comprising: means for providing a stack memory; means for encountering a context-creating trigger; means for constructing a continuation block in response to the trigger, wherein the continuation block comprises a stack fragment derived from the stack memory; means for encountering an evaluation instruction; and means for storing the stack fragment from the continuation block on the stack memory in response to the evaluation instruction. 13. The virtual machine of claim 12, further comprising means for storing the continuation block in a memory. 14. The virtual machine of claim 12, further comprising means for jumping to selected program code to evaluate the continuation. 15.
A wireless device having an embedded processor, the wireless device comprising: a stack memory that comprises logic to store and retrieve information; and a virtual machine that operates to perform continuation passing, the virtual machine comprising: logic to encounter a context-creating trigger; logic to construct a continuation block in response to the trigger, wherein the continuation block comprises a stack fragment derived from the stack memory; logic to encounter an evaluation instruction; and logic to store the stack fragment from the continuation block on the stack memory in response to the evaluation instruction.
METHOD AND APPARATUS FOR CONTINUATION-PASSING IN A VIRTUAL MACHINE BACKGROUND I. FIELD [0001] The present invention relates generally to computing systems, and more particularly, to methods and apparatus for providing continuation passing in a virtual machine to provide efficient program flow and memory resource utilization. II. DESCRIPTION OF THE RELATED ART [0002] Advances in technology have resulted in smaller and more powerful wireless devices. For example, there currently exist a variety of portable wireless telephones, personal digital assistants (PDAs), and paging devices that are small, lightweight, and can be easily carried by users. Typically, these devices include an embedded controller with limited memory resources. For example, the amount of available memory may be limited by the small size of the device. [0003] As wireless devices have become more widespread, there is an increasing need for these devices to handle larger amounts of data and to execute programs that are more sophisticated. For example, users are demanding remote access to interactive programs, such as gaming programs, that require wireless devices to provide fast and efficient communication with remote service providers using a wireless network. In addition, users would like to have remote access to specific programs that are typically accessible on larger home or office systems. [0004] In order to meet these demands, device and service providers have the choice of developing their own technology or trying to make use of existing technology. Unfortunately, developing new technology is both time consuming and expensive, and therefore, an unattractive alternative. To use existing technology, such as existing software, compatibility problems must be overcome. For example, software developed for one processing system may not be compatible with another processing system. Thus, compatibility problems need to be addressed when porting software from one or more systems to run on a wireless device. [0005] One technique used to overcome compatibility problems involves the use of a virtual machine (VM). A typical VM comprises software executing on a host system that allows the host system to run non-native program instructions written for some other system (i.e., a remote system). For example, the non-native program instructions written to execute on the remote system are interpreted by the VM software running on the host system. Thus, a VM running on a wireless device allows the device to run software written for various different systems, thereby allowing device developers and service providers to use existing software to provide added functionality to wireless device users. [0006] Unfortunately, implementing a VM on a resource limited wireless device raises other problems. For example, most VM implementations employ a stack for temporary storage that may be used as a scratch pad to store constants, variables, arguments to called procedures, or other information needed for program execution. During bytecode execution, it is possible to encounter a dynamic function that creates an activation record or context, which may include stack pointers, current program counter (PC), code pointers, etc. A closure or block is a bytecode fragment that refers to elements on the stack in the current context (from where the block was created). Blocks can be returned from contexts to be used elsewhere in application code. An example is a sort block passed to a sorting function.
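Such a block can be rendered in a high-level language as a closure that captures a value from its creating context. The following Python sketch is purely illustrative (the names make_comparator, pivot, and block are invented for the example); note that it works in Python only because captured variables outlive the creating call, which is precisely what a stack-based VM cannot assume.

    def make_comparator(pivot):
        # 'block' refers to 'pivot', an element of the creating context.
        # Returning it from make_comparator is the problematic case that the
        # continuation-passing mechanism described below is meant to support.
        def block(x):
            return abs(x - pivot)
        return block

    data = [9, 2, 7, 4]
    data.sort(key=make_comparator(5))  # sort by distance from the pivot value 5
    print(data)  # -> [4, 7, 2, 9]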
In order to execute the block at a later stage, the creating context cannot be released, i.e., the creating function cannot return. The block can only be passed to function calls made from the creating function. However, being able to return a parameterized block (i.e., a block that refers to data in the creating context) would be extremely useful. Some systems solve this problem by creating a completely new stack for each activation record. Since the maximum stack size for each activation record can be computed at compile-time, the stack size is bounded. While this technique seems to solve the problem, it penalizes every function call with stack creation and parameter copying, which is costly on systems with low processing power and limited memory, such as a wireless device. Other techniques allow the return of the creating activation record only if the block does not refer to any actual data in the creating activation record (i.e., the block is clean). This technique solves part of the problem, but does not allow the block to be parameterized with data that was available in the creating context. Therefore, what is needed is a VM for use in a resource-limited wireless device to provide continuation passing to allow a return of a parameterized block that refers to data in the creating context, thereby providing fast program execution while efficiently utilizing the available memory resources. SUMMARY [0009] In one or more embodiments, methods and apparatus are provided to allow a VM to perform continuation passing in a resource limited wireless device. For example, the wireless device may be a wireless telephone having an embedded processor and limited memory resources that execute program instructions to provide one embodiment of a VM. The VM allows the wireless device to execute non-native program instructions written for a different system. As a result, the wireless device is able to provide the device user with the functionality of the non-native program. In one embodiment, the VM performs continuation passing so that a block is created in response to encountering a context-creating trigger, such as a dynamic function call. The block behaves as an extended context that includes a copy of the current stack fragment of the current context. In addition, a parameter offset into the fragment is stored for the parameters included in the block. Upon block evaluation, the elements of the stored stack fragment are pushed back onto the stack, effectively reconstructing the context from which the block was created. The parameters passed to the block are stored in the fragment using a stored parameter index. The block can then execute with the full state of the creating context. By pushing the stack fragment back onto the stack, the creating context of the block is effectively re-instated. This enables parameterized blocks to be returned by a context. By copying the stack fragment only when a block is created, processing and memory overhead are minimized. In one embodiment, a method is provided for operating a virtual machine to provide continuation passing in a wireless device. The virtual machine comprises a stack memory.
The method comprises encountering a context-creating trigger, constructing, in response to the trigger, a continuation block that comprises a stack fragment derived from the stack memory, encountering an evaluation instruction, and storing the stack fragment from the continuation block on the stack memory in response to the evaluation instruction. [0013] In another embodiment, a virtual machine is provided for use in a wireless device having an embedded processor. The virtual machine comprises a stack memory that comprises logic to store and retrieve information. The virtual machine also comprises logic to encounter a context-creating trigger and logic to construct a continuation block in response to the trigger, wherein the continuation block comprises a stack fragment derived from the stack memory. The virtual machine also comprises logic to encounter an evaluation instruction and logic to store the stack fragment from the continuation block on the stack memory in response to the evaluation instruction. In another embodiment, a virtual machine is provided for use in a wireless device having an embedded processor. The virtual machine comprises means for providing a stack memory and means for encountering a context-creating trigger. The virtual machine also comprises means for constructing a continuation block in response to the trigger, wherein the continuation block comprises a stack fragment derived from the stack memory. The virtual machine also comprises means for encountering an evaluation instruction and means for storing the stack fragment from the continuation block on the stack memory in response to the evaluation instruction. In another embodiment, a computer readable media is provided that comprises program instructions that when executed by processing logic provides a virtual machine that performs continuation passing. The virtual machine comprises a stack memory, and the computer readable media comprises program instructions for encountering a context-creating trigger. The computer readable media also comprises program instructions for constructing a continuation block in response to the trigger, wherein the continuation block comprises a stack fragment derived from the stack memory. The computer readable media also comprises program instructions for encountering an evaluation instruction and program instructions for storing the stack fragment from the continuation block on the stack memory in response to the evaluation instruction. In another embodiment, a wireless device having an embedded processor is provided. The wireless device comprises a stack memory that comprises logic to store and retrieve information. The wireless device also comprises a virtual machine that operates to perform continuation passing. The virtual machine comprises logic to encounter a context-creating trigger and logic to construct a continuation block in response to the trigger, wherein the continuation block comprises a stack fragment derived from the stack memory. The virtual machine also comprises logic to encounter an evaluation instruction and logic to store the stack fragment from the continuation block on the stack memory in response to the evaluation instruction. Other aspects, advantages, and features of the present invention will become apparent after review of the hereinafter set forth Brief Description of the Drawings, Detailed Description of the Invention, and the Claims.
BRIEF DESCRIPTION OF THE DRAWINGS [0018] The foregoing aspects and the attendant advantages of the embodiments described herein will become more readily apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings wherein: [0019] FIG. 1 illustrates a data network that includes a wireless device with limited memory resources suitable for implementing one embodiment of a VM to perform continuation passing; [0020] FIG. 2 shows a functional block diagram illustrating one embodiment of the wireless device of FIG. 1; [0021] FIG. 3 shows an illustration of memory resources used to create a continuation block to provide continuation passing; [0022] FIG. 4 shows an illustration of memory resources used to evaluate the continuation block of FIG. 3; [0023] FIG. 5 shows one embodiment of a method for providing continuation passing in a VM for use in a wireless device; and [0024] FIG. 6 illustrates a data network that includes portable computing devices with limited resources that are suitable to implement one or more embodiments of a VM to perform continuation passing. DETAILED DESCRIPTION [0025] The following detailed description describes one or more embodiments of methods and apparatus for providing a VM that performs continuation passing in a wireless device. In one or more embodiments, the wireless device has limited resources (i.e., limited memory capacity), and continuation passing provided by the VM is achieved by performing the following steps, as modeled in the sketch after this list. 1. Encountering a context-creating trigger (i.e., a continuation creating instruction such as a push continuation instruction). 2. Constructing a continuation block that includes a stack fragment plus other information included in an activation record (i.e., code pointer, etc.). 3. Pushing the continuation block onto a stack memory. 4. Encountering a continuation evaluation instruction. 5. Retrieving the continuation block. 6. Pushing the stack fragment back onto the stack. 7. Evaluating the continuation by jumping to program code associated with a code pointer stored in the continuation block. [0033] FIG. 1 illustrates a data network 100 that includes a wireless device 102 with limited memory resources suitable for implementing one embodiment of a VM that performs continuation passing. In the system 100, the wireless device 102 communicates with a network server 104 over a wireless network 108 using wireless communication channels 106. In one embodiment, the device 102 comprises a wireless telephone that may transmit and/or receive data and/or voice information over the wireless network 108. However, the device 102 may comprise any other type of wireless device. The device 102 operates to request various information from the server 104, including applications 110, 112 and/or system services. For example, the system services include a VM 114 that provides one embodiment of continuation passing. [0034] In one embodiment, the device 102 also couples directly to a local system, such as a local workstation 116, via a direct link 120. This direct link 120 allows the device 102 to exchange data and/or programs with the local workstation 116. In one embodiment, the local workstation 116 downloads a VM 118 to the device 102 using the direct link 120. The VM 118 may be the same as the VM 114, and both operate to provide one or more embodiments of continuation passing.
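The seven steps enumerated above can be modeled in software. The sketch below is a behavioral illustration only, with invented names (ContinuationBlock, VM, create_continuation, evaluate_continuation) and a Python list standing in for the stack memory; it is not the disclosed implementation.

    class ContinuationBlock:
        """Step 2: a stack fragment plus activation-record information."""
        def __init__(self, fragment, code_pointer, param_offset):
            self.fragment = fragment          # copy of the creating context's stack
            self.code_pointer = code_pointer  # where evaluation jumps to (step 7)
            self.param_offset = param_offset  # index of the parameters in fragment

    class VM:
        def __init__(self):
            self.stack = []       # models the stack memory
            self.stack_base = 0   # current stack base

        def create_continuation(self, code_pointer, param_offset):
            # Steps 1-3: on a context-creating trigger, copy the current stack
            # fragment into a continuation block and push the block on the stack.
            fragment = list(self.stack[self.stack_base:])
            block = ContinuationBlock(fragment, code_pointer, param_offset)
            self.stack.append(block)
            return block

        def evaluate_continuation(self, block, params=()):
            # Steps 4-6: retrieve the block and push its stack fragment back
            # onto the stack, determining a new stack base and effectively
            # re-instating the creating context; parameters passed to the block
            # are stored in the fragment at the stored parameter offset.
            self.stack_base = len(self.stack)
            self.stack.extend(block.fragment)
            for i, value in enumerate(params):
                self.stack[self.stack_base + block.param_offset + i] = value
            # Step 7: "jump" to the stored code pointer, modeled here as a call.
            block.code_pointer(self)

Modeling the code pointer as a Python callable stands in for the jump performed at step 7; in a bytecode VM it would instead update the program counter.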
In one embodiment, the device 102 comprises an embedded system that includes an embedded processor, memory and various interfaces, so that the device 102 may store, load and execute the applications 110 and/or the VM 114 downloaded from the server 104. The applications 110 and VM 114 may interact with a runtime environment executing on the device 102 that is used to simplify operation of the device, such as by providing generalized calls for device specific resources. One such runtime environment is the Binary Runtime Environment for Wireless (BREW) software platform developed by QUALCOMM, Inc., of San Diego, California. The VM 114 may be downloaded from the server 104 to the device 102 in order to facilitate execution by the device 102 of software developed for different computing systems. For example, application 112 may include non-native program instructions written for a target device or system that is different from the device 102. The VM 114 operates to simulate the environment of the target system so that target applications (like application 112) that are designed to execute on the target system may also execute on the device 102. For example, in one embodiment, the VM 114 operates to provide a JAVA system environment so that JAVA applications may be downloaded and executed on the device 102. In one or more embodiments, the VM 114 includes methods and apparatus for providing continuation passing during the execution of these non-native instructions. The VM 118 that is downloaded to the device 102 from the local workstation 116 may be identical to the VM 114, and therefore, also operates to provide one or more embodiments of continuation passing. In one embodiment, the VM 118 is provided on a computer readable media, such as a floppy disk, and is loaded onto the system 116 for transmission to the device 102. In another embodiment, the VM may be stored on a computer readable memory device, such as a memory card (not shown), and plugged directly into the device 102, so that the VM may execute on the device 102. Thus, the device 102 may receive the VM in a wireless transmission, a wired transmission, or by retrieving it directly from a memory device. Because the device 102 is portable and has limited memory resources, it is especially well suited to run a VM with one or more embodiments of continuation passing. For example, because the device 102 has limited memory capacity, a VM with continuation passing operates to efficiently utilize the available memory and provide fast and efficient program interpretation and execution of non-native program instructions. FIG. 2 shows a functional block diagram illustrating one embodiment of the device 102 that includes a VM that operates to perform continuation passing. The device 102 comprises instruction processing logic 202 that is coupled to an internal data bus 204. Also coupled to the internal data bus 204 are native instruction memory 206, interpreted instruction memory 208, heap memory 210, user interface 212 and input/output (I/O) interface 214. During operation of the device 102, the processing logic 202 executes program instructions stored in the native instruction memory 206. In one or more embodiments, the processing logic 202 comprises a CPU, gate array, hardware logic, software or any combination of hardware and software. Thus, the processing logic 202 generally comprises logic to execute machine-readable instructions stored in the native instruction memory 206.
The native instruction memory 206 comprises RAM, ROM, FLASH, EEROM, or any other suitable type of memory, or any combination thereof. In one embodiment, the native instruction memory 206 is located internal to the device 102, and in another embodiment, the native instruction memory 206 comprises a portable memory card or memory device that may be selectively attached to the device 102, and thereby couple to the internal bus 204. Thus, the native instruction memory 206 may comprise virtually any type of memory that is capable of storing instructions that may be executed by the processing logic 202. The user interface 212 receives user input, for example, from a keypad, pointing device, touch pad, or other input mechanisms, such as audio circuitry to receive and process voice commands. The user interface 212 may also provide outputs to various output mechanisms, such as a display, LEDs, audio speaker or other types of visual or audible indicators. Thus, the user interface 212 comprises hardware and/or software in any combination to allow the device 102 to receive user input and output visual information or audible indicators to the user. The I/O interface 214 operates to transmit and receive information between the device 102 and external devices, systems, and/or networks. For example, in one embodiment, the I/O interface 214 comprises a radio transceiver circuit (not shown) that operates to transmit and receive information over a wireless data network using, for example, communication link 106. For example, the transceiver comprises circuitry that modulates information received from the processing logic 202 and converts the modulated information into high frequency signals suitable for wireless transmission. Similarly, the transceiver also comprises circuitry to convert received high frequency communication signals into signals suitable for demodulation and subsequent processing by the processing logic 202. In another embodiment, the I/O interface 214 comprises a transceiver that operates to transmit and receive information over a hardwired communication link, such as a telephone line, to communicate with a remote system on a public data network, such as the Internet. In still another embodiment, the I/O interface 214 comprises circuitry that operates to communicate with local devices, such as the local workstation 116 using the link 120. The I/O interface 214 may also include circuitry to communicate with a printer or other local computer or device, such as floppy disk or memory card. Thus, the I/O interface 214 may comprise any type of hardware, software, or combination thereof to allow the device 102 to communicate with other local or remotely located devices or systems. [0046] During operation of the device 102, native program instructions stored in the native instruction memory 206 are executed by the processing logic 202. In one embodiment, execution of the native program instructions by the processing logic 202 causes a VM 218 to be generated. The VM 218 operates to interpret non-native program instructions that are stored in the interpreted instruction memory 208. For example, applications having non-native program instructions, like application 112, may be downloaded to the device 102 via the wireless network and stored in the interpreted instruction memory 208. To assist with instruction execution, the VM 218 utilizes a stack memory 216 to store program data or instructions on a temporary basis.
For example, the VM 218 may store constants, variables, program addresses, pointers, instructions or other information items on the stack memory 216. In another embodiment, the VM 218 may store information on a temporary basis in the heap memory 210. The heap memory comprises virtually any type of memory suitable for the storage and retrieval of information by the processing logic 202. The stack memory 216 may be dedicated for use by the VM 218, or may also be shared with the processing logic 202 during instruction execution. In one embodiment, the processing logic 202 retrieves native instructions from the native instruction memory 206 via the internal bus 204. Execution of the native program instructions causes the VM 218 to be generated. The VM 218 then retrieves and executes the non-native instructions stored in the interpreted instruction memory 208 via the internal bus 204. Thus, the device 102 operates to generate the VM 218, which allows the device 102 to run non-native program code to provide selected functionality to the user. For example, the device user may wish to download and run a JAVA application that is incompatible with either the hardware or software configuration of the device 102. The VM 218 operates to provide a JAVA system environment, thereby allowing JAVA applications to run on the device 102. Furthermore, the VM 218 operates to provide one or more embodiments of continuation passing to provide fast interpretation and execution of the non-native instructions and efficient utilization of the limited memory resources of the device 102. In one embodiment, native program instructions to generate the VM 218 are downloaded into the native instruction memory 206 of the device 102 from a remote server via the I/O interface 214. For example, referring to FIG. 1, the remote server 104 downloads the native program instructions to the device 102 via the wireless network 108. In another embodiment, the local workstation 116 downloads the native program instructions to the device 102 via the link 120. In a similar manner, non-native program instructions may also be downloaded to the device 102. [0050] It should be noted that the configuration of the device 102 is just one configuration suitable for generating a VM that provides continuation passing. It is also possible to generate a VM using other device configurations within the scope of the present invention. Furthermore, although the described VM is shown implemented in the wireless device 102, it is also possible to implement the VM in virtually any type of device having an embedded processor and limited memory resources. FIG. 3 shows a detailed illustration of the memory resources in the wireless device 102 that are used by the VM 218 to create a continuation block to provide continuation passing. The memory resources comprise the stack memory 216, which is shown at 302 before the continuation block is created, and at 304, after the continuation block is created. In one embodiment, the continuation block may be created and/or stored in the heap memory 210. A stack fragment 306 is defined that contains information relevant to a current continuation-creating context. FIG. 4 shows a detailed illustration of the memory resources in the wireless device 102 that are used by the VM 218 to evaluate the continuation block shown in FIG. 3. The stack memory 216 is shown at 402 before the continuation block is evaluated, and at 404, after the continuation block is evaluated.
FIG. 5 shows one embodiment of a method 500 for operating a VM to provide continuation passing for use in a resource-limited device, such as the wireless device 102. For the purpose of clarity, the description of the method 500 will reference the memory resources shown in FIGS. 3 and 4, and the architectures shown in FIGS. 1 and 2. Furthermore, it will be assumed that native program instructions for generating a VM that provides continuation passing are stored in the native instruction memory 206. It will further be assumed that an application, for instance application 112, comprising non-native program instructions is stored in the interpreted instruction memory 208. The non-native program instructions were created for use with another system and are not directly compatible with the device 102. However, the non-native program instructions provide functionality that is desirable to the user of the device 102. Thus, it is of benefit to the user of the device 102 to generate a VM to interpret and execute the non-native program instructions to achieve the desired functionality. Furthermore, the VM operates to provide one or more embodiments of continuation passing to efficiently utilize the limited memory resources of the device 102.

At block 502, native program instructions for generating a VM are stored into the native memory. For example, the VM may be downloaded into the device 102 from the wireless network 108 via the channel 106 and interface 214. In another embodiment, the VM may be downloaded into the device 102 from the local workstation 116 via the link 120 and the interface 214. In another embodiment, the native instruction memory may comprise a memory device that is plugged into the device 102, such as a memory card, and the VM is stored on that memory device. In still another embodiment, the VM is stored into the memory 206 during manufacture of the device 102.

At block 504, non-native program instructions that represent an application designed to run on a different system are stored into the interpreted instruction memory. For example, the non-native instructions may be downloaded from the network server 104 into the interpreted instruction memory 208 of the device 102 via the wireless network 108. In another embodiment, the non-native instructions are downloaded from the local workstation 116, or included on a memory device that is plugged into the device 102.

At block 506, the VM is activated. For example, the processing logic 202 retrieves the native instructions from the native instruction memory 206 via the internal bus 204 and begins to execute those instructions. By executing the native instructions, the processing logic operates to generate the VM 218.

At block 508, the VM begins interpreting the non-native instructions in the interpreted instruction memory. For example, the VM 218 retrieves non-native instructions from the memory 208 via the internal bus 204. The VM interprets and executes these instructions. In one embodiment, the VM uses the stack memory 216 or the heap memory 210 as temporary storage areas. In another embodiment, the stack memory 216 is a stack memory dedicated to the VM that may be different from any stack memory used by the processing logic 202.

At block 510, the VM encounters a context-creating trigger during the interpretation of the non-native instructions. For example, the context-creating trigger may occur when the VM encounters one or more selected non-native instructions to interpret.
In another embodiment, the trigger may occur when the VM encounters a program marker that is associated with the non-native instructions. When the trigger is encountered, the stack 216 appears as that shown at 302.

At block 512, in response to the context-creating trigger, the VM operates to create a continuation block, as shown in FIG. 3. The continuation block includes the stack fragment 306 and a block header that includes information included in an activation record (i.e., code pointer, etc.). The stack fragment 306 represents a copy of a portion of the stack 216. For example, the information within the stack fragment 306 is a portion of the stack 216 that extends from the current stack base 308 to the current stack top 310. The stack fragment 306 is then copied into the continuation block. In one embodiment, the continuation block is then stored on the stack 216, as shown at 304. Thus, the first portion of the method 500 operates to generate the continuation block that includes the stack fragment 306 in response to encountering the context-creating trigger. The remaining portion of the method 500 describes how the VM operates to evaluate the continuation.

At block 514, the VM encounters a continuation evaluation instruction, which may occur sometime after the continuation block is created. At block 516, the VM retrieves the continuation block, for example, from the stack 216, as shown in FIG. 4. At block 518, the stack fragment 306 stored within the continuation block is pushed back onto the stack. For example, referring to FIG. 4, the stack fragment 306 is pushed onto the stack 216, as shown by the stack illustration at 404. In performing this step, a new stack top and a new stack base are determined. At block 520, the continuation is evaluated by jumping to program code associated with a code pointer stored in the continuation block. For example, the block header associated with the continuation block contains a code pointer that is jumped to when the continuation is to be evaluated. Thus, the method 500 provides for continuation passing in a VM implemented in a memory-limited wireless device. It should be noted that it is also possible to extend the above-described process to provide nested continuation passing within the scope of the present invention.

The method 500 is intended to be illustrative and not limiting of the operation of the various embodiments of continuation passing described herein. For example, it would be obvious to one with skill in the art to make minor changes, additions or deletions to any of the described method steps. Furthermore, the described method steps may be combined, rearranged or reordered without deviating from the scope of the described embodiments.
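As a concrete illustration of blocks 512 through 520, the following C sketch captures a stack fragment into a continuation block and later pushes it back and returns the saved code pointer for the interpreter to jump to. The structure layout, the flat-array stack model, and all identifiers are hypothetical rather than taken from the figures.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical continuation block: a block header holding activation record
 * information (here, just the code pointer) followed by a copy of the stack
 * fragment. */
typedef struct {
    void     *code_pointer;   /* from the activation record (block header) */
    size_t    fragment_len;   /* number of stack words captured */
    uintptr_t fragment[];     /* copy of the stack from base to top */
} continuation_block_t;

/* Block 512: copy the region from the current stack base to the current
 * stack top into a freshly allocated continuation block. */
continuation_block_t *create_continuation(const uintptr_t *stack_base,
                                          size_t depth, void *code_pointer) {
    continuation_block_t *c = malloc(sizeof *c + depth * sizeof(uintptr_t));
    if (c == NULL) return NULL;
    c->code_pointer = code_pointer;
    c->fragment_len = depth;
    memcpy(c->fragment, stack_base, depth * sizeof(uintptr_t));
    return c;
}

/* Blocks 518-520: push the fragment back onto the stack (the caller must
 * ensure capacity), establish a new stack top, and return the code pointer
 * to jump to. */
void *evaluate_continuation(const continuation_block_t *c,
                            uintptr_t *stack, size_t *stack_top) {
    memcpy(&stack[*stack_top], c->fragment,
           c->fragment_len * sizeof(uintptr_t));
    *stack_top += c->fragment_len;
    return c->code_pointer;
}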
FIG. 6 illustrates a data network 600 that includes wireless devices with limited memory resources that are suitable to implement one or more embodiments of a VM to perform continuation passing. The wireless devices comprise a wireless telephone 602, a personal digital assistant (PDA) 604, a pager/email device 606 and a tablet computer 608. Because of their small size and light weight, the devices utilize embedded processors and have limited memory resources. The devices (602, 604, 606, and 608) include circuitry to communicate over a wireless data network 614 with a wireless network server 612 using wireless communication channels 610. The wireless communication channels 610 may comprise, for example, satellite communication channels, terrestrial communication channels, or any other type of radio frequency (RF) or electromagnetic communication channels. The wireless data network 614 may be any suitable network capable of operating with the selected communication channels 610.

Additionally, the wireless devices (602, 604, 606, and 608) include circuitry to communicate over wired communication channels 616 with a workstation 618. The workstation 618 includes logic to communicate with a network server 620 over wired communication channels 622 using a wired data network 624. Furthermore, the workstation 618 includes logic to communicate with the wireless network server 612 using a wireless communication channel 626 and the wireless network 614.

During operation of the network 600, the wireless devices (602, 604, 606, and 608) include one or more embodiments of a VM constructed to perform continuation passing. For example, the VM may be incorporated into a wireless device when the respective device is manufactured. In another embodiment, the VM may be stored on a memory card (not shown) that plugs into a wireless device, thereby allowing the wireless device to retrieve instructions and operate the VM from the memory card. Thus, the program instructions that comprise the VM are stored on a computer readable media. Virtually any type of computer readable media may be used to store the program instructions that, when executed by a wireless device, generate one or more embodiments of a VM that performs continuation passing as described herein.

In one or more embodiments included in the present invention, methods and apparatus provide a VM that performs continuation passing for use in a resource-limited device. Accordingly, while one or more embodiments of the methods and apparatus have been illustrated and described herein, it will be appreciated that various changes can be made to the embodiments without departing from their spirit or essential characteristics. Therefore, the disclosures and descriptions herein are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
A processor supports a processing mode in which the address size is greater than 32 bits and the operand size may be 32 or 64 bits. The address size may be nominally indicated as 64 bits, although various embodiments of the processor may implement any address size which exceeds 32 bits, up to and including 64 bits, in the processing mode. The processing mode may be established by placing an enable indication in a control register into an enabled state and by setting a first operating mode indication and a second operating mode indication in a segment descriptor to predefined states. Other combinations of the first operating mode indication and the second operating mode indication may be used to provide compatibility modes for 32 bit and 16 bit processing compatible with the x86 processor architecture (with the enable indication remaining in the enabled state).
1. A processor comprising: a segment register configured to store a segment selector identifying a segment descriptor including a first operating mode indication, a second operating mode indication, and one or more bits identifying a segment described by said segment descriptor as a code segment; a control register configured to store an enable indication, wherein said processor is configured to establish a default address size responsive to said enable indication, said first operating mode indication, and said second operating mode indication.

2. The processor as recited in claim 1 wherein said default address size is a first address size if said enable indication is in an enabled state and said first operating mode indication is in a first state, and wherein said default address size is a second address size if said enable indication is in said enabled state, said first operating mode indication is in a second state, and said second operating mode indication is in said first state.

3. The processor as recited in claim 2 wherein said second address size is one of a plurality of address sizes available if said enable indication is in said enabled state and said first operating mode indication is in said second state, and wherein said one of said plurality of address sizes is selected in response to a state of said second operating mode indication.

4. The processor as recited in claim 3 wherein one of said plurality of address sizes is a 32 bit address size.

5. The processor as recited in claim 3 wherein one of said plurality of address sizes is a 16 bit address size.

6. The processor as recited in claim 2 wherein said first address size is greater than 32 bits.

7. The processor as recited in claim 6 wherein said default address size applies to virtual addresses generated by said processor.

8. The processor as recited in claim 7 wherein a virtual address is generated according to a segmentation mechanism employed by said processor.

9. The processor as recited in claim 7 wherein said default address size further applies to physical addresses generated by said processor.

10. The processor as recited in claim 1 wherein, if said enable indication is in a disabled state, said first operating mode indication is undefined and said processor is configured to establish said default address size responsive to said second operating mode indication.

11. A method comprising: establishing a default address size in a processor in response to an enable indication in a control register within said processor, a first operating mode indication in a segment descriptor, and a second operating mode indication in said segment descriptor, said segment descriptor further including one or more bits identifying a segment described by said segment descriptor as a code segment; and generating addresses of said default address size.

12. The method as recited in claim 11 wherein said establishing comprises establishing a first address size responsive to said enable indication being in an enabled state and said first operating mode indication being in a first state, and wherein said first address size is greater than 32 bits.

13. The method as recited in claim 12 wherein said default address size applies to a virtual address.

14. The method as recited in claim 13 wherein said default address size applies to a physical address.
15. The method as recited in claim 12 wherein said establishing further comprises establishing a second address size responsive to said enable indication being in an enabled state, said first operating mode indication being in a second state, and said second operating mode indication being in said first state, and wherein said second address size is 32 bits.

16. The method as recited in claim 12 wherein said establishing further comprises establishing one of a plurality of address sizes if said enable indication is in said enabled state and said first operating mode indication is in a second state, and wherein said one of said plurality of address sizes is selected in response to a state of said second operating mode indication.
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention is related to the field of processors and, more particularly, to address and operand sizes in processors.

2. Description of the Related Art

The x86 architecture (also known as the IA-32 architecture) has enjoyed widespread acceptance and success in the marketplace. Accordingly, it is advantageous to design processors according to the x86 architecture. Such processors may benefit from the large body of software written to the x86 architecture (since such processors may execute the software, and thus computer systems employing the processors may enjoy increased acceptance in the market due to the large amount of available software).

As computer systems have continued to evolve, 64 bit address size (and sometimes operand size) has become desirable. A larger address size allows for programs having a larger memory footprint (the amount of memory occupied by the instructions in the program and the data operated upon by the program) to operate within the memory space. A larger operand size allows for operating upon larger operands, or for more precision in operands. More powerful applications and/or operating systems may be possible using 64 bit address and/or operand sizes.

Unfortunately, the x86 architecture is limited to a maximum 32 bit operand size and 32 bit address size. The operand size refers to the number of bits operated upon by the processor (e.g. the number of bits in a source or destination operand). The address size refers to the number of bits in an address generated by the processor. Thus, processors employing the x86 architecture may not serve the needs of applications which may benefit from 64 bit address or operand sizes.

SUMMARY OF THE INVENTION

The problems outlined above are in large part solved by a processor as described herein. The processor supports a first processing mode in which the address size is greater than 32 bits and the operand size may be 32 or 64 bits. The address size may be nominally indicated as 64 bits, although various embodiments of the processor may implement any address size which exceeds 32 bits, up to and including 64 bits, in the first processing mode. The first processing mode may be established by placing an enable indication in a control register into an enabled state and by setting a first operating mode indication and a second operating mode indication in a segment descriptor to predefined states. Other combinations of the first operating mode indication and the second operating mode indication may be used to provide compatibility modes for 32 bit and 16 bit processing compatible with the x86 processor architecture (with the enable indication remaining in the enabled state). Advantageously, 64 bit processing may be provided while providing compatibility with the x86 processor architecture, and hence supporting existing code written to the x86 processor architecture.

Furthermore, by providing compatibility modes for 32 bit and 16 bit processing while the enable indication for the first processing mode remains in the enabled state in the control register, software compatibility may be simplified. For example, an operating system coded to take advantage of the first processing mode may still launch applications written to 32 or 16 bit modes. The processor may operate in the first processing mode while executing operating system code.
While executing application code, the processor may operate in 32 or 16 bit mode (as directed by the first and second operating mode indications in the corresponding segment descriptors). However, when a call to the operating system is performed or when an exception or interrupt causes operating system code to be executed, the enable indication may indicate to the processor that the operating system code may be in the first processing mode and thus may allow for the first processing mode to be established at the switch (based on the first and second operating mode indications in the segment descriptor corresponding to the operating system).

Additionally, the processor may support the 16 and 32 bit x86 operating modes if the enable indication in the control register is in the disabled state. The first operating mode indication may be undefined if the enable indication is disabled, and the second operating mode indication may determine if the processor's operating mode is 16 or 32 bit. Such modes may be used, for example, if the processor supports a segment descriptor which defines the first operating mode indication in a different fashion.

Broadly speaking, a processor is contemplated. The processor comprises a segment register and a control register. The segment register is configured to store a segment selector identifying a segment descriptor including a first operating mode indication and a second operating mode indication. The control register is configured to store an enable indication. Responsive to the enable indication, the first operating mode indication, and the second operating mode indication, the processor is configured to establish an operating mode.

Additionally, a processor is contemplated which comprises a segment register and a control register configured to store an enable indication. The segment register is configured to store a segment selector and information from a segment descriptor. The segment selector includes an index into a segment descriptor table stored in a memory to which the processor is coupled, and the segment descriptor is stored in the segment descriptor table in an entry indicated by the index. The processor is configured to read the segment descriptor from the segment descriptor table responsive to the segment selector, and the segment descriptor includes an operating mode indication. The processor is configured to operate in an operating mode in which virtual addresses are greater than 32 bits responsive to the enable indication being in an enabled state and the operating mode indication being in a first state.

Moreover, a method is contemplated. An operating mode is established in a processor in response to an enable indication in a control register within the processor, a first operating mode indication in a segment descriptor, and a second operating mode indication in the segment descriptor. Operands are fetched and addresses are generated in response to the operating mode.

BRIEF DESCRIPTION OF THE DRAWINGS

Other objects and advantages of the invention will become apparent upon reading the following detailed description and upon reference to the accompanying drawings in which:

FIG. 1 is a block diagram of one embodiment of a processor.

FIG. 2 is a block diagram of one embodiment of a segment descriptor for 32/64 mode.

FIG. 3 is a block diagram of one embodiment of a segment descriptor for compatibility mode.

FIG. 4 is a block diagram of operation in compatibility mode and in legacy mode according to one embodiment of the processor shown in FIG. 1.
FIG. 5 is a table illustrating one embodiment of operating modes as a function of segment descriptor and control register values.

FIG. 6 is a table illustrating one embodiment of the use of instruction prefixes to override default operating modes.

FIG. 7 is a block diagram of one embodiment of a register.

FIG. 8 is a diagram illustrating one embodiment of a global descriptor table and a local descriptor table.

FIG. 9 is a block diagram of one embodiment of a 32/64 call gate descriptor.

FIG. 10 is a block diagram of an instruction format.

FIG. 11 is a block diagram of one embodiment of a computer system including the processor shown in FIG. 1.

FIG. 12 is a block diagram of another embodiment of a computer system including the processor shown in FIG. 1.

FIG. 13 is a flowchart illustrating one embodiment of a method.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Turning now to FIG. 1, a block diagram illustrating one embodiment of a processor 10 is shown. Other embodiments are possible and contemplated. In the embodiment of FIG. 1, processor 10 includes an instruction cache 12, an execution core 14, a data cache 16, an external interface unit 18, a memory management unit (MMU) 20, and a register file 22. In the illustrated embodiment, MMU 20 includes a set of segment registers 24, a first control register 26, a second control register 28, a local descriptor table register (LDTR) 30, and a global descriptor table register (GDTR) 32. Instruction cache 12 is coupled to external interface unit 18, execution core 14, and MMU 20. Execution core 14 is further coupled to MMU 20, register file 22, and data cache 16. Data cache 16 is further coupled to MMU 20 and external interface unit 18. External interface unit 18 is further coupled to MMU 20 and to an external interface.

Generally speaking, processor 10 employs a processor architecture compatible with the x86 architecture and including additional architectural features to support 64 bit processing. Processor 10 is configured to establish an operating mode in response to information stored in a code segment descriptor corresponding to the currently executing code and further in response to one or more enable indications stored in one or more control registers. As used herein, an "operating mode" specifies default values for various programmably selectable processor attributes. For example, the operating mode may specify a default operand size and a default address size. The default operand size specifies the number of bits in an operand of an instruction, unless an instruction's encoding overrides the default. The default address size specifies the number of bits in an address of a memory operand of an instruction, unless an instruction's encoding overrides the default. The default address size specifies the size of at least the virtual address of memory operands, and may also specify the size of the physical address.
Alternatively, the size of the physical address may be independent of the default address size and may instead be dependent on the LME bit described below (e.g. the physical address may be 32 bits if the LME bit is clear and an implementation-dependent size greater than 32 bits and less than 64 bits if the LME bit is set) or on another control bit (e.g. the physical address extension bit, or PAE bit, in another control register). As used herein, a "virtual address" is an address generated prior to translation through an address translation mechanism (e.g. a paging mechanism) to a "physical address", which is the address actually used to access a memory. Additionally, as used herein, a "segment descriptor" is a data structure created by software and used by the processor to define access control and status for a segment of memory. A "segment descriptor table" is a table in memory having multiple entries, each entry capable of storing a segment descriptor.

In the illustrated embodiment, MMU 20 generates an operating mode and conveys the operating mode to execution core 14. Execution core 14 executes instructions using the operating mode. More particularly, execution core 14 fetches operands having the default operand size from register file 22 or memory (through data cache 16, if the memory operands are cacheable and hit therein, or through external interface unit 18 if the memory operands are noncacheable or miss data cache 16) unless a particular instruction's encoding overrides the default operand size, in which case the overriding operand size is used. Similarly, execution core 14 generates addresses of memory operands, wherein the addresses have the default address size unless a particular instruction's encoding overrides the default address size, in which case the overriding address size is used. In other embodiments, the information used to generate the operating mode may be shadowed locally in the portions of processor 10 which use the operating mode (e.g. execution core 14), and the operating mode may be determined from the local shadow copies.

As mentioned above, MMU 20 generates the operating mode responsive to a code segment descriptor corresponding to the code being executed and further responsive to one or more values in control registers. Information from the code segment descriptor is stored in one of the segment registers 24 (a register referred to as CS, or code segment). Additionally, control register 26 stores an enable indication (LME) which is used to enable an operating mode in which the default address size is greater than 32 bits ("32/64 mode") as well as certain compatibility modes for the 32 bit and 16 bit operating modes. The default operand size may be 32 bits in 32/64 mode, but instructions may override the default 32 bit operand size with a 64 bit operand size when desired. If the LME indication is in an enabled state, then 32/64 mode may be used in addition to 32 bit and 16 bit modes. If the LME indication is in a disabled state, then 32/64 mode is disabled. In one embodiment, the default address size in 32/64 mode may be implementation-dependent but may be any value up to and including 64 bits. Furthermore, the size of the virtual address may differ in a given implementation from the size of the physical address in that implementation.

It is noted that enable indications may be described herein as bits with the enabled state being the set state of the bit and the disabled state being the cleared state of the bit.
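To make the notion of an operating mode concrete, the following C sketch models the two defaults named above and how a per-instruction override takes precedence; the type and field names are illustrative only, not taken from the patent.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical encoding of the operating mode the MMU conveys to the
 * execution core: just the two defaults the text names. */
typedef struct {
    uint8_t default_operand_size; /* in bits: 16 or 32 (64 only via override) */
    uint8_t default_address_size; /* in bits: 16, 32, or (nominally) 64 */
} operating_mode_t;

/* Effective size for one instruction: the default applies unless the
 * instruction's encoding overrides it. */
static uint8_t effective_size(uint8_t default_size, bool has_override,
                              uint8_t override_size) {
    return has_override ? override_size : default_size;
}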
However, other encodings are possible, including encodings in which multiple bits are used and encodings in which the enabled state is the clear state and the disabled state is the set state. Accordingly, the remainder of this description may refer to the LME indication in control register 26 as the LME bit, with the enabled state being set and the disabled state being clear. However, other encodings of the LME indication are contemplated, as set forth above.

Segment registers 24 store information from the segment descriptors currently being used by the code being executed by processor 10. As mentioned above, CS is one of segment registers 24 and specifies the code segment of memory. The code segment stores the code being executed. Other segment registers may define various data segments (e.g. a stack data segment defined by the SS segment register, and up to four data segments defined by the DS, ES, FS, and GS segment registers). FIG. 1 illustrates the contents of an exemplary segment register 24A, including a selector field 24AA and a descriptor field 24AB. Selector field 24AA is loaded with a segment selector to activate a particular segment in response to certain segment load instructions executed by execution core 14. The segment selector identifies the segment descriptor in a segment descriptor table in memory. More particularly, processor 10 may employ two segment descriptor tables: a local descriptor table and a global descriptor table. The base address of the local descriptor table is stored in the LDTR 30. Similarly, the base address of the global descriptor table is stored in GDTR 32. A bit within the segment selector (the table indicator bit) selects the descriptor table, and the remainder of the segment selector is used as an index into the selected table. When an instruction loads a segment selector into one of segment registers 24, MMU 20 reads the corresponding segment descriptor from the selected segment descriptor table and stores information from the segment descriptor into the segment descriptor field (e.g. segment descriptor field 24AB for segment register 24A). The information stored in the segment descriptor field may comprise any suitable subset of the segment descriptor, including all of the segment descriptor, if desired. Additionally, other information derived from the segment descriptor or other sources may be stored in the segment descriptor field, if desired. For example, an embodiment may decode the operating mode indications from the code segment descriptor and store the decoded value rather than the original values of the operating mode indications. If an instruction causes CS to be loaded with a segment selector, the code segment may change and thus the operating mode of processor 10 may change. Segment descriptor tables are described in more detail below.

In one embodiment, only the CS segment register is used in 32/64 mode. The data segment registers are ignored. In 16 and 32 bit modes, the code segment and data segments may be active. Furthermore, a second enable indication (PE) in control register 28 may affect the operation of MMU 20. The PE enable indication may be used to enable protected mode, in which segmentation and/or paging address translation mechanisms may be used. If the PE enable indication is in the disabled state, segmentation and paging mechanisms are disabled and processor 10 is in "real mode" (in which addresses generated by execution core 14 are physical addresses).
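The selector decoding just described might be sketched as follows. The exact bit positions (requestor privilege level in bits 1:0, table indicator in bit 2, index in the remaining bits) are assumed from the conventional x86 selector layout, which the text does not spell out.

#include <stdint.h>

typedef struct {
    uint16_t index; /* entry number within the selected descriptor table */
    uint8_t  ti;    /* table indicator: 0 = global (GDTR 32), 1 = local (LDTR 30) */
    uint8_t  rpl;   /* requestor privilege level, used in privilege checks */
} selector_fields_t;

static selector_fields_t decode_selector(uint16_t selector) {
    selector_fields_t f;
    f.rpl   = selector & 0x3;        /* assumed position */
    f.ti    = (selector >> 2) & 0x1; /* assumed position */
    f.index = selector >> 3;
    return f;
}

/* Each table entry is 8 bytes (FIGS. 2 and 3), so the byte offset of the
 * descriptor within the selected table is index * 8. */
static uint64_t descriptor_offset(selector_fields_t f) {
    return (uint64_t)f.index * 8;
}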
Similar to the LME indication, the PE indication may be a bit in which the enabled state is the bit being set and the disabled state is the bit being clear. However, other embodiments are contemplated as described above.

It is noted that MMU 20 may employ additional hardware mechanisms, as desired. For example, MMU 20 may include paging hardware to implement paging address translation from virtual addresses to physical addresses. The paging hardware may include a translation lookaside buffer (TLB) to store page translations.

It is noted that control registers 26 and 28 may be implemented as architected control registers (e.g. control register 26 may be CR4 and control register 28 may be CR0). Alternatively, one or both of the control registers may be implemented as model specific registers to allow for other uses of the architected control registers without interfering with 32/64 mode.

Generally, instruction cache 12 is a high speed cache memory for storing instruction bytes. Execution core 14 fetches instructions from instruction cache 12 for execution. Instruction cache 12 may employ any suitable cache organization, including direct-mapped, set associative, and fully associative configurations. If an instruction fetch misses in instruction cache 12, instruction cache 12 may communicate with external interface unit 18 to fill the missing cache line into instruction cache 12. Additionally, instruction cache 12 may communicate with MMU 20 to receive physical address translations for virtual addresses fetched from instruction cache 12.

Execution core 14 executes the instructions fetched from instruction cache 12. Execution core 14 fetches register operands from register file 22 and updates destination registers in register file 22. The size of the register operands is controlled by the operating mode and any overrides of the operating mode for a particular instruction. Similarly, execution core 14 fetches memory operands from data cache 16 and updates destination memory locations in data cache 16, subject to the cacheability of the memory operands and hitting in data cache 16. The size of the memory operands is similarly controlled by the operating mode and any overrides of the operating mode for a particular instruction. Furthermore, the size of the addresses of the memory operands generated by execution core 14 is controlled by the operating mode and any overrides of the operating mode for a particular instruction.

Execution core 14 may employ any suitable construction. For example, execution core 14 may be a superpipelined core, a superscalar core, or a combination thereof. Execution core 14 may employ out of order speculative execution or in order execution, according to design choice.

Register file 22 may include 64 bit registers which may be accessed as 64 bit, 32 bit, 16 bit, or 8 bit registers as indicated by the operating mode of processor 10 and any overrides for a particular instruction. The register format for one embodiment is described below with respect to FIG. 7. The registers included in register file 22 may include the LEAX, LEBX, LECX, LEDX, LEDI, LESI, LESP, and LEBP registers. Register file 22 may further include the LEIP register. Alternatively, execution core 14 may employ a form of register renaming in which any register within register file 22 may be mapped to an architected register. The number of registers in register file 22 may be implementation dependent for such an embodiment.

Data cache 16 is a high speed cache memory configured to store data.
Data cache 16 may employ any suitable cache organization, including direct-mapped, set associative, and fully associative configurations. If a data fetch or update misses in data cache 16, data cache 16 may communicate with external interface unit 18 to fill the missing cache line into data cache 16. Additionally, if data cache 16 employs a writeback caching policy, updated cache lines which are being cast out of data cache 16 may be communicated to external interface unit 18 to be written back to memory. Data cache 16 may communicate with MMU 20 to receive physical address translations for virtual addresses presented to data cache 16.

External interface unit 18 communicates with portions of the system external to processor 10. External interface unit 18 may communicate cache lines for instruction cache 12 and data cache 16 as described above, and may communicate with MMU 20 as well. For example, external interface unit 18 may access the segment descriptor tables and/or paging tables on behalf of MMU 20.

It is noted that processor 10 may include an integrated level 2 (L2) cache, if desired. Furthermore, external interface unit 18 may be configured to communicate with a backside cache in addition to communicating with the system.

Turning now to FIG. 2, a block diagram of one embodiment of a code segment descriptor 40 for 32/64 mode is shown. Other embodiments are possible and contemplated. In the embodiment of FIG. 2, code segment descriptor 40 comprises 8 bytes with the most significant 4 bytes illustrated above the least significant 4 bytes. The most significant four bytes are stored at a numerically larger address than the least significant four bytes. The most significant bit of each group of four bytes is illustrated as bit 31 in FIG. 2 (and FIG. 3 below), and the least significant bit is illustrated as bit 0. Short vertical lines within the four bytes delimit each bit, and the long vertical lines delimit a bit but also delimit a field (both in FIG. 2 and in FIG. 3).

Unlike the 32 bit and 16 bit code segment descriptors illustrated in FIG. 3 below, code segment descriptor 40 does not include a base address or limit. Processor 10 employs a flat virtual address space for 32/64 mode (rather than the segmented linear address space employed in 32 bit and 16 bit modes). Accordingly, the portions of code segment descriptor 40 which would otherwise store the base address and limit are reserved in segment descriptor 40. It is noted that a virtual address provided through segmentation may also be referred to herein as a "linear address". The term "virtual address" encompasses any address which is translated through a translation mechanism to a physical address actually used to address memory, including linear addresses and other virtual addresses generated in non-segmented architectures.

Segment descriptor 40 includes a D bit 42, an L bit 44 (set to one for a 32/64 mode code segment), an available bit (AVL) 46, a present (P) bit 48, a descriptor privilege level (DPL) 50, and a type field 52. D bit 42 and L bit 44 are used to determine the operating mode of processor 10, as illustrated in FIG. 5 below. AVL bit 46 is available for use by system software (e.g. the operating system). P bit 48 is used to indicate whether or not the segment is present in memory. If P bit 48 is set, the segment is present and code may be fetched from the segment. If P bit 48 is clear, the segment is not present and an exception is generated to load the segment into memory (e.g.
from disk storage or through a network connection). The DPL indicates the privilege level of the segment. Processor 10 employs four privilege levels (encoded as 0 through 3 in the DPL field, with level 0 being the most privileged level). Certain instructions and processor resources (e.g. configuration and control registers) are only executable or accessible at the more privileged levels, and attempts to execute these instructions or access these resources at the lower privilege levels result in an exception. When information from code segment descriptor 40 is loaded into the CS segment register, the DPL becomes the current privilege level (CPL) of processor 10. Type field 52 encodes the type of segment. For code segments, the two most significant bits of type field 52 may be set (the most significant bit distinguishing a code or data segment from a system segment, and the second most significant bit distinguishing a code segment from a data segment), and the remaining bits may encode additional segment type information (e.g. execute only, execute and read, or execute and read only, conforming, and whether or not the code segment has been accessed).

It is noted that, while several indications in the code segment descriptor are described as bits, with set and clear values having defined meanings, other embodiments may employ the opposite encodings and may use multiple bits, as desired. Thus, for example, the D bit 42 and the L bit 44 may each be an example of an operating mode indication which may be one or more bits as desired, similar to the discussion of enable indications above.

Turning now to FIG. 3, a block diagram of one embodiment of a code segment descriptor 54 for 32 and 16 bit compatibility mode is shown. Other embodiments are possible and contemplated. As with the embodiment of FIG. 2, code segment descriptor 54 comprises 8 bytes with the most significant 4 bytes illustrated above the least significant 4 bytes.

Code segment descriptor 54 includes D bit 42, L bit 44, AVL bit 46, P bit 48, DPL 50, and type field 52 similar to the above description of code segment descriptor 40. Additionally, code segment descriptor 54 includes a base address field (reference numerals 56A, 56B, and 56C), a limit field (reference numerals 57A and 57B) and a G bit 58. The base address field stores a base address which is added to the logical fetch address (stored in the LEIP register) to form the linear address of an instruction, which may then optionally be translated to a physical address through a paging translation mechanism. The limit field stores a segment limit which defines the size of the segment. Attempts to access a byte at a logical address greater than the segment limit are disallowed and cause an exception. G bit 58 determines the scaling of the segment limit field. If G bit 58 is set, the limit is scaled to 4 Kbyte pages (e.g. 12 least significant zeros are appended to the limit in the limit field). If G bit 58 is clear, the limit is used as is.

It is noted that code segment descriptors for 32 and 16 bit modes when 32/64 mode is not enabled via the LME bit in control register 26 may be similar to code segment descriptor 54, except the L bit is reserved and defined to be zero. It is further noted that, in 32 and 16 bit modes (both compatibility mode with the LME bit set and modes with the LME bit clear) according to one embodiment, data segments are used as well.
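The scaling selected by G bit 58 reduces to a small computation, sketched below in C. The 20 bit width assumed for the limit field follows the conventional x86 descriptor layout rather than anything stated in the text.

#include <stdint.h>
#include <stdbool.h>

/* Sketch of the G bit 58 rule: if G is set, 12 least significant zeros are
 * appended to the limit from the limit field; if G is clear, the limit is
 * used as is. */
static uint32_t effective_limit(uint32_t limit_field, bool g_bit) {
    return g_bit ? (limit_field << 12) : limit_field;
}

/* An access at a logical address greater than the segment limit is
 * disallowed and causes an exception. */
static bool access_allowed(uint32_t logical_address,
                           uint32_t limit_field, bool g_bit) {
    return logical_address <= effective_limit(limit_field, g_bit);
}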
Data segment descriptors may be similar to code segment descriptor 54, except that the D bit 42 is defined to indicate the upper bound of the segment or to define the default stack size (for stack segments).

Turning next to FIG. 4, a diagram illustrating exemplary uses of the LME bit in control register 26 and the compatibility modes to allow for a high degree of flexibility in implementing the 32/64 mode and the 32 and 16 bit modes is shown. A box 60 illustrates exemplary operation when the LME bit is set, and a box 62 illustrates exemplary operation when the LME bit is clear.

As illustrated in box 60, the compatibility modes supported when the LME bit is set may allow for a 64 bit operating system (i.e. an operating system designed to take advantage of the virtual and physical address spaces in excess of 32 bits and/or data operands of 64 bits) to operate with a 32 bit application program (i.e. an application program written using 32 bit operand and address sizes). The code segment for the operating system may be defined by the 32/64 mode code segment descriptor 40 illustrated in FIG. 2, and thus the L bit may be set. Accordingly, the operating system may take advantage of the expanded virtual address space and physical address space for the operating system code and the data structures maintained by the operating system (including, e.g. the segment descriptor tables and the paging translation tables). The operating system may also use the 64 bit data type defined in 32/64 mode using instruction encodings which override the default 32 bit operand size. Furthermore, the operating system may launch a 32 bit application program by establishing one or more 32 bit compatibility mode segment descriptors (L bit cleared, D bit set, e.g. segment descriptor 54 shown in FIG. 3) in the segment descriptor table and branching into one of the compatibility mode segments. Similarly, the operating system may launch a 16 bit application program by establishing one or more 16 bit compatibility mode segment descriptors (L bit cleared, D bit cleared, e.g. segment descriptor 54 shown in FIG. 3) in the segment descriptor table and branching into one of the compatibility mode segments. Accordingly, a 64 bit operating system may retain the ability to execute existing 32 bit and 16 bit application programs in the compatibility mode. A particular application program may be ported to 32/64 mode if the expanded capabilities are desired for that program, or may remain 32 bit or 16 bit.

While processor 10 is executing the 32 bit application program, the operating mode of processor 10 is 32 bit. Thus, the application program may generally execute in the same fashion as it does in 32 bit mode with the LME bit clear (e.g. when the operating system is a 32 bit operating system as well). However, the application program may call an operating system service, experience an exception, or terminate. In each of these cases, processor 10 may return to executing operating system code (as illustrated by arrow 64 in FIG. 4). Since the operating system code operates in 32/64 mode, the address of the operating system service routine, exception handler, etc. may exceed 32 bits. Thus, processor 10 may need to generate an address greater than 32 bits prior to returning to the operating system code.
The LME bit provides processor 10 with an indication that the operating system may be operating in 32/64 mode even though the current operating mode is 32 bit, and thus processor 10 may provide the larger address space for operating system calls and exceptions.

In one embodiment, exceptions are handled using interrupt segment descriptors stored in an interrupt segment descriptor table. If the LME bit is set, the interrupt segment descriptors may be 16 byte entries which include a 64 bit address of the operating system routine which handles the exception. If the LME bit is clear, the interrupt segment descriptors may be eight byte entries which include a 32 bit address. Accordingly, processor 10 accesses the interrupt descriptor table responsive to the LME indication (i.e. reading a 16 byte entry if the LME bit is set and reading an eight byte entry if the LME bit is clear). Therefore, exceptions may be handled by the 64 bit operating system even though the application program is executing in 32 bit compatibility mode. Furthermore, processor 10 supports a 32 bit (or 16 bit) operating system if the LME bit is clear.

Similarly, the call mechanisms within processor 10 may operate in different fashions based on the state of the LME bit. Since the operating system typically executes at a higher privilege level than the application program, transfers from the application program to the operating system are carefully controlled to ensure that the application program is only able to execute permitted operating system routines. More generally, changes in privilege level are carefully controlled. In one embodiment, processor 10 may support at least two mechanisms for performing operating system calls. One method may be through a call gate in the segment descriptor tables (described in more detail below). Another method may be the SYSCALL instruction supported by processor 10, which uses a model specific register as the source of the address of the operating system routine. Updating the model specific registers is a privileged operation, and thus only code executing at a higher privilege level (e.g. operating system code) may establish the address in the model specific register used by the SYSCALL instruction. For the SYSCALL method, a second model specific register may be defined to store the most significant 32 bits of the address of the operating system routine. Thus, if the LME bit is set, the address may be read from the two model specific registers. If the LME bit is clear, the address may be read from the model specific register storing the least significant 32 bits. Alternatively, the model specific register used by the SYSCALL instruction may be expanded to 64 bits and the address may be 32 bits (the least significant 32 bits of the model specific register) or 64 bits based on the state of the LME bit.

As illustrated above, having the LME bit set may allow for processor 10 to operate in a system in which the operating system is 64 bit and one or more application programs are not 64 bit (e.g. 32 bit as shown or 16 bit, which operates in a similar fashion to the above description). Additionally, as illustrated by box 62, having the LME bit clear may allow for processor 10 to operate in 32 bit or 16 bit modes compatible with the x86 architecture.
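The two-register SYSCALL address selection described above might be sketched as follows; the parameter names standing in for the model specific registers are hypothetical.

#include <stdint.h>
#include <stdbool.h>

/* Sketch of the SYSCALL target address selection: one model specific
 * register holds the least significant 32 bits of the routine's address
 * and, when the LME bit is set, a second holds the most significant 32
 * bits. */
static uint64_t syscall_target(uint32_t msr_target_lo,
                               uint32_t msr_target_hi, bool lme) {
    if (lme) {
        /* 64 bit operating system: combine both model specific registers. */
        return ((uint64_t)msr_target_hi << 32) | msr_target_lo;
    }
    /* LME clear: only the 32 bit address in the low register is used. */
    return msr_target_lo;
}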
As described above, the mechanisms for handling exceptions and operating system calls are designed to handle the LME bit being set or clear, and thus the 32 bit and 16 bit modes may operate unmodified, even though processor 10 is capable of operating in 32/64 mode. Furthermore, by providing the x86 compatible 16 and 32 bit modes when the LME bit is clear (and ignoring the L bit, which is reserved in these modes), processor 10 may operate in a system in which the L bit is defined for some other purpose than for 32/64 mode and may still support 32/64 mode if the LME bit is set. Accordingly, a system employing a 32 bit operating system and 32 bit or 16 bit application programs may employ processor 10. Subsequently, the system could be upgraded to a 64 bit operating system without having to change processor 10.

Not illustrated in FIG. 4 is a 64 bit operating system and a 64 bit application program operating with the LME bit set. The mechanisms for calling operating system routines described above for the 64 bit operating system and 32 bit application program may apply equally to the 64 bit application program as well. Additionally, call gates which support 64 bits of offset are supported (as will be described in more detail below).

Turning next to FIG. 5, a table 70 is shown illustrating the states of the LME bit, the L bit in the code segment descriptor, and the D bit in the code segment descriptor and the corresponding operating mode of processor 10 according to one embodiment of processor 10. Other embodiments are possible and contemplated. As table 70 illustrates, if the LME bit is clear, then the L bit is reserved (and defined to be zero). However, processor 10 may treat the L bit as a don't care if the LME bit is clear. Thus, the x86 compatible 16 bit and 32 bit modes may be provided by processor 10 if the LME bit is clear. If the LME bit is set and the L bit in the code segment is clear, then a compatibility operating mode is established by processor 10 and the D bit selects 16 bit or 32 bit mode. If the LME bit and the L bit are set and the D bit is clear, 32/64 mode is selected for processor 10. Finally, the mode which would be selected if the LME, L and D bits are all set is reserved.

As mentioned above and illustrated in FIG. 6 below, the 32/64 operating mode includes a default address size in excess of 32 bits (implementation dependent but up to 64 bits) and a default operand size of 32 bits. The default operand size of 32 bits may be overridden to 64 bits via a particular instruction's encoding. The default operand size of 32 bits is selected to minimize average instruction length (since overriding to 64 bits involves including an instruction prefix in the instruction encoding which may increase the instruction length) for programs in which 32 bits are sufficient for many of the data manipulations performed by the program. For such programs (which may be a substantial number of the programs currently in existence), moving to a 64 bit operand size may actually reduce the execution performance achieved by the program (i.e. increased execution time). In part, this reduction may be attributable to the doubling in size in memory of the data structures used by the program when 64 bit values are stored.
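Table 70, as described above, reduces to a simple decode. The following C sketch restates it, with the enabled state of LME taken as the set state per the convention adopted earlier and the D bit polarity (set selects 32 bit, clear selects 16 bit) taken from the compatibility mode descriptors described with FIG. 4; the enum names are illustrative.

#include <stdbool.h>

typedef enum {
    MODE_16BIT,    /* standard or compatibility 16 bit mode */
    MODE_32BIT,    /* standard or compatibility 32 bit mode */
    MODE_32_64,    /* 32/64 mode: >32 bit addresses, 32 bit default operands */
    MODE_RESERVED  /* LME, L and D all set */
} op_mode_t;

static op_mode_t decode_mode(bool lme, bool l_bit, bool d_bit) {
    if (!lme) {
        /* LME clear: x86 compatible modes; L is treated as a don't care. */
        return d_bit ? MODE_32BIT : MODE_16BIT;
    }
    if (!l_bit) {
        /* LME set, L clear: compatibility mode, sized by the D bit. */
        return d_bit ? MODE_32BIT : MODE_16BIT;
    }
    /* LME and L set: D clear selects 32/64 mode; D set is reserved. */
    return d_bit ? MODE_RESERVED : MODE_32_64;
}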
If 32 bits is sufficient, these data structures would store 32 bit values. Thus, the number of bytes accessed when the data structure is accessed increases if 64 bit values are used where 32 bit values would be sufficient, and the increased memory bandwidth (and increased cache space occupied by each value) may cause increased execution time. Accordingly, 32 bits is selected as the default operand size and the default may be overridden via the encoding of a particular instruction.

Turning next to FIG. 6, a table 72 is shown illustrating one embodiment of the use of instruction prefixes to override the operating mode for a particular instruction. Other embodiments are possible and contemplated. Execution core 14 determines the address size and operand size for a particular instruction according to table 72. In particular for the embodiment illustrated in FIG. 6, an instruction prefix byte (the address size override prefix byte) may be used to override the default address size and another instruction prefix byte (the operand size override prefix byte) may be used to override the default operand size. The address size override prefix byte is encoded as 67 (in hexadecimal) and the operand size override prefix byte is encoded as 66 (in hexadecimal). The number of override prefixes in the particular instruction forms the columns of the table. The rows of the table indicate the operand size and address size of the particular instruction, based on the operating mode and the number of override prefixes in the corresponding column. The number of override prefixes refers to the number of override prefixes of the corresponding type (e.g. address size rows are the address size based on the number of address size override prefixes and operand size rows are the operand size based on the number of operand size override prefixes).

The column labeled "0" for the number of override prefixes illustrates the default operand size and address size for each operating mode. It is noted that the 32 bit and 16 bit mode rows refer to both the compatibility modes (LME set) and the standard modes (LME clear). Furthermore, while the default address size is 64 bits in 32/64 mode, the actual number of address bits may be implementation dependent, as discussed above.

The inclusion of one address size override prefix in 32/64 bit mode changes the address size from 64 bit (which may be less than 64 bits for a given implementation but is greater than 32 bits) to 32 bit, as shown in table 72. Additionally, the inclusion of one operand size override prefix in 32/64 bit mode changes the operand size from 32 bit to 64 bit. It may be desirable to provide for a 16 bit operand as well (e.g. to support the short integer data type in the "C" programming language). Accordingly, the inclusion of two operand size override prefixes in 32/64 mode selects an operand size of 16 bits. The inclusion of more than two operand size override prefixes results in the same operand size as the inclusion of two operand size override prefixes. Similarly, the inclusion of more than one address size override prefix results in the same address size as the inclusion of one address size override prefix.
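The 32/64 mode rows of table 72, as just described, might be sketched as follows; the toggling behavior of the 16 and 32 bit mode rows is described next. The prefix counts are assumed to have been tallied during instruction decode.

#include <stdint.h>

/* Sketch of the 32/64 mode size resolution from table 72 (FIG. 6). */
static uint8_t operand_size_32_64(unsigned n_66_prefixes) {
    if (n_66_prefixes == 0) return 32; /* default operand size */
    if (n_66_prefixes == 1) return 64; /* one prefix overrides to 64 bit */
    return 16;                         /* two or more prefixes select 16 bit */
}

static uint8_t address_size_32_64(unsigned n_67_prefixes) {
    /* One or more address size prefixes change the nominal 64 bit default
     * (implementation dependent, but greater than 32 bits) to 32 bit. */
    return (n_67_prefixes == 0) ? 64 : 32;
}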
For the 32 bit modes, the inclusion of one override prefix toggles the default 32 bit size to 16 bit, and the inclusion of more than one override prefix has the same effect as the inclusion of one override prefix. Similarly, for 16 bit modes, the inclusion of one override prefix toggles the default 16 bit size to 32 bit, and the inclusion of more than one override prefix has the same effect as the inclusion of one override prefix.

Turning now to FIG. 7, a diagram illustrating one embodiment of the LEAX register 74 is shown. Other registers within register file 22 may be similar. Other embodiments are possible and contemplated. In the embodiment of FIG. 7, register 74 includes 64 bits, with the most significant bit labeled as bit 63 and the least significant bit labeled as bit 0. FIG. 7 illustrates the portions of the LEAX register accessed based upon the operand size of an instruction (if the A register is selected as an operand). More particularly, the entirety of register 74 is accessed if the operand size is 64 bits (as illustrated by the brace labeled "LEAX" in FIG. 7). If the operand size is 32 bits, bits 31:0 of register 74 are accessed (as illustrated by the brace labeled "EAX" in FIG. 7). If the operand size is 16 bits, bits 15:0 of the register are accessed (as illustrated by the brace labeled "AX" in FIG. 7). The above operand sizes may be selected based on the operating mode and the inclusion of any override prefixes. However, certain instruction opcodes are defined which access an eight bit register (AH or AL in FIG. 7).

Turning next to FIG. 8, a block diagram is shown illustrating one embodiment of a global descriptor table 80 and a local descriptor table 82. Other embodiments are possible and contemplated. As illustrated in FIG. 8 and mentioned above, the base address of global descriptor table 80 is provided by GDTR 32 and the base address of local descriptor table 82 is provided by LDTR 30. Accordingly, to support placing global descriptor table 80 and local descriptor table 82 arbitrarily within the virtual address space, GDTR 32 and LDTR 30 may store 64 bit base addresses. If the LME bit is clear, the least significant 32 bits of the base address may be used to locate the descriptor tables.

Both global descriptor table 80 and local descriptor table 82 are configured to store segment descriptors of various types. For example, 32/64 mode code segment descriptors 84, 86, and 90 and compatibility mode descriptors 92 and 94 are illustrated in FIG. 8. Each of descriptors 84-94 occupies an entry in the corresponding descriptor table, where an entry is capable of storing one segment descriptor (e.g. 8 bytes for the embodiments illustrated in FIGS. 2 and 3). Another type of descriptor in global descriptor table 80 is a local descriptor table descriptor 96, which defines a system segment for the local descriptor table 82 and provides the base address stored in LDTR 30. LDTR 30 is initialized using an LLDT instruction having as an operand a segment selector locating descriptor 96 in global descriptor table 80. Global descriptor table 80 may store multiple LDT descriptors locating different local descriptor tables, if desired. Since the LDT descriptor 96 may store a 64 bit offset if the LME bit is set, LDT descriptor 96 may occupy two entries in global descriptor table 80. If the LME bit is clear, LDT descriptor 96 may occupy a single entry in global descriptor table 80. Similarly, each task may have a task state segment (TSS) descriptor in one of descriptor tables 80 and 82 to store certain information related to the task.
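The FIG. 7 access widths reduce to simple masking on reads, sketched below in C. Whether a narrower write merges into or clears the upper register bits is not stated in the text, so only reads are shown.

#include <stdint.h>

/* Sketch of the portion of the 64 bit LEAX register read for a given
 * operand size: LEAX is bits 63:0, EAX bits 31:0, AX bits 15:0, and AL
 * bits 7:0. Certain opcodes instead access AH, which would be bits 15:8
 * ((leax >> 8) & 0xFF). */
static uint64_t read_register_portion(uint64_t leax, uint8_t operand_size) {
    switch (operand_size) {
    case 64: return leax;
    case 32: return leax & 0xFFFFFFFFu;
    case 16: return leax & 0xFFFFu;
    case 8:  return leax & 0xFFu;
    default: return leax;
    }
}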
The local and global descriptor tables may also store a call gate descriptor. For example, FIG. 8 illustrates call gate descriptors 100, 102, and 104. Call gate descriptors support a 64 bit offset as well, and thus may occupy two entries in the corresponding descriptor table as well. An exemplary 32/64 call gate descriptor is illustrated in FIG. 9 below.

By maintaining the entries of segment descriptor tables 80 and 82 at 8 bytes and using two entries for descriptors which include 64 bit offsets, descriptors for 16 and 32 bit modes may be stored in the same tables as the descriptors which include 64 bit offsets. Thus, applications operating in compatibility modes may have appropriate descriptors in the same segment descriptor tables as the 64 bit operating systems.

Generally, call gates are used to manage the transition between a code segment having a lesser privilege level and a code segment having a greater privilege level (e.g. an application program calling an operating system routine). The lesser privileged code includes a call or other branch instruction specifying, as a target, a segment selector (and an offset into the segment, which is ignored in this case). The segment selector identifies a call gate descriptor within the descriptor tables, which includes a minimum privilege level required to execute the greater privilege level code. When processor 10 executes the call or other branch instruction, processor 10 indexes the descriptor tables with the segment selector and locates the call gate. If the current privilege level of processor 10 and the requestor privilege level (which is part of the segment selector, and may be used to lower the current privilege level for privilege checking purposes) both reflect sufficient privilege (e.g. the privilege levels are numerically less than or equal to the minimum privilege level in the call gate descriptor), then the call may proceed. The call gate descriptor includes a segment selector for the target segment (the code segment having the greater privilege level) and the offset within the target segment at which code fetching is to begin. Processor 10 extracts the segment selector and the offset from the call gate descriptor and reads the target segment descriptor to begin fetching the code having the greater privilege level. On the other hand, if either the current privilege level or the requestor privilege level is a lesser privilege level than the minimum privilege level in the call gate descriptor (e.g. either the current or requestor privilege level is numerically greater than the minimum privilege level), processor 10 signals an exception after accessing the call gate descriptor and without accessing the target descriptor. Thus, access to code executing at greater privilege levels is carefully controlled.

As mentioned above, the call gate descriptor includes a target segment selector and offset within the segment. The reference to the target segment descriptor is illustrated in FIG. 8 as an arrow from a call gate descriptor to another descriptor. For example, call gate descriptor 100 references 32/64 mode descriptor 90; call gate descriptor 102 references 32/64 mode descriptor 86; and call gate descriptor 104 references 32/64 mode descriptor 84. As FIG. 8 illustrates, a call gate descriptor may be stored in either descriptor table and may reference a descriptor in the other table or in the same table. Furthermore, a call gate descriptor may reference either a 32/64 mode descriptor or a compatibility mode descriptor.
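The privilege comparison described above may be sketched as follows; this minimal C fragment assumes only the convention, stated above, that privilege is sufficient when the privilege level is numerically less than or equal to the minimum privilege level in the call gate descriptor. The function and parameter names are hypothetical.

/* Sketch of the call gate privilege check described above. Lower
 * numeric values denote greater privilege. Returns nonzero when the
 * call through the gate may proceed; otherwise the processor would
 * signal an exception without accessing the target descriptor. */
static int call_gate_check(unsigned cpl,       /* current privilege level */
                           unsigned rpl,       /* requestor privilege level
                                                  from the segment selector */
                           unsigned gate_dpl)  /* minimum privilege level in
                                                  the call gate descriptor */
{
    return (cpl <= gate_dpl) && (rpl <= gate_dpl);
}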
Generally, when processor 10 reads a descriptor from one of the descriptor tables using a segment selector, one descriptor table entry is read. However, if the LME bit is set and processor 10 detects that the entry is a call gate descriptor, an LDT descriptor, or a TSS descriptor, processor 10 reads the next succeeding entry in the table to obtain the remainder of the descriptor. Accordingly, call gate descriptors, LDT descriptors, and TSS descriptors may coexist in a table with compatibility mode descriptors (or standard mode descriptors) which are of a different size, without redefining the size of the table entries or how the table is managed for descriptors which occupy one entry. Furthermore, since the second portion of the call gate descriptor, the LDT descriptor, and the TSS descriptor may be accessed as a segment descriptor, the portion of the descriptor which would be the type field of a descriptor in the second portion is set to an invalid type when the descriptor is stored into the descriptor table, as shown below in FIG. 9. Alternatively, processor 10 may read two consecutive entries from a descriptor table each time a descriptor table read is performed, and the second entry may be used if the first entry is a call gate, LDT descriptor type, or TSS descriptor type.

It is noted that code operating in any operating mode (32/64 mode, 32 bit compatibility mode, or 16 bit compatibility mode) may reference a call gate descriptor when the LME bit is set. Thus, a 32 or 16 bit application may call an operating system routine using the call gate mechanism even if the address of the routine is outside the 32 bit or 16 bit address space. Additionally, a call gate descriptor may reference a code segment having any operating mode. The operating system may ensure that the most significant 32 bits of the offset in the call gate are zero (for a 32 bit target segment) or that the most significant 48 bits of the offset in the call gate are zero (for a 16 bit target segment).

Turning now to FIG. 9, a block diagram of one embodiment of a call gate descriptor 120 is shown. Other embodiments are possible and contemplated. Similar to FIGS. 2 and 3, the most significant bytes are illustrated above the least significant bytes. The most significant bit of each group of four bytes is illustrated as bit 31 and the least significant bit is illustrated as bit 0. Short vertical lines within the four bytes delimit each bit, and the long vertical lines delimit a bit but also delimit a field. As mentioned above, a call gate descriptor occupies two entries in a descriptor table. The horizontal dashed line in FIG. 9 divides call gate descriptor 120 into an upper portion (above the line) and a lower portion (below the line). The lower portion is stored in the entry indexed by the call gate's segment selector, and the upper portion is stored in the next succeeding entry.

Call gate descriptor 120 includes a target segment selector (field 122), an offset (fields 124A, 124B, and 124C), a present (P) bit 126, a descriptor privilege level (DPL) 128, a type field 130, and a pseudo-type field 132. The P bit is similar to P bit 48 described above. The target segment selector identifies an entry within one of the descriptor tables at which the target segment descriptor (having the greater privilege level) is stored. The offset identifies the address at which code fetching is to begin.
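For illustration, the fields of call gate descriptor 120 enumerated above may be modeled logically as the following C structure. The physical packing across two 8 byte entries is as shown in FIG. 9 and is not reproduced here; the 16 bit selector width and the exact field widths are assumptions beyond the 64 bit offset stated above.

#include <stdint.h>

/* Hypothetical logical view of call gate descriptor 120. */
typedef struct {
    uint16_t target_selector;  /* field 122: selector of the target segment */
    uint64_t offset;           /* fields 124A-124C: 64 bit fetch offset     */
    uint8_t  present;          /* P bit 126                                 */
    uint8_t  dpl;              /* DPL 128: minimum caller privilege         */
    uint8_t  type;             /* type field 130: call gate type            */
    uint8_t  pseudo_type;      /* field 132: coded invalid (e.g. zero)      */
} call_gate;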
In 32/64 mode, since the code segment has no base address and flat linear addressing is used, the offset is the address at which code fetching begins. In other modes, the offset is added to the segment base defined by the target segment descriptor to generate the address at which code fetching begins. As mentioned above, the offset may comprise 64 bits in the present embodiment.

DPL 128 stores the minimum privilege level that the calling routine must have (in both the current privilege level and the requestor privilege level) in order to successfully pass through the call gate and execute the called routine at the privilege level specified in the target segment descriptor.

Type field 130 is coded to a call gate descriptor type. In one embodiment, this type is coded as the 32 bit call gate type defined in the x86 architecture. Alternatively, other encodings may be used. Finally, pseudo-type field 132 is coded to an invalid type (e.g. zero) to ensure that if a segment selector identifying the segment table entry storing the upper half of call gate descriptor 120 is presented, then an exception will be signalled by processor 10.

It is noted that the lower half of LDT descriptor 96 may be similar to the 32 bit LDT descriptor and the upper half of LDT descriptor 96 may be similar to the upper half of call gate descriptor 120.

Turning next to FIG. 10, a block diagram of an instruction format 140 for instructions executed by processor 10 is shown. Other embodiments are possible and contemplated. In the embodiment of FIG. 10, instruction format 140 includes a prefix field 142, an opcode field 144, a mod R/M (register/memory) field 146, an SIB (scale index base) field 148, a displacement field 150, and an immediate field 152. Each of the fields except for the opcode field 144 is optional. Thus, instruction format 140 may define a variable length instruction.

Prefix field 142 is used for any instruction prefixes for the instruction. As described above, an operand size override prefix and an address size override prefix may be encoded into an instruction to override the operating mode of processor 10. These override prefixes are included in prefix field 142. As noted above, the operand size override prefix and the address size override prefix may each be bytes included within prefix field 142.

Opcode field 144 includes the opcode of the instruction (i.e. which instruction in the instruction set is being executed). For some instructions, operands may be specified within opcode field 144. For other instructions, a portion of the opcode may be included within mod R/M field 146. Furthermore, certain opcodes specify an eight bit or 16 bit register as an operand. Thus, opcode encodings may serve to override the defaults indicated by the operating mode of processor 10 as well.

Mod R/M field 146 and SIB field 148 indicate operands of the instruction. Displacement field 150 includes any displacement information, and immediate field 152 includes an immediate operand.
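The variable length instruction format 140 may likewise be sketched as a C structure; all names and widths here are illustrative assumptions, apart from the rule that only the opcode field is mandatory.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical decoded view of instruction format 140. Only opcode
 * field 144 is required; every other field is optional, making the
 * encoded instruction variable length. */
typedef struct {
    uint8_t prefixes[15];   /* field 142, e.g. 66/67 override prefixes */
    size_t  num_prefixes;
    uint8_t opcode[2];      /* field 144 (required) */
    size_t  opcode_len;
    uint8_t modrm;          /* field 146 */
    int     has_modrm;
    uint8_t sib;            /* field 148 */
    int     has_sib;
    int64_t displacement;   /* field 150 */
    int     has_disp;
    int64_t immediate;      /* field 152 */
    int     has_imm;
} insn_format;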
Computer Systems

Turning now to FIG. 11, a block diagram of one embodiment of a computer system 200 including processor 10 coupled to a variety of system components through a bus bridge 202 is shown. Other embodiments are possible and contemplated. In the depicted system, a main memory 204 is coupled to bus bridge 202 through a memory bus 206, and a graphics controller 208 is coupled to bus bridge 202 through an AGP bus 210. Finally, a plurality of PCI devices 212A-212B are coupled to bus bridge 202 through a PCI bus 214. A secondary bus bridge 216 may further be provided to accommodate an electrical interface to one or more EISA or ISA devices 218 through an EISA/ISA bus 220. Processor 10 is coupled to bus bridge 202 through a CPU bus 224 and to an optional L2 cache 228. Together, CPU bus 224 and the interface to L2 cache 228 may comprise an external interface to which external interface unit 18 may couple.

Bus bridge 202 provides an interface between processor 10, main memory 204, graphics controller 208, and devices attached to PCI bus 214. When an operation is received from one of the devices connected to bus bridge 202, bus bridge 202 identifies the target of the operation (e.g. a particular device or, in the case of PCI bus 214, that the target is on PCI bus 214). Bus bridge 202 routes the operation to the targeted device. Bus bridge 202 generally translates an operation from the protocol used by the source device or bus to the protocol used by the target device or bus.

In addition to providing an interface to an ISA/EISA bus for PCI bus 214, secondary bus bridge 216 may further incorporate additional functionality, as desired. An input/output controller (not shown), either external from or integrated with secondary bus bridge 216, may also be included within computer system 200 to provide operational support for a keyboard and mouse 222 and for various serial and parallel ports, as desired. An external cache unit (not shown) may further be coupled to CPU bus 224 between processor 10 and bus bridge 202 in other embodiments. Alternatively, the external cache may be coupled to bus bridge 202 and cache control logic for the external cache may be integrated into bus bridge 202. L2 cache 228 is further shown in a backside configuration to processor 10. It is noted that L2 cache 228 may be separate from processor 10, integrated into a cartridge (e.g. slot 1 or slot A) with processor 10, or even integrated onto a semiconductor substrate with processor 10.

Main memory 204 is a memory in which application programs are stored and from which processor 10 primarily executes. A suitable main memory 204 comprises DRAM (Dynamic Random Access Memory). For example, a plurality of banks of SDRAM (Synchronous DRAM) or Rambus DRAM (RDRAM) may be suitable.

PCI devices 212A-212B are illustrative of a variety of peripheral devices such as, for example, network interface cards, video accelerators, audio cards, hard or floppy disk drives or drive controllers, SCSI (Small Computer Systems Interface) adapters, and telephony cards. Similarly, ISA device 218 is illustrative of various types of peripheral devices, such as a modem, a sound card, and a variety of data acquisition cards such as GPIB or field bus interface cards.

Graphics controller 208 is provided to control the rendering of text and images on a display 226. Graphics controller 208 may embody a typical graphics accelerator generally known in the art to render three-dimensional data structures which can be effectively shifted into and from main memory 204. Graphics controller 208 may therefore be a master of AGP bus 210 in that it can request and receive access to a target interface within bus bridge 202 to thereby obtain access to main memory 204. A dedicated graphics bus accommodates rapid retrieval of data from main memory 204. For certain operations, graphics controller 208 may further be configured to generate PCI protocol transactions on AGP bus 210.
The AGP interface of bus bridge 202 may thus include functionality to support both AGP protocol transactions as well as PCI protocol target and initiator transactions. Display 226 is any electronic display upon which an image or text can be presented. A suitable display 226 includes a cathode ray tube ("CRT"), a liquid crystal display ("LCD"), etc.

It is noted that, while the AGP, PCI, and ISA or EISA buses have been used as examples in the above description, any bus architectures may be substituted as desired. It is further noted that computer system 200 may be a multiprocessing computer system including additional processors (e.g. processor 10a shown as an optional component of computer system 200). Processor 10a may be similar to processor 10. More particularly, processor 10a may be an identical copy of processor 10. Processor 10a may be connected to bus bridge 202 via an independent bus (as shown in FIG. 11) or may share CPU bus 224 with processor 10. Furthermore, processor 10a may be coupled to an optional L2 cache 228a similar to L2 cache 228.

Turning now to FIG. 12, another embodiment of a computer system 300 is shown. Other embodiments are possible and contemplated. In the embodiment of FIG. 12, computer system 300 includes several processing nodes 312A, 312B, 312C, and 312D. Each processing node is coupled to a respective memory 314A-314D via a memory controller 316A-316D included within each respective processing node 312A-312D. Additionally, processing nodes 312A-312D include interface logic used to communicate between the processing nodes 312A-312D. For example, processing node 312A includes interface logic 318A for communicating with processing node 312B, interface logic 318B for communicating with processing node 312C, and a third interface logic 318C for communicating with yet another processing node (not shown). Similarly, processing node 312B includes interface logic 318D, 318E, and 318F; processing node 312C includes interface logic 318G, 318H, and 318I; and processing node 312D includes interface logic 318J, 318K, and 318L. Processing node 312D is coupled to communicate with a plurality of input/output devices (e.g. devices 320A-320B in a daisy chain configuration) via interface logic 318L. Other processing nodes may communicate with other I/O devices in a similar fashion.

Processing nodes 312A-312D implement a packet-based link for inter-processing node communication. In the present embodiment, the link is implemented as sets of unidirectional lines (e.g. lines 324A are used to transmit packets from processing node 312A to processing node 312B and lines 324B are used to transmit packets from processing node 312B to processing node 312A). Other sets of lines 324C-324H are used to transmit packets between other processing nodes as illustrated in FIG. 12. Generally, each set of lines 324 may include one or more data lines, one or more clock lines corresponding to the data lines, and one or more control lines indicating the type of packet being conveyed. The link may be operated in a cache coherent fashion for communication between processing nodes or in a noncoherent fashion for communication between a processing node and an I/O device (or a bus bridge to an I/O bus of conventional construction such as the PCI bus or ISA bus). Furthermore, the link may be operated in a non-coherent fashion using a daisy-chain structure between I/O devices as shown. It is noted that a packet to be transmitted from one processing node to another may pass through one or more intermediate nodes.
For example, a packet transmitted by processing node 312A to processing node 312D may pass through either processing node 312B or processing node 312C as shown in FIG. 12. Any suitable routing algorithm may be used. Other embodiments of computer system 300 may include more or fewer processing nodes than the embodiment shown in FIG. 12.

Generally, the packets may be transmitted as one or more bit times on the lines 324 between nodes. A bit time may be the rising or falling edge of the clock signal on the corresponding clock lines. The packets may include command packets for initiating transactions, probe packets for maintaining cache coherency, and response packets for responding to probes and commands.

Processing nodes 312A-312D, in addition to a memory controller and interface logic, may include one or more processors. Broadly speaking, a processing node comprises at least one processor and may optionally include a memory controller for communicating with a memory and other logic as desired. More particularly, each processing node 312A-312D may comprise one or more copies of processor 10. External interface unit 18 may include the interface logic 318 within the node, as well as the memory controller 316.

Memories 314A-314D may comprise any suitable memory devices. For example, a memory 314A-314D may comprise one or more RAMBUS DRAMs (RDRAMs), synchronous DRAMs (SDRAMs), static RAM, etc. The address space of computer system 300 is divided among memories 314A-314D. Each processing node 312A-312D may include a memory map used to determine which addresses are mapped to which memories 314A-314D, and hence to which processing node 312A-312D a memory request for a particular address should be routed. In one embodiment, the coherency point for an address within computer system 300 is the memory controller 316A-316D coupled to the memory storing bytes corresponding to the address. In other words, the memory controller 316A-316D is responsible for ensuring that each memory access to the corresponding memory 314A-314D occurs in a cache coherent fashion. Memory controllers 316A-316D may comprise control circuitry for interfacing to memories 314A-314D. Additionally, memory controllers 316A-316D may include request queues for queuing memory requests.

Generally, interface logic 318A-318L may comprise a variety of buffers for receiving packets from the link and for buffering packets to be transmitted upon the link. Computer system 300 may employ any suitable flow control mechanism for transmitting packets. For example, in one embodiment, each interface logic 318 stores a count of the number of each type of buffer within the receiver at the other end of the link to which that interface logic is connected. The interface logic does not transmit a packet unless the receiving interface logic has a free buffer to store the packet. As a receiving buffer is freed by routing a packet onward, the receiving interface logic transmits a message to the sending interface logic to indicate that the buffer has been freed. Such a mechanism may be referred to as a "coupon-based" system.

I/O devices 320A-320B may be any suitable I/O devices. For example, I/O devices 320A-320B may include network interface cards, video accelerators, audio cards, hard or floppy disk drives or drive controllers, SCSI (Small Computer Systems Interface) adapters and telephony cards, modems, sound cards, and a variety of data acquisition cards such as GPIB or field bus interface cards.
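Referring back to the "coupon-based" flow control described above, the following C sketch models a sender that tracks free receive buffers per packet type; the enum and function names are hypothetical, and only the count-and-return behavior comes from the description.

/* Sender-side sketch of the "coupon-based" flow control described
 * above: a packet is transmitted only while the receiver has a free
 * buffer of the packet's type. */
enum pkt_type { PKT_COMMAND, PKT_PROBE, PKT_RESPONSE, PKT_NUM_TYPES };

typedef struct {
    unsigned coupons[PKT_NUM_TYPES];  /* free buffers at the receiver */
} link_tx_state;

/* Returns nonzero and consumes a coupon when the packet may be sent. */
static int try_send(link_tx_state *tx, enum pkt_type type)
{
    if (tx->coupons[type] == 0)
        return 0;            /* no free buffer: hold the packet */
    tx->coupons[type]--;
    return 1;
}

/* Invoked when the receiver signals that it has freed a buffer. */
static void coupon_returned(link_tx_state *tx, enum pkt_type type)
{
    tx->coupons[type]++;
}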
Turning now to FIG. 13, a flowchart is shown illustrating one embodiment of a method. At block 400, an operating mode is established in a processor. The operating mode is established in response to an enable indication in a control register within the processor, a first operating mode indication in a segment descriptor, and a second operating mode indication in the segment descriptor. For example, a first operating mode may be established responsive to the enable indication being in an enabled state and the first operating mode indication being in a first state, wherein the first operating mode includes a default address size greater than 32 bits. As another example, a second operating mode may be established responsive to the enable indication being in the enabled state, the first operating mode indication being in a second state, and the second operating mode indication being in the first state, wherein the second operating mode includes a default address size of 32 bits. As yet another example, one of a plurality of operating modes may be established if the enable indication is in the enabled state and the first operating mode indication is in the second state, wherein the one of the plurality of operating modes is selected in response to a state of the second operating mode indication. At block 402, the processor fetches operands and generates addresses in response to the operating mode.

Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications. |
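As an illustrative recap of the method of FIG. 13 and of the operating modes discussed earlier, the following C sketch maps the enable indication and the two operating mode indications to a mode; the names are hypothetical, and the treatment of the enable indication in its disabled state (standard modes selected by the second indication) is an assumption consistent with the discussion above.

/* Hypothetical mode selection sketch: lme models the enable indication,
 * ind1/ind2 the first and second operating mode indications in the
 * segment descriptor. */
enum op_mode { MODE_32_64, MODE_32_BIT, MODE_16_BIT };

static enum op_mode establish_mode(int lme, int ind1, int ind2)
{
    if (lme && ind1)
        return MODE_32_64;       /* default address size > 32 bits */
    /* Compatibility modes (lme set, ind1 clear) and standard modes
     * (lme clear) are selected by the second indication. */
    return ind2 ? MODE_32_BIT : MODE_16_BIT;
}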
A fin-based structure may include fins on a surface of a semiconductor substrate. Each of the fins may include a doped portion proximate to the surface of the semiconductor substrate. The fin-based structure may also include an isolation layer disposed between the fins and on the surface of the semiconductor substrate. The fin-based structure may also include a recessed isolation liner on sidewalls of the doped portion of the fins. An unlined doped portion of the fins may extend from the recessed isolation liner to an active portion of the fins at a surface of the isolation layer. The isolation layer is disposed on the unlined doped portion of the fins. |
1. A method for isolating doped portions of a fin-based structure on a substrate, comprising:

depositing a doped isolation liner on a plurality of fins of the fin-based structure;

depositing a first layer of isolation material between the plurality of fins on sidewalls of the doped isolation liner;

exposing a portion of the doped isolation liner on the plurality of fins;

etching the doped isolation liner over the plurality of fins to expose an active portion of the plurality of fins;

driving dopants in the doped isolation liner into doped portions of the plurality of fins including linerless doped portions; and

depositing a second layer of isolation material on the linerless doped portions of the plurality of fins and the first layer of isolation material, up to a boundary between the active portion and the doped portions of the plurality of fins.

2. The method of claim 1, further comprising depositing a spacer layer between the first layer of isolation material and the doped isolation liner.

3. The method of claim 2, wherein the spacer layer is a nitride-based dopant barrier layer.

4. The method of claim 2, wherein etching the doped isolation liner further comprises:

removing a portion of the spacer layer to expose the doped isolation liner; and

etching the doped isolation liner to expose the active portion of the plurality of fins.

5. The method of claim 1, further comprising thinning a doped oxide to form the doped isolation liner prior to depositing the first layer of isolation material.

6. The method of claim 1, wherein driving further comprises annealing the doped isolation liner to incorporate the dopants in the doped isolation liner into the doped portions of the plurality of fins including the linerless doped portions.

7. The method of claim 1, wherein exposing a portion of the doped isolation liner comprises recessing the first layer of isolation material to expose the portion of the doped isolation liner on the plurality of fins.

8. The method of claim 1, wherein the fin-based structure is integrated into a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication system (PCS) unit, a portable data unit, and/or a fixed location data unit.

9. A fin-based structure, comprising:

a plurality of fins on a surface of a semiconductor substrate, each fin including a doped portion adjacent to the surface of the semiconductor substrate;

an isolation layer disposed between the plurality of fins and on the surface of the semiconductor substrate; and

a recessed isolation liner on sidewalls of the doped portion of the plurality of fins, a linerless doped portion of the plurality of fins extending from the recessed isolation liner to a surface of the isolation layer, the isolation layer exposing an active portion of the plurality of fins and disposed on the linerless doped portion of the plurality of fins.

10. The fin-based structure of claim 9, further comprising a spacer layer disposed between the isolation layer and the recessed isolation liner.

11. The fin-based structure of claim 10, wherein the spacer layer is a nitride-based dopant barrier layer.

12. The fin-based structure of claim 9, wherein the recessed isolation liner further comprises a doped isolation liner on the sidewalls of the doped portion of the plurality of fins, the doped isolation liner extending from the surface of the semiconductor substrate to the linerless doped portion of the plurality of fins.

13. The fin-based structure of claim 9, wherein the isolation layer includes a first layer and a second layer,
the first layer disposed between the plurality of fins, and the second layer disposed at a base of the active portion of the plurality of fins and on the linerless doped portion of the plurality of fins.

14. The fin-based structure of claim 9, wherein the fin-based structure is integrated into a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication system (PCS) unit, a portable data unit, and/or a fixed location data unit.

15. A fin-based structure, comprising:

a plurality of fins on a surface of a semiconductor substrate, each fin including a doped portion adjacent to the surface of the semiconductor substrate;

means for isolating, the means for isolating disposed between the plurality of fins and on the surface of the semiconductor substrate; and

a recessed isolation liner on sidewalls of the doped portion of the plurality of fins, a linerless doped portion of the plurality of fins extending from the recessed isolation liner to an active portion of the plurality of fins at a surface of the means for isolating, the means for isolating disposed on the linerless doped portion of the plurality of fins.

16. The fin-based structure of claim 15, further comprising a spacer layer disposed between the means for isolating and the recessed isolation liner.

17. The fin-based structure of claim 16, wherein the spacer layer is a nitride-based dopant barrier layer.

18. The fin-based structure of claim 15, wherein the recessed isolation liner further comprises a doped isolation liner on the sidewalls of the doped portion of the plurality of fins, the doped isolation liner extending from the surface of the semiconductor substrate to the linerless doped portion of the plurality of fins.

19. The fin-based structure of claim 15, wherein the means for isolating comprises a first layer and a second layer, the first layer disposed between the plurality of fins, and the second layer disposed at a base of the active portion of the plurality of fins and on the linerless doped portion of the plurality of fins.

20. The fin-based structure of claim 15, wherein the fin-based structure is integrated into a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication system (PCS) unit, a portable data unit, and/or a fixed location data unit.

21. A method for isolation within a fin-based structure on a substrate, comprising:

a step for depositing a doped isolation liner on a plurality of fins of the fin-based structure;

a step for depositing a first layer of isolation material between the plurality of fins on sidewalls of the doped isolation liner;

a step for exposing a portion of the doped isolation liner on the plurality of fins;

a step for etching the doped isolation liner to expose an active portion of the plurality of fins;

a step for driving dopants in the doped isolation liner into doped portions of the plurality of fins including linerless doped portions; and

a step for depositing a second layer of isolation material on the linerless doped portions of the plurality of fins and the first layer of isolation material, up to a boundary between the active portion and the doped portions of the plurality of fins.

22. The method of claim 21, further comprising a step for depositing a spacer layer between the first layer of isolation material and the doped isolation liner.

23. The method of claim 22, wherein the spacer layer is a nitride-based dopant barrier layer.

24. The method of claim 22, wherein the step for
etching the doped isolation liner further comprises:

a step for removing a portion of the spacer layer to expose the doped isolation liner; and

a step for etching the doped isolation liner to expose the active portion of the plurality of fins.

25. The method of claim 21, further comprising a step for thinning a doped oxide to form the doped isolation liner prior to depositing the first layer of isolation material.

26. The method of claim 21, wherein the step for driving further comprises a step for annealing the doped isolation liner to drive the dopants in the doped isolation liner into the doped portions of the plurality of fins including the linerless doped portions.

27. The method of claim 21, wherein the step for exposing a portion of the doped isolation liner comprises a step for recessing the first layer of isolation material to expose the portion of the doped isolation liner on the plurality of fins.

28. The method of claim 21, wherein the fin-based structure is integrated into a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication system (PCS) unit, a portable data unit, and/or a fixed location data unit. |
Sub-Fin Device Isolation

Background

Field

Aspects of the present disclosure relate to semiconductor devices, and more particularly to isolation between adjacent devices.

Background

With advances in integrated circuit (IC) technology, device geometries are reduced. Reducing the geometries of, and the spacing between, devices can cause adjacent devices to interfere with each other's proper operation.

Fin-based devices are three-dimensional structures on the surface of a semiconductor substrate. A fin-based transistor, which may be a fin-based metal oxide semiconductor field effect transistor (MOSFET), may be referred to as a FinFET. Doping the portion of a FinFET closer to the substrate for isolation between devices is difficult because the active portion of the fin either impedes the implantation or also receives the implantation, thereby reducing the effectiveness of the attempted isolation.

Overview

A method for isolation within a fin-based structure on a substrate may include providing a doped isolation liner on fins of the fin-based structure. The method may further include depositing a first layer of isolation material between the fins on sidewalls of the doped isolation liner. The method may further include exposing a portion of the doped isolation liner on the fins. The method may further include exposing an active portion of the fins. The method may further include driving a dopant in the doped isolation liner into a doped portion of the fins that includes a linerless doped portion. The method may further include depositing a second layer of isolation material on the linerless doped portion of the fins and the first layer of isolation material, up to a boundary between the active portion and the doped portion of the fins.

A fin-based structure may include fins on a surface of a semiconductor substrate. Each of the fins may include a doped portion proximate to the surface of the semiconductor substrate. The fin-based structure may further include an isolation layer disposed between the fins and on the surface of the semiconductor substrate. The fin-based structure may further include a recessed isolation liner on sidewalls of the doped portion of the fins. A linerless doped portion of the fins may extend from the recessed isolation liner to an active portion of the fins at a surface of the isolation layer. The isolation layer is disposed on the linerless doped portion of the fins.

A fin-based structure may include fins on a surface of a semiconductor substrate. Each of the fins may include a doped portion proximate to the surface of the semiconductor substrate. The fin-based structure may further include means for isolating disposed between the fins and on the surface of the semiconductor substrate. The fin-based structure may further include a recessed isolation liner on sidewalls of the doped portion of the fins. A linerless doped portion of the fins may extend from the recessed isolation liner to an active portion of the fins at a surface of the means for isolating. The means for isolating is disposed on the linerless doped portion of the fins.

The foregoing has broadly outlined the features and technical advantages of the present disclosure so that the following detailed description may be better understood. Additional features and advantages of the disclosure are described below. It should be appreciated by those skilled in the art that the present disclosure may readily be utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure.
It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the disclosure as set forth in the appended claims. The novel features which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description considered in conjunction with the accompanying drawings. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the disclosure.

Brief Description of the Drawings

For a more complete understanding of aspects of the present disclosure, reference is now made to the following description taken in conjunction with the accompanying drawings.

FIG. 1 illustrates a perspective view of a semiconductor wafer in an aspect of the present disclosure.

FIG. 2 illustrates a cross-sectional view of a die in an aspect of the present disclosure.

FIG. 3 illustrates a cross-sectional view of a metal oxide semiconductor field effect transistor (MOSFET) device in an aspect of the present disclosure.

FIG. 4 illustrates a fin field effect transistor (FinFET) in an aspect of the present disclosure.

FIG. 5 illustrates a cross-sectional view of a fin-based structure in an aspect of the present disclosure.

FIG. 6 illustrates a cross-sectional view of a doped oxide deposition in an aspect of the present disclosure.

FIGS. 7A and 7B illustrate cross-sectional views of an etched doped oxide layer in an aspect of the present disclosure.

FIG. 8 illustrates a cross-sectional view of a deposition of isolation material in an aspect of the present disclosure.

FIG. 9A illustrates a cross-sectional view of a recessed isolation material in an aspect of the present disclosure.

FIG. 9B illustrates a cross-sectional view of the recessed isolation material and a recessed doped oxide material in an aspect of the present disclosure.

FIG. 10 illustrates an annealing process in an aspect of the present disclosure.

FIG. 11 illustrates depositing a layer of isolation material in an aspect of the present disclosure.

FIG. 12 illustrates a method for fabricating a fin-based structure in an aspect of the present disclosure.

FIG. 13 is a block diagram illustrating an example wireless communication system in which an aspect of the disclosure may be advantageously employed.

FIG. 14 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a fin-based structure according to one configuration.

Detailed Description

The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.
As used herein, the use of the term "and/or" is intended to represent an "inclusive or," and the use of the term "or" is intended to represent an "exclusive or."

Semiconductor device operation generally involves isolating one device from another device. In planar or fin-based (three-dimensional) structures, adjacent devices, such as transistors, may be physically or electrically isolated. Fin-based devices are three-dimensional structures on the surface of a semiconductor substrate. A fin-based transistor, which may be a fin-based metal oxide semiconductor field effect transistor (MOSFET), may be referred to as a FinFET.

As device geometries decrease and additional device structures are added to integrated circuits, isolation of devices, particularly adjacent devices, becomes more difficult. In planar devices, an implant process can be used to electrically isolate one device from another. In fin-based devices, however, the fin geometry and fin spacing render the standard implant process ineffective. In particular, doping the portion of a FinFET closer to the substrate for isolation between devices is difficult because the active portion of the fin either impedes the implantation or also receives the implantation, reducing the effectiveness of the attempted isolation.

Doping the portion of the fin closer to the substrate (e.g., the "base" portion) isolates the source and drain of the FinFET from the substrate and from other devices, allowing more precise operation of each device and providing reverse punch-through protection. One aspect of the present disclosure describes doping the base portion (e.g., the portion of the fin that is closer to the substrate) with a doped oxide. The doped oxide is patterned to control which portion of the fin receives the doping. The doped oxide is annealed, driving dopant atoms into that portion of the fin. This aspect of the disclosure incorporates reverse punch-through doping into the base portion of the fin-based structure in a precise manner, which helps to suppress sub-fin leakage to the substrate and/or other devices.

The semiconductor manufacturing process is usually divided into three parts: front-end-of-line (FEOL), middle-of-line (MOL), and back-end-of-line (BEOL) processing. Front-end-of-line processes include wafer preparation, isolation, well formation, gate patterning, spacers, and doping. Middle-of-line processes include gate and terminal contact formation. The gate and terminal contact formation of the middle-of-line processes, however, is a more challenging part of the manufacturing flow, particularly for lithographic patterning. Back-end-of-line processes include forming the interconnects and dielectric layers that couple to the FEOL devices. These interconnects may be fabricated with a dual damascene process using interlayer dielectric (ILD) materials deposited by plasma-enhanced chemical vapor deposition (PECVD).

FIG. 1 illustrates a perspective view of a semiconductor wafer in an aspect of the present disclosure. A wafer 100 may be a semiconductor wafer, or may be a substrate material with one or more layers of semiconductor material on a surface of the wafer 100. When the wafer 100 is a semiconductor material, it may be grown from a seed crystal using a Czochralski process, in which the seed is dipped into a molten pool of semiconductor material and slowly rotated as it is withdrawn from the pool.
The molten material then crystallizes onto the seed crystal in the crystal orientation of the seed.

The wafer 100 may be a compound material, such as gallium arsenide (GaAs) or gallium nitride (GaN), a ternary material such as indium gallium arsenide (InGaAs), a quaternary material, or any material that can serve as a substrate material for other semiconductor materials. Although many of the materials may be crystalline in nature, polycrystalline or amorphous materials may also be used for the wafer 100.

The wafer 100, or layers coupled to the wafer 100, may be supplied with materials that make the wafer 100 more conductive. By way of example, and not limitation, a silicon wafer may have phosphorus or boron added to the wafer 100 to allow charge to flow in the wafer 100. These additives are referred to as dopants, and they provide additional charge carriers (electrons or holes) within the wafer 100 or portions of the wafer 100. By selecting the regions in which the additional charge carriers are provided, the types of charge carriers provided, and the amount (density) of additional charge carriers, different types of electronic devices may be formed in or on the wafer 100.

The wafer 100 has an orientation 102 that indicates the crystal orientation of the wafer 100. The orientation 102 may be a flat edge of the wafer 100, as shown in FIG. 1, or may be a notch or other indicia to illustrate the crystal orientation of the wafer 100. The orientation 102 may indicate the Miller indices of the planes of the crystal lattice in the wafer 100.

Miller indices form a notation system for crystallographic planes in crystal lattices. A lattice plane may be indicated by three integers h, k, and l, which are the Miller indices of the plane (hkl) in the crystal. Each index denotes a plane orthogonal to a direction (h, k, l) in the basis of the reciprocal lattice vectors. These integers are usually written in lowest terms (e.g., their greatest common divisor should be 1). The Miller index 100 represents a plane orthogonal to direction h; the index 010 represents a plane orthogonal to direction k; and the index 001 represents a plane orthogonal to direction l. For some crystals, negative numbers are used (written as a bar over the index), and for some crystals, such as gallium nitride, more than three numbers may be used to adequately describe the different crystallographic planes.

Once the wafer 100 has been processed as desired, the wafer 100 is singulated along dicing lines 104. The dicing lines 104 indicate where the wafer 100 is to be separated or divided into pieces. The dicing lines 104 may define the outlines of the various integrated circuits that have been fabricated on the wafer 100.

Once the dicing lines 104 are defined, the wafer 100 may be sawn or otherwise divided into pieces to form dies 106. Each of the dies 106 may be an integrated circuit with many devices, or may be a single electronic device. The physical size of the die 106, which may also be referred to as a chip or a semiconductor chip, depends at least in part on the ability to divide the wafer 100 into certain sizes, as well as the number of individual devices that the die 106 is designed to contain.

Once the wafer 100 has been divided into one or more dies 106, the die 106 may be mounted into a package to allow access to the devices and/or integrated circuits fabricated on the die 106.
The package may include a single in-line package, a dual in-line package, a motherboard package, a flip-chip package, an indium dot/bump package, or other types of packaging that provide access to the die 106. The die 106 may also be accessed directly through wire bonding, probes, or other connections, without mounting the die 106 into a separate package.

FIG. 2 illustrates a cross-sectional view of the die 106 in an aspect of the present disclosure. In the die 106, there may be a substrate 200, which may be a semiconductor material and/or may serve as a mechanical support for electronic devices. The substrate 200 may be a doped semiconductor substrate having either electrons (designated N-channel) or holes (designated P-channel) as charge carriers present throughout the substrate 200. Subsequent doping of the substrate 200 with charge carrier ions/atoms may change the charge carrying capability of the substrate 200.

Within the substrate 200 (e.g., a semiconductor substrate), there may be wells 202 and 204, which may be the source and/or drain of a field effect transistor (FET), or the wells 202 and/or 204 may be fin structures of a fin structured FET (FinFET). The wells 202 and/or 204 may also be other devices (e.g., resistors, capacitors, diodes, or other electronic devices), depending on the structure and other characteristics of the wells 202 and/or 204 and the surrounding structure of the substrate 200.

The semiconductor substrate may also have a well 206 and a well 208. The well 208 may be completely within the well 206, and, in some cases, may form a bipolar junction transistor (BJT). The well 206 may also be used as an isolation well to isolate the well 208 from electric and/or magnetic fields within the die 106.

Layers (e.g., 210 through 214) may be added to the die 106. The layer 210 may be, for example, an oxide or insulating layer that may isolate the wells (e.g., 202-208) from each other or from other devices on the die 106. In such cases, the layer 210 may be silicon dioxide, a polymer, a dielectric, or another electrically insulating layer. The layer 210 may also be an interconnect layer, in which case the layer 210 may comprise a conductive material such as copper, tungsten, aluminum, an alloy, or other conductive or metallic materials.

The layer 212 may also be a dielectric or conductive layer, depending on the desired device characteristics and/or the materials of the various layers (e.g., 210 and 214). The layer 214 may be an encapsulating layer that may protect the layers (e.g., 210 and 212), as well as the wells 202-208 and the substrate 200, from external forces. By way of example, and not limitation, the layer 214 may be a layer that protects the die 106 from mechanical damage, or the layer 214 may be a layer of material that protects the die 106 from electromagnetic or radiation damage.

Electronic devices designed on the die 106 may comprise many features or structural components. For example, the die 106 may be exposed to any number of methods to impart dopants into the substrate 200, the wells 202-208, and, if desired, the layers (e.g., 210-214). By way of example, and not limitation, the die 106 may be exposed to ion implantation, deposition of dopant atoms that are driven into a crystal lattice through a diffusion process, chemical vapor deposition, epitaxial growth, or other methods.
Through selective growth, material selection, and removal of portions of the layers (e.g., 210-214), and through selective material selection, removal, and dopant concentration within the substrate 200 and the wells 202-208, many different structures and electronic devices may be formed within the scope of the present disclosure.

In addition, the substrate 200, the wells 202-208, and the layers (e.g., 210-214) may be selectively removed or added through various processes. Chemical wet etching, chemical mechanical planarization (CMP), plasma etching, photoresist masking, damascene processes, and other methods may create the structures and devices of the present disclosure.

FIG. 3 illustrates a cross-sectional view of a metal oxide semiconductor field effect transistor (MOSFET) device 300 in an aspect of the present disclosure. The MOSFET device 300 may have four input terminals. The four inputs are a source 302, a gate 304, a drain 306, and a substrate 308. The source 302 and the drain 306 may be fabricated as the wells 202 and 204 in the substrate 308, or may be fabricated as areas above the substrate 308, or as part of other layers on the die 106. Such other structures may be a fin or other structure that protrudes from a surface of the substrate 308. Further, the substrate 308 may be the substrate 200 on the die 106, but the substrate 308 may also be one or more of the layers (e.g., 210-214) that are coupled to the substrate 200.

The MOSFET device 300 is a unipolar device, as electrical current is produced by only one type of charge carrier (e.g., electrons or holes), depending on the type of MOSFET. The MOSFET device 300 operates by controlling the amount of charge carriers in the channel 310 between the source 302 and the drain 306. A voltage Vsource 312 is applied to the source 302, a voltage Vgate 314 is applied to the gate 304, and a voltage Vdrain 316 is applied to the drain 306. A separate voltage Vsubstrate 318 may also be applied to the substrate 308, although the voltage Vsubstrate 318 may be coupled to one of the voltage Vsource 312, the voltage Vgate 314, or the voltage Vdrain 316.

To control the charge carriers in the channel 310, the voltage Vgate 314 creates an electric field in the channel 310 when the gate 304 accumulates charge. Charge opposite to that accumulated on the gate 304 begins to accumulate in the channel 310. The gate insulator 320 insulates the charge accumulated on the gate 304 from the source 302, the drain 306, and the channel 310. The gate 304 and the channel 310, with the gate insulator 320 between them, create a capacitor, and as the voltage Vgate 314 increases, charge carriers on the gate 304, acting as one plate of this capacitor, begin to accumulate. This accumulation of charge on the gate 304 attracts the opposite charge carriers into the channel 310. Eventually, enough charge carriers accumulate in the channel 310 to provide an electrically conductive path between the source 302 and the drain 306. This condition may be referred to as opening the channel of the FET.

The amount of voltage applied to the gate 304 to open the channel 310 may be varied by changing the voltage Vsource 312 and the voltage Vdrain 316, and their relationship to the voltage Vgate 314. For example, the voltage Vsource 312 is usually at a higher potential than the voltage Vdrain 316. Making the voltage difference between the voltage Vsource 312 and the voltage Vdrain 316 larger will change the amount of the voltage Vgate 314 used to open the channel 310.
In addition, a larger voltage difference will change the amount of electromotive force moving charge carriers through the channel 310, creating a larger current through the channel 310.

The gate insulator 320 material may be silicon oxide, or may be a dielectric or other material with a different dielectric constant (k) than silicon oxide. Further, the gate insulator 320 may be a combination of materials or different layers of materials. For example, the gate insulator 320 may be aluminum oxide, hafnium oxide, hafnium oxynitride, zirconium oxide, or laminates and/or alloys of these materials. Other materials for the gate insulator 320 may be used without departing from the scope of the present disclosure.

By changing the material for the gate insulator 320, and the thickness of the gate insulator 320 (e.g., the distance between the gate 304 and the channel 310), the amount of charge on the gate 304 used to open the channel 310 may be varied. A symbol 322 showing the terminals of the MOSFET device 300 is also illustrated. For N-channel MOSFETs (using electrons as charge carriers in the channel 310), an arrow is applied to the substrate 308 terminal in the symbol 322 pointing away from the gate 304 terminal. For P-channel MOSFETs (using holes as charge carriers in the channel 310), an arrow is applied to the substrate 308 terminal in the symbol 322 pointing toward the gate 304 terminal.

The gate 304 may also be made of different materials. In some designs, the gate 304 is made from polycrystalline silicon, also known as polysilicon or poly, which is a conductive form of silicon. Although referred to herein as "polycrystalline" or "polysilicon," metals, alloys, or other electrically conductive materials are also contemplated as appropriate materials for the gate 304 as described in the present disclosure.

In some MOSFET designs, high-k materials may be desired in the gate insulator 320, and in such designs other conductive materials may be employed. By way of example, and not limitation, a "high-k metal gate" design may employ a metal, such as copper, for the gate 304 terminal. Although referred to herein as "metal," polycrystalline materials, alloys, or other electrically conductive materials are also contemplated as appropriate materials for the gate 304 as described in the present disclosure.

To interconnect the MOSFET device 300, or other devices (e.g., semiconductors) in the die 106, interconnect traces or interconnect layers are used. These interconnect traces may be in one or more of the layers (e.g., 210-214), or may be in other layers of the die 106.

FIG. 4 illustrates a transistor in an aspect of the present disclosure. The FET with a fin structure (FinFET 400) operates in a similar manner to the MOSFET device 300 described with respect to FIG. 3. The fin 410 in the FinFET 400, however, is grown on or otherwise coupled to the substrate 308. The substrate 308 may be a semiconductor substrate or other like supporting layer that includes, for example, an oxide layer, a nitride layer, a metal oxide layer, or a silicon layer. The fin 410 includes the source 302 and the drain 306. The gate 304 is disposed on the fin 410 and the substrate 308 through the gate insulator 320. A fin height Hfin, a fin width Wfin, and a fin length Lfin represent the dimensions of the fin 410. In a FinFET structure, the physical size of the FinFET 400 may be smaller than the MOSFET device 300 structure shown in FIG. 3.
This reduction in physical size allows more devices per unit area on the die 106.

Sub-Fin Device Isolation

As device geometries decrease and additional device structures are added to integrated circuits, isolation of devices, particularly adjacent devices, becomes more difficult. In planar devices, an implant process can be used to electrically isolate one device from another. In fin-based devices, however, the fin geometry and fin spacing render the standard implant process ineffective. In particular, doping the portions of a FinFET closer to the substrate for isolation between devices is difficult because the active portion of the fin either impedes the implantation, or also receives the implantation, reducing the effectiveness of the attempted isolation.

Doping the portion of the fin closer to the substrate (e.g., the "base" portion) isolates the source and drain of the FinFET from the substrate and from other devices, allowing more precise operation of each device and providing reverse punch-through protection. One aspect of the present disclosure describes doping the base portion (e.g., the portion of the fin closer to the substrate) with a doped oxide. The doped oxide is patterned to control which portion of the fin receives the doping. The doped oxide is annealed, driving dopant atoms into that portion of the fin. This aspect of the present disclosure incorporates reverse punch-through doping into the base portion of the fin-based structure in a precise manner, which helps to suppress sub-fin leakage to the substrate and/or other devices.

FIG. 5 illustrates a cross-sectional view of a fin-based structure 500 in an aspect of the present disclosure. In integrated circuits, fin-based structures may be used. Fins 510 (510-1, ..., 510-5) are supported by a substrate 508 and are doped with a particular type of charge carrier so that the fins 510 are conductive. The fins 510 may be doped with n-type dopants or p-type dopants, depending on the type of charge carriers desired in the final device.

FIG. 6 illustrates a cross-sectional view of a doped isolation layer 620 in an aspect of the present disclosure. In silicon-based structures, silicon dioxide may be used as the oxide for the doped oxide layer. The dopant for the doped isolation layer 620 may be boron, phosphorus, or other Group II, III, V, or VI elements, and may be based on the materials used in the fin-based structure. The doped isolation layer 620 may be grown on the fins 510. During oxide growth/deposition, the dopant may enter the oxide growth chamber as a gas or plasma and is then incorporated into the doped isolation layer 620 coupled to the fins 510.

FIGS. 7A and 7B illustrate cross-sectional views of a doped isolation liner 720 in an aspect of the present disclosure. Due to the introduction of dopants, the doped isolation liner 720 is not as good an insulator as a "pure" oxide. In addition, the doped isolation layer 620 may grow in such a manner that it is too thick to allow a high quality isolation material to be deposited between the fins 510, and the doped isolation layer 620 may not grow in a uniform manner. One aspect of the present disclosure contemplates that the doped isolation layer 620 may be deposited and then etched, as shown in FIG. 7A, or deposited/grown in a precise manner that results in the doped isolation liner 720 shown in FIG. 7A without being etched.
7A without the need for etching.

In addition, a spacer layer 730 (which may be a nitride layer) may be grown or deposited on the doped isolation liner 720 in order to direct dopants into the fins 510 and/or to protect other isolation materials from degradation. This is shown in FIG. 7B. The fins 510 may also be doped with n-type dopants or p-type dopants depending on the type of charge carrier specified for the final device. In one aspect of the present disclosure, when integrating n-type and p-type metal oxide semiconductor (NMOS/PMOS) fin-based devices, the spacer layer 730 provides a dopant barrier for opposite-polarity doping. For example, an n-type dopant is initially deposited on the fins 510, followed by deposition of the spacer layer 730. Next, the spacer layer 730 and the n-type dopant are removed from the PMOS portion of the fins 510 but remain on the NMOS portion of the fins 510. The p-type dopant is then deposited on the fins 510, followed by deposition of the spacer layer 730 only on the PMOS portion of the fins 510. The p-type dopant is removed from the NMOS portion of the fins 510 but remains on the PMOS portion of the fins 510 due to the spacer layer. The spacer layer 730 is not shown in FIGS. 8-12 for simplicity of illustration.

FIG. 8 illustrates a cross-sectional view of an isolation material 840 according to an aspect of the present disclosure. Once the doped isolation layer 620 is thinned (or directly applied) to a suitable thickness, the doped isolation liner 720 is formed. The doped isolation liner 720 leaves space between the fins 510 at the intersection of the fins 510 and the substrate 508. Isolation material 840, such as a shallow trench isolation (STI) material, is deposited in these spaces. The spacer layer 730, if present, will be between the doped isolation liner 720 and the isolation material 840. The fins 510 are then planarized (see FIG. 9A), which can be performed using chemical mechanical planarization (CMP), to expose the portions of the fins furthest from the substrate (e.g., the active portion of the fin, as shown in FIG. 9B).

FIG. 9A illustrates a cross-sectional view of a recessed isolation material 950 according to an aspect of the present disclosure. In this arrangement, the isolation material 840 is etched or otherwise recessed to a certain level toward the substrate 508, exposing the fins 510 with the doped isolation liner 720 (and the spacer layer 730, if deposited). This removal or etching of the isolation material 840 is the first step of a two-step etching process. Because the volume of the isolation material 840 removed in this step is larger than the volume removed at the base of the fins, it is difficult to control the first etching step precisely. When deposited, the spacer layer 730 protects the doped isolation liner 720 from this "rough" etch process.

FIG. 9B illustrates a cross-sectional view of the recessed isolation material 950 and a recessed isolation liner 960 according to an aspect of the present disclosure. If the spacer layer 730 was deposited, it is first removed, exposing the doped isolation liner 720. Because the doped isolation liner 720 is thinner than the isolation material 840 and has a different chemical composition due to the dopant, the doped isolation liner 720 may be etched faster than the isolation material 840.
In addition, the removal of the doped isolation liner 720 can be precisely controlled to expose a linerless portion 912 of the active fin portion 910 of the fin-based structure 500. The amount of the fin 510 exposed during liner removal can account for dopant up-diffusion, in which dopants in the doped isolation liner 720 diffuse upward and across the width of the fins 510. As such, the doped isolation liner 720 is etched below the linerless portion 912 of the fin 510, to a level below that of the isolation material 840. In addition, by providing the recessed isolation liner 960, portions of the fins 510 deeper than the final expected active fin are exposed to account for the introduction of the up-diffusing dopant. A spacer layer (not shown) resists diffusion and also protects the recessed isolation material 950 from diffusion of the dopant from the recessed isolation liner 960, as shown in FIG. 10.

FIG. 10 illustrates an annealing process according to an aspect of the present disclosure. Once a suitable amount of the doped isolation liner 720 has been removed to provide the recessed isolation liner 960, the structure is annealed to drive dopants into the fins 510 to form a doped portion 1070. The dopants in the doped isolation liner 720 have a charge carrier type different from that of the charge carriers in the fins 510, which reduces the conductivity of the fins 510 in the base portion of the fin-based structure 500. In this arrangement, the recessed isolation liner 960 is disposed on a sidewall of the doped portion 1070 of the fin 510. In addition, the linerless portion 912 of the fin 510 is also shown. In this example, dopant up-diffusion causes formation of a linerless doped portion, as shown in FIG. 11.

FIG. 11 illustrates deposition of an additional layer of the isolation material 840 in accordance with an aspect of this disclosure. Annealing drives dopant atoms into the fins 510, which results in a dopant density in the range of 1×10²⁰ to 5×10²⁰ (of a charge carrier type opposite to that of the fin). The active fin 910 is then electrically isolated from the substrate 508 because there is now a P-N junction between the active fin 910 and the doped portion 1070 of the fin 510. In addition, a P-N junction is provided between the doped portion 1070 of the fin 510 and the active surface of the substrate 508. To reach the final fin height 1182, additional isolation material is deposited on the fin-based structure 500. This can be done using standard deposition techniques, but may also be performed using flowable chemical vapor deposition (FCVD), which fills the volume between the fins from the portion of the fins closest to the substrate up to the active portion. The deposition of the isolation material 840 may be controlled to adjust the exposed height of the fins 510 to reach the final fin height 1182. In this arrangement, the linerless doped portion 1180 of the fin 510 extends from the recessed isolation liner 960 to the surface 1142 of the isolation material 840. In addition, the isolation material 840 is also disposed on the linerless doped portion 1180 of the fin 510.

FIG. 12 illustrates a method 1200 for fabricating a fin-based structure in accordance with an aspect of this disclosure. In block 1202, a doped isolation liner is provided on a fin (e.g., fin 510) of a fin-based structure (e.g., fin-based structure 500).
In block 1204, a first layer of isolation material (e.g., isolation material 840) is deposited between the fins and on the sidewalls of the doped isolation liner (e.g., doped isolation liner 720). In block 1206, the first layer of isolation material is recessed to expose a portion of the doped isolation liner on the fin. In block 1208, the doped isolation liner on the fin is etched to expose the active portion of the fin. In block 1210, the doped isolation liner is annealed to incorporate the dopant in the doped isolation liner into a doped portion of the fin (e.g., doped portion 1070) that includes a linerless doped portion (e.g., linerless doped portion 1180). In block 1212, a second layer of isolation material is deposited on the linerless doped portion and the first layer of isolation material, up to the boundary between the active portion and the doped portion of the fin. For example, the linerless doped portion 1180 of the fin 510 extends from the recessed isolation liner 960 to the surface 1142 of the isolation material 840. In addition, the isolation material 840 is disposed on the linerless doped portion 1180 of the fin 510.

According to an aspect of the present disclosure, a fin-based structure is described. In one configuration, the fin-based structure includes means for isolating between the fins of the fin-based structure. The isolating means may be the isolation material 840. In another aspect, the aforementioned means may be any means, equipment, or material configured to perform the functions recited by the aforementioned means.

FIG. 13 is a block diagram illustrating an example wireless communication system 1300 in which an aspect of the disclosure may be beneficially employed. For illustration purposes, FIG. 13 shows three remote units 1320, 1330, and 1350 and two base stations 1340. It will be appreciated that a wireless communication system may have many more remote units and base stations. The remote units 1320, 1330, and 1350 include IC devices 1325A, 1325C, and 1325B that include the disclosed FinFET devices. It will be appreciated that other devices may also include the disclosed FinFET devices, such as base stations, switching devices, and network equipment. FIG. 13 shows forward link signals 1380 from the base stations 1340 to the remote units 1320, 1330, and 1350 and reverse link signals 1390 from the remote units 1320, 1330, and 1350 to the base stations 1340.

In FIG. 13, the remote unit 1320 is shown as a mobile phone, the remote unit 1330 is shown as a portable computer, and the remote unit 1350 is shown as a fixed-location remote unit in a wireless local loop system. For example, the remote units may be mobile phones, hand-held personal communication system (PCS) units, portable data units such as personal data assistants, GPS-enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed-location data units (such as meter reading equipment), or other devices that store or retrieve data or computer instructions, or combinations thereof. Although FIG. 13 illustrates remote units according to various aspects of the present disclosure, the disclosure is not limited to these illustrated exemplary units. Aspects of the present disclosure may be suitably employed in many devices, including the disclosed devices.

FIG. 14 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a fin-based structure, such as the FinFET devices disclosed above.
The design workstation 1400 includes a hard disk 1401 containing operating system software, support files, and design software such as Cadence or OrCAD. The design workstation 1400 also includes a display 1402 to facilitate the design of a circuit 1410 or a fin-based structure 1412, such as a FinFET device. A storage medium 1404 is provided for tangibly storing the design of the circuit 1410 or the fin-based structure 1412. The design of the circuit 1410 or the fin-based structure 1412 may be stored on the storage medium 1404 in a file format such as GDSII or GERBER. The storage medium 1404 may be a CD-ROM, DVD, hard disk, flash memory, or other appropriate device. In addition, the design workstation 1400 includes a drive 1403 for accepting input from, or writing output to, the storage medium 1404.

The data recorded on the storage medium 1404 may specify logic circuit configurations, pattern data for photolithography masks, or mask pattern data for serial write tools such as electron beam lithography. The data may further include logic verification data associated with logic simulations, such as timing diagrams or net circuits. Providing data on the storage medium 1404 facilitates the design of the circuit 1410 or the fin-based structure 1412 by decreasing the number of processes used to design semiconductor wafers.

For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. A machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software codes may be stored in a memory and executed by a processor unit. Memory may be implemented within the processor unit or external to the processor unit. As used herein, the term "memory" refers to types of long-term, short-term, volatile, nonvolatile, or other memory and is not to be limited to a particular type of memory or number of memories, or type of media upon which memory is stored.

If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media include physical computer storage media. A storage medium may be an available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

In addition to storage on computer-readable media, instructions and/or data may be provided as signals on transmission media included in a communication apparatus. For example, a communication apparatus may include a transceiver having signals indicative of instructions and data.
These instructions and data are configured so that one or more processors implement the functions recited in the claims.

Although the present disclosure and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the technology of the disclosure as defined by the appended claims. For example, relational terms such as "above" and "below" are used with respect to a substrate or electronic device. Of course, if the substrate or electronic device is inverted, above becomes below, and vice versa. Additionally, if oriented sideways, above and below may refer to sides of a substrate or electronic device. Moreover, the scope of the present application is not intended to be limited to the particular configurations of the process, machine, manufacture, composition of matter, means, methods, and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding configurations described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two.
A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

In one or more exemplary designs, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. Storage media may be any available media that can be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
One embodiment of the present invention includes the steps of determining the optimal RAID level to implement for a given disk drive array, and to the extent applicable, making unallocated disk space available to the user in the form of unprotected disk space. The method efficiently allocates appropriate RAID volumes for the given disk drive array, and, by making the unallocated disk space available to users, allows disk drives of unequal sizes to be effectively used in the disk drive array. Another embodiment of the present invention reconfigures an existing RAID array such that the storage space available on various disk drives in the disk drive array may be used in the most efficient manner. The alternative embodiment is especially useful if an existing RAID array is upgraded by adding a disk drive to, or modified by replacing one or more disk drives in, the existing disk drive array. |
The invention claimed is:1. A method for adaptively reconfiguring a set of disk drives to implement a reconfigured RAID array comprising RAID 1 and RAID 5 data, wherein at least a subset of the set of disk drives is originally configured to implement an original RAID array comprising RAID 1 data, at least part of the data stored in the original RAID array being migrated into the reconfigured RAID array by converting the data, the method comprising:identifying a first RAID volume of a first RAID type that is associated with the original RAID array, including:selecting a RAID type based on a number of disk drives in the set of disk drives having available storage space, including selecting drives that are a part of the original RAID array;determining a maximum partition size for the RAID type,determining that the number of disk drives having available storage space is greater than two so that the RAID type is RAID 5;applying the maximum partition size to each of the disk drives having available storage space to define the reconfigured RAID volume for storing data in RAID 5, including disk drives that are members of the original RAID array, to establish a RAID 5 partition on each disk drive of the reconfigured RAID array; andmigrating the data from the original RAID array into portions of the reconfigured RAID array, including converting data stored in the original RAID array in RAID 1 format to a RAID 5 format consistent with the reconfigured RAID array, while converting selected partitions on disk drives from the original RAID array into partitions on the disk drives in the reconfigured RAID array.2. The method of claim 1, wherein the first RAID volume is a RAID 1 volume, and the step of migrating includes converting the RAID 1 volume into a RAID 5 volume.3. The method of claim 2, wherein the step of migrating further includes selecting a stripe size.4. The method of claim 1, wherein the step of migrating further includes selecting a stripe size.5. The method of claim 1, further comprising the step of creating unprotected storage space, if the number of disk drives having available storage space is equal to one.6. 
A computer-readable medium storing instructions for causing a computing device to adaptively reconfigure a set of disk drives to implement a reconfigured RAID array comprising RAID 1 and RAID 5 data, wherein at least a subset of the set of disk drives is originally configured to implement an original RAID array comprising RAID 1 data, at least part of the data stored in the original RAID array being migrated into the reconfigured RAID array by converting the data, by performing the steps of:identifying a first RAID volume of a first RAID type that is associated with the original RAID array, including:selecting a RAID type based on a number of disk drives in the set of disk drives having available storage space, including selecting drives that are a part of the original RAID array;determining a maximum partition size for the RAID type,determining that the number of disk drives having available storage space is greater than two so that the RAID type is RAID 5;applying the maximum partition size to each of the disk drives having available storage space to define the reconfigured RAID volume for storing data in RAID 5, including disk drives that are members of the original RAID array, to establish a RAID 5 partition on each disk drive of the reconfigured RAID array; andmigrating the data from the original RAID array into portions of the reconfigured RAID array, including converting data stored in the original RAID array in RAID 1 format to a RAID 5 format consistent with the reconfigured RAID array, while converting selected partitions on disk drives from the original RAID array into partitions on disk drives in the reconfigured RAID array.7. The computer-readable medium of claim 6, wherein the first RAID volume is a RAID 1 volume, and the step of migrating includes converting the RAID 1 volume into a RAID 5 volume.8. The computer-readable medium of claim 6, wherein the original RAID array is a RAID 3 volume or a RAID 5 volume, and the step of migrating includes converting the data in the RAID 3 volume or the RAID 5 volume comprising the original RAID array into RAID 5 data.9. The computer-readable medium of claim 6, including selecting a RAID type comprising a RAID 1 type, if the number of disk drives having available storage space is equal to two, and identifying unprotected or unused storage space on one disk drive of the original RAID array, and the step of migrating includes converting the unprotected or unused storage space into a RAID 1 volume.10. The computer-readable medium of claim 6, further comprising the step of creating unprotected storage space, if the number of disk drives having available storage space is equal to one.11. 
A system for adaptively reconfiguring a set of disk drives to implement a reconfigured RAID array comprising RAID 1 and RAID 5 data, wherein at least a subset of the set of disk drives is originally configured to implement an original RAID array comprising RAID 1 data, at least part of the data stored in the original RAID array being migrated into the reconfigured RAID array by converting the data, the system comprising:a memory; anda processor configured to perform the steps of:identifying a first RAID volume of a first RAID type that is associated with the original RAID array, including:selecting a RAID type based on a number of disk drives in the set of disk drives having available storage space, including selecting drives that are a part of the original RAID array,determining a maximum partition size for the RAID type,determining that the number of disk drives having available storage space is greater than two so that the RAID type is RAID 5;applying the maximum partition size to each of the disk drives having available storage space to define the reconfigured RAID volume for storing data in RAID 5, including disk drives that are members of the original RAID array, to establish a RAID 5 partition on each disk drive of the reconfigured RAID array, andmigrating the data from the original RAID array into portions of the reconfigured RAID array including converting data stored in the original RAID array in RAID 1 format to a RAID 5 format consistent with the reconfigured RAID array, while converting selected partitions on disk drives from the original RAID array into partitions on the disk drives in the reconfigured RAID array.12. The system of claim 11, wherein the first RAID volume is a RAID 3 volume or a RAID 5 volume, the step of selecting a second RAID type comprises selecting a RAID 5 type, if the number of disk drives having available storage space is greater than two, and the step of migrating includes converting the RAID 3 volume or the RAID 5 volume comprising the first RAID volume into a RAID 5 volume comprising the second RAID volume.13. The system of claim 11, wherein the step of identifying a first RAID volume comprises identifying unprotected or unused storage space on one disk drive of the first RAID array, the step of selecting a second RAID type comprises selecting a RAID 1 type, if the number of disk drives having available storage space is equal to two, and the step of migrating comprises converting the unprotected or unused storage space into a RAID 1 volume.14. The system of claim 11, further comprising the step of creating unprotected storage space, if the number of disk drives having available storage space is equal to one. |
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the priority benefit of U.S. provisional patent application No. 60/676,779, titled "System and Method for Adaptive RAID Configuration," filed May 2, 2005. This related application is hereby incorporated by reference in its entirety.

BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates generally to redundant arrays of independent disks (RAIDs) and more specifically to a system and method for adaptive RAID configuration.

2. Description of the Related Art

Conventional RAID arrays (i.e., any disk drive array upon which one or more RAID volumes are extant) are used to provide some degree of data redundancy by trading disk usage efficiency for reliability in the form of data protection. In the simplest RAID case (RAID 1), data is mirrored onto two disk drives to provide 100% data redundancy, but at a cost of being 50% efficient with respect to using available disk space. Other types of RAID arrays (RAID 3 and RAID 5) are designed with three or more disk drives. Having more disk drives typically increases the storage space and efficiency of these types of RAID arrays.

As is commonly known, if disk drives of different sizes are used when configuring a RAID array, then an amount of disk space equal to the storage difference between the disk drives is usually not available to the RAID array. The result is a reduction in the storage efficiency of the RAID array. To illustrate this problem, consider constructing a RAID 1 array from a 100 gigabyte (GB) disk drive and a 150 GB disk drive. The RAID 1 array would consist of two 100 GB partitions (one from each disk drive); however, 50 GB of space would remain unallocated and unavailable to users.

In order to eliminate the problem of unused space, disk drives of identical sizes are often chosen to populate the RAID array. While choosing identical drive sizes may be a workable solution when the RAID array is first assembled, maintaining identical drive sizes may be difficult when modifying or expanding an established array. For example, consider a RAID 5 array initially constructed with three 100 GB disk drives. If one of the disk drives within the array were to fail, and the faulty disk drive could only be replaced with a 150 GB disk drive, then the modified RAID array would use only 100 GB of the available 150 GB from the new disk drive. A space of 50 GB would remain unallocated and unavailable to users. The same would hold true if a fourth disk drive, a 150 GB disk drive, were added to a RAID 5 array initially constructed with three 100 GB disk drives. The resulting four disk drive RAID 5 array would use only 100 GB of the available 150 GB from the additional disk drive. Again, a space of 50 GB would remain unallocated and unavailable to users.

As the foregoing illustrates, what is needed in the art is a way to configure a RAID array having disk drives of different sizes such that the storage efficiency of the RAID array is increased.

SUMMARY OF THE INVENTION

One embodiment of the present invention sets forth a method for adaptively configuring a set of disk drives with a RAID array. The method includes the steps of selecting a RAID type based on a number of disk drives having available storage space, determining a maximum partition size for defining a RAID volume of the selected RAID type, and applying the maximum partition size to each of the disk drives having available storage space to define the RAID volume of the selected RAID type.
One advantage of the disclosed method is that it efficiently allocates appropriate RAID volumes for the given disk drive array, and, by making the unallocated disk space available to users, allows disk drives of unequal sizes to be effectively used in the disk drive array.

Another embodiment of the present invention reconfigures an existing RAID array such that the storage space available on the various disk drives in the disk drive array may be used in the most efficient manner. The alternative embodiment is especially useful if an existing RAID array is upgraded by adding a disk drive to the existing disk drive array or modified by replacing one or more disk drives in the existing disk drive array.

BRIEF DESCRIPTION OF THE DRAWINGS

So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.

FIG. 1 is a flowchart of method steps for adaptively configuring a RAID volume, according to one embodiment of the present invention;

FIG. 2A is a conceptual illustration of how a disk drive array may be configured to implement a RAID 1 volume, according to one embodiment of the present invention;

FIG. 2B is a conceptual illustration of how a disk drive array may be configured to implement a RAID 5 volume and a RAID 1 volume, according to one embodiment of the present invention;

FIGS. 3A and 3B present a flowchart of method steps for adaptively reconfiguring an existing RAID array, according to one embodiment of the present invention; and

FIG. 4 is a conceptual illustration of a computer system configured to implement one or more aspects of the present invention.

DETAILED DESCRIPTION

FIG. 1 is a flowchart of method steps for adaptively configuring a RAID volume, according to one embodiment of the present invention. Persons skilled in the art will understand that any system configured to perform the method steps in any order is within the scope of the invention.

As shown in FIG. 1, the method for adaptively configuring a RAID volume begins in step 102, wherein a RAID manager application determines whether there are more than two disk drives with available storage space for use in the RAID volume. If there are more than two disk drives with available storage space, then, in step 104, the RAID manager application selects a RAID 5 configuration for the RAID volume. In step 110, the RAID manager application determines the maximum partition size to allocate on each disk drive having available storage space in order to define the RAID 5 volume. In one embodiment, the maximum partition size is the largest amount of available storage space common to all of the disk drives having available storage space. In step 112, the RAID manager application applies this maximum partition size to each disk drive having available storage space to define the RAID 5 volume. In step 114, the RAID manager application determines whether any disk drives have more available storage space. If there is available space on any disk drive, then the method returns to step 102.
If there is no more available storage space, then the method terminates.

If, in step 102, the RAID manager application determines that there are not more than two disk drives with available storage space, then, in step 106, the RAID manager application determines whether there are two disk drives with available storage space. If two disk drives have available storage space, then, in step 108, a RAID 1 configuration is selected for the RAID volume. As previously described, in step 110, the RAID manager application determines the maximum partition size for the RAID 1 volume. Again, in one embodiment, the maximum partition size is the largest amount of available storage space common to both disk drives having available storage space. In step 112, the RAID manager application applies this maximum partition size to each of the two disk drives to define the RAID 1 volume. The method then proceeds to step 114, as previously described herein.

If, in step 106, the RAID manager application determines that there is only one disk drive with available storage space, then, in step 116, the available storage space on that disk drive is made accessible to the user as unprotected storage space. There is no redundancy associated with this storage region. The unprotected storage space is made visible and accessible to the operating system, and, thus, the user. At the completion of this step, the method terminates.

FIG. 2A is a conceptual illustration of how a disk drive array 200 may be configured to implement a RAID 1 volume 206, according to one embodiment of the present invention. The disk drive array 200 consists of two different sized disk drives 202 and 204. For the purposes of this exemplary discussion, disk drive 202 has a 100 GB capacity and disk drive 204 has a 150 GB capacity; however, the method of FIG. 1 may be applied, without limitation, to disk drives 202 and 204 of any size. The RAID manager application examines the disk drive array 200 and determines that there are only two disk drives 202 and 204 with available storage space. As set forth in steps 102, 106, and 108 of FIG. 1, the RAID manager application therefore selects to configure the disk drive array 200 with a RAID 1 volume. The maximum partition size common to both disk drives 202 and 204 is 100 GB; therefore, as set forth in steps 110 and 112 of FIG. 1, the RAID 1 volume 206 defined on the disk drive array 200 consists of two 100 GB partitions 208 and 210. The RAID manager application then examines the disk drive array 200 again, as set forth in step 114 of FIG. 1, and determines that partition 212 of disk drive 204 is not included in the RAID 1 volume 206. Partition 212 therefore constitutes available storage space. Since the RAID manager application sees that this available space resides on only one disk drive 204, the RAID manager application makes partition 212 visible and accessible as unprotected space to the user, as set forth in steps 102, 106, and 116 of FIG. 1. Because there is then no more available storage space in the disk drive array 200 after these steps, the RAID manager application terminates the adaptive RAID configuration process.

The storage capacity of the configured disk drive array 200 is the sum of the effective storage capacities of the RAID 1 volume 206 and the unprotected space 212. Since partition 210 is an exact copy of partition 208, the effective storage capacity of the RAID 1 volume 206 is the size of one partition (i.e., 100 GB). In addition, the effective storage capacity of the unprotected space is 50 GB.
Therefore, the complete storage capacity of the configured disk drive array 200 is 150 GB.

FIG. 2B is a conceptual illustration of how a disk drive array 250 may be configured to implement a RAID 5 volume 258 and a RAID 1 volume 260, according to one embodiment of the present invention. For the purposes of this exemplary discussion, the disk drive array 250 consists of three different sized disk drives 252, 254, and 256 having respective sizes of 100 GB, 150 GB, and 190 GB. Again, however, the method of FIG. 1 may be applied, without limitation, to disk drives 252, 254, and 256 of any size. The RAID manager application examines the disk drive array 250 and determines that there are three disk drives 252, 254, and 256 with available storage space. As set forth in steps 102 and 104 of FIG. 1, the RAID manager application therefore selects to configure the disk drive array 250 with a RAID 5 volume. The maximum partition size common to all three disk drives 252, 254, and 256 is 100 GB. Therefore, as set forth in steps 110 and 112 of FIG. 1, the RAID 5 volume 258 defined on the disk drive array 250 consists of three 100 GB partitions 262, 268, and 272. After defining the RAID 5 volume 258, the RAID manager application again examines the disk drive array 250, as set forth in step 114 of FIG. 1, and determines that there is available storage space for another RAID volume on disk drives 254 and 256. The RAID manager application therefore selects to configure disk drives 254 and 256 with a RAID 1 volume, as set forth in steps 102, 106, and 108 of FIG. 1. The maximum partition size common to both disk drives 254 and 256 is 50 GB; therefore, the RAID 1 volume 260 defined on disk drives 254 and 256 consists of two 50 GB partitions 264 and 270. The RAID manager application again examines the disk drive array 250, as set forth in step 114 of FIG. 1, and determines that there is a single 40 GB partition 266 on disk drive 256 that is not included in any RAID volume. The RAID manager application therefore makes partition 266 visible and accessible as unprotected space to the user, as set forth in steps 102, 106, and 116 of FIG. 1.

The storage capacity of the configured disk drive array 250 is the sum of the effective storage capacities of the RAID 5 volume 258, the RAID 1 volume 260, and the unprotected space 266. The RAID 5 volume 258 comprises three partitions, each 100 GB in size. As a general rule, the effective storage capacity of a RAID 5 volume is determined by the formula C=(N-1)X, where C is the storage capacity, N is the number of partitions, and X is the size of each of those partitions. Applying this formula to the RAID 5 volume 258 shows its effective storage capacity to be 200 GB. The RAID 1 volume 260 has an effective storage capacity of 50 GB, as determined by the size of the partitions 264 and 270, and the effective storage capacity of the unprotected space 266 is 40 GB. The complete storage capacity of the configured disk drive array 250 is therefore 290 GB.

As previously discussed herein, the method of FIG. 1 is not limited to the disk drive arrays shown in FIGS. 2A and 2B. The method may be used in conjunction with any disk drive array comprising any number of disk drives. Thus, the combination of RAID 5 and RAID 1 volumes and unprotected space described above in conjunction with FIGS. 2A and 2B in no way limits the scope of the present invention.

The method of FIG. 1 may be applied to initially configure any disk drive array to create a RAID array, as described in FIGS. 2A and 2B.
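The allocation loop of FIG. 1 can be restated compactly in code. The following Python sketch is illustrative only and is not part of the patented embodiment: the function name configure, the drive labels, and the dict-based representation of unallocated capacity (in GB) are assumptions introduced here for exposition.

# Minimal sketch of the adaptive configuration method of FIG. 1.
# The function name and data layout are hypothetical; sizes are in GB.
def configure(free):
    """free: dict mapping a drive label to its unallocated space in GB."""
    volumes = []
    while True:
        avail = {d: gb for d, gb in free.items() if gb > 0}
        if not avail:
            break                                 # step 114: no space remains
        part = min(avail.values())                # step 110: largest common partition
        if len(avail) > 2:                        # steps 102/104
            kind, capacity = "RAID 5", (len(avail) - 1) * part   # C=(N-1)X
        elif len(avail) == 2:                     # steps 106/108
            kind, capacity = "RAID 1", part       # mirrored pair
        else:                                     # step 116
            kind, capacity = "unprotected", part
        volumes.append((kind, sorted(avail), part, capacity))
        for d in avail:
            free[d] -= part                       # step 112: apply the partition
        if kind == "unprotected":                 # single drive: method terminates
            break
    return volumes

# The FIG. 2B drive sizes yield a 3 x 100 GB RAID 5 volume (200 GB),
# a 2 x 50 GB RAID 1 volume (50 GB), and 40 GB of unprotected space.
print(configure({"252": 100, "254": 150, "256": 190}))

Run on the FIG. 2B drive sizes, the sketch reproduces the 290 GB total storage capacity computed above.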
In addition, if a disk drive array is already configured with a RAID array, the method may be used, in part, to reconfigure the existing RAID array such that the storage space available on the various disk drives in the disk drive array may be used in the most efficient manner. FIGS. 3A and 3B set forth a method of reconfiguring an existing RAID array that is especially useful if an existing RAID array is upgraded by adding a disk drive to the existing disk drive array or modified by replacing one or more disk drives in the existing disk drive array.

FIGS. 3A and 3B present a flowchart of method steps for adaptively reconfiguring a RAID array, according to one embodiment of the present invention. Persons skilled in the art will understand that any system configured to perform the method steps in any order is within the scope of the invention.

As shown in FIGS. 3A and 3B, the method for adaptively reconfiguring an existing RAID array begins in step 302, wherein the RAID manager application selects an existing RAID volume or partition from the RAID array. A partition may include any partition that is not a member of an existing RAID volume, unprotected storage space, or unused storage space. The RAID manager application determines the partition size of the selected RAID volume or partition in step 304.

In step 306, the RAID manager application determines whether there are more than two disk drives available in the current disk drive array that can support the partition size of the selected RAID volume or partition. The current disk drive array may be the array of disk drives on which the existing RAID array is configured or the array of upgraded or modified disk drives (in the case where a disk drive is added to an existing disk drive array or where one or more disk drives in the existing disk drive array are replaced). In one embodiment, the number of available disk drives that can support the partition size of the selected RAID volume or partition is the number of disk drives in the selected RAID volume or partition plus the number of disk drives in the current disk drive array that are not members of the selected RAID volume or partition but can nonetheless support the partition size of the selected RAID volume or partition.

If more than two disk drives are available, then, in step 308, the RAID manager application selects a RAID 5 configuration for the updated RAID volume. In step 310, the RAID manager application updates the RAID volume on the current disk drive array. The updated RAID volume includes the existing partition(s) of the selected RAID volume or partition plus an identically sized partition on each disk drive in the current disk drive array that is not a member of the selected RAID volume or partition but can nonetheless support the partition size of the selected RAID volume or partition. In this fashion, the RAID manager application reuses the existing partition(s) of the selected RAID volume or partition to construct the updated RAID array.

In step 312, the data from the selected RAID volume or the selected partition is converted to the updated RAID volume. The conversion of step 312 is well known in the art. In one embodiment, the RAID manager application determines the optimal stripe size for the updated RAID volume by evaluating the selected RAID volume or partition. Data is read from the selected volume or partition and is written temporarily to memory. The RAID manager application then reads the data from memory and writes the data to the updated RAID volume.
As is well known, care should be taken to read data well ahead on the existing RAID volume so that, as data is written to the updated RAID volume, the RAID manager application does not overwrite data on the existing RAID volume. Any selected RAID volume or partition may be converted. For example, a selected RAID 3 or RAID 5 volume may be converted into an updated RAID 5 volume when one or more partitions are added to the selected RAID 3 or RAID 5 volume. Similarly, a selected RAID 1 volume may be converted into an updated RAID 5 volume by adding one or more partitions, and existing unprotected space may be converted into a RAID 1 volume by adding one partition.

After converting the RAID volumes, the RAID manager application, in step 314, determines whether all of the RAID volumes or partitions of the existing RAID array have been selected. If there are any existing RAID volumes or partitions left to select, then the method returns to step 302. If, however, all of the existing RAID volumes or partitions have been selected, then the method proceeds to step 322. In step 322, the RAID manager application determines whether there is any available storage space on any of the disk drives of the current disk drive array. If there is no available storage space, then the method terminates. However, if there is available storage space, then the method proceeds to step 102 of FIG. 1.

If, in step 306, the RAID manager application determines that there are not more than two disk drives in the current disk drive array that can support the partition size of the selected RAID volume or partition, then, in step 316, the RAID manager application determines whether there are two disk drives that can support the partition size of the selected RAID volume or partition. If the RAID manager application determines that there are two such disk drives, then, in step 318, a RAID 1 configuration is selected for the updated RAID volume. As previously described, in step 310, the RAID manager application defines the updated RAID volume on the disk drives of the current disk drive array, and, in step 312, the RAID manager application converts the data from the selected RAID volume or partition to the updated RAID 1 volume. The method then proceeds to step 314, as previously described herein.

If, in step 316, the RAID manager application determines that there is only one disk drive in the current disk drive array that can support the partition size of the selected RAID volume or partition, then the selected RAID volume or partition is a partition, and the method proceeds to step 320. In step 320, the RAID manager application creates unprotected storage space by defining a partition on the one disk drive having the same size as the selected partition. The partition of unprotected space is made visible and accessible to the user. The method then proceeds to step 314, as previously described herein.

The method of FIGS. 3A and 3B may be applied to any existing RAID volume, whether or not that RAID volume is upgraded or modified. For example, suppose an existing RAID 1 volume is comprised of two disk drives of different sizes such that one of the disk drives contains a certain amount of unused storage space outside of the RAID 1 volume. The method of FIGS. 3A and 3B may be applied to this RAID 1 volume to convert that unused storage space to unprotected storage space that is accessible and visible to the user.
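Before turning to a worked upgrade example, the per-volume selection logic of steps 302 through 322 can likewise be sketched in Python. The sketch below is a simplification introduced for exposition, not the patented implementation: it treats each drive as a pool of reusable capacity (glossing over the in-place reuse of existing partitions in step 310), stubs out the data conversion of step 312, and the function name reconfigure and the drive labels are hypothetical.

# Simplified sketch of the reconfiguration method of FIGS. 3A and 3B.
def reconfigure(existing, drives):
    """existing: list of (name, partition_gb) pairs, one per RAID volume or
    partition of the existing array (step 302); drives: dict mapping a drive
    label to the GB it can devote to a partition in the current array."""
    for name, part in existing:                              # steps 302/304
        fits = [d for d, gb in drives.items() if gb >= part]     # step 306
        if len(fits) > 2:
            target = "RAID 5"                                # step 308
        elif len(fits) == 2:
            target = "RAID 1"                                # steps 316/318
        else:
            target = "unprotected"                           # step 320
        print(f"{name}: {part} GB partition on {fits} -> {target}")
        for d in fits:
            drives[d] -= part                                # step 310
        # step 312: data from `name` would be converted to the updated volume
    return drives  # step 322: remaining space is handled by the FIG. 1 method

# Upgrading the FIG. 2A array (100 GB and 150 GB drives) with a 190 GB drive:
left = reconfigure([("RAID 1 volume 206", 100), ("partition 212", 50)],
                   {"202": 100, "204": 150, "new": 190})
print(left)  # {'202': 0, '204': 0, 'new': 40}: 40 GB becomes unprotected space

The printed result matches the walkthrough that follows: a three-drive RAID 5 volume, a two-drive RAID 1 volume, and 40 GB of unprotected space left on the 190 GB drive.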
The method of FIGS. 3A and 3B also may be used to reconfigure an existing RAID array that is upgraded by adding a disk drive to the existing disk drive array. For example, suppose the RAID array comprising the RAID 1 volume 206 and the unprotected space 212, as shown in FIG. 2A, is upgraded by adding a third disk drive to the existing disk drive array 200. Suppose further that this third disk drive has a storage capacity of 190 GB.

To reconfigure the RAID array of FIG. 2A, the RAID manager application first examines the RAID array of FIG. 2A and selects an existing RAID volume or partition in that RAID array, as set forth in step 302 of FIGS. 3A and 3B. By way of example only, assume that the RAID manager application selects the RAID 1 volume 206 comprised of partitions 208 and 210. The RAID manager application determines that the partition size of the RAID 1 volume 206 is 100 GB and then inspects the current disk drive array, consisting of disk drives 202 and 204 and the 190 GB disk drive, and determines that all three disk drives in the current disk drive array can support the 100 GB partition size, as set forth in steps 304 and 306 of FIGS. 3A and 3B. Since three disk drives can support the partition size, the RAID manager application selects a RAID 5 configuration for the updated RAID volume, as set forth in steps 306 and 308 of FIGS. 3A and 3B. The RAID manager application defines the updated RAID 5 volume by defining a 100 GB partition on each of disk drives 202 and 204 and the 190 GB disk drive, as set forth in step 310 of FIGS. 3A and 3B. The RAID manager application then converts the data from the RAID 1 volume 206 to the updated RAID 5 volume, using any migration technique known in the art, as set forth in step 312 of FIGS. 3A and 3B.

The RAID manager application then re-examines the RAID array of FIG. 2A to determine whether all existing RAID volumes or partitions in the RAID array have been selected, as set forth in step 314 of FIGS. 3A and 3B. Since partition 212 has not yet been selected, the RAID manager application selects partition 212, as set forth in step 302 of FIGS. 3A and 3B. The RAID manager application determines that the partition size of partition 212 is 50 GB, as set forth in step 304 of FIGS. 3A and 3B. Since, after defining the updated RAID 5 volume, only two disk drives of the current disk drive array can support the 50 GB partition size (disk drive 204 and the 190 GB disk drive), the RAID manager application selects a RAID 1 configuration for the updated RAID volume, as set forth in steps 306, 316, and 318 of FIGS. 3A and 3B. The RAID manager application defines the updated RAID 1 volume by defining a 50 GB partition on each of disk drive 204 and the 190 GB disk drive, as set forth in step 310 of FIGS. 3A and 3B. The RAID manager application then converts the data from partition 212 to the updated RAID 1 volume, as set forth in step 312 of FIGS. 3A and 3B.

The RAID manager application then re-examines the RAID array of FIG. 2A to determine whether all existing RAID volumes or partitions in the RAID array have been selected, as set forth in step 314 of FIGS. 3A and 3B. Since both the RAID 1 volume 206 and partition 212 have already been selected, the RAID manager application examines the current disk drive array to determine whether there are any disk drives with available storage space, as set forth in step 322 of FIGS. 3A and 3B.
Since the 190 GB disk drive has 40 GB of available storage space, the RAID manager application creates a partition of 40 GB of unprotected storage space on the 190 GB disk drive that is visible and accessible to the user, as set forth in steps 102, 106, and 116 of FIG. 1. After creating the unprotected storage space, there is no more available storage space on any of the disk drives in the current disk drive array, so the RAID manager application terminates the method.

The resulting RAID array is the same as the RAID array shown in FIG. 2B, where the updated RAID 5 volume is depicted as the RAID 5 volume 258, the updated RAID 1 volume is depicted as the RAID 1 volume 260, and the new unprotected storage space is depicted as the unprotected space 266. The storage capacity of the reconfigured RAID array is the sum of the updated RAID 5 volume (200 GB), the updated RAID 1 volume (50 GB), and the unprotected storage space (40 GB), which is equal to 290 GB of storage space.

FIG. 4 is a conceptual illustration of a computer system 400 configured to implement one or more aspects of the present invention. Computer system 400 may be a desktop computer, server, laptop computer, palm-sized computer, personal digital assistant, tablet computer, game console, cellular telephone, computer-based simulator, or any other type of similar computing device. As shown, computer system 400 may include, without limitation, a host processor 410, main memory 405, a chipset 415, an external disk drive controller 430, a disk drive infrastructure device 465, a bus 460, and disk drives 435, 440, 445, 450, 455, and 460.

Computer system 400 uses main memory 405 to store programs, such as the RAID manager application described above in conjunction with FIGS. 1-3, and data used by the host processor 410. The host processor 410 runs the RAID manager application and uses the chipset 415 to control and access the disk drives 435, 440, 445, and 450. The external disk drive controller 430 may be connected to the chipset 415 by the bus 460. The RAID manager application may control disk drive 455 through the external disk drive controller 430, and disk drive 460 through the external disk drive controller 430 and the disk drive infrastructure device 465. Those skilled in the art will recognize that any number of disk drives may be coupled to computer system 400 using the disk drive infrastructure device 465.

In one embodiment of the present invention, the RAID manager application is a stand-alone application. In an alternative embodiment, the RAID manager application is part of the operating system. Persons skilled in the art will understand that the functionality of the RAID manager application may be carried out by software, firmware, hardware, or any combination thereof.

While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. |
Some embodiments include a capacitor. The capacitor has a first electrode with a lower pillar portion, and with an upper container portion over the lower pillar portion. The lower pillar portion has an outer surface. The upper container portion has an inner surface and an outer surface. Dielectric material lines the inner and outer surfaces of the upper container portion, and lines the outer surface of the lower pillar portion. A second electrode extends along the inner and outer surfaces of the upper container portion, and along the outer surface of the lower pillar portion. The second electrode is spaced from the first electrode by the dielectric material. Some embodiments include assemblies (e.g., memory arrays) which have capacitors. Some embodiments include methods of forming capacitors. |
CLAIMS

I/we claim:

1. A capacitor, comprising:a lower pillar portion; the lower pillar portion having an outer surface; an upper container portion over the lower pillar portion; the upper container portion comprising an inner surface and an outer surface; a first electrode of the capacitor comprising the lower pillar portion and the upper container portion;dielectric material lining the inner and outer surfaces of the upper container portion, and lining the outer surface of the lower pillar portion; anda second electrode of the capacitor extending along the inner and outer surfaces of the upper container portion, and along the outer surface of the lower pillar portion; the second electrode being spaced from the first electrode by the dielectric material.2. The capacitor of claim 1, where the lower pillar portion has a first height, the upper container portion has a second height, and the first and second heights together equal a total height of the first electrode; and where the first height is at least about one-half of the total height.3. The capacitor of claim 1, where the lower pillar portion has a first height, the upper container portion has a second height, and the first and second heights together equal a total height of the first electrode; and where the first height is less than about one-half of the total height.4. The capacitor of claim 1, where the lower pillar portion has a first height, the upper container portion has a second height, and the first and second heights together equal a total height of the first electrode; and where the first height is within a range of from about 10% of the total height to about 75% of the total height.5. A capacitor, comprising:a lower pillar portion; the lower pillar portion including a lower portion of a conductive liner, and including a conductive fill material laterally surrounded by the lower portion of the conductive liner; the lower pillar portion having an outer surface along an outer edge of the lower portion of the conductive liner;an upper container portion over the lower pillar portion; the upper container portion comprising an upwardly-opening conductive container; the upwardly-opening conductive container comprising a sidewall corresponding to an upper portion of the conductive liner, and comprising a bottom corresponding to an upper surface of the conductive fill material; the upper container portion having an inner surface along an inner edge of the sidewall and along the upper surface of the conductive fill material, and having an outer surface along an outer edge of the sidewall; a first electrode of the capacitor comprising the lower pillar portion and the upper container portion;dielectric material lining the inner and outer surfaces of the upper container portion, and lining the outer surface of the lower pillar portion; anda second electrode of the capacitor extending along the inner and outer surfaces of the upper container portion, and along the outer surface of the lower pillar portion; the second electrode being spaced from the first electrode by the dielectric material. 6. The capacitor of claim 5, where the conductive liner comprises metal and the conductive fill material comprises doped semiconductor material.7. The capacitor of claim 6, where the conductive liner comprises titanium and the conductive fill material comprises doped silicon.8. 
The capacitor of claim 7, where the conductive liner comprises TiN; where the chemical formula indicates primary constituents rather than indicating a specific stoichiometry. 9. The capacitor of claim 5, comprising a step along the conductive liner proximate where the upper portion of the conductive liner joins to the lower portion of the conductive liner, the step including a region of the upper portion of the conductive liner which is laterally inset relative to an inner edge of the lower portion of the conductive liner. 10. The capacitor of claim 9, comprising an insulative lattice directly against an outer edge of the step. 11. An assembly, comprising: a pair of neighboring capacitors; one of the neighboring capacitors being a first capacitor and the other of the neighboring capacitors being a second capacitor; the first capacitor comprising a first bottom electrode which includes a first lower pillar portion under a first upper container portion; the first lower pillar portion comprising a first pillar outer surface; the first upper container portion comprising a first container inner surface and a first container outer surface; the second capacitor comprising a second bottom electrode which includes a second lower pillar portion under a second upper container portion; the second lower pillar portion comprising a second pillar outer surface; the second upper container portion comprising a second container inner surface and a second container outer surface; dielectric material along the first pillar outer surface, the first container inner surface, the first container outer surface, the second pillar outer surface, the second container inner surface and the second container outer surface; a common upper electrode extending along the first and second bottom electrodes, and being spaced from the first and second bottom electrodes by the dielectric material; a recess extending downwardly into the first and second upper container portions and partially overlapping each of the first and second upper container portions; regions of each of the first and second upper container portions being recessed, and other regions of each of the first and second upper container portions being non-recessed; and a conductive interconnect electrically coupled with the common upper electrode and being over the recessed regions of the first and second upper container portions. 12. The assembly of claim 11, comprising an insulative lattice directly against the non-recessed regions of the first and second upper container portions. 13. The assembly of claim 11, where the conductive interconnect includes conductively-doped semiconductor material directly against the common upper electrode, and includes metal-containing material directly against the conductively-doped semiconductor material. 14. The assembly of claim 13, where the conductively-doped semiconductor material comprises conductively-doped silicon, and where the metal-containing material comprises tungsten. 15. The assembly of claim 11, comprising a first step proximate where the first lower pillar portion joins to the first upper container portion, and comprising a second step proximate where the second lower pillar portion joins to the second upper container portion; the first step comprising a region of the first container outer surface which is inset relative to a region of the first pillar outer surface; the second step comprising a region of the second container outer surface which is inset relative to a region of the second pillar outer surface. 16. 
The assembly of claim 15, comprising an insulative lattice directly against the first and second steps. 17. A method of forming an assembly, comprising: forming a stack comprising a first sacrificial material over a second sacrificial material, and comprising a lattice layer between the first and second sacrificial materials; forming a pair of neighboring first openings extending through the first and second sacrificial materials, and through the lattice layer; forming conductive liners along inner surfaces of the neighboring first openings to narrow the neighboring first openings; the conductive liner within one of the neighboring first openings being a first conductive liner, and the conductive liner within the other of the neighboring first openings being a second conductive liner; forming conductive fill material within the narrowed neighboring first openings to fill the narrowed neighboring first openings; the conductive fill material within said one of the neighboring first openings, together with the first conductive liner, forming a first conductive structure; and the conductive fill material within said other of the neighboring first openings, together with the second conductive liner, forming a second conductive structure; forming a second opening to partially overlap the first and second conductive structures; the second opening extending to the conductive fill material of the first and second conductive structures, and extending to the first sacrificial material; removing the first sacrificial material to expose a region of the lattice layer between the first and second conductive structures; removing the exposed region of the lattice layer, and then removing the second sacrificial material; removing a portion of the conductive fill material from each of the first and second conductive structures to form the first and second conductive structures into first and second bottom electrodes, respectively; the first bottom electrode having a first lower pillar region comprising a first remaining portion of the conductive fill material laterally surrounded by a lower region of the first conductive liner, and having a first upper container region over the first lower pillar region and comprising an upper region of the first conductive liner; the second bottom electrode having a second lower pillar region comprising a second remaining portion of the conductive fill material laterally surrounded by a lower region of the second conductive liner, and having a second upper container region over the second lower pillar region and comprising an upper region of the second conductive liner; forming dielectric material along the first and second bottom electrodes; and forming a common upper electrode along the dielectric material and spaced from the first and second bottom electrodes by the dielectric material. 18. The method of claim 17, where the forming of the neighboring first openings creates a first inset region extending to under the lattice layer within said one of the neighboring first openings, and creates a second inset region extending to under the lattice layer within said other of the neighboring first openings; where the first conductive liner has a first step extending along the first inset region; and where the second conductive liner has a second step extending along the second inset region. 19. The method of claim 17, where the lattice layer comprises silicon nitride. 20. 
The method of claim 17, where the first sacrificial material comprises silicon dioxide and the second sacrificial material comprises borophosphosilicate glass. 21. The method of claim 17, where the first sacrificial material comprises amorphous silicon and the second sacrificial material comprises borophosphosilicate glass. 22. The method of claim 17, comprising forming a conductive interconnect electrically coupled with the common upper electrode. 23. The method of claim 22, where the conductive interconnect includes conductively-doped semiconductor material directly against the common upper electrode, and includes metal-containing material directly against the conductively-doped semiconductor material. 24. The method of claim 23, where the conductively-doped semiconductor material comprises conductively-doped silicon, and where the metal-containing material comprises tungsten. |
DESCRIPTION
CAPACITORS, INTEGRATED ASSEMBLIES INCLUDING CAPACITORS, AND METHODS OF FORMING INTEGRATED ASSEMBLIES
TECHNICAL FIELD
Capacitors, integrated assemblies including capacitors, and methods of forming integrated assemblies having capacitors.
BACKGROUND
Memory is one type of integrated circuitry, and is used in electronic systems for storing data. Integrated memory is usually fabricated in one or more arrays of individual memory cells. The memory cells are configured to retain or store memory in at least two different selectable states. In a binary system, the states are considered as either a “0” or a “1”. In other systems, at least some individual memory cells may be configured to store more than two levels or states of information. An example memory is dynamic random access memory (DRAM). The DRAM unit cells may each comprise a capacitor in combination with a transistor. Charge stored on the capacitors of the DRAM unit cells may correspond to memory bits. It would be desirable to develop improved capacitors suitable for utilization in DRAM and/or other integrated circuitry.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagrammatic cross-sectional view of an example capacitor configuration.
FIGS. 1A and 1B are sectional views along the lines A-A and B-B of FIG. 1, respectively.
FIG. 2 is a diagrammatic cross-sectional view of an example assembly having a neighboring pair of example capacitor configurations.
FIGS. 3-15 are diagrammatic cross-sectional views of an example construction at example process stages of an example method for forming the example assembly of FIG. 2. The construction of FIG. 15 is identical to the assembly of FIG. 2.
FIG. 14A is a sectional view along the line A-A of FIG. 14.
FIG. 16 is a schematic diagram of a region of an example memory array.
DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS
Some embodiments include capacitors in which a first electrode (a storage node) includes an upper container portion over a lower pillar portion. A dielectric material is along inner and outer sidewalls of the container portion, and along an outer sidewall of the pillar portion. A second electrode (a plate electrode) is also along the inner and outer sidewalls of the container portion, and along the outer sidewall of the pillar portion; and is spaced from the first electrode by the dielectric material. Some embodiments include recognition that container-type capacitors may beneficially provide higher capacitance than pillar-type capacitors of analogous dimensions due to increased surface area along storage nodes of container-type capacitors relative to storage nodes of pillar-type capacitors. It is also recognized that pillar-type capacitors may beneficially be more structurally stable than container-type capacitors due to the rigidity provided by the pillar-shaped storage nodes. Further, it is recognized that there may be a wider spread of capacitances across an array of container-type capacitors as compared to pillar-type capacitors due to difficulties associated with the fabrication of container-type capacitors. Some embodiments include new capacitor configurations having storage nodes which combine container-type structures with pillar-type structures. Such may enable benefits associated with container-type configurations to be achieved together with benefits associated with pillar-type configurations, while reducing (or even eliminating) disadvantages associated with either or both of container-type configurations and pillar-type configurations.
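As an illustrative aside (an editorial addition, not part of the original disclosure), the surface-area advantage described above can be seen from the first-order parallel-plate relation, in which capacitance scales with the storage-node area A facing the plate electrode for a given dielectric thickness d and dielectric constant k:

\[
C \approx \frac{k\,\varepsilon_0\,A}{d}, \qquad
A \approx A_{\text{container, inner}} + A_{\text{container, outer}} + A_{\text{pillar, outer}}
\]

The decomposition of A into inner-container, outer-container, and outer-pillar contributions is an assumption made for illustration; it follows the electrode surfaces lined by the dielectric material in the embodiments described below.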
In some embodiments, lower portions of capacitor storage nodes have pillar-type configurations, and upper portions of the capacitor storage nodes have container-type configurations. Example embodiments are described with reference to FIGS. 1-16. Referring to FIGS. 1, 1A and 1B, a region of an example assembly 10 is illustrated, with such region comprising an example capacitor 12. The capacitor 12 includes a first electrode 14, a second electrode 16, and dielectric material 18 between the first and second electrodes. The first electrode 14 includes a lower pillar portion 20, and an upper container portion 22 over the lower pillar portion. A conductive liner 24 has a lower portion within the lower pillar portion 20 of the first electrode 14, and has an upper portion within the upper container portion 22 of the first electrode 14. The lower pillar portion 20 also includes a conductive fill material 26 laterally surrounded by the conductive liner 24. The conductive liner 24 may comprise any suitable electrically conductive composition(s), such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some example embodiments, the conductive liner 24 may comprise, consist essentially of, or consist of TiN (titanium nitride); where the chemical formula indicates primary constituents rather than a specific stoichiometry. The conductive fill material 26 may comprise any suitable electrically conductive composition(s), such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some example embodiments, the conductive fill material 26 comprises, consists essentially of, or consists of doped silicon (e.g., conductively-doped polycrystalline silicon; such as, for example, n-type doped polycrystalline silicon). The lower pillar portion 20 of the first electrode 14 has an outer edge 13, and an outer surface 15 along such outer edge. The dielectric material 18 may be considered to be configured as a liner which extends along the outer surface 15. The upper container portion 22 of the first electrode 14 includes an upwardly-opening container 28. The container 28 includes a sidewall 30 corresponding to the upper portion of the conductive liner 24, and includes a bottom 32 corresponding to an upper surface of the conductive fill material 26. The upper container portion 22 of the first electrode 14 has an outer surface 29 along an outer edge 27 of the sidewall 30; and has an inner surface 33 which extends along an inner edge 31 of the sidewall 30, as well as along the upper surface 32 of the conductive fill material 26. The dielectric material 18 lines the inner and outer surfaces 33 and 29 of the upper container portion 22. The dielectric material 18 may comprise any suitable insulative composition(s); and in some embodiments may comprise one or more of silicon dioxide, silicon nitride, zirconium oxide, aluminum oxide, etc.
In some example embodiments, the dielectric material 18 may comprise one or more high-k materials; where the term high-k means a dielectric constant greater than that of silicon dioxide. For instance, in some example embodiments the dielectric material 18 may comprise, consist essentially of, or consist of zirconium oxide. The second electrode 16 of the capacitor 12 comprises a conductive material 32. The conductive material 32 may comprise any suitable electrically conductive composition(s), such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some example embodiments, the conductive material 32 may comprise, consist essentially of, or consist of titanium nitride. The second electrode 16 of the capacitor 12 is along the outer surface 15 of the lower pillar portion 20 of the first electrode 14, and is also along the inner and outer surfaces 33 and 29 of the upper container portion 22 of the first electrode 14. The second electrode 16 is spaced from the first electrode 14 by the dielectric material 18. In the illustrated embodiment, an opening 34 remains within a center of the upwardly-opening container 28. In other embodiments, the opening 34 may be filled with material. The material filling the opening 34 may be conductive material (e.g., in some embodiments, conductively-doped silicon may be provided to fill the opening 34), or may be insulative material. The capacitor 12 is supported by a base 36. The base 36 may comprise any suitable material(s), and in some embodiments comprises a conductive pillar 38 which is electrically coupled with the first electrode 14. The conductive pillar may also be coupled with a first source/drain region of a transistor 40 (with the transistor 40 being schematically shown in FIG. 1). The transistor 40 may comprise a gate electrically coupled with a wordline WL, and may comprise a second source/drain region electrically coupled with a digit line DL. The capacitor 12 may be one of a large number of substantially identical capacitors utilized within a memory array (e.g., a memory array analogous to the DRAM array discussed below with reference to FIG. 16); where the term “substantially identical” means identical to within reasonable tolerances of fabrication and measurement. The conductive pillar 38 may extend through an insulative material 41. Such insulative material may comprise any suitable composition or combination of compositions; and in some embodiments may comprise silicon dioxide. The base 36 may be part of a semiconductor substrate, and specifically may be supported by an underlying semiconductor material. Such semiconductor material may, for example, comprise, consist essentially of, or consist of monocrystalline silicon. The term "semiconductor substrate" means any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductor substrates described above.
The source/drain regions of the transistor 40 may extend into the semiconductor material of the semiconductor substrate in some embodiments. Support structures 42, 44 and 46 provide lateral support to the capacitor 12. The support structures 42, 44 and 46 may be together considered to form an insulative lattice which supports the capacitor 12. In the illustrated embodiment, the first support structure 42 is along a bottom of the capacitor 12, the second support structure 44 is approximately centered relative to the capacitor 12, and the third support structure 46 is along a top of the capacitor 12. In other embodiments, the support structures may be provided at other locations along the capacitor 12. Also, although three support structures 42, 44 and 46 are illustrated, in other embodiments there may be more than three support structures, or fewer than three support structures. The support structures 42, 44 and 46 may comprise any suitable composition(s). In some embodiments, all of the support structures 42, 44 and 46 may be a same composition as one another, and in other embodiments at least one of the support structures may be a different composition relative to one or more others of the support structures. In some embodiments, all of the support structures may comprise, consist essentially of, or consist of silicon nitride. The support structures 42, 44 and 46 may have any suitable vertical thicknesses, and may be the same vertical thicknesses as one another or may be of different vertical thicknesses relative to one another. In the illustrated embodiment, the upper support structure 46 is shown to have a larger vertical thickness than the other support structures 42 and 44. In some embodiments, the upper support structure 46 may be formed to be thicker than the other support structures in that it provides stability to the upper part of the container portion 22, whereas the other support structures 42 and 44 are providing support along the bases of the container portion 22 and the pillar portion 20; and the upper part of the container portion 22 may be more structurally unstable than bases of the container portion 22 and the pillar portion 20. The upper container portion 22 of the first electrode 14 may join to the lower pillar portion 20 along any suitable interface. In the illustrated embodiment, a step 48 is along the conductive liner 24 in a location proximate to where the upper container portion 22 joins with the lower pillar portion 20. The liner 24 may be considered to comprise an upper portion within the container portion 22, and to comprise a lower portion within the pillar portion 20; and accordingly, the step 48 may be considered to be proximate the location where the upper portion of the conductive liner 24 joins to the lower portion of the conductive liner. In some embodiments, the step 48 may be considered to include a region 51 of the upper portion of the conductive liner 24 which is laterally inset relative to an inner edge 49 of the lower portion of the conductive liner.
Alternatively, the step 48 may be considered to comprise a region of the outer edge 27 of the upper portion of the conductive liner 24 which is laterally inward of the outer edge 13 of the lower portion of the conductive liner. In the illustrated embodiment, the support structure 44 (i.e., a portion of the supporting insulative lattice) is directly against an outer edge of the step 48. The lower pillar portion 20 may be considered to have a height H1, and the upper container portion 22 may be considered to have a height H2. The capacitor 12 may be considered to have a total height H which is a sum of the heights H1 and H2. The relative height of the lower pillar portion is preferably sufficient to provide adequate support to the capacitor 12, and yet small enough to enable a substantial amount of the capacitance within the capacitor 12 to be provided by the upper container portion 22. In some embodiments, the first height H1 of the pillar portion 20 will be at least about one-half of the total height H of the capacitor 12, and in some embodiments will be less than about one-half of the total height H. In some embodiments, the first height H1 of the pillar portion 20 will be within a range of from about 10% of the total height H to about 75% of such total height; within a range of from about 10% of the total height H to about 60% of such total height; within a range of from about 10% of the total height H to about 50% of such total height; within a range of from about 25% of the total height H to about 50% of such total height, etc. The capacitor 12 of FIG. 1 is an example configuration having a container-type portion over a pillar-type portion. In other embodiments, other configurations may be utilized. For instance, FIG. 2 shows a region of an assembly 10a comprising a pair of neighboring capacitors 12a and 12b; with each of the capacitors 12a and 12b comprising a bottom electrode (14a, 14b) having a container-type upper portion (22a, 22b) over a pillar-type lower portion (20a, 20b). In some embodiments, the capacitor 12a may be referred to as a first capacitor, and the capacitor 12b may be referred to as a second capacitor. The first capacitor 12a has a first bottom electrode 14a. The first bottom electrode includes a first upper container portion 22a over a first lower pillar portion 20a. The first lower pillar portion comprises a first pillar outer surface 15a; and the first upper container portion comprises a first container inner surface 33a, and a first container outer surface 29a. The second capacitor 12b has a second bottom electrode 14b. The second bottom electrode includes a second upper container portion 22b over a second lower pillar portion 20b. The second lower pillar portion comprises a second pillar outer surface 15b; and the second upper container portion comprises a second container inner surface 33b, and a second container outer surface 29b. The dielectric material 18 is along the first pillar outer surface 15a, the first container inner surface 33a, the first container outer surface 29a, the second pillar outer surface 15b, the second container inner surface 33b, and the second container outer surface 29b. The first capacitor 12a has a first upper electrode 16a extending along the first bottom electrode 14a, and spaced from the first bottom electrode by the dielectric material 18. The second capacitor 12b has a second upper electrode 16b extending along the second bottom electrode 14b, and spaced from the second bottom electrode by the dielectric material 18.
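For a concrete sense of the height ranges recited above, the following minimal sketch (an editorial illustration with hypothetical helper names, not part of the disclosure) converts the "about 10% to about 75%" bounds into pillar heights for a given total electrode height:

```python
def pillar_height_bounds(total_height_nm: float,
                         low_frac: float = 0.10,
                         high_frac: float = 0.75) -> tuple[float, float]:
    """Return the (min, max) pillar heights H1 for a given total height H.

    The 10%-75% default fractions follow the ranges recited above; the
    function name and interface are illustrative assumptions only.
    """
    return (low_frac * total_height_nm, high_frac * total_height_nm)


# Example: for a 1000 nm tall bottom electrode, the pillar portion would
# fall between about 100 nm and about 750 nm under the 10%-75% range,
# leaving the remainder (H2 = H - H1) for the container portion.
lo, hi = pillar_height_bounds(1000.0)
print(f"H1 range: {lo:.0f} nm to {hi:.0f} nm")
```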
In the illustrated embodiment, the upper electrodes 16a and 16b are electrically coupled with one another, and may be considered to be regions of a common upper electrode 55 associated with both of the capacitors 12a and 12b. The common upper electrode 55 comprises the conductive material 32 described above with reference to FIG. 1, and further comprises additional conductive material 50. The additional conductive material 50 may comprise any suitable electrically conductive composition(s), such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some embodiments, the additional conductive material 50 may comprise, consist essentially of, or consist of conductively-doped silicon (e.g., n-type polycrystalline silicon). A recess 52 extends downwardly into the first and second upper container portions 22a and 22b, and partially overlaps each of such first and second upper container portions. Regions 54 of the upper container portions 22a/22b are recessed, and regions 56 of the upper container portions 22a/22b are not recessed (i.e., remain non-recessed). A conductive interconnect 58 is electrically coupled with the common upper electrode 55, and in the shown embodiment has a portion directly over the recessed regions 54 of the upper container portions 22a/22b. The conductive interconnect 58 may be utilized to couple the common upper electrode 55 to a suitable reference voltage (e.g., ground, VCC/2, etc.). The conductive interconnect 58 may comprise any suitable composition or combination of compositions. In the illustrated embodiment, the conductive interconnect 58 includes a first material 60 directly against the common upper electrode 55, and includes a second material 62 over the first material. The first and second materials 60 and 62 may comprise any suitable electrically conductive composition(s), such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some embodiments, the first material 60 may comprise, consist essentially of, or consist of conductively-doped semiconductor material (e.g., conductively-doped silicon); and the second material 62 may be a metal-containing material. In some example embodiments, the second material 62 may comprise, consist essentially of, or consist of tungsten. In some embodiments, the materials 60 and 50 may comprise a same composition as one another (for instance, both may be n-type doped polycrystalline silicon), and accordingly may merge with one another rather than being the discrete separate materials shown in FIG. 2. The support structures 42, 44 and 46 are provided adjacent the capacitors 12a and 12b, and may be considered to be configured as a supporting insulative lattice which provides structural support to the capacitors. In the shown embodiment, the upper support structure 46 of the insulative lattice is directly against upper portions of the non-recessed regions 56 of the first and second upper container portions 22a and 22b.
The first and second capacitors 12a and 12b comprise first and second steps 48a and 48b, respectively; with such steps being analogous to the step 48 described above with reference to FIG. 1. The support structure 44 of the insulative lattice is directly against the first and second steps 48a and 48b. The first and second capacitors 12a and 12b are shown to be supported by the base 36. Conductive pillars 38a and 38b are coupled with the bottom electrodes 14a and 14b of the first and second capacitors 12a and 12b. Transistors analogous to the transistor 40 of FIG. 1 may be coupled to the bottom electrodes 14a and 14b through the conductive pillars 38a and 38b (such transistors are not shown in FIG. 2). In some embodiments, the capacitors 12a and 12b may be considered to be representative of a large number of capacitors formed across a memory array (e.g., a memory array analogous to the DRAM array discussed below with reference to FIG. 16). The capacitors described above may be fabricated with any suitable processing. Example processing which may be utilized to form the neighboring capacitors 12a and 12b of FIG. 2 is described with reference to FIGS. 3-15. Referring to FIG. 3, a construction 64 comprises a stack 66 formed over the base 36. The stack 66 includes the support structures 42, 44 and 46; which may be referred to as lattice layers. The lattice layers 42, 44 and 46 may comprise any suitable composition or combination of compositions; and in some embodiments may comprise, consist essentially of, or consist of silicon nitride. The stack 66 comprises a first sacrificial material 68 over a second sacrificial material 70, and in the shown embodiment the first and second sacrificial materials are spaced from one another by the lattice layer 44. The first and second sacrificial materials 68 and 70 may comprise any suitable composition(s). In some embodiments, the first sacrificial material may comprise, consist essentially of, or consist of silicon dioxide or amorphous silicon; and the second sacrificial material may comprise, consist essentially of, or consist of borophosphosilicate glass. The various structures of the stack 66 may be formed utilizing any suitable processing; including, for example, atomic layer deposition (ALD), chemical vapor deposition (CVD), plasma enhanced chemical vapor deposition (PECVD), etc. The various structures of the stack 66 may be formed to any suitable vertical thicknesses. In some embodiments, the lattice layers 42 and 44 will be formed to vertical thicknesses within a range of from about 10 nanometers (nm) to about 50 nm (e.g., vertical thicknesses of about 20 nm), the lattice layer 46 will be formed to a vertical thickness within a range of from about 150 nm to about 300 nm (e.g., a vertical thickness of about 200 nm), the first sacrificial material 68 will be formed to a vertical thickness within a range of from about 400 nm to about 700 nm (e.g., a vertical thickness of about 450 nm), and the second sacrificial material 70 will be formed to a vertical thickness within a range of from about 500 nm to about 800 nm (e.g., a vertical thickness of about 600 nm). The base 36 comprises the conductive pillars 38a and 38b. In some applications, such conductive pillars may be coupled with source/drain regions of transistors (not shown in FIG. 3, but such transistors may be analogous to the transistor 40 described above with reference to FIG. 1). Referring to FIG. 
4, openings 72a and 72b are formed through the stack 66 to expose upper surfaces of the conductive pillars 38a and 38b. The openings 72a and 72b may be considered to be a neighboring pair of first openings. Although only two openings are shown, it is to be understood that such openings may be representative of a large number of openings formed through the stack during fabrication of integrated circuitry (e.g., during fabrication of capacitors associated with a memory array). The openings 72a and 72b may be formed utilizing any suitable processing. For instance, in some embodiments a patterned mask (not shown) may be provided over an upper surface of stack 66 to define locations of the openings 72a and 72b. Subsequently, one or more suitable etches may be utilized to transfer a pattern from the patterned mask into the stack and thereby fabricate the openings 72a and 72b, and then the mask may be removed to leave the construction shown in FIG. 4. In the illustrated embodiment, the etching undercuts sacrificial material 70 beneath the lattice layer 44, and accordingly forms inset regions 73a and 73b extending under the lattice layer 44 within the openings 72a and 72b, respectively. Such inset regions may result if the etching conditions utilized to extend the openings 72a and 72b through the second sacrificial material 70 have an isotropic component even though the conditions are primarily anisotropic. The sizes of the inset regions 73a and 73b may be controlled by controlling the relative amount of the isotropic component and the anisotropic component of the etch utilized to extend the openings 72a and 72b through the sacrificial material 70. Referring to FIG. 5, the conductive liner 24 is formed within the openings 72a and 72b to narrow the openings, and the conductive fill material 26 is provided to fill the narrowed openings. The conductive liner 24 is shown to comprise a region corresponding to a first conductive liner 24a within the first opening 72a, and a region corresponding to a second conductive liner 24b within the second opening 72b. The first and second conductive liners 24a and 24b will be separated from one another at a processing stage described below with reference to FIG. 9. The first conductive liner 24a has a first step 48a extending along the first inset region 73a, and the second conductive liner 24b has a second step 48b extending along the second inset region 73b. The conductive liner 24 and conductive fill material 26 may comprise the compositions described above with reference to FIG. 1. The conductive liner 24a and the conductive fill material 26 within the first opening 72a together form a first conductive structure 74a; and the conductive liner 24b and the conductive fill material 26 within the second opening 72b together form a second conductive structure 74b. Referring to FIG. 6, the conductive fill material 26 is removed from over an upper surface of stack 66. Such removal may be accomplished utilizing any suitable processing. In some embodiments, such processing may include a dry etch-back of the material 26 (for instance, a dry etch-back of polysilicon in applications in which material 26 is conductively-doped silicon). Referring to FIG. 7, insulative material 76 is formed over an upper surface of the stack 66, and over upper surfaces of the conductive structures 74a and 74b. The insulative material 76 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide. Referring to FIG. 
8, materials 78 and 80 are formed over the insulative material 76. The material 78 may comprise photoresist, and the material 80 may be a multilayer resist system (for instance, may comprise polymethylmethacrylate). The photoresist 78 may be photolithographically patterned to have the opening 82 extending therethrough. Referring to FIG. 9, the opening 82 is extended into the conductive structures 74a and 74b, and into the first sacrificial material 68. The opening 82 may be referred to as a second opening. The second opening 82 partially overlaps the first and second conductive structures 74a and 74b, and exposes the first sacrificial material 68. The second opening 82 may be considered to partially overlap each of the conductive structures 74a and 74b, and to recess regions of the conductive structures while leaving other regions not recessed. Referring to FIG. 10, materials 76, 80 and 78 (FIG. 9) are removed, and the first sacrificial material 68 (FIG. 9) is also removed. The removal of the materials 68, 76, 78 and 80 may be accomplished with any suitable etch or combination of etches. After the first sacrificial material 68 (FIG. 9) is removed, a region of the lattice layer 44 between the conductive structures 74a and 74b is exposed; with such exposed region being labeled as a region 84 at the processing stage of FIG. 10. Referring to FIG. 11, the exposed region 84 (FIG. 10) of the lattice layer 44 is removed, and then the second sacrificial material 70 (FIG. 10) is removed. Referring to FIG. 12, a portion of the conductive fill material 26 is removed from each of the first and second conductive structures 74a and 74b to form the first and second conductive structures into first and second bottom electrodes 14a and 14b, respectively. The conductive fill material 26 may be removed with any suitable processing. In some embodiments, the conductive fill material 26 comprises polycrystalline silicon; and is removed with a wet etch utilizing tetramethyl ammonium hydroxide (TMAH). The amount of the conductive fill material 26 which is removed may be controlled by adjusting etchant concentration, etching time, temperature, etc. Accordingly, the amount of the conductive fill material 26 which is removed may be tailored for specific applications. The first bottom electrode 14a has a first lower pillar region 20a comprising a remaining portion of the conductive fill material 26 laterally surrounded by a lower region of the first conductive liner 24a. The first bottom electrode 14a also has a first upper container region 22a comprising an upper region of the first conductive liner 24a. The second bottom electrode 14b has a second lower pillar region 20b comprising a second remaining portion of the conductive fill material 26 laterally surrounded by a lower region of the second conductive liner 24b. The second bottom electrode 14b also has a second upper container region 22b comprising an upper region of the second conductive liner 24b. Referring to FIG. 13, dielectric material 18 is formed along the first and second bottom electrodes 14a and 14b. Referring to FIG. 14, the conductive materials 32 and 50 are provided to form the common upper electrode 55. The common upper electrode 55 extends along the dielectric material 18, and is spaced from the first and second bottom electrodes 14a and 14b by the dielectric material 18. An opening 86 is formed to extend into the upper electrode 55.
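As an aside on the etch-back described above with reference to FIG. 12, the recess depth is, to first order, the product of etch rate and etch time. The following minimal sketch (an editorial illustration with a hypothetical helper and an assumed constant-rate model, not from the disclosure) shows how a target recess depth might be translated into an etch time:

```python
def etch_time_seconds(recess_depth_nm: float, etch_rate_nm_per_s: float) -> float:
    """First-order estimate: time = depth / rate.

    Assumes a constant etch rate for the TMAH wet etch; in practice the
    rate depends on etchant concentration and temperature, as noted above.
    """
    if etch_rate_nm_per_s <= 0:
        raise ValueError("etch rate must be positive")
    return recess_depth_nm / etch_rate_nm_per_s


# Example: recessing 500 nm of polysilicon at an assumed 5 nm/s takes ~100 s.
print(etch_time_seconds(500.0, 5.0))
```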
In some embodiments, the opening 86 may be considered to be over the recessed regions of the bottom electrodes 14a and 14b. For instance, recessed regions of the bottom electrodes 14a and 14b are diagrammatically illustrated as being approximately within a region labeled “R” in FIG. 14. The opening 86 extends across such region R. FIG. 14A shows a cross-section along a line A-A of FIG. 14 and diagrammatically illustrates an approximate location of the region R relative to the shown embodiment. Referring to FIG. 15, the conductive interconnect 58 is formed to extend into the opening 86 (with the opening 86 being labeled in FIG. 14), and to be electrically coupled with the upper electrode 55. In the illustrated embodiment, the conductive interconnect 58 comprises the first and second materials 60 and 62 described above with reference to FIG. 2. In some embodiments, the first material 60 may comprise conductively-doped semiconductor material directly against the upper electrode 55, and the second material 62 may be a metal-containing material. For instance, the material 60 may comprise conductively-doped silicon, and the material 62 may comprise tungsten. The capacitors described above may be utilized in a memory array, such as, for example, a DRAM array. FIG. 16 schematically illustrates an example DRAM array. The array includes a plurality of wordlines WL1, WL2 and WL3 extending along rows of the array; and includes a plurality of digit lines DL1, DL2 and DL3 extending along columns of the array. Memory cells 90 comprise transistors 40 in combination with capacitors 12. The capacitors may have the configuration of FIG. 1, or the configuration of FIG. 2. If the capacitors have the configuration of FIG. 2, then some of the capacitors will have the configuration 12a (FIG. 2) while others have the configuration 12b (FIG. 2). Each of the transistors 40 of FIG. 16 has a gate electrode coupled with one of the wordlines (WL1, WL2, WL3), and has a source/drain region coupled with one of the digit lines (DL1, DL2, DL3). Each of the transistors 40 also has a source/drain region electrically coupled with one of the capacitors 12. Each of the capacitors 12 has a storage node (or bottom electrode) which is electrically coupled with the source/drain region of the associated transistor 40, and has a second electrode (or upper electrode) which is electrically coupled with a reference voltage 92. The reference voltage 92 may be, for example, ground, VCC/2, etc. The schematic illustration of FIG. 16 shows nine memory cells 90. Such memory cells may be part of a large memory array; and may be representative of hundreds, thousands, millions, billions, etc., of the memory cells within the array. The assemblies and structures discussed above may be utilized within integrated circuits (with the term “integrated circuit” meaning an electronic circuit supported by a semiconductor substrate); and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chip sets, set top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc. Unless specified otherwise, the various materials, substances, compositions, etc. 
described herein may be formed with any suitable methodologies, either now known or yet to be developed, including, for example, atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), etc. The terms “dielectric” and “insulative” may be utilized to describe materials having insulative electrical properties. The terms are considered synonymous in this disclosure. The utilization of the term “dielectric” in some instances, and the term “insulative” (or “electrically insulative”) in other instances, may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow, and is not utilized to indicate any significant chemical or electrical differences. The particular orientation of the various embodiments in the drawings is for illustrative purposes only, and the embodiments may be rotated relative to the shown orientations in some applications. The descriptions provided herein, and the claims that follow, pertain to any structures that have the described relationships between various features, regardless of whether the structures are in the particular orientation of the drawings, or are rotated relative to such orientation. The cross-sectional views of the accompanying illustrations only show features within the planes of the cross-sections, and do not show materials behind the planes of the cross-sections, unless indicated otherwise, in order to simplify the drawings. When a structure is referred to above as being "on" or “against” another structure, it can be directly on the other structure or intervening structures may also be present. In contrast, when a structure is referred to as being "directly on" or “directly against” another structure, there are no intervening structures present. Structures (e.g., layers, materials, etc.) may be referred to as “extending vertically” to indicate that the structures generally extend upwardly from an underlying base (e.g., substrate). The vertically-extending structures may extend substantially orthogonally relative to an upper surface of the base, or not. Some embodiments include a capacitor which includes a first electrode having a lower pillar portion and an upper container portion over the lower pillar portion. The lower pillar portion has an outer surface. The upper container portion has an inner surface and an outer surface. Dielectric material lines the inner and outer surfaces of the upper container portion, and lines the outer surface of the lower pillar portion. A second electrode extends along the inner and outer surfaces of the upper container portion, and along the outer surface of the lower pillar portion. The second electrode is spaced from the first electrode by the dielectric material. Some embodiments include a capacitor having a lower pillar portion. The lower pillar portion includes a lower portion of a conductive liner, and includes a conductive fill material laterally surrounded by the lower portion of the conductive liner. The lower pillar portion has an outer surface along an outer edge of the lower portion of the conductive liner. An upper container portion is over the lower pillar portion. The upper container portion comprises an upwardly-opening conductive container. The upwardly-opening conductive container comprises a sidewall corresponding to an upper portion of the conductive liner, and comprises a bottom corresponding to an upper surface of the conductive fill material. 
The upper container portion has an inner surface along an inner edge of the sidewall and along the upper surface of the conductive fill material, and has an outer surface along an outer edge of the sidewall. A first electrode of the capacitor comprises the lower pillar portion and the upper container portion. Dielectric material lines the inner and outer surfaces of the upper container portion, and lines the outer surface of the lower pillar portion. A second electrode of the capacitor extends along the inner and outer surfaces of the upper container portion, and along the outer surface of the lower pillar portion. The second electrode is spaced from the first electrode by the dielectric material. Some embodiments include an assembly comprising a pair of neighboring capacitors. One of the neighboring capacitors is a first capacitor and the other of the neighboring capacitors is a second capacitor. The first capacitor comprises a first bottom electrode which includes a first lower pillar portion under a first upper container portion. The first lower pillar portion comprises a first pillar outer surface. The first upper container portion comprises a first container inner surface and a first container outer surface. The second capacitor comprises a second bottom electrode which includes a second lower pillar portion under a second upper container portion. The second lower pillar portion comprises a second pillar outer surface. The second upper container portion comprises a second container inner surface and a second container outer surface. Dielectric material is along the first pillar outer surface, the first container inner surface, the first container outer surface, the second pillar outer surface, the second container inner surface and the second container outer surface. A common upper electrode extends along the first and second bottom electrodes, and is spaced from the first and second bottom electrodes by the dielectric material. A recess extends downwardly into the first and second upper container portions and partially overlaps each of the first and second upper container portions. Regions of each of the first and second upper container portions are recessed, and other regions of each of the first and second upper container portions are non-recessed. A conductive interconnect is electrically coupled with the common upper electrode and is over the recessed regions of the first and second upper container portions. Some embodiments include a method of forming an assembly. A stack is formed to comprise a first sacrificial material over a second sacrificial material, and to comprise a lattice layer between the first and second sacrificial materials. A pair of neighboring first openings are formed to extend through the first and second sacrificial materials, and through the lattice layer. Conductive liners are formed along inner surfaces of the neighboring first openings to narrow the neighboring first openings. The conductive liner within one of the neighboring first openings is a first conductive liner, and the conductive liner within the other of the neighboring first openings is a second conductive liner. Conductive fill material is formed within the narrowed neighboring first openings to fill the narrowed neighboring first openings. 
The conductive fill material within said one of the neighboring first openings, together with the first conductive liner, forms a first conductive structure; and the conductive fill material within said other of the neighboring first openings, together with the second conductive liner, forms a second conductive structure. A second opening is formed to partially overlap the first and second conductive structures. The second opening extends to the conductive fill material of the first and second conductive structures, and extends to the first sacrificial material. The first sacrificial material is removed to expose a region of the lattice layer between the first and second conductive structures. The exposed region of the lattice layer is removed, and then the second sacrificial material is removed. A portion of the conductive fill material from each of the first and second conductive structures is removed to form the first and second conductive structures into first and second bottom electrodes, respectively. The first bottom electrode has a first lower pillar region comprising a first remaining portion of the conductive fill material laterally surrounded by a lower region of the first conductive liner, and has a first upper container region over the first lower pillar region and comprising an upper region of the first conductive liner. The second bottom electrode has a second lower pillar region comprising a second remaining portion of the conductive fill material laterally surrounded by a lower region of the second conductive liner, and has a second upper container region over the second lower pillar region and comprising an upper region of the second conductive liner. Dielectric material is formed along the first and second bottom electrodes. A common upper electrode is formed along the dielectric material and is spaced from the first and second bottom electrodes by the dielectric material. |
A method and computer program product (202) are provided for generating (Fig. 1A-1) a shader program (160). Initially, a file (202) associated with a graphics effect is selected. Such file (202) is then read and processed. A shader program (160) is subsequently generated based on the processing of the file to apply the graphics effect to an object. |
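As an editorial illustration of the select/read/process/generate flow summarized above (not part of the disclosure, which contemplates, e.g., XML effect files, COM interfaces, and per-platform shader variants), the following minimal Python sketch shows one way such a pipeline could look; the JSON schema, function name, and generated GLSL template are all hypothetical assumptions:

```python
import json  # stand-in for the XML parsing the disclosure contemplates


def generate_shader(effect_path: str) -> str:
    """Select, read, and process an effect file, then emit shader source."""
    with open(effect_path) as f:   # read the selected file
        effect = json.load(f)      # process it into parameters

    name = effect.get("name", "effect")
    color = effect.get("color", [1.0, 1.0, 1.0, 1.0])

    # Generate a trivial fragment shader applying the effect's color.
    return (
        f"// generated for effect: {name}\n"
        "void main() {\n"
        f"    gl_FragColor = vec4({', '.join(str(c) for c in color)});\n"
        "}\n"
    )
```

An application would then compile the returned source with whatever graphics API it targets; per the claims that follow, a single file could carry interface data and implementation data for several APIs and hardware pipelines, with the processing step choosing among them.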
CLAIMS What is claimed is: 1. A method for generating a shader program, comprising: selecting a file associated with a graphics effect; reading the file; processing the file; and generating the shader program based on the processing of the file to apply the graphics effect to an object. 2. The method as recited in claim 1, wherein the file is selected from a library of files each associated with a unique graphics effect. 3. The method as recited in claim 1, wherein the file includes a plurality of interface data capable of being processed to generate the shader program for different graphics application program interfaces. 4. The method as recited in claim 1, wherein the file includes a plurality of implementation data capable of being processed to generate the shader program for different hardware graphics pipeline platforms. 5. The method as recited in claim 1, wherein the file is written in an extensible markup language (XML). 6. The method as recited in claim 1, wherein the file includes a text file. 7. The method as recited in claim 1, wherein the selecting, reading, processing, and generating are carried out utilizing a plug-in. 8. The method as recited in claim 1, wherein the selecting, reading, processing, and generating are carried out utilizing an interface. 9. The method as recited in claim 8, wherein the interface includes a Component Object Model (COM) interface. 10. The method as recited in claim 8, wherein the processing includes initializing the interface. 11. The method as recited in claim 1, wherein the processing includes registering at least one of custom types and custom functions, the shader program being generated based on the at least one of registered custom types and custom functions. 12. The method as recited in claim 1, wherein the processing includes setting up a plurality of objects. 13. The method as recited in claim 12, wherein the processing includes selecting one of the objects. 14. The method as recited in claim 13, wherein the processing includes selecting one of a plurality of graphics effects. 15. The method as recited in claim 14, wherein the processing includes selecting a render pass. 16. The method as recited in claim 15, wherein the processing includes setting up the render pass. 17. The method as recited in claim 16, wherein the render pass is set up by pointing to parameters, the shader program being generated based on the parameters. 18. The method as recited in claim 16, wherein the processing includes drawing the object with the selected graphics effect. 19. The method as recited in claim 18, wherein the object is drawn with the selected graphics effect utilizing attributes supplied by an application. 20. The method as recited in claim 18, wherein the processing includes determining whether more render passes exist, and selecting another render pass if more render passes exist. 21. The method as recited in claim 18, wherein the processing includes determining whether more objects exist, and selecting another object if more objects exist. 22. The method as recited in claim 18, wherein the processing includes determining whether more graphics effects exist, and selecting another graphics effect if more graphics effects exist. 23. The method as recited in claim 1, wherein the file includes requirements, the shader program being generated based on the requirements. 24. The method as recited in claim 23, wherein the requirements include a call back function. 25. 
The method as recited in claim 23, wherein the requirements include a default set of requirements. 26. The method as recited in claim 1, wherein the graphics effect is displayed utilizing a graphical user interface. 27. The method as recited in claim 26, wherein the graphics effect is capable of being altered by a user utilizing the graphical user interface. 28. The method as recited in claim 27, wherein the graphics effect is capable of being altered by altering parameters. 29. The method as recited in claim 28, wherein the shader program is generated based on the altered parameters. 30. The method as recited in claim 1, wherein the shader program is capable of being altered by tweaking the file. 31. The method as recited in claim 8, wherein the interface is capable of generating primitives. 32. The method as recited in claim 1, wherein the file includes a syntax including a name, a type and a content. 33. The method as recited in claim 1, wherein the file is capable of referencing both compiled and un-compiled code. 34. A system for generating a shader program, comprising: an interface; an application program for working in conjunction with the interface to process a file; and wherein the shader program is generated based on the processing of the file to apply the graphics effect to an object. 35. A system for generating a shader program, comprising: means for selecting a file associated with a graphics effect; means for reading the file; means for processing the file; and means for generating the shader program based on the processing of the file to apply the graphics effect to an object. 36. A computer program product for generating a shader program, comprising: computer code for selecting a file associated with a graphics effect; computer code for reading the file; computer code for processing the file; and computer code for generating the shader program based on the processing of the file to apply the graphics effect to an object. 37. A data structure stored in memory for generating a shader program, comprising: a file including: a textual descriptive object for identifying a graphics effect associated with the file, and a requirements object for identifying requirements for the shader program necessary to generate the shader program; wherein the shader program is capable of being generated based on the objects of the file. 38. A method for generating a shader program utilizing an application, comprising: selecting a file associated with a graphics effect; selecting a graphics application program interface; receiving implementation data representing a plurality of different hardware graphics pipeline platforms based on the selection; receiving parameters based on the implementation data; and deciding which of the hardware graphics pipeline platforms to use based on the parameters; wherein the shader program is generated for use with the hardware graphics pipeline platforms. 39. The method as recited in claim 38, wherein the decision as to which of the hardware graphics pipeline platforms is to be used is based on whether the parameters are capable of being supplied. 40. The method as recited in claim 38, wherein the decision as to which of the hardware graphics pipeline platforms is to be used is based on whether the parameters are understood. 41. The method as recited in claim 38, and further comprising mapping attributes of an object to the parameters. 42. 
42. A method for generating a shader program utilizing an interface, comprising: generating implementation data representing a plurality of different hardware graphics pipeline platforms; generating parameters based on the implementation data; and deciding which of the hardware graphics pipeline platforms to use based on the parameters; wherein the shader program is generated for use with the hardware graphics pipeline platforms.

43. The method as recited in claim 42, wherein the implementation data is generated by determining whether the different hardware graphics pipeline platforms meet a plurality of requirements.

44. The method as recited in claim 43, wherein the implementation data is further generated by sorting the different hardware graphics pipeline platforms that meet the requirements.

45. A method for generating a shader program, comprising: initializing an interface; registering at least one of custom types and custom functions; setting up a plurality of objects; selecting one of the objects; selecting one of a plurality of graphics effects; selecting a render pass; setting up the render pass by pointing to parameters; drawing the object with the selected graphics effect; determining whether more render passes exist; selecting another render pass if more render passes exist; determining whether more graphics effects exist; selecting another graphics effect if more graphics effects exist; determining whether more objects exist; and selecting another object if more objects exist.

46. A computer implemented method for determining whether a file is distributable, comprising: identifying a file stored in memory; determining whether the file is distributable; and indicating whether the file is distributable.

47. A data structure stored in memory for identifying a shader program, comprising: a file including: a textual descriptive object for identifying a graphics effect associated with the file, and a plurality of shader code segments capable of executing the graphics effect in a plurality of operating environments; wherein the shader code segments are organized in terms of the different operating environments.

48. A method for generating a shader program using a graphical user interface, comprising: displaying a plurality of graphics effects for allowing a user to select one graphics effect; displaying the selected graphics effect as applied to an object using a file; modifying the file based on user input; processing the file; and generating a shader program based on the processing of the file.

AMENDED CLAIMS
[received by the International Bureau on 23 April 2003 (23.04.03); original claim 24 cancelled, remaining claims renumbered as claims 1-47]

What is claimed is:

1. A method for generating a shader program, comprising: selecting a file associated with a graphics effect; reading the file; processing the file; and generating the shader program based on the processing of the file to apply the graphics effect to an object.

2. The method as recited in claim 1, wherein the file is selected from a library of files each associated with a unique graphics effect.

3. The method as recited in claim 1, wherein the file includes a plurality of interface data capable of being processed to generate the shader program for different graphics application program interfaces.
4. The method as recited in claim 1, wherein the file includes a plurality of implementation data capable of being processed to generate the shader program for different hardware graphics pipeline platforms.

5. The method as recited in claim 1, wherein the file is written in an extensible markup language (XML).

6. The method as recited in claim 1, wherein the file includes a text file.

7. The method as recited in claim 1, wherein the selecting, reading, processing, and generating are carried out utilizing a plug-in.

8. The method as recited in claim 1, wherein the selecting, reading, processing, and generating are carried out utilizing an interface.

9. The method as recited in claim 8, wherein the interface includes a Component Object Model (COM) interface.

10. The method as recited in claim 8, wherein the processing includes initializing the interface.

11. The method as recited in claim 1, wherein the processing includes registering at least one of custom types and custom functions, the shader program being generated based on the at least one of registered custom types and custom functions.

12. The method as recited in claim 1, wherein the processing includes setting up a plurality of objects.

13. The method as recited in claim 12, wherein the processing includes selecting one of the objects.

14. The method as recited in claim 13, wherein the processing includes selecting one of a plurality of graphics effects.

15. The method as recited in claim 14, wherein the processing includes selecting a render pass.

16. The method as recited in claim 15, wherein the processing includes setting up the render pass.

17. The method as recited in claim 16, wherein the render pass is set up by pointing to parameters, the shader program being generated based on the parameters.

18. The method as recited in claim 16, wherein the processing includes drawing the object with the selected graphics effect.

19. The method as recited in claim 18, wherein the object is drawn with the selected graphics effect utilizing attributes supplied by an application.

20. The method as recited in claim 18, wherein the processing includes determining whether more render passes exist, and selecting another render pass if more render passes exist.

21. The method as recited in claim 18, wherein the processing includes determining whether more objects exist, and selecting another object if more objects exist.

22. The method as recited in claim 18, wherein the processing includes determining whether more graphics effects exist, and selecting another graphics effect if more graphics effects exist.

23. The method as recited in claim 1, wherein the file includes requirements, the shader program being generated based on the requirements.

24. The method as recited in claim 23, wherein the requirements include a default set of requirements.

25. The method as recited in claim 1, wherein the graphics effect is displayed utilizing a graphical user interface.

26. The method as recited in claim 25, wherein the graphics effect is capable of being altered by a user utilizing the graphical user interface.

27. The method as recited in claim 26, wherein the graphics effect is capable of being altered by altering parameters.

28. The method as recited in claim 27, wherein the shader program is generated based on the altered parameters.

29. The method as recited in claim 1, wherein the shader program is capable of being altered by tweaking the file.
30. The method as recited in claim 8, wherein the interface is capable of generating primitives.

31. The method as recited in claim 1, wherein the file includes a syntax including a name, a type and a content.

32. The method as recited in claim 1, wherein the file is capable of referencing both compiled and un-compiled code.

33. A system for generating a shader program, comprising: an interface; an application program for working in conjunction with the interface to process a file; and wherein the shader program is generated based on the processing of the file to apply the graphics effect to an object.

34. A system for generating a shader program, comprising: means for selecting a file associated with a graphics effect; means for reading the file; means for processing the file; and means for generating the shader program based on the processing of the file to apply the graphics effect to an object.

35. A computer program product for generating a shader program, comprising: computer code for selecting a file associated with a graphics effect; computer code for reading the file; computer code for processing the file; and computer code for generating the shader program based on the processing of the file to apply the graphics effect to an object.

36. A data structure stored in memory for generating a shader program, comprising: a file including: a textual descriptive object for identifying a graphics effect associated with the file, and a requirements object for identifying requirements for the shader program necessary to generate the shader program; wherein the shader program is capable of being generated based on the objects of the file.

37. A method for generating a shader program utilizing an application, comprising: selecting a file associated with a graphics effect; selecting a graphics application program interface; receiving implementation data representing a plurality of different hardware graphics pipeline platforms based on the selection; receiving parameters based on the implementation data; and deciding which of the hardware graphics pipeline platforms to use based on the parameters; wherein the shader program is generated for use with the hardware graphics pipeline platforms.

38. The method as recited in claim 37, wherein the decision as to which of the hardware graphics pipeline platforms is to be used is based on whether the parameters are capable of being supplied.

39. The method as recited in claim 37, wherein the decision as to which of the hardware graphics pipeline platforms is to be used is based on whether the parameters are understood.

40. The method as recited in claim 37, and further comprising mapping attributes of an object to the parameters.

41. A method for generating a shader program utilizing an interface, comprising: generating implementation data representing a plurality of different hardware graphics pipeline platforms; generating parameters based on the implementation data; and deciding which of the hardware graphics pipeline platforms to use based on the parameters; wherein the shader program is generated for use with the hardware graphics pipeline platforms.

42. The method as recited in claim 41, wherein the implementation data is generated by determining whether the different hardware graphics pipeline platforms meet a plurality of requirements.
43. The method as recited in claim 42, wherein the implementation data is further generated by sorting the different hardware graphics pipeline platforms that meet the requirements.

44. A method for generating a shader program, comprising: initializing an interface; registering at least one of custom types and custom functions; setting up a plurality of objects; selecting one of the objects; selecting one of a plurality of graphics effects; selecting a render pass; setting up the render pass by pointing to parameters; drawing the object with the selected graphics effect; determining whether more render passes exist; selecting another render pass if more render passes exist; determining whether more graphics effects exist; selecting another graphics effect if more graphics effects exist; determining whether more objects exist; and selecting another object if more objects exist.

45. A computer implemented method for determining whether a file is distributable, comprising: identifying a file stored in memory; determining whether the file is distributable; and indicating whether the file is distributable.

46. A data structure stored in memory for identifying a shader program, comprising: a file including: a textual descriptive object for identifying a graphics effect associated with the file, and a plurality of shader code segments capable of executing the graphics effect in a plurality of operating environments; wherein the shader code segments are organized in terms of the different operating environments.

47. A method for generating a shader program using a graphical user interface, comprising: displaying a plurality of graphics effects for allowing a user to select one graphics effect; displaying the selected graphics effect as applied to an object using a file; modifying the file based on user input; processing the file; and generating a shader program based on the processing of the file.
SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR GENERATING A SHADER PROGRAM

FIELD OF THE INVENTION

The present invention relates to computer graphics, and more particularly to shading operations within a graphics pipeline.

BACKGROUND OF THE INVENTION

Rendering and displaying 3-D graphics typically involves many calculations and computations. For example, to render a 3-D object, a set of coordinate points or vertices that define the object to be rendered must be formed. Vertices can be joined to form polygons that define the surface of the object to be rendered and displayed. Once the vertices that define an object are formed, the vertices must be transformed from an object or model frame of reference to a world frame of reference and finally to 2-D coordinates that can be displayed on a flat display device, such as a monitor. Along the way, vertices may be rotated, scaled, eliminated or clipped because they fall outside of a viewable area, lit by various lighting schemes and sources, colorized, and so forth. The processes involved in rendering and displaying a 3-D object can be computationally intensive and may involve a large number of vertices.

To create a 3-D computer graphical representation, the first step is to represent the objects to be depicted as mathematical models within the computer. 3-D models are made up of geometric points within a coordinate system consisting of an x, y and z axis; these axes correspond to width, height, and depth, respectively. Objects are defined by a series of points, called vertices. The location of a point, or vertex, is defined by its x, y and z coordinates. When three or more of these points are connected, a polygon is formed. The simplest polygon is a triangle.

3-D shapes are created by connecting a number of 2-D polygons. Curved surfaces are represented by connecting many small polygons. The view of a 3-D shape composed of polygon outlines is called a wire frame view. In sum, the computer creates 3-D objects by connecting a number of 2-D polygons. Before the 3-D object is ultimately rendered on a 2-D display screen, however, the data of sophisticated graphics objects undergoes many different mathematical transformations that implicate considerably specialized equations and processing unique to 3-D representation.

For a long time now, 3-D rendering systems have been able to describe the "appearance" of objects according to parameters. These and later methods provide for the parameterization of the perceived color of an object based on the position and orientation of its surface and the light sources illuminating it. In so doing, the appearance of the object is calculated therefrom. Parameters further include values such as diffuse color, the specular reflection coefficient, the specular color, the reflectivity, and the transparency of the material of the object. Such parameters are globally referred to as the shading parameters of the object.

Early systems could only ascribe a single value to shading parameters, and hence the parameters remained constant and uniform across the entire surface of the object. Later systems allowed for the use of non-uniform parameters (transparency, for instance) which might have different values over different parts of the object. Two prominent and distinct techniques have been used to describe the values taken by these non-uniform parameters on the various parts of the object's surface: procedural shading and texture mapping. Texture mapping is pixel based and resolution dependent.

Procedural shading describes the appearance of a material at any point of a 1-D, 2-D or 3-D space by defining a function (often called the procedural shader) from this space into shading parameter space. The object is "immersed" in the original 1-D, 2-D or 3-D space, and the values of the shading parameters at a given point of the surface of the object are defined as the result of the procedural shading function at this point. For instance, procedural shaders that approximate the appearance of wood, marble or other natural materials have been developed and can be found in the literature.
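By way of a concrete illustration (this example is not part of the patent specification), a procedural shader in the sense just described is simply a function from a point in such a space into shading parameter space. A minimal C++ sketch, assuming nothing beyond the standard library, might approximate wood-like rings as follows:

    #include <cmath>

    // Minimal procedural-shader sketch (illustrative only): maps a point of a
    // 3-D shader space to a diffuse color by thresholding the fractional
    // distance from the z axis, yielding crude wood-like rings.
    struct Color { float r, g, b; };

    Color WoodDiffuse(float x, float y, float /*z*/)
    {
        float rings = std::sqrt(x * x + y * y) * 8.0f; // ring frequency
        float t = rings - std::floor(rings);           // position within one ring
        return (t < 0.4f) ? Color{0.55f, 0.35f, 0.16f}  // light band
                          : Color{0.35f, 0.20f, 0.08f}; // dark band
    }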
The rendering of graphics data in a computer system is a collection of resource-intensive processes. The process of shading, i.e., the process of performing complex techniques upon set(s) of specialized graphics data structures to determine values, such as color, for certain primitives associated with those data structures, exemplifies such a computation-intensive and complex process. For each application developer to design these shading techniques for each program developed, and/or to design each program for potentially varying third-party graphics hardware, would be a Herculean task, and would produce much inconsistency.

Consequently, the process of shading has generally been normalized to some degree. By passing source code designed to work with a shader into an application, a shader becomes an object that the application may create/utilize in order to facilitate the efficient drawing of complex video graphics. Vertex shaders and pixel shaders are examples of such shaders.

Prior to their current implementation in specialized hardware chips, vertex and pixel shaders were sometimes implemented wholly or mostly as software code, and sometimes implemented as a combination of more rigid pieces of hardware with software for controlling the hardware. These implementations frequently contained a CPU or emulated the existence of one using the system's CPU. For example, the hardware implementations directly integrated a CPU chip into their design to perform the processing functionality required of shading tasks. While a CPU adds a lot of flexibility to the shading process because of the range of functionality that a standard processing chip offers, the incorporation of a CPU adds overhead to the specialized shading process. Before today's state of the art in hardware, however, there was little choice.

Today, though, advances in hardware technology have facilitated the ability to move functionality previously implemented in software into specialized hardware. As a result, today's pixel and vertex shaders are implemented as specialized and programmable hardware chips. Unfortunately, programming such new vertex and pixel engines necessitates a meld of art and code resources never before required. Several digital content creation (DCC) applications have done an admirable job of supporting vertex and pixel shaders as far as they go, but it is not obvious how to allow artists to play with various shading options without having them become full-fledged shader programmers.

DISCLOSURE OF THE INVENTION

A method and computer program product are provided for generating a shader program. Initially, a file associated with a graphics effect is selected. Such file is then read and processed.
A shader program is subsequently generated based on the processing of the file to apply the graphics effect to an object. Thus, a shader program may be correctly applied to an object for display or other purposes.

In one embodiment, the file may be selected from a library of files each associated with a unique graphics effect. Further, the file may include interface data capable of being processed to generate the shader program for different graphics application program interfaces. In a similar manner, the file may include implementation data capable of being processed to generate the shader program for different hardware graphics pipeline platforms. Thus, the file may be processed in a way to generate shader programs for working in conjunction with various different graphics application program interfaces (i.e., OpenGL®, Direct3D®, etc.) and a variety of platforms (i.e., hardware graphics chips manufactured by different companies).

In another embodiment, the file may be written in an extensible markup language (XML). Moreover, the file may include a text file. Still yet, the selecting, reading, processing, and generating may be carried out utilizing an interface [i.e., a Component Object Model (COM) interface], a plug-in, etc.

As an option, the file may take the form of a data structure having a textual descriptive object for identifying a graphics effect associated with the file. Further provided may be a requirements object for identifying requirements necessary to generate the shader program. Thus, the file may include requirements, with the shader program being generated based on the requirements. In general, the requirements may include a default set of requirements, which may be optionally custom tailored. Optionally, the requirements may include a callback function.

The file may further include a plurality of shader code segments capable of executing the graphics effect in a plurality of operating environments (i.e., platform implementation, interface, etc.). Such shader code segments may be organized in terms of the different operating environments. Thus, the present embodiment may optionally be used as a reference for obtaining desired shader code segments.

During operation of one particular embodiment, the processing may include initializing an interface. Such processing may further include registering custom types and/or custom functions. Thus, the shader program may be generated based on the registered custom types and/or custom functions. By this feature, the present embodiment allows a user to customize the resulting shader program.

Still yet, the processing may include setting up a plurality of objects, selecting one of the objects, selecting one of a plurality of graphics effects, selecting a render pass, setting up the render pass, and drawing the object with the selected graphics effect. As an option, the render pass may be set up by pointing to parameters so that the shader program may be generated based on the parameters. Further, the object may be drawn with the selected graphics effect utilizing attributes supplied by an application.

During a rendering pass, it may be determined whether more render passes exist, and another render pass selected if more render passes exist. Further, it may be determined whether more objects exist, and another object selected if more objects exist. Still yet, it may be determined whether more graphics effects exist, and another graphics effect selected if more graphics effects exist.
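For concreteness only, the four steps of the foregoing method may be pictured as the surface of a small interface. The following C++ sketch is purely illustrative; the specification leaves the exact API open (noting merely that a COM interface or a plug-in may be employed), and every name below is an assumption rather than the patent's own:

    // Hypothetical effect-file interface (all names are illustrative
    // assumptions); the four disclosure steps map onto four calls.
    class IEffectFile
    {
    public:
        virtual bool  Select(const char* effectName) = 0;  // select a file by graphics effect
        virtual bool  Read() = 0;                          // read the file
        virtual bool  Process(const char* apiName) = 0;    // process it for a chosen API
        virtual void* GenerateShader(int renderPass) = 0;  // generate the shader program
        virtual ~IEffectFile() {}
    };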
One exemplary system that may be used to carry out the foregoing functionality may include an interface and an application program working in conjunction to process a file. Thus, the shader program is generated based on the processing of the file to apply the graphics effect to the object.

As mentioned earlier, the processing includes setting up a plurality of objects. From the perspective of the application in the context of the present system embodiment, this may be accomplished by selecting a file associated with a graphics effect, selecting a graphics application program interface, and receiving implementation data representing a plurality of different hardware graphics pipeline platforms based on the selection. Next, parameters are received based on the implementation data. Further, it may be decided which of the hardware graphics pipeline platforms to use based at least in part on the parameters. By this design, the shader program is generated for use with the appropriate hardware graphics pipeline platform.

As an option, the decision as to which of the hardware graphics pipeline platforms is to be used may be based on whether the parameters are capable of being supplied. Still yet, the decision as to which of the hardware graphics pipeline platforms may be used is based on whether the parameters are understood (i.e., able to be correctly interpreted) by the application. Once such decisions have been made, attributes of an object are mapped to the parameters.

From the perspective of the interface in the context of the present system embodiment, the objects are set up by generating implementation data representing a plurality of different hardware graphics pipeline platforms. Parameters are then generated based on the implementation data. Still yet, the interface works in conjunction with the application to decide which of the hardware graphics pipeline platforms to use based on the parameters. Optionally, the implementation data may be generated by determining whether the different hardware graphics pipeline platforms meet a plurality of requirements. Moreover, the implementation data may be further generated by sorting the different hardware graphics pipeline platforms that meet the requirements.

Associated with the foregoing framework is a computer-implemented method for generating a license agreement. Initially, a license agreement stored in memory is identified. Next, files associated with the license agreement are identified. It is then determined whether one or more files are not distributable. If it is determined that one or more files are not distributable, a non-disclosure term is included in the license agreement. Another computer-implemented method is provided for determining whether a file is distributable. Such method may include identifying a file stored in memory, determining whether the file is distributable, and simply indicating whether the file is distributable.

In order to allow a user to visually experiment with and use the shader program, an optional graphical user interface is provided. In use, the aforementioned graphics effect may be displayed utilizing such graphical user interface. Further, the graphics effect may be capable of being altered by a user utilizing the graphical user interface. In particular, the graphics effect may be capable of being altered by altering parameters, and the shader program may be generated based on the altered parameters.
Such parameters may be altered by tweaking the aforementioned file. Another graphical user interface may also be provided in which a plurality of graphics effects are displayed for allowing a user to select one graphics effect. Such selected graphics effect is then displayed as applied to an object using a file. Further, the file is modified based on user input and the file is processed. Thus, the shader program may be generated based on the processing of the file.

As a further option, the interface may be capable of generating primitives. Further, the file may include a syntax including a name, a type and a content. Still yet, the file may be capable of referencing both compiled and un-compiled code.

These and other advantages of the present invention will become apparent upon reading the following detailed description and studying the various figures of the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing and other aspects and advantages are better understood from the following detailed description of a preferred embodiment of the invention with reference to the drawings, in which:

Figure 1A is a block diagram of a digital processing system, in accordance with one embodiment.

Figure 1A-1 illustrates a more detailed diagram showing the internal structure of one exemplary embodiment of the hardware graphics pipeline of Figure 1A.

Figure 1A-2 illustrates an exemplary file that may be used to generate a shader program, in accordance with one embodiment.

Figures 1B and 1C each illustrate a method for generating a shader program, in accordance with one embodiment.

Figure 2 illustrates an "effect binding" method by which objects are set up in accordance with operation 1080 of Figures 1B and 1C.

Figure 3 illustrates a method for generating implementation data representing a plurality of different hardware graphics pipeline platforms, in accordance with operation 212 of Figure 2.

Figure 4 illustrates an exemplary method by which it may be decided which of the hardware graphics pipeline platforms to use, in accordance with operation 218 of Figure 2.

Figure 5 illustrates a business method associated with the present invention.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

Figure 1A is a block diagram of a digital processing system, in accordance with one embodiment. With reference to Figure 1A, a computer graphics system is provided that may be implemented using a computer 10. The computer 10 includes one or more processors, such as processor 11, which is connected to a communication bus 12. The bus 12 can be implemented with one or more integrated circuits, and perform some logic functions; for example, a typical personal computer includes chips known as north bridge and south bridge chips. The computer 10 also includes a main memory 14. Control logic (software) and data are stored in the main memory 14, which may take the form of random access memory (RAM). The computer also includes a hardware graphics pipeline 18 and a display 20, i.e., a computer monitor.

The computer 10 may also include a secondary storage 16. The secondary storage 16 includes, for example, a hard disk drive and/or a removable storage drive, representing a floppy disk drive, a magnetic tape drive, a compact disk drive, etc. The removable storage drive reads from and/or writes to a removable storage unit in a well-known manner.
Computer programs, or computer control logic algorithms, are stored in the main memory 14 and/or the secondary storage 16. Such computer programs, when executed, enable the computer 10 to perform various functions. Memory 14 and storage 16 are thus examples of computer-readable media.

In one embodiment, the techniques to be set forth are performed by the hardware graphics pipeline 18, which may take the form of hardware. Such hardware implementation may include a microcontroller or any other type of custom or application-specific integrated circuit (ASIC). In yet another embodiment, the method of the present invention may be carried out in part on the processor 11 by way of a computer program stored in the main memory 14 and/or the secondary storage 16 of the computer 10. One exemplary architecture for the hardware graphics pipeline 18 will be set forth during reference to Figure 1A-1.

Figure 1A-1 illustrates a more detailed diagram showing the internal structure of one exemplary embodiment of the hardware graphics pipeline 18 of Figure 1A. As shown, a geometry stage 151 is provided which transforms primitives into a screen-aligned coordinate system. Other computations may be performed by the geometry stage 151, such as lighting to determine the visual properties (e.g., color, surface normal, texture coordinates) of each vertex describing the primitives.

The transformed vertices form the input for a rasterizer 152. The rasterizer 152 computes a fragment for each pixel covered by each of the primitives. A coverage mask stored with the fragment indicates which portions of the pixel the fragment covers.

Also included is a shader 153 that computes the final fragment, e.g., by applying texture maps or shader programs to the fragment. Such shader programs may be generated in various ways. One system and method for generating the shader programs will be set forth hereinafter in greater detail. It should be noted that, in the context of the present description, shader programs may refer to vertex shader programs, pixel shader programs, or any other type of program capable of shading. An optional sample expansion stage 154 generates multiple samples for each fragment.

With continuing reference to Figure 1A-1, after multi-sampling, the individual samples are sent to a raster-processor (ROP) 155 as if they were regular fragments. The raster-processor 155 performs various operations on the fragments, including z/stencil testing and color or alpha blending. This may require the raster-processor 155 to read a frame buffer memory 156 in order to retrieve the destination Z or the destination color. To this end, the final pixel color and Z are written back to the frame buffer memory 156. When all primitives in the scene have been rendered in this manner, the contents of the frame buffer memory 156 are scanned out by a video refresh unit 157 and sent to the display 20.

In one embodiment, all of the foregoing components of the graphics system except the frame buffer memory 156 (and possibly other memories, such as texture memory) may be situated on a single semiconductor platform. Of course, the various modules may also be situated separately or in various combinations of semiconductor platforms per the desires of the user. An interface may be used in conjunction with the various components set forth in Figures 1A and 1A-1.
In one embodiment, such interface may include at least in part the Open Graphics Library (OpenGL®), Direct3D® application program interfaces (APIs), a proprietary application program interface, or the like.

In use, a shader program may be generated for use with the shader 153 of Figure 1A-1. Initially, a single file associated with a graphics effect is selected. Such file is then read and processed. In the context of the present description, a file may include any type of data structure, stream of data, network connection, etc. capable of communicating information. A shader program is subsequently generated based on the processing of the file to apply the graphics effect to an object. More information will now be set forth regarding various exemplary techniques for carrying out such functionality.

Figure 1A-2 illustrates an exemplary file 160 that may be used to generate a shader program, in accordance with one embodiment. It should be noted that the present file 160 may be used to generate a shader program in the context of the foregoing architecture of Figures 1A and 1A-1, or any other architecture desired. An exemplary file 160 is set forth in Appendix A. The lines in Appendix A are numbered for reference.

In one embodiment, the file 160 may be selected from a library of files each associated with a unique graphics effect. Internally, such libraries may use a particular class. Such class may be a hierarchical database very similar to a file system. It may support links and functions, and allow user-defined types and functions to override and intermix with the pre-existing functions. Other functions may also be involved, including volatile functions that have the same structure as a regular function; however, volatile functions are always executed. Additionally, no time is spent checking if parameter dependencies have changed, as in the case of a regular function. Any function called by a volatile function is also treated as volatile for the duration of the function.

The class is where files 160 may be stored and accessed at runtime. Further, the class may be dumped to text at any time to facilitate debugging and archiving. As an option, the class may be compiled in order to make sure that links point to a valid field of the same type, and that functions are well formed. As an option, the function strings may be compiled into an internal byte-code style representation. The class may also support just-in-time compilation, so that if a function is never called, it is never compiled. One may compile sub-trees of the class as needed to ensure links and functions are correct and fully specified.

In another embodiment, the file 160 may be written in an extensible markup language (XML). Moreover, the file 160 may include a text file. The example file 160 shown in Appendix A is in XML.

As an option, the file 160 may include implementation data 161 capable of being processed to generate the shader program for different hardware graphics pipeline platforms. For example, the implementation data 161 may represent a variety of platforms (i.e., hardware graphics chips manufactured by different graphics companies for various purposes). Still yet, the file 160 may include interface data 162 capable of being processed to generate the shader program for different graphics application program interfaces.
In particular, the file 160 may be processed in a way to generate shader programs for working in conjunction with various different graphics application program interfaces (i.e., OpenGL®, Direct3D®, etc.). In Appendix A, the tag "<imps>" at line 30 designates implementations, and lines 31 and 378 designate the beginning of the DirectX8 and OpenGL implementations, respectively.

With continuing reference to Figure 1A-2, a textual descriptive object 164 may be provided for identifying a graphics effect associated with the file using intuitive text. For example, the graphics effect may include a "shiny" characteristic, as shown in Figure 1A-2, and at lines 2 and 3 in Appendix A. Of course, any other type of visual effect (i.e., motion blur, etc.) may be described by the textual descriptive object 164. Ideally, such textual descriptive object 164 allows an intuitive identification of the graphics effect associated with a shader program to be generated.

Further provided is at least one requirements object 166 for identifying requirements necessary to generate the shader program. As shown, various requirements are set forth for each of a plurality of render passes identified by way of pass identifiers 168. For example, each render pass may have different required textures, render states, multi-pass effects, and sources of L-vectors, as well as tangent space requirements, texture type requirements, or any other type of capability required to display a shader program correctly. Optionally, the requirements may even include a callback function.

In Appendix A, the requirements for DirectX8 are potentially different for the three implementations shown: (1) implementation 1, starting at line 32, has its requirements described in lines 37 through 50; (2) implementation 2, starting at line 185, has its requirements described in lines 190 through 199; and (3) implementation 3, starting at line 282, has its requirements described in lines 287 through 296. Note that implementations 2 and 3 have the same requirements, but implementation 1 has different requirements.

In general, the requirements may include a default set of requirements, which may be optionally custom tailored. Such tailorable requirements, or "tweakables," represent artist-controllable parameters for shader-specific items. Tweakables are required by a shader program, but are not necessarily exposed through standard tool paths. Shader program authors may decide which parts of the shader program to expose to artist manipulation. Tweakables may refer to any requirement ranging from a transparency factor to an alpha blend factor. Table 1 illustrates exemplary tweakables in the context of the file 160 of Figure 1A-2.

Table 1

<tweakables>
  <shininess>
    <string name="description" type="value" content="Relative Opacity"/>
    <string name="type" type="value" content="float"/>
    <string name="field" type="value" content="../../settings/opacity"/>
    <string name="gui" type="value" content="slider"/>
    <float name="min" type="value" content="0.0"/>
    <float name="max" type="value" content="1.0"/>
    <float name="step" type="value" content="0.1"/>
  </shininess>
</tweakables>

In Appendix A, the tweakables are designated at lines 14 through 29. The tweakables are generally outside the designation of any of the implementations because they generally apply to all the implementations. In this example, a minimum value (lines 22 and 23), a maximum value (lines 24 and 25), and a step size (lines 26 and 27) are included.
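The min/max/step triple of Table 1 implies a small amount of host-side logic when an artist drags the corresponding slider. The following C++ sketch is illustrative only and is not prescribed by the specification; it assumes the bounds have already been parsed out of the file:

    #include <algorithm>
    #include <cmath>

    // Clamp a user-edited tweakable to its declared bounds and snap it to the
    // declared step size (e.g., min 0.0, max 1.0, step 0.1 as in Table 1).
    float ApplyTweakableBounds(float requested, float minv, float maxv, float step)
    {
        float v = std::min(std::max(requested, minv), maxv); // honor min/max
        if (step > 0.0f)
            v = minv + step * std::round((v - minv) / step); // snap to the step
        return std::min(std::max(v, minv), maxv);            // re-clamp after snapping
    }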
Further provided with the file 160 is a plurality of shader code segments 170 capable of executing the graphics effect in a plurality of operating environments. As shown, such shader code segments 170 include a syntax including a name, a type and a content. Still yet, the file may be capable of referencing both compiled and un-compiled shader program code. As shown in Figure 1A-2, the shader code segments 170 may be organized in terms of the different operating environments. Thus, the present embodiment may optionally be used as a reference for obtaining desired shader code segments 170. In Appendix A, an example of shader code is shown at lines 60 through 88.

Table 2 illustrates a summary of various elements of an exemplary shader implementation in Direct3D® 8.

Table 2

1. Preamble/declaration: These elements provide a priority for a particular implementation/interface, and a string description of the implementation/interface.

2. Requirements: These specify the various requirements for the implementation/interface to run correctly. In particular, they include the DX8 caps that are required for the shader. All requirements evaluate to type 'bool'.

3. Texture handles: These refer to texture handles that are created either from data in texture files (i.e., .png, .dds, .tga, etc.) or generated textures such as normalization cube maps. The handles can be referenced in subsequent sections of the file, and are independent of the render pass or texture unit.

4. Vertex shader and pixel shader handles: These are the DX8-provided handles that are created either from compiled shader strings or from precompiled shader files. The handles can be referenced in subsequent sections of the file, and are independent of the render pass or texture unit. If a user does not want a vertex shader applied, the handle may be set to the FVF code being used. If the user does not specify a pixel shader for a pass, it may be set to zero, thus turning off pixel shading.

5. Vertex mapping: This section is highly recommended and encouraged, but optional. This is where one may specify the meaning of the various vertex attributes (such as v0, v1, v5) in a shader program. By specifying the mapping and exposing the shader program in string form, an application with a different geometry layout may have the shader program re-written with the new geometry format.

6. A shader implementation can comprise multiple render passes, each with unique render states, texture stage states, vertex mapping, and pixel and vertex shaders.

7. There may be a file that represents the default render and texture stage states for the system. If one does not specify a render state or texture stage state in a pass of a shader program, it is reset to the default state in the file. By using the file, one may gain improved interoperability with shader programs that use the same file. If one wishes to make changes, he or she can do so, but at the cost of having to update shaders to reflect the render state changes.

Table 3 illustrates a summary of various elements of an exemplary shader implementation in OpenGL®.

Table 3

1. Preamble/declaration: These elements provide a priority for the implementation/interface, and a string description of the implementation/interface.

2. Requirements: These specify the various requirements for the implementation/interface to run correctly. In particular, they include the OpenGL® extensions that are required for the shader. If these are not available, the OpenGL implementation may not load the shader program.

3. Texture handles: These refer to texture handles that are created either from data in texture files (i.e., .png, .dds, .tga, etc.) or generated textures such as normalization cube maps. The handles can be referenced in subsequent sections of the file, and are independent of the render pass or texture unit.

4. A shader implementation can comprise multiple render passes, each with a unique vertex program, texture shader and register combiner definitions.
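As an illustration of the capability tests to which item 2 of Table 2 refers, the following C++ fragment checks, against the Direct3D® 8 caps structure, the same conditions that the requirements block of implementation 1 in Appendix A expresses declaratively (as GreaterEqual_float and AllSet functions). The code itself is not part of the patent:

    #include <d3d8.h>

    // Returns true if the device satisfies the imp1-style requirements:
    // vertex shaders 1.1, pixel shaders 1.1, and cube-map texture support.
    bool MeetsImp1Requirements(IDirect3DDevice8* device)
    {
        D3DCAPS8 caps;
        if (FAILED(device->GetDeviceCaps(&caps)))
            return false;
        bool vs11 = caps.VertexShaderVersion >= D3DVS_VERSION(1, 1);
        bool ps11 = caps.PixelShaderVersion  >= D3DPS_VERSION(1, 1);
        bool cube = (caps.TextureCaps & D3DPTEXTURECAPS_CUBEMAP) != 0;
        return vs11 && ps11 && cube;
    }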
Figure 1B illustrates a method 1000 for generating a shader program, in accordance with one embodiment. This method 1000 is generally carried out under control of an application program that renders an image with one or more three-dimensional objects. While the present method 1000 may be implemented in the context of the framework of the foregoing figures, it may readily be implemented in the context of any desired architecture and data structure. As an option, the various operations may be carried out utilizing an interface [i.e., a Component Object Model (COM) interface], a plug-in, etc. Moreover, various steps may be optionally excluded and/or reordered during the course of the processing that is required to generate the shader program.

Initially, in operation 1020, the processing may include initializing an interface. In a preferred embodiment, the interface is an API to the library of effects, and can be implemented as a plug-in. Next, any number of custom types and custom functions are registered in operation 1040. Thus, the shader program may be generated based on the registered custom types and/or custom functions. By this feature, the present embodiment allows a user to customize the resulting shader program.

Next, one of the objects to be rendered is selected in operation 1060, after which such object is set up in operation 1080. This set-up process is carried out for each of a plurality of objects to be rendered, as indicated by decision 1090. Thus, a plurality of objects is set up. This preparation facilitates the generation of the shader program by taking various information relating to the implementation and interface associated with the environment in which the shader program is to be used. More information relating to an exemplary embodiment of such set-up operation will be set forth in greater detail during reference to Figures 2 through 4.

With continuing reference to Figure 1B, one of the objects is selected along with one of a plurality of graphics effects, and a render pass. See operations 1100-1140. The selected render pass is then set up in operation 1160, after which the selected object is drawn with the selected graphics effect. See operation 1180. As an option, the render pass may be set up by pointing to parameters. The shader program may then be generated based on the parameters. Further, the object may be drawn with the selected graphics effect utilizing attributes supplied by an application. Parameters that are not passed in during render pass set-up 1160 generally use default values supplied in the file 160. The parameters can be supplied in any order, and the use of pointers to the parameters provides a mechanism for parameters to be shared amongst a plurality of objects.

During a rendering pass, it may be determined whether more render passes exist, and another render pass selected if more render passes exist. See decision 1200. Further, it may be determined whether more graphics effects exist, and another graphics effect selected if more graphics effects exist. Note decision 1220. Still yet, it may be determined whether more objects exist, and another object selected if more objects exist, as indicated by decision 1240.

It should be noted that the various operations included in the box 1300 may be carried out in any order. See, for example, Figure 1C. Of course, any feasible permutation of the operations may be employed.
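The control flow of operations 1020 through 1240 can be summarized by the following C++ sketch. The Object, Effect and EffectInterface types, and their member names, are hypothetical stand-ins; the specification does not mandate any particular API:

    #include <vector>

    // Hypothetical stand-ins for the objects and effects of Figure 1B; none
    // of these names come from the specification.
    struct Object {};
    struct Effect
    {
        int  PassCount() const { return 1; }
        void SetUpPass(int /*pass*/) {}            // operation 1160: point to parameters
        void Draw(const Object&, int /*pass*/) {}  // operation 1180
    };
    struct EffectInterface
    {
        void Initialize() {}                       // operation 1020
        void RegisterCustomTypesAndFunctions() {}  // operation 1040
        void SetUp(Object&) {}                     // operation 1080
        std::vector<Effect>& EffectsFor(Object&)   // effects selected per object
        {
            static std::vector<Effect> effects(1);
            return effects;
        }
    };

    void RenderFrame(EffectInterface& fx, std::vector<Object>& objects)
    {
        fx.Initialize();
        fx.RegisterCustomTypesAndFunctions();
        for (Object& obj : objects)                // operations 1060-1090
            fx.SetUp(obj);

        for (Object& obj : objects)                              // decision 1240
            for (Effect& effect : fx.EffectsFor(obj))            // decision 1220
                for (int pass = 0; pass < effect.PassCount(); ++pass) // decision 1200
                {
                    effect.SetUpPass(pass);
                    effect.Draw(obj, pass);
                }
    }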
Figure 2 illustrates an "effect binding" method 200 by which objects are set up in accordance with operation 1080 of Figures 1B and 1C. Such method 200 is carried out in the context of an exemplary system including an interface 204 and an application program 202 working in conjunction to process the file. Thus, the shader program is generated based on the processing of the file to apply the graphics effect to the object. Of course, the present method 200 may be implemented in the context of any desired system.

As mentioned earlier, the processing includes setting up a plurality of objects. From the perspective of the application program 202 in the context of the present system embodiment, this may be accomplished by selecting a file associated with a desired graphics effect in operation 206. In one embodiment, a .dll file may be used by a tool or graphics engine to read the file. Next, in operation 208, a graphics application program interface is selected. Thereafter, the interface 204 is called. See operation 210.

In response to such call, implementation data representing a plurality of different hardware graphics pipeline platforms is received based on the selection of the particular graphics application program interface. In one embodiment, any platform that supports the selected graphics application program interface may be represented by the implementation data. Next, parameters are requested and received based on the implementation data, as indicated by operation 214. Further, it may be decided which of the hardware graphics pipeline platforms to use based on the parameters in operation 218. As will soon become apparent, this decision may be made using the application program 202 in conjunction with the interface 204. More information relating to such decisions will be set forth in greater detail during reference to Figure 4.

From the perspective of the interface 204 in the context of the present system embodiment, the objects are set up by generating implementation data representing a plurality of different hardware graphics pipeline platforms, in response to the call of operation 210. Note operation 212. More information as to how this may be accomplished in accordance with one embodiment will be set forth with reference to Figure 3. Parameters are then generated based on the implementation data in operation 216. As mentioned earlier, the interface 204 works in conjunction with the application 202 in operation 218 to decide which of the hardware graphics pipeline platforms to use based on the parameters.

As an option, the interface 204 may be capable of generating primitives. For example, a sphere may be generated from a point and radius, etc. This can be done by defining a geometry generator (for example, with a tag "<geogenerator>"), which is analogous to the pixel shader (as shown with the tag "<pixelshader>") or the vertex shader (as shown with the tag "<vertexshader>"). This primitive generation technique may be useful in many contexts. For example, it may be used when generating grass or other similar objects.

Figure 3 illustrates a method 300 for generating implementation data representing a plurality of different hardware graphics pipeline platforms, in accordance with operation 212 of Figure 2. This method 300 is carried out within the interface 204. It should be noted that the present method 300 is set forth for illustrative purposes only, and should not be construed as limiting in any manner.

As shown in Figure 3, implementation data is retrieved in operation 302, which, for example, finds all the implementations (inside the "<imps>" designation, shown at line 30 in Appendix A) in the file 160. Next, it is determined whether the implementation data meets the requirements outlined under the appropriate graphics application program interface in the current file. If it is determined in decision 304 that the requirements are met, the implementation data is sorted in a list in operation 306. This may be accomplished using a floating-point priority provided by a user. This process is continued for all implementation data associated with the selected graphics application program interface. Note decision 308.
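Operations 302 through 308 amount to a filter-and-sort over the candidate implementations, using the floating-point priorities seen in Appendix A (e.g., 1.0 for implementation 1 and 0.5 for implementation 2). A C++ sketch, with Implementation as a hypothetical record type, is shown below:

    #include <algorithm>
    #include <string>
    #include <vector>

    // Keep only the implementations whose requirements hold (decision 304),
    // then order them by priority, highest first (operation 306).
    struct Implementation
    {
        std::string name;
        float       priority;        // e.g., 1.0 for imp1, 0.5 for imp2
        bool        requirementsMet;
    };

    std::vector<Implementation> SortCandidates(const std::vector<Implementation>& all)
    {
        std::vector<Implementation> list;
        for (const Implementation& imp : all)
            if (imp.requirementsMet)
                list.push_back(imp);
        std::sort(list.begin(), list.end(),
                  [](const Implementation& a, const Implementation& b)
                  { return a.priority > b.priority; });
        return list;
    }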
This primitive generation technique may be useful in many contexts. For example, it may be used when generating grass or other similar objects. Figure 3 illustrates a method 300 for generating implementation data representing a plurality of different hardware graphics pipeline platforms, in accordance with operation 212 of Figure 2. This method 300 is done within the interface 204. It should be noted that the present method 300 is set forth for illustrative purposes only, and should not be construed as limiting in any manner. As shown in Figure 3, implementation data is retrieved in operation 302, which, for example, finds all the implementations (inside the designation" < imps > ", shown at line30 in Appendix A) in the file 160. Next, it is determined whether the implementation data meets the requirements outlined under the appropriate graphics <Desc/Clms Page number 24> application program interface in the current file. If it is determined in decision 304 that the requirements are met, the implementation data is sorted in a list in operation 306. This may be accomplished using a floating point priority provided by a user. This process is continued for all implementation data associated with the selected graphics application program interface. Note decision 308. Figure 4 illustrates an exemplary method 400 by which it may be decided which of the hardware graphics pipeline platforms to use, in accordance with operation 218 of Figure 2. Generally, this method 400 is performed by the application 202. Again, it should be noted that the present method 400 is set forth for illustrative purposes only, and should not be construed as limiting in any manner. Initially, in operation 402, the parameters associated with a particular implementation are identified. This is done by calling the interface and requesting the list of parameters for an implementation. Again, each implementation may correspond with a specific platform (i. e. hardware graphics chips manufactured by different graphics companies). It is then determined, in decision 404, whether the parameters supplied by the interface are understood by the application (i. e. , whether the parameter names can be correctly interpreted by the application). Further, it is determined whether the parameters can be supplied by the application. See decision 406. Both of these decisions must render a positive response if the present implementation is to be utilized by the application program. As an option, the current decisions can be carried out in a place other than the application program. Next, in operation 408, it is determined whether data is matching. If not, any mismatching data is corrected in operation 407. The correction operation 407 can include, for example, swapping the order of the data and/or making the needed data from existing data. Unlike the previous decisions, the present decision 408 may optionally be carried out by the interface. <Desc/Clms Page number 25> The foregoing decisions are made for each of the implementations that are available. See decision 410. Next, graphic effects are assigned to the object in operation 412. Generally, the application selects from the implementations kept in operation 402. In order to allow a user to visually experiment and use a shader program, an optional graphical user interface is provided. In use, the aforementioned graphics effect may be displayed utilizing a graphical user interface. Further, the graphics effect may be capable of being altered by a user utilizing the graphical user interface. 
In order to allow a user to visually experiment with and use a shader program, an optional graphical user interface is provided. In use, the aforementioned graphics effect may be displayed utilizing a graphical user interface. Further, the graphics effect may be capable of being altered by a user utilizing the graphical user interface. In particular, the graphics effect may be capable of being altered by altering parameters (i.e., tweakables), and the shader program may be generated based on the altered parameters. This may be accomplished by way of sliders, edit boxes, etc. The parameters may be altered by tweaking the associated file.

Another graphical user interface may also be provided in which a plurality of graphics effects are displayed for allowing a user to select one graphics effect. Such selected graphics effect is then displayed as applied to an object using a file. Further, the file is modified based on user input and the file is processed. Thus, the shader program may be generated based on the processing of the file.

Figure 5 illustrates a business method 500 associated with the present invention. In use, the file (i.e., see Figure 1A-2) may be sold or otherwise distributed by way of a license agreement. The various shader programs or portions thereof in the file may or may not be distributable to the public for one reason or another. The present computer-implemented business method 500 allows the automated generation of a license agreement that takes into consideration whether non-distributable shader programs exist in a particular file to be licensed.

Initially, in operation 502, a license agreement stored in memory is identified. Further, files associated with the license agreement are identified. It is then determined whether one or more of the files are not distributable, at least in part. See decision 506. This may be accomplished by specifically tagging non-distributable code, or by comparing the contents of the file with a database of known non-distributable code. If it is determined in decision 506 that one or more files are not distributable, a non-disclosure term is included in the license agreement. This non-disclosure term may be of a boilerplate nature and incorporated into the license agreement automatically, or in any other manner that is well known to those of ordinary skill. See operation 508.

In a simplified associated computer-implemented method, a technique is provided for determining whether a file is distributable. Such method may include identifying a file stored in memory, determining whether the file is distributable, and simply indicating whether the file is distributable.

While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. The order of elements within claims does not indicate any particular order of steps or operations.
000000"/ > description type = string- value. ^Shiny with Gloss Alpha Channel"/ > < requirements > < os type-"bool- function-"GreaterEqual~float (parameters/os/vers ion, float (5. 0/ > < api type "bool" function. ^GreaeerEqual~float (parameters/api/veraion, floatt 8. 0)) ^/ > eNbeTexCUreSUpport type ^bool^ function."AllsetS D3DCAPS6/TextureCaps, bitset (D3DPTEXTURECAPS CUBEMAP))-/ > < TextureSupport type-"bool < Vert : exShaders cype-"bool" < VertexShaders Cype. ^bool" function "GreaterEqual float (D3CAP58/VertexShaderVereion, float (1. 1))"/ > < PixelShaders type."bool" function."GreaterEqual~floatf D3DCAPSe/PixelShaderVersion, float (1. 1))"/ > < /requirementso p... e. < passcount type value pass0 > < renderscat : es > < D3DRSCULLMODE type ="D3DCULL" "value-"D3DCULLNONE"/ > < /renderstates > vertexshaderz shader type. string- value. vs 1 1 mov oTO, v7 mul rO, vO x, clO mad ru, vu y, cul, ru mad rO, vO z, c12, rO mad oPos, vO w, cl3, ru <Desc/Clms Page number 28> mul ro. xyz, v3. x, C4 mad xy 3. y C5 r mad rO xyz, v3 z, c6, rO dp3 r0. w, r0. xyz, r0. xyz raq rO w, rO w m I to, to, rO. w sge rO w, rO. w, rO. w mul rl, vO. x, c4 mad r1, v0. y, c5, r1 mad r1, v0. z, c6, r1 mad r1, vo. w, c7, r1 dp3 rl w, rl xyz, rl xyz rsq rl. w mul rl, rl, rl w sgerl. w, rl. w, rl. w dp3 r2, rO, rl add r2, r2, r2 mul r4 xyz, rl r2 add OTJ xyz, r,-r4 age OT1. w, rO. w, rO w mov oDO, vS mpv aDl, v5"/ > < handle typo uin- funcEion-"compiledx8vsf../shader)"/ > equation type. string- value'."dp3 RDOT2. REYEVECTOR. R EYENORMAL add RDOT2, RDOT2, RDOT2 mul REYE NORMAL, REYENORMAL, R DOT2 add oTO, R EYE VECTOR,-R EYE~NORMAL mov oTO. w, c [CVONE]-x"7 > < mapping > v6 type-"string- value ~"position-/ > v3 type = string- value ~"normal-/ > , v5 type ~ string- valve ="diEfuse^/ > < v7 type-string- value. texO-/ > < /mapping > < constants > < c20 type-"vector- value."0. 500000 0. 500000 0. 500000 0. 000000"/ > < c10 type "matrix9" link, parameters/transforms/mvp"/ > < c4 type "matrix4" function. ^POatMUl (parameters/transforms/world, parameters/transforms/view) ^/ > ,/constants, < /vertexshader > < pixelshader > < smootILshader type--string- value-P.. I. l tex tO tex t1 tex t2 sub rO, t0, tl dp3 r0, r0, r0 sub rl, t2, tl dp3 rl, rl, rl sub tO a, ro. a, rl. a add sat ro. a, tO a, eO a cnd r0, r0. a, t0, t2 lrp rO rgb, tl. a, tu, tu + mov rO. a, tl. a-/ > cshader type. string- value."ps l l tex tO tex tl mul x2 sat rl, l-tO a, cO. a mad sat rO, rl, tl, t0"/ > cDVC shader type. string- value ="ps. l. 1 tex t0 dp3~sat to, to, cO sub sat rl, tO, rO mad~sat to, cl. a, rl, to-/ > < handle type--uint function ^compile dxe psi../ahader)"J > < constants > <Desc/Clms Page number 29> < cO type-ve tor4- function ="Construct vector4~floats (floatt 0. 11), float (0 20), float (O S9), sfx/tself I/settings/shininess I / > , cl type-vector4' function.-Const=ctvector4~floats (float ( 0. 5 float (0. 5), float (O S), float (0. 8))"/ > < /constants > < /pixelshader > < texturestageo > < stageo > D3DTES MINFILTER type-D3DTEXTUREFILTERTYPE- value-"D3DTEXFLINEAR"/ > < D3DTS5 MIPFILTER type. ^D3DTEXTUREFILTERTYPE" value-"D3DTEXFLIHERR"/ > D3DTSS MAGFILTER type. D3DTEXTUREFILTERTYPEt value-"D3DTEXP LINEAR"/ > < /stageo > stage < D3DTSS MINFILTER type."D3DTEXTUREFILTERTYPE" value-"D3DTEXFLINEAR"/ > < D3DTSS MIPFILTER type."D3DTEXTUREFILTERTYPE" value. D3DTEXF~LINEAR-/ > < D3DTSS MAGFILTER type "o3DTEXTUREFILTERTYPE" value. D3DTEXF~LINEAR-/ > c/stagelz < /texturestages > textures < t0 type-^vint" link. 
parameters/object/texture/diffuse/2D/RBGBBAA8/handle") > ctl type-"uint- link-tparameters/obJect/texture/normal/~2D/RSG8B8A8/handle-/ > < /textures > c/passO passes /1. pl > c imp2 > < priority type-float- value-0 500000-/' description type. string- value ="Shiny with Gloss Alpha Channel Fixed Function- < requirements > cos type. etbool- function. GreaterEqual~floatf parameters/os/version, float (5. 0/ > cCubeTextureSupport type. bool- function. AllSet (D3DCAPS6/TextureCaps, bitset (D3DPTEXTURECAPS CUBEMAP))-/ > capi type. bool- function-GreaterEqual~float (parameters/api/version, float (8. 0/ > cTextureSupport type. @bool- function-eGreaterEqual uinti D3DCAPSS/MaxSimultaneousTextures, uint (2/ > < /requirements, p... e, < passcount type values < pasa0 > < renderstates > 30R5 NLLMODE type."D30NLL" value-"D3DCULLNOHE"/ > D3DRS TEXTUREFACTOR type. uint- function. vector4~to d3dcolor (Construct vector4 floats (float (0 0), float (0. 0), float (0. 0 sfx/ [self]/settings/shininess)) ^/ > < /randerstates > , transforms, cD3DTS NORLD type-matrix4" link ~ parameters/transforms/world-/ > zD3DTS~VIEw type = watrix4- link. epsrameters/transEorms/viewW cD3DTS FROJECTION type ~ matrix4- <Desc/Clms Page number 30> link = ^parameters/transforms/pmjection"/ > cD3DTS TEXTUREl type-matrix4- function = ^Transpose (parameters/transfozms/world 1"/ > c/transformso < vertexshader > bandle type. uint- link ~ parameters/object/vertex/FVF"/ > < /vertexsbader > < texturestages > cstageoz < D3DTSS MINFILTER type."D3DTEXTUREFILTERTYPE" "value-"D3DTEXFHNEAR"/ > < D3DTSSMIPFILTER type-"D3DTEXTUREFILTERTYPE" value."D3DTEXF~LINEAR-/ > < D3DTSSMAGFILTER type-"D3DTEXTUREFILTERTYPE" value-"D3DTEXFLINEAR"/ > < D3DTSS COLORARG1 type-"D3DTA" value. D3DTA TEXTURE"/ > < D3DTSS COLOROP type-"D3DTEXTUREOP" "value-"D3DTOPSELECTARG1"/ > cD3DTSS ALP8AARG1 type. D3DTA" value-"D3DTATEXTURE) D3DTACOMPLEMENT"/ > < D3DTSS ALPHAOP type."D3DTEXTUREOP" "value-"D3DTOPMODULATE2X"/ > eD3DTSS ALP8AARG2 type. 6zD3DTA- value. D3DTA TFACTOR-/ > ./. t. 9. 0 > , stagel > < D3DTSSTEXCODRDINDEX type-"D3DTSSTCI" value."D3DTStTCI CAMERASFACEREFLECTIONVECTOR | < D3DTSS~TEXTURETRANSFORMFLAGS type"D3DTEXTURETRANSFORMFLAGS" value-"D3DTTFFCOUHT3"/ > < D3DT5S MINFILTER type ="D3DTEXTUREFILTERTYPE" value. D3DTEXF~LINEAR-/, < D3DTSS MIPFILTER type "D3DTEXTUREFILTERTYPE" value "D3DTEXF LINEAft"/ > < D3DTSSMAGFILTER type""D3DTEXTUREFILTERTYPE" value-"D3DTEXFLIMEAR"/ > < D3DTSS COLORARGO type-"D3DTA" value."D3DTA CURRENT"/ > < D3DTSSCOLORARG1 type-"D3DTA" value-"D3DTATEXTURE"/ > < D3DTSSCOLORARG2 type-"D3DTA" value."D3DTA CURRENT D3DTA ALPHAREPLICATE"/ > < D3DTSS COLOROP type ="D3DTEXTUREOP" value ~ D3DTOF MULTIPLYADD"/ > < D3DTSSALPHAOP type-"D3DTEXTUREOP" value.'D3DTOP~SELECTARGl < D3DTSS ALPHAARG1 type-"D3DTA" value-"D3DTACURREHT"/ > < /stagel > c/texturestagesG cpixelshadero c/pixelsbaderz < textures > < tC type--uint" link. ^parameters/object/texture/diffuse/2D/R8G8B8A8/handle"/ > < tl type-Iluint" link-^parameters/locale/texture/environment/cube/RBGBBBAB/handle-/ > < /textures > /pa.. O < /passesz < /imp2 > i. p3. cpriority type-float" value ~ 0 600000-/ > cdescription type. string- value = ^Shiny with Gloss Alpha Channel Fixed Function-/ > erequirementsz < os type---bool- function ^GreaterEqual float (parameters/os/version, EloaCl 5. 0)) ^/ > <Desc/Clms Page number 31> < NbeTextureSupport type."bool" function. AllSeti D3DCAPS8/TextureCaps, bitset {D3DPTEXTURECAP5CUBEMAP))"/ > < api type = ^bool" function-nGreaterEqual~float (psrameters/apl/version, float (8. 
0)) ^/ > cTextureSupport type. bool2 funcCion ="GreaCerEqual uintf D3DC4PS8/MaxSimvltaneousTextures, vint 2))"/a c/requirementso cpassmsz , passcount type-uint- value. 1-/ > spassOo < renderstates > D3DRS CULLMODE type. D3DCULL value-"D3DCULLHOME"/ > D3DRS~TEXTUREFACTOR type."uint- Eunction = ^vectora eo d3dcolor (Conatruct= ector9 floats (float ( 0. 0), float (0. 0 float (O 0) sfx/ [self7/settings/ehininess)) ^/ > < /renderstateso , transforms, D3DTS WORLD type-matrix4|t link parameters/transforms/world-/ > < D3DTS VIEW type."matrix4" link-"parameters/tran3forma/view-/ > < D3DTSPROJECTIOM type-"mat : rix4" link--parameters/tnsforms/projection < D3DT5 TEXTUFE1 type "matrix9" function. ^Transposet parameters/transforms/world)"/ > < /transfoms > < vertexshader > xhandle type. uint- link. ^parametersJobject/vertex/FVP"/ > < /vertexshader > texturestageso stage < D3DTSSMINFILTER type ="D3DTEXTUREFILTERTYPE" value ~"D3DTEXF~LINEAR-/ > < D3DTSSMIPFILTER type-"D3DTEXTUREFILTERTYPE" value ~ D3DTEXF LINEARs < D3DTSSMAGFILTER type ="D3DTEXTUREFILTERTYPE" value-"D3DTEXFLINEAR"/ > < D3DTS5 COLORARG1 type ^D3DTA" value."D3DTA TEXTURE"/ > < D3DTS COLOROP type-"D3DTEXTUREOP" "value-"D3DTOPSELECTARG1"/ > < D3DTSS ALPHAARG1 type."D3DTA" "value-"D3DTATFACTOR"/ > < D3DTSS ALPHAOP type ="D3DTEXTUREOP" value. D3DTOP~MODULATE2X-/ > < D3DTSS ALPHAARG2 type."D3DTA" value = ^D3DTA TEXTURE /a < /stages > stage D3DTSS TEXCOORDINDEX type-"D3DTSS TCI" valve = 3DTSS TCI C4MERASPAC6REFLECfIONVECTOR 1"/ > aD3DTSS TEXTURETRANSFORMFLAGS type "D3DTEXTURETRANSFORMFLAGS" value. D3DITFF COUNT3 cD3DTSS MINFILTER type ~ 2D3DTEXTUREFILTERTYFE- value. D3DTEXF LINEAR-/ > < D3DTSSHIPFILTER type-"D3DTEXTUREFILTERTYPE" value e ^3DTEXF LINEAR"/ > < D3DTS5 MAGFILTER type a"D3DTEXTUREFILTERTYPE" value ~ D3DTEXF LlNEAR"/ > cD3DTSS COLORARGO type ~ D3DTA- value-"D3DTACURRENT"/ > < D3DT55 COLOROP type ="D3DTEXTUREOP" value-"D3DTOPMODULATEALPHAADDCOLOR"/ > cD3DTSS COLORARGl type-D3DTA value. D3DTA~TEXTURE-/ > sD3DTSS ALPHAARGl type. D3DTA- value ~ D3DTA TEXTURE-/ > D3DTSS ALPHAOP type-"D3DTEXTUREOP" value. D3DTOP~SELECTARGl-/ > <Desc/Clms Page number 32> < /stagel > < /texturestages, splxelshadero < /pixelshaderz < textures > atO type. uint- link."parameters/object/texture/diffuse/2D/RBGBB8AB/handle"/ > tl type-uint- link-"parameters/locale/texture/environment/cube/R8GSBAAS/handle"/ > < /textures > < /pass0 > < /passesp < /imp3 > < /dxBo < 091 > imply < element name type value-1 000000-/ > eelement name-description- type-string- value = ^Shiny with GlOSs Alpha Cbanneln/ > < requirements > element name ~ os" type"bool- function-GreaterEqual~float (parameters/os/version, float (5. 0/ > , element name-api- type-bool- function. GreaterEqual float (parameters/api/version, float (1. 0/ > type ="bool" type function ~ 2RequiredNumRegisterCombiners (uint (2} extensions > element nsme. GL NV vertex program- type. bol- function = ^InitExtensionistring (GL NV vertexmgram)) ^/ > element name. GL NV register combiners- type. bool- function. InitExtension (string (GL~NV register combiners))-/ > < element name-^GL ARB texture compression" type. bool- Punction ="InitExtension (stringGL ARB texture compression))"/ > , element name type = ^bool function-InitExtension (string (GL EXT texture compression s3tc))-/ > < /extensions < /requirements > textureHsndles handles ele. ent name-name-- type-string- value. decalTex"/ > element : name-"handle" type. 
GLuint- Evnceion-^afx~glGenTexture () ^/ > < element name-data- type."sfxTexData" function- LoadTextureDataFromFile (string (stonearchaic tga), string (rgb)) / > , element name-mipmap- type s"bool" value-true- , element name-target- type = sfx GLenum^ value""GLTEXTURE2D"/ > element name. internalFormat- type. sfx GLenum- value-"GL RGBB"/ > aelement name. externalFormat- type-"sfx GLenum^ value."GL RGB"/ > , element name type ="9fX GLEnum" value."GL UNSIGNED BYTE"/ > < /handleO > < handlel > le. ent name--name- type ~ string- value ="envmap^/ > , element name--data- <Desc/Clms Page number 33> type-sfxTexData- function- ^LoadTextureDataFromFile (string (aky cube mipmap. ddel, atring (nu111)"/ > < element name--mipmap- type-bool- value-true"/ > < element name--target type-sfx GLenum66 value-GL TEXTURE arRE MAP EXT-/ > < element : name""GLTEXTUREMRAPS" type ="af =GLenum value-"GLCIAMPTOEDGE"/ > element : name-"GLTEXTUREMRAPT" type-^ef =GLenum^ value."GL CLAMP TO EDGE"/ > element name ="GLTEXTURENRAPR" type. sfx GLenumW value ="GL CLAMP TO EDGE"/ > element name-'Gl~TEXTURELMII~FILTER- type-sfx GLenum- value."GL LINEAR"/ > < element name. GL TEXTURE MAG FILTER- type-^sEx GLenum^ value-GL LINEAR-/ > < /handlel > c/textureHandles , passes, xelement name ~ passCount- type-vint value. 1-/ > < pa9s0 > ctransformso -del type link- parametere/transforms/world^/ > < view type--matrix4" link-parameters/transforms/view-/ > cprojection type-mstrix4e link--paramters/transfoms/pjection-/ > < /transformso < vertexshader > mapping < element name- type--string- value-positim-/ > < element name-'V [NRMLI-I type--string- value-normal"/ > < elemenc name-"v [COLO]" type. string" value. diffuse-/ > element name-'V [TEXOI' type ="string" value = ^tex0"/ > < /mapping > < element name-"ehader^ type-"string value ="''VPl. O MOV o [TEXO], v [TEXO] ; DP4 o [HPOS]. x, c [0], v [QPOSI ; DP4 o [HPOSJ. y, ctl], v [OPOS] ; DP4 o [HPOS]. =, c [2], v [OPOS] f DP4 o [HPOS]. w, c [3], v [OPOSJ ; DP4 RO. x, C [81, V [NP. MLI ; DP4 RO. y, ct91, viNRMLI : DP4 RO.., c (IO], VINRML] ; DP3 RO. w, RO, RO ; RSQ RO. w, RO. w ; MUL R0, R0, RO. w ; SGE RO. w, RO. w, RO. w ; DP4 Rl. x, c [4], v [OPOS] ; DP4 Rl. y, ctSI, vtOPOSI : DP4 Rl. z, c [6], v [OPOS] ; DP4 Rl. w, c [7], v [OPOS] ; DP3 Rl. w, R1, R1 ; RSQ Rl. w, Rl. w ; MUL Rl, Rl, RI. w ; SGE Rl. w, Rl. w, Rl. w ; DP3 R2, R0, R1 ADD R2, R2, R2 ; MUL R4. xyz, Rl, R2 ; ADD o [TEXl]. xyz, RO,-R4 ; SGE o [TEX1]. w, RO. w, RO. w ; MOV o [COLOJ. v [COLO) ; MOV oECOLlJ, v [COLO] ; END"/ > element name-handle- type ="og1 vs handle" funet3on ="compile ogl= s handle (../shader) ^/ > <Desc/Clms Page number 34> < consta ts > zelement name-cO- type-nvTrackMatrixParams- value-GL MODELVIEN~PROifECTION~NV GL IDENTITY NV-/ > elem-t name-'c4' type--nvTrackMatrixParams- value-GL~MODELVIEW GL IDENTITY NVs < element name-1-811 type. nvTrackMatrixParams" value. GL MODELVIEW GL INVERSE TkANSPOSE NV-/ > < /constantso < /vertexshader > textures < unito > element name ~"handle- type,-string- link-sfx/tself)/Imps/ogl/impl/textureHandles/handleO/name/ > < /unito > < unitl > element name-handle" type ~"string- link-"sfx/[selfl/imps/ogl/impl/textureHandles/handlel/name-/ > < /unitl > a/texturea > < registercombiners > constants < element name-consto- type.-vector4 function--ConstCt~yeCtor4~floatS (float ( 0. 0 float (0. 0 float (0. 0), sfx/tselfl/settings/shininess < /conatanta > rgb { type ="etring" value"IIRC1. 0 rgb { rgb spareO. unsigned~lnvert (texO al'consto a : ! 
{ rgb { discard = unsigned (zpareO) *texl : di3card-texO : spareO-sum () ; out rgb. spareO ; out. rgb. spare0 ;"/ > element name'handle- type-"ogl-c handle^ function. ncompile ogl rc bandle (/nvparseInlineRegisterCombiner)"/ > < /registercombiners > /p... 0 > '/passes /i. pl > /ogl > < /imps > < /depot, |
A method and circuit for device-specific configuration of an operating voltage are provided. A circuit design is analyzed (309) to determine a maximum gate-level delay for the circuit design. A minimum voltage value corresponding to the maximum gate-level delay is determined along with a default voltage value corresponding to a default gate-level delay (306). A voltage scaling factor corresponding to the minimum voltage and default voltage values is determined (320). The circuit design is synthesized such that the synthesized design includes the voltage scaling factor (408). The synthesized design specifies setting an operating voltage to a startup voltage value (410) scaled by the voltage scaling factor (408). |
CLAIMS What is claimed is: 1. A method for synthesis of a circuit design, the method comprising: inputting delay-voltage data that describes a plurality of delay values, the delay values corresponding to operating voltage values of a target device; determining from analysis of the circuit design a maximum gate-level delay for the circuit design; determining a minimum voltage value corresponding to the maximum gate-level delay and a default voltage value corresponding to a default gate-level delay; determining a voltage scaling factor corresponding to the minimum voltage value and the default voltage value; and synthesizing the circuit design, wherein the synthesized circuit design includes the voltage scaling factor, and the synthesized circuit design specifies setting an operating voltage to a value of a startup voltage value scaled by the voltage scaling factor, wherein the startup voltage value is stored in a target device for implementing the synthesized circuit design. 2. The method of claim 1, wherein determining a maximum gate-level delay includes determining whether the maximum gate-level delay is within user-defined delay constraints. 3. The method of claim 1, wherein determining a maximum gate-level delay includes determining whether a voltage value corresponding to the maximum gate-level delay in the delay-voltage data is within user-defined voltage constraints. 4. The method of claim 1, wherein determining a maximum gate-level delay includes determining whether a user-defined voltage scaling parameter scales the maximum gate-level delay to a selected delay value. 5. The method of claim 1, wherein determining a maximum gate-level delay includes determining whether a voltage value corresponding to the maximum gate-level delay in the delay-voltage data is equal to a user-defined operating voltage parameter. 6. The method of any one of claims 1-5, further comprising: determining maximum delay requirements of each path of the circuit design and performing place-and-route optimizations according to the maximum delay requirements of each path; wherein the voltage-delay data further specifies respective delay parameters for areas of the target device. 7. The method of claim 1, wherein determining a maximum gate-level delay includes: simulating the circuit design with a gate-level delay equal to the default delay; verifying whether output of the simulation is correct; and in response to verifying the output of the simulation is correct: increasing the simulation delay by a selected amount; and repeating simulation of the circuit design and verification of output using the increased simulation delay. 8. The method of claim 1, wherein determining a maximum gate-level delay includes: simulating the circuit design with a supply voltage equal to the default voltage; verifying whether output of the simulation is correct; in response to verifying the output of the simulation is correct: decreasing the supply voltage by a selected amount; and repeating simulation of the circuit design and verification of output using the decreased supply voltage; determining a least supply voltage wherein simulation of the circuit design produced correct output; and determining a gate-level delay of the simulation corresponding to the least supply voltage. 9. The method of any one of claims 1-8, further comprising: generating a bitstream from the synthesized circuit design; wherein the bitstream is further configured to program the target device to set the operating voltage of the target device by signaling an external power supply. 10. A programmable integrated circuit comprising: a plurality of programmable resources; a plurality of programmable routing resources for coupling the programmable resources; a plurality of configuration memory cells coupled to the programmable resources and to the programmable routing resources; a non-volatile memory unit; and a power controller unit coupled to the non-volatile memory unit, wherein the power controller unit is configured to set the operating voltage to a minimum value stored in the non-volatile memory unit. 11. The programmable integrated circuit of claim 10, wherein: the power controller unit is further coupled to an output port; and the power controller unit is configured to set the operating voltage by outputting the minimum value on the output port. 12. The programmable integrated circuit of claim 10 or 11, wherein the minimum value stored in the non-volatile memory unit is equal to a determined minimum operating voltage required for a maximum operating delay. 13. The programmable integrated circuit of claim 10 or 12, wherein the power controller unit is configured to set the operating voltage to a value equal to the minimum value stored in the non-volatile memory unit scaled by a voltage parameter stored in the non-volatile memory unit. 14. The programmable integrated circuit of any one of claims 10-13, wherein the power controller unit is implemented on a subset of the programmable resources and programmable routing resources using a subset of the configuration memory cells. 15. The programmable integrated circuit of any one of claims 10-13, wherein the power controller unit is implemented with dedicated hardware. |
DEVICE SPECIFIC CONFIGURATION OF OPERATING VOLTAGE FIELD OF THE INVENTION An embodiment generally relates to integrated circuits, and particularly to programmable voltage of integrated circuits. BACKGROUND The minimum dimension that a given photolithography process can resolve is alternatively called the minimum feature size or the critical dimension. The feature size is a parameter of interest as reductions in the feature size tend to improve speed performance of the IC. The feature size of a printed integrated circuit (IC) is not uniform. The printing process results in slight variation of the feature size from lot to lot, from wafer to wafer, and from device to device within each wafer. As a result, programmable ICs, such as field programmable gate arrays (FPGAs), vary in static power and circuit delay due to variations in the manufacturing process. Slow devices usually have lower static power and fast devices usually have higher static power requirements. As circuit designs continue to increase the speed and power efficiency requirements of target devices, it becomes increasingly important for developers to simulate and test circuit designs on target devices using precise power and delay specifications prior to realization. Many programmable IC vendors, such as Xilinx, Inc., measure the switching speed of several printed devices of a product design to determine a minimum operating voltage and maximum delay that can be guaranteed to designers. Due to variations from device to device, in order for the guaranteed specifications to apply to a majority of the printed devices, the guaranteed voltage and delay specifications are offset to include a certain amount of headroom. For example, measurements may indicate that the majority of product devices can operate on average at or above 110 megahertz (MHz) at a 1 V operating voltage, but a small percentage of the devices will operate as low as 102 MHz at the same voltage. The specification may offset the average speed of 110 MHz by a headroom of 10 MHz to ensure devices perform as indicated in the specification. The presence of process variations degrades the performance and power specifications that manufacturers can guarantee to customers. The larger the amount of variation, the larger the specification is offset by a headroom. Because of the included headroom, many printed devices in a product design are capable of performing with better voltage and delay parameters than those guaranteed in the vendor product specification. One or more embodiments may address one or more of the above issues. SUMMARY In one embodiment, a method for synthesis of a circuit design is provided. Delay-voltage data that describes a plurality of delay values can be input. The delay values can correspond to operating voltage values of a target device. The circuit design can be analyzed to determine a maximum gate-level delay for the circuit design. A minimum voltage value corresponding to the maximum gate-level delay can be determined along with a default voltage value corresponding to a default gate-level delay. A voltage scaling factor corresponding to the minimum voltage and default voltage values can be determined. The circuit design can be synthesized such that the synthesized design includes the voltage scaling factor. The synthesized design can specify setting an operating voltage to a value of a startup voltage value scaled by the voltage scaling factor. The startup voltage value can be a value stored in the target device for implementing the synthesized circuit design.
In this embodiment, determining a maximum gate-level delay can include determining whether the maximum gate-level delay is within user-defined delay constraints. Determining a maximum gate-level delay can include determining whether a voltage value corresponding to the maximum gate-level delay in the delay-voltage data is within user-defined voltage constraints. Determining a maximum gate-level delay can include determining whether a user-defined voltage scaling parameter scales the maximum gate-level delay to a selected delay value. Determining a maximum gate-level delay can include determining whether a voltage value corresponding to the maximum gate-level delay in the delay-voltage data is equal to a user-defined operating voltage parameter. This embodiment of the method can further comprise determining maximum delay requirements of each path of the circuit design and performing place-and-route optimizations according to the maximum delay requirements of each path; wherein the voltage-delay data can further specify respective delay parameters for areas of the target device. Determining a maximum gate-level delay can include: simulating the circuit design with a gate-level delay equal to the default delay; verifying whether output of the simulation is correct; and in response to verifying the output of the simulation is correct: increasing the simulation delay by a selected amount; and repeating simulation of the circuit design and verification of output using the increased simulation delay. In this embodiment, determining a maximum gate-level delay can include: simulating the circuit design with a supply voltage equal to the default voltage; verifying whether output of the simulation is correct; in response to verifying the output of the simulation is correct: decreasing the supply voltage by a selected amount; and repeating simulation of the circuit design and verification of output using the decreased supply voltage; determining a least supply voltage wherein simulation of the circuit design produced correct output; and determining a gate-level delay of the simulation corresponding to the least supply voltage. This embodiment of the method can further comprise generating a bitstream from the synthesized circuit design; wherein the bitstream can be further configured to program the target device to set the operating voltage of the target device by signaling an external power supply. In another embodiment, a programmable integrated circuit is provided. The programmable integrated circuit can include a plurality of programmable resources and a plurality of programmable routing resources for coupling the programmable resources. A plurality of configuration memory cells can be coupled to the programmable resources and to the programmable routing resources. The programmable integrated circuit can also include a non-volatile memory unit and a power controller unit coupled to the non-volatile memory unit. The power controller unit can be coupled and configured to set the operating voltage to a minimum value stored in the non-volatile memory unit. In this embodiment, the power controller unit can be further coupled to an output port; and the power controller unit can be configured to set the operating voltage by outputting the minimum value on the output port. The minimum value stored in the non-volatile memory unit can be equal to a determined minimum operating voltage required for a maximum operating delay.
The power controller unit can be configured to set the operating voltage to a value equal to the minimum value stored in the non-volatile memory unit scaled by a voltage parameter stored in the non-volatile memory unit. The power controller unit can be implemented on a subset of the programmable resources and programmable routing resources using a subset of the configuration memory cells. The power controller unit can be implemented with dedicated hardware. In yet another embodiment, a method for synthesis of a circuit design is provided. Delay-voltage data that describes a plurality of delay values corresponding to operating voltage values of a target device can be input. A maximum gate-level delay for the circuit design can be determined by a processor from analysis of the circuit design. The one of the operating voltage values corresponding to one of the plurality of delay values that is equivalent to the determined maximum gate-level delay can be determined. The circuit design can be synthesized such that the synthesized design specifies storing a voltage scaling value in a non-volatile memory. The synthesized design can further specify setting an operating voltage of a realized circuit of the synthesized design to a value of the one operating voltage value. In yet another embodiment, a method for synthesis of a circuit design is provided. This embodiment of the method can include inputting delay-voltage data that describes a plurality of delay values corresponding to operating voltage values of a target device; determining from analysis of the circuit design a maximum gate-level delay for the circuit design; determining one of the operating voltage values corresponding to one of the plurality of delay values that is equivalent to the determined maximum gate-level delay; and synthesizing the circuit design, wherein the synthesized circuit design specifies: storing a voltage scaling value in a non-volatile memory; and setting an operating voltage of a realized circuit of the synthesized circuit design to a value of the one operating voltage value. This embodiment of the method can further include inputting a design constraint; wherein the determining a maximum gate-level delay for the circuit design can include determining a maximum gate-level delay that meets the design constraint; and wherein the synthesizing can be performed in response to the determined one of the operating voltage values being less than or equal to the design constraint. The design constraint can be a specific operating voltage. The design constraint can be a maximum operating voltage. The design constraint can be a maximum user-defined gate-level delay. It will be appreciated that various other embodiments are set forth in the Detailed Description and Claims which follow. BRIEF DESCRIPTION OF THE DRAWINGS Various aspects and advantages will become apparent upon review of the following detailed description and upon reference to the drawings in which: FIG. 1-1 shows a graph of voltage versus clock speed for five example devices; FIG. 1-2 shows an example table of voltage scaling factors and corresponding delay scaling factors; FIG. 1-3 shows a graph of an equation representing the table shown in FIG. 1-2; FIG. 2 shows a graph of power versus delay of five devices after voltage scaling; FIG. 3 shows a flowchart of a process to determine the voltage scaling factor for a target device;
FIG. 4-1 shows a block diagram of a programmable integrated circuit configured with a power controller and coupled to an external programmable power supply in accordance with various embodiments; FIG. 4-2 shows a block diagram of a programmable integrated circuit configured with a power controller and internal power regulator in accordance with various embodiments; FIG. 5 shows a flowchart of a process in which a target device configured with a power controller may adjust voltage in accordance with several embodiments; FIG. 6 shows a flowchart of a process in which a target device configured with a power controller implemented in dedicated hardware may adjust voltage in accordance with several embodiments; FIG. 7 illustrates a block diagram of a programmable integrated circuit for implementing a circuit design with programmable operating voltage in accordance with various embodiments; and FIG. 8 illustrates a block diagram of a general purpose processor computing arrangement for implementing a data bus controller in accordance with various embodiments. DETAILED DESCRIPTION OF THE DRAWINGS The various embodiments of this disclosure provide methods of using programmable voltage to improve power delay variation in integrated circuits. Due to variation in the lithography process of integrated circuit manufacture, different devices of the same design require different voltages to achieve the same gate switching speed. Faster devices can meet a specified timing requirement with lower voltages, and slower devices can be sped up to achieve the specified timing requirement with a higher voltage. Reducing the variance of power and delay distributions can improve both power and delay specifications of a product design. In one embodiment, each device is tested to determine a minimum operating voltage (Vmin) for a nominal delay indicated in the product specification. This voltage is stored in a non-volatile memory on the die. Vmin can then be used to signal a programmable power supply to set the operating voltage of the device to Vmin. For example, FIG. 1-1 shows a graph of voltage versus speed performance for five hypothetical devices cut from the same wafer. Each device can operate at a slightly different speed at a given operating voltage due to variation in the printing of the devices. A device specification from the manufacturer may indicate that 120 megahertz (MHz) operation can be guaranteed at an operating voltage of 1.0 volts. This would ensure that at 1.0 volts, all devices sold by the manufacturer would perform as specified. However, four of the devices can operate at 120 MHz under lower operating voltages 102. By measuring each device to determine the minimum operating voltage for a speed that will be indicated in the specification, the determined minimum operating voltage can be stored in non-volatile memory of each device and used to set the operating voltage at startup. To efficiently determine Vmin for each realized device, a different final test flow is employed in a manufacturing test. Special speed testing is placed towards the beginning of the test flow after some gross open/short and gross defect testing. These special speed tests are performed at different voltage levels between the typical specification and the minimum guaranteed level. The lowest voltage level necessary for all tested devices to pass the requirements that will be used in the product specification is recorded. A functional testing voltage level at which the device can achieve the required speed is determined.
The device is then tested at the functional test voltage level to guarantee functionality at the programmed Vmin level. It is understood that each device need not be measured individually. Several devices cut from the same wafer can be used to generalize the minimum voltage of the wafer. Each wafer could also be divided into regions, and several devices cut from the same region can be used to generalize the minimum voltage of the region. In one embodiment, further testing can be performed on several of the printed devices to determine a common scaling between a first set of minimum voltages necessary to operate each device at a first speed and a second set of minimum voltages necessary to operate each device at a second speed. Several common scaling factors of a minimum voltage may be provided in a device specification to indicate voltages necessary to operate devices at several different operating speeds. Because the scaling is common, the same scaling factor can be used with the Vmin stored on several devices to determine the scaled operating voltage necessary to operate each device at a certain operating speed. For example, the graph in FIG. 1-1 shows the voltage required for five devices at several clock speeds. A common scaling factor can be determined to scale the voltage necessary to operate at 120 MHz 102 to a voltage necessary to operate at 100 MHz 104. In this example the voltage of a device necessary to operate at 100 MHz (V100MHz) is given by the equation V100MHz = Vscale100 * V120MHz, where V120MHz is the operating voltage of the device necessary to operate at 120 MHz and Vscale100 is a scaling factor to scale between the two operating speeds. In this example, the common scaling between devices is a linear equation. It will be recognized that some product designs may require a nonlinear equation to represent a common scaling of an operating speed. By including several scaling factors in a device specification, automated design tools can be used by a designer to program a desired one of those scaling factors into a synthesized design or bitstream. When the design is printed or programmed onto programmable logic, the scaling factor can be read at startup along with a minimum voltage value stored in non-volatile memory. The scaling factor can scale the stored minimum voltage value to achieve a voltage level corresponding to a desired operating speed. In this manner, designers can determine a needed operating speed for their design and configure their design to operate at the minimum necessary voltage to achieve the required operating speed. The scaling factor may be stored in non-volatile memory internal or external to the device. For example, if the scaling factor is stored in the bitstream of an FPGA, the bitstream may be stored in internal or external non-volatile memory prior to device configuration at startup. The scaling factor may not necessarily be linear. For example, in FIG. 1-1 the scaling factor to scale from 120 MHz to 115 MHz may be different from the scaling factor to scale from 115 MHz to 110 MHz. When a voltage scaling factor is used in conjunction with a Vmin stored on a device, the Vmin of each device should correspond to one operating speed. In this manner, the same scaling factor can be used to scale the Vmin of each device. Likewise, the scaling factor programmed into the bitstream should scale the voltage of the one operating speed to a voltage necessary to operate the device at a designed operating speed.
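By way of illustration only, the scaling relationship above could be applied at startup as follows. The function and parameter names are hypothetical, and the clamp to a safe operating range anticipates the safe-range discussion below.

    #include <algorithm>

    // Illustration only: derive the startup operating voltage from the stored
    // Vmin and a scaling factor programmed into the bitstream (for example,
    // V100MHz = Vscale100 * V120MHz). The clamp keeps the result inside the
    // safe operating voltage range a product specification may define.
    double ScaledOperatingVoltage(double vmin_volts, double vscale,
                                  double safe_min_volts, double safe_max_volts) {
        return std::clamp(vscale * vmin_volts, safe_min_volts, safe_max_volts);
    }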
To enable designers to operate a device at optimal voltages at different operating speeds, several scaling factors can be included in a device specification. FIG. 1-2 shows a table of voltage scaling factors and corresponding delay scaling factors to scale the 120 MHz voltages 102 in FIG. 1-1 to voltages at the other operating speeds shown. The delay scaling factor (Vdelay) scales the delay at one voltage to the delay at another voltage. In addition to, or in lieu of, the scaling factor table shown in FIG. 1-2, an equation to convert a delay scaling factor to a voltage scaling factor can be included in a device specification. For example, FIG. 1-3 shows a graph of the equation 4(Vdelay)^2 - 9(Vdelay) + 6.6, which can be used to calculate the voltage scaling factors for delay scaling factors not included in the table shown in FIG. 1-2. Voltage scaling can be used to reduce the voltage to reduce power consumption or to increase the voltage to improve performance. FIG. 2 shows a power versus delay distribution of five tested devices operating at a nominal voltage. Devices a 208 and b 210 can be slowed down by operating at voltages lower than the nominal voltage. Devices d 206 and e 204 are sped up by operating at voltages higher than the nominal voltage. As a result, all devices operate at a delay of D0 202. Consequently, the timing specification is improved from Dvar to D0, and the power specification is improved from Pvar to Pnew. It should be noted that in scaling voltages of devices, low voltages may affect the functionality of the device and high voltages may adversely affect reliability. A product specification may include a safe operating voltage range to ensure that the operating voltage is not scaled outside a safe operating range. In some embodiments, software design tools may be used to determine whether a target device can operate at reduced voltages based on various user constraints such as a maximum operating speed, maximum operating voltage, etc. If the design tools determine that the user constraints can be met through voltage scaling, an appropriate voltage scaling factor is determined and programmed into the bitstream or otherwise incorporated into the realized circuit design. Software design tools can be used to determine a maximum delay that produces correct output for a specific circuit design. For example, the design tools may analyze a circuit design and determine that the specified timing constraints can be met even if delay parameters are 1% lower than indicated in the specification. The tools may determine that a delay scaling factor of 1.1 corresponds to a voltage scaling factor of 0.88 using the example table shown in FIG. 1-2. The voltage scaling factor of 0.88 can be stored in the bitstream of the synthesized circuit design and used along with the minimum voltage stored in non-volatile memory to set the operating voltage of a programmable power supply when the device is powered on. In one embodiment, a timing analysis is iteratively performed on the circuit design. In each iteration, the delay parameters in the delay specification of the target device are derated by an incrementally increasing scaling factor. The iterating stops when the design fails to meet the timing constraints. The last delay scaling factor that meets the timing constraints is used as the delay scaling factor.
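For factors covered by the characterized table of FIG. 1-2, a tool would normally use the table directly and reserve the fitted curve of FIG. 1-3 for untabulated factors. A minimal sketch, with placeholder table contents, might be:

    #include <map>

    // Illustration only: map a delay scaling factor to a voltage scaling factor.
    // Characterized table entries (as in FIG. 1-2) take precedence; the example
    // curve of FIG. 1-3, 4*(Vdelay)^2 - 9*(Vdelay) + 6.6, covers untabulated
    // factors. The table content below is a placeholder, and the exact-match
    // lookup on a double key is kept only for brevity of the sketch.
    double VoltageScaleFromDelayScale(double vdelay) {
        static const std::map<double, double> kTable = {
            {1.10, 0.88},  // the example pair discussed in the text
        };
        auto it = kTable.find(vdelay);
        if (it != kTable.end()) return it->second;
        return 4.0 * vdelay * vdelay - 9.0 * vdelay + 6.6;  // FIG. 1-3 curve
    }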
FIG. 3 shows a flowchart of an example process for determining a voltage scaling factor for a specific circuit design. A circuit design 302 and a voltage/delay specification 304 are received at step 306. The voltage/delay specification 304 corresponds to a target device that will be used to realize circuit design 302. A default gate-level delay is determined for the target device from the voltage/delay specification 304 at step 306. The default gate-level delay corresponds to the Vmin operating voltage programmed on the target device. For example, the maximum guaranteed delay indicated at a nominal voltage in the specification may be used as the default gate-level delay. Timing analysis is performed at step 309 to determine the performance and functionality of the circuit design 302 with the set gate-level delay. In some embodiments, optimizations may be performed at step 308 to improve functionality and performance of the circuit. For example, the circuit design may be re-mapped, re-placed, and/or re-routed to improve throughput or meet timing constraints of the circuit design. In addition to producing functionally correct output, timing and design constraints may include a number of user-defined limitations, such as a specific operating voltage, a specific voltage scaling factor, a specific gate-level delay, a specific operating frequency of the target device, etc. If the circuit design is determined to produce correct output and timing and/or design constraints are met at decision step 310, the current gate-level delay or a scaling factor of the default gate-level delay is stored at step 312. The gate-level delay is increased at step 316 and timing analysis is performed on the circuit design at step 309. The circuit design may also be further optimized at step 308. This process is repeated until circuit design 302 is determined to produce incorrect output or fails to meet the timing and/or design constraints at decision step 310. After the circuit design 302 fails to produce correct output or meet design/timing constraints, the most recently stored delay, corresponding to the largest functional gate-level delay, is retrieved at step 318. The delay scaling factor is converted to a voltage scaling factor 322 and output at step 320. The mapping of delay scaling factor to voltage scaling factor can be determined by characterizing FPGA delay parameters at multiple voltages and provided in a table or equation as discussed above. In some other embodiments, a voltage scaling factor for a specific circuit design and target device can be determined by iteratively simulating the circuit design on a model of the target device using incrementally decreasing operating voltage levels. In each iteration, the simulation can simulate the latching speed of transistors of the target device for the current voltage level. The iterating stops when the design fails to meet the timing or design constraints. The last operating voltage level where the design meets the timing and design constraints is used as the operating voltage of the device. Once a voltage scaling factor is determined, a bitstream of the design including a specific voltage or a voltage scaling factor may be generated and loaded onto a target device.
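A minimal sketch of this iterative derating loop, with hypothetical stand-ins for the tool's timing-analysis internals, might look as follows:

    #include <functional>

    // Hypothetical stand-ins for the design tool's internals.
    struct CircuitDesign {};
    struct DelaySpec { double derate = 1.0; };  // specification delay derating

    // Sketch of the iterative flow of FIG. 3 (steps 306-320): derate the delay
    // specification by an increasing factor until the design fails timing, and
    // return the last passing delay scaling factor. That factor is converted to
    // a voltage scaling factor at step 320, e.g. with the conversion sketched
    // above. The 0.01 step size is an assumption, not from the disclosure.
    double DetermineDelayScale(
        const CircuitDesign& design, DelaySpec spec,
        const std::function<bool(const CircuitDesign&, const DelaySpec&)>& meets_timing) {
        const double kStep = 0.01;
        double last_passing = 1.0;  // factor 1.0 is the default gate-level delay
        for (double factor = 1.0;; factor += kStep) {
            spec.derate = factor;
            if (!meets_timing(design, spec)) break;  // decision 310 fails
            last_passing = factor;                   // step 312 stores the factor
        }
        return last_passing;
    }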
In another embodiment, the user can request the design tools to produce a design with sufficient performance headroom to allow the operating voltage to be scaled by a certain voltage scaling factor. Alternatively, the user may request a precise operating voltage. The tool determines the necessary delay scaling factor using the example mapping table in FIG. 1-2 and runs the timing-driven implementation flow using a nominal voltage indicated in the specification, where the delay parameters are derated by the delay scaling factor. If the tools succeed in meeting the timing constraints, then the resulting design will be able to operate under a voltage scaling factor (or actual voltage) requested by the user. The power controller would signal a programmable power supply to set the operating voltage at Vmin scaled by the voltage scaling factor. For example, the user may ask the tools to produce a Virtex-5 design that can operate at 0.88 V (or a scaling factor of 0.88). The tool determines that the design must operate with a timing delay indicated in the specification derated by a delay scaling factor of 1.10. The tools run a timing-driven flow using delay parameters indicated in the specification that are adjusted by 1.10. When successful, the resulting design can meet timing at 0.88 V. If Vmin is used, the power controller sets each part at 0.88*Vmin. If Vmin is not used, the power controller sets each part at a fixed voltage of 0.88 V. In some embodiments, Vmin is not used or may not be stored on the target device. In these embodiments, a specific operating voltage may be programmed into the bitstream. Alternatively, a scaling factor to scale the nominal voltage indicated in the product specification can be determined. The specific scaling factor meeting defined user constraints can be determined using the methods discussed above. The determined scaling factor is then programmed into a bitstream and loaded onto a target device. When the target device is powered on, a power controller circuit can simply set the supply voltage at the nominal voltage scaled by the voltage scaling factor.
Power controller 406 is configured to retrieve Vmin 410 and Voltage scaling factor 408, determine an operating voltage and signal power regulator 422 to output the determined operating voltage. FIG. 5 shows a flowchart of an example process in which a target FPGA device having a power controller implemented in programmable logic may adjust voltage in accordance with several embodiments. The target device is powered on at step 502, and the power supply sets Vcc to an initial default nominal value. This voltage may either be set by pull-up and pull-down resistors or be preset to respond to the status signal. If the status signal is used, it must be valid before the FPGA is configured. The FPGA programmable logic is configured at step 504. After configuration of the FPGA, power controller 506 reads Vmin and/or Vscale from nonvolatile memory at step 506 and determines a minimum operating voltage for the target device. The power controller indicates the minimum operating voltage to the programmable power supply using a valid VID at step 508. The power supply sets Vcc to the voltage indicated in the VID at step 510.The various embodiments, may implement a circuit design on a number of target devices. It is understood that the target device may be an application specific integrated circuit (ASIC) or a programmable logic integrated circuit such as an FPGA. If the target device implements programmable logic, the power control logic may be implemented in dedicated hardware or in programmable logic. If a status signal is not used to signal the programmable power supply, the power controller may be a dedicated hardware or a programmable logic. However, if the status signal is used to set the initial voltage before an FPGA is configured, the power controller should be a dedicated hardware so that it is active before the device is configured. The power controller can then set the status signal to indicate to the power supply that the FPGA has been configured and VID is now valid. If a status signal is not used, then this step may be skipped. FIG. 6 shows a flowchart of an example process in which a target FPGA device having a power controller implemented in dedicated hardware may adjust voltage in accordance with several embodiments. The target device is powered on at step 602 and the power supply sets Vcc to an initial default nominal value. A dedicated power controller reads Vmin from non-volatile memory and Vscale from configuration memory at step 604 and determines a minimum operating voltage. The power controller sets power supply signal line to the minimum operating voltage Vmin at step 606. The FPGA is configured at step 608. The Power controller sets status to signal VID is valid at step 610. Power supply sets Vcc to the minimum operating voltage at step 612. FIG. 7 is a block diagram of an example programmable integrated circuit that may be used in implementing a circuit design with programmable operating voltage in accordance with various embodiments. A power controller, as previously described, may be implemented on the programmable logic and interconnect resources of programmable integrated circuit. FPGAs can include several different types of programmable logic blocks in the array. For example, FIG. 
FIG. 7 is a block diagram of an example programmable integrated circuit that may be used in implementing a circuit design with programmable operating voltage in accordance with various embodiments. A power controller, as previously described, may be implemented on the programmable logic and interconnect resources of the programmable integrated circuit. FPGAs can include several different types of programmable logic blocks in the array. For example, FIG. 7 illustrates an FPGA architecture (700) that includes a large number of different programmable tiles including multi-gigabit transceivers (MGTs 701), configurable logic blocks (CLBs 702), random access memory blocks (BRAMs 703), input/output blocks (IOBs 704), configuration and clocking logic (CONFIG/CLOCKS 705), digital signal processing blocks (DSPs 706), specialized input/output blocks (I/O 707), for example, clock ports, and other programmable logic 708 such as digital clock managers, analog-to-digital converters, system monitoring logic, and so forth. Some FPGAs also include dedicated processor blocks (PROC 710) and internal and external reconfiguration ports (not shown). In some FPGAs, each programmable tile includes a programmable interconnect element (INT 711) having standardized connections to and from a corresponding interconnect element in each adjacent tile. Therefore, the programmable interconnect elements taken together implement the programmable interconnect structure for the illustrated FPGA. The programmable interconnect element INT 711 also includes the connections to and from the programmable logic element within the same tile, as shown by the examples included at the top of FIG. 7. For example, a CLB 702 can include a programmable resource such as a configurable logic element CLE 712 that can be programmed to implement user logic plus a single programmable interconnect element INT 711. A BRAM 703 can include a BRAM logic element (BRL 713) in addition to one or more programmable interconnect elements. Typically, the number of interconnect elements included in a tile depends on the height of the tile. In the pictured embodiment, a BRAM tile has the same height as four CLBs, but other numbers (e.g., five) can also be used. A DSP tile 706 can include a DSP logic element (DSPL 714) in addition to an appropriate number of programmable interconnect elements. An IOB 704 can include, for example, two instances of an input/output logic element (IOL 715) in addition to one instance of the programmable interconnect element INT 711. As will be clear to those of skill in the art, the actual I/O pads connected, for example, to the I/O logic element 715 are manufactured using metal layered above the various illustrated logic blocks, and typically are not confined to the area of the input/output logic element 715. In the pictured embodiment, a columnar area near the center of the die (shown shaded in FIG. 7) is used for configuration, clock, and other control logic. Horizontal areas 709 extending from this column are used to distribute the clocks and configuration signals across the breadth of the FPGA. Some FPGAs utilizing the architecture illustrated in FIG. 7 include additional logic blocks that disrupt the regular columnar structure making up a large part of the FPGA. The additional logic blocks can be programmable blocks and/or dedicated logic. For example, the processor block PROC 710 shown in FIG. 7 spans several columns of CLBs and BRAMs. Note that FIG. 7 is intended to illustrate only an exemplary FPGA architecture. The numbers of logic blocks in a column, the relative widths of the columns, the number and order of columns, the types of logic blocks included in the columns, the relative sizes of the logic blocks, and the interconnect/logic implementations included at the top of FIG. 7 are purely exemplary.
For example, in an actual FPGA more than one adjacent column of CLBs is typically included wherever the CLBs appear, to facilitate the efficient implementation of user logic. FIG. 8 is a block diagram of an example computing arrangement on which the processes described herein may be implemented using a general purpose processor. Those skilled in the art will appreciate that various alternative computing arrangements, including one or more processors and a memory arrangement configured with program code, would be suitable for hosting the processes and data structures and implementing the algorithms of one or more embodiments. The computer code, comprising the processes encoded in a processor-executable format, may be stored and provided via a variety of computer-readable storage media or delivery channels such as magnetic or optical disks or tapes, electronic storage devices, or as application services over a network. Processor computing arrangement 800 includes one or more processors 802, a clock signal generator 804, a memory unit 806, a storage unit 808, and an input/output control unit 810 coupled to host bus 812. The arrangement 800 may be implemented with separate components on a circuit board or may be implemented internally within an integrated circuit. When implemented internally within an integrated circuit, the processor computing arrangement is otherwise known as a microcontroller. The architecture of the computing arrangement depends on implementation requirements as would be recognized by those skilled in the art. The processor 802 may be one or more general purpose processors, or a combination of one or more general purpose processors and suitable co-processors, or one or more specialized processors (e.g., RISC, CISC, pipelined, etc.). The memory arrangement 806 typically includes multiple levels of cache memory and a main memory. The storage arrangement 808 may include local and/or remote persistent storage such as provided by magnetic disks (not shown), flash, EPROM, or other non-volatile data storage. The storage unit may be read or read/write capable. Further, the memory 806 and storage 808 may be combined in a single arrangement. The processor arrangement 802 executes the software in the storage 808 and/or memory 806 arrangements, reads data from and stores data to the storage 808 and/or memory 806 arrangements, and communicates with external devices through the input/output control arrangement 810. These functions are synchronized by the clock signal generator 804. The resources of the computing arrangement may be managed by either an operating system (not shown) or a hardware control unit (not shown). One or more embodiments are thought to be applicable to a variety of devices and circuit designs implementing programmable logic. Other aspects and embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. It is intended that the specification and illustrated embodiments be considered as examples only, with a true scope and spirit of the invention being indicated by the following claims. |
A processor of an aspect includes a decode unit to decode a persistent store fence instruction. The processor also includes a memory subsystem module coupled with the decode unit. The memory subsystem module, in response to the persistent store fence instruction, is to ensure that a given data corresponding to the persistent store fence instruction is stored persistently in a persistent storage before data of all subsequent store instructions is stored persistently in the persistent storage. The subsequent store instructions occur after the persistent store fence instruction in original program order. Other processors, methods, systems, and articles of manufacture are also disclosed. |
CLAIMS
1. A processor comprising: a decode unit to decode a persistent store fence instruction; and a memory subsystem module coupled with the decode unit, the memory subsystem module, in response to the persistent store fence instruction, to ensure that a given data corresponding to the persistent store fence instruction is stored persistently in a persistent storage before data of all subsequent store instructions, which occur after the persistent store fence instruction in original program order, is stored persistently in the persistent storage.
2. The processor of claim 1, wherein the persistent store fence instruction comprises a store and persistent store fence instruction that is to indicate a source operand having the given data and that is to indicate a location in the persistent storage where the given data is to be stored.
3. The processor of claim 1, wherein the given data is to be included in a source operand of a store instruction that is implicitly to be one of immediately before and immediately after the persistent store fence instruction in the original program order.
4. The processor of claim 1, wherein the memory subsystem module, in response to the persistent store fence instruction, is not to ensure that data of all previous store instructions, which occur before the persistent store fence instruction in the original program order, is stored persistently in the persistent storage before the data of the subsequent store instructions.
5. The processor of claim 1, further comprising a set of one or more caches, and wherein the memory subsystem module, in response to the persistent store fence instruction, is to cause the given data to bypass the set of the one or more caches.
6. The processor of any one of claims 1 to 5, further comprising a persistent store fence buffer, and wherein the memory subsystem module, in response to the persistent store fence instruction, is to cause the given data to be stored in the persistent store fence buffer.
7. The processor of claim 6, further comprising a persistent store fence buffer management unit to store at least one cache line from the persistent store fence buffer to the persistent storage based on a signal indicative of an intent to remove a cache line from a cache before the cache line is removed from the cache.
8. The processor of claim 6, wherein the persistent store fence buffer comprises a write combining buffer that is to allow a second data corresponding to a second persistent store fence instruction to be stored in a same cache line of the persistent store fence buffer as the given data.
9. The processor of claim 6, wherein an instruction set of the processor does not include a user-level load instruction to read data from the persistent store fence buffer.
10. The processor of claim 6, wherein the persistent store fence buffer does not implement a cache coherency protocol.
11. The processor of any one of claims 1 to 5, wherein the processor is to store a cache line having the given data and a second data corresponding to a second persistent store fence instruction to the persistent storage in a common set of one or more cycles to be transmitted on an interconnect that is to be used to couple the processor with the persistent storage.
12. 
A method in a processor comprising: receiving a persistent store fence instruction; and ensuring, responsive to the persistent store fence instruction, that a given data corresponding to the persistent store fence instruction is stored persistently in a persistent storage before data of all subsequent store instructions, which occur after the persistent store fence instruction in original program order, is stored persistently in the persistent storage.
13. The method of claim 12, wherein receiving the instruction comprises receiving a store and persistent store fence instruction that indicates a source operand having the given data and that indicates a location in the persistent storage where the given data is to be stored.
14. The method of claim 12, further comprising receiving a store instruction indicating a source operand having the given data, wherein the store instruction is one of immediately before and immediately after the persistent store fence instruction in the original program order.
15. The method of claim 12, further comprising causing the given data to bypass a set of one or more caches of the processor responsive to the persistent store fence instruction.
16. The method of claim 12, wherein ensuring comprises ensuring that the given data is stored persistently in the persistent storage before the data of the subsequent store instructions is stored persistently in the persistent storage without ensuring that data of all previous store instructions, which occur before the persistent store fence instruction in the original program order, is stored persistently in the persistent storage before the data of said all subsequent store instructions is stored persistently in the persistent storage.
17. The method of claim 12, further comprising storing a cache line having the given data and a second data corresponding to a second persistent store fence instruction to the persistent storage in a common set of one or more cycles transmitted on an interconnect.
18. The method of claim 12, further comprising storing the given data in a persistent store fence buffer responsive to the persistent store fence instruction, wherein an instruction set of the processor does not include a user-level load instruction to load data from the persistent store fence buffer.
19. The method of claim 18, further comprising: receiving a signal indicating an intent to remove a cache line from a cache; and storing at least one cache line from the persistent store fence buffer to the persistent storage, after receiving the signal, and before the cache line is removed from the cache to the persistent storage.
20. The method of claim 12, further comprising storing the given data to a write-ahead log in the persistent memory.
21. 
A system to process instructions comprising: an interconnect; a persistent storage coupled with the interconnect, the persistent storage storing a set of instructions of a write-ahead logging algorithm, the set of instructions including a store and persistent store fence instruction that indicates a location in the persistent storage and that is used by the write-ahead logging algorithm to store a given data to a write-ahead log in the persistent storage; and a processor coupled with the interconnect, the processor to receive the store and persistent store fence instruction, the processor, in response to the store and persistent store fence instruction, to ensure that the given data is stored persistently in the persistent storage before data of all subsequent store instructions, which occur after the store and persistent store fence instruction in the write-ahead logging algorithm in original program order, is stored persistently in the persistent storage.
22. The system of claim 21, wherein the store and persistent store fence instruction comprises a non-temporal instruction that is to cause the given data to bypass a set of one or more caches of the processor.
23. An apparatus comprising means for performing the method of any one of claims 12 to 20.
24. An article of manufacture comprising a non-transitory machine-readable medium that stores an instruction that if executed by a machine is operative to cause the machine to perform the method of any one of claims 12 to 20.
25. An electronic device comprising an interconnect, the processor of any one of claims 1 to 11 coupled with the interconnect, and a persistent storage coupled with the interconnect, the persistent storage storing a write-ahead logging algorithm including the persistent store fence instruction and using the persistent store fence instruction to store data to a write-ahead log in the persistent storage. |
PERSISTENT STORE FENCE PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS
BACKGROUND
Technical Field
Embodiments described herein generally relate to storage of data. In particular, embodiments described herein generally relate to storage of data in persistent memory.
Background Information
Processors are commonly operable to execute instructions to access memory. For example, processors may execute load instructions to load or read data from main memory and/or store instructions to write or otherwise store data to main memory. Intel® 64 and IA-32 Architectures Software Developer's Manual Combined Volumes: 1, 2A, 2B, 2C, 3A, 3B and 3C, Order Number: 325462-051US, published June 2014, by Intel Corporation of Santa Clara, California, describes an SFENCE (store fence) instruction to serialize store operations. The SFENCE instruction may perform a serializing operation on all store-to-memory instructions that were issued prior to the SFENCE instruction. This serializing operation may guarantee that every store instruction that precedes the SFENCE instruction in program order becomes globally visible before any store instruction that follows the SFENCE instruction.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments. In the drawings:
Figure 1 is a block diagram of an embodiment of a computer system in which embodiments of the invention may be implemented.
Figure 2 is a block diagram of an embodiment of a processor that is operable to perform an embodiment of a persistent store fence instruction.
Figure 3 is a block flow diagram of an embodiment of a method of performing an embodiment of a persistent store fence instruction.
Figure 4 is a block diagram of an example embodiment of a memory sub-system module having an example embodiment of a persistent store fence buffer.
Figure 5 is a block diagram of an example embodiment of a cache line for a persistent store fence buffer that has data corresponding to different persistent store fence instructions.
Figure 6 is a block diagram of an embodiment of a persistent memory having data and a write-ahead log.
Figure 7 is a block flow diagram of one possible method of write-ahead logging performed without a persistent store fence instruction as disclosed herein.
Figure 8 is a block flow diagram of an example embodiment of a method of write-ahead logging performed with an embodiment of a persistent store fence instruction.
Figure 9 is a block diagram illustrating various suitable locations for an embodiment of a persistent store fence buffer.
Figure 10A is a block diagram illustrating an embodiment of an in-order pipeline and an embodiment of a register renaming out-of-order issue/execution pipeline.
Figure 10B is a block diagram of an embodiment of a processor core including a front end unit coupled to an execution engine unit and both coupled to a memory unit.
Figure 11A is a block diagram of an embodiment of a single processor core, along with its connection to the on-die interconnect network, and with its local subset of the Level 2 (L2) cache.
Figure 11B is a block diagram of an embodiment of an expanded view of part of the processor core of Figure 11A.
Figure 12 is a block diagram of an embodiment of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics.
Figure 13 is a block diagram of a first embodiment of a computer architecture.
Figure 14 is a block diagram of a second embodiment of a 
computer architecture.
Figure 15 is a block diagram of a third embodiment of a computer architecture.
Figure 16 is a block diagram of an embodiment of a system-on-a-chip architecture.
Figure 17 is a block diagram of use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, according to embodiments of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS
Disclosed herein are persistent store fence instructions, processors to execute the instructions, methods performed by the processors when processing or executing the instructions, and systems incorporating one or more processors to process or execute the instructions. In the following description, numerous specific details are set forth (e.g., specific instruction operations, processor configurations, microarchitectural details, sequences of operations, uses of the instructions, etc.). However, embodiments may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail to avoid obscuring the understanding of the description.
Figure 1 is a block diagram of an embodiment of a computer system 100 in which embodiments of the invention may be implemented. The computer system includes a processor 102, an optional volatile or otherwise non-persistent storage 122, and a non-volatile or otherwise persistent storage 124. The non-persistent storage 122 is optional and not required. The processor may be coupled with the non-persistent storage 122 and the persistent storage 124 by one or more interconnection structures 120, such as, for example, one or more buses or other interconnects, one or more hubs or other chipset components, combinations thereof, etc. Various ways of coupling processors with volatile and non-volatile memories known in the arts are suitable.
Volatile memory represents a type of memory or storage that loses its contents when power is not applied. In contrast, non-volatile memory represents a type of memory or storage that is able to retain its contents for long durations even when power is not applied. For example, data may be read from non-volatile memory even after weeks, months, or years without power. Examples of suitable types of non-persistent storage include, but are not limited to, dynamic random access memory (DRAM) and other forms of RAM including types developed in the future. Examples of suitable types of persistent storage include, but are not limited to, hard disks, magnetic tape, other types of magnetic storage devices, flash memory, various types of read-only memory (ROM), optical discs, ferroelectric RAM (F-RAM), magnetoresistive RAM, and other types developed in the future.
In some embodiments, both the non-persistent storage 122 and the persistent storage 124 may optionally be used together or collectively as a primary storage and may both be accessible to (e.g., addressable by) the processor. In other embodiments, the non-persistent storage 122 may optionally be omitted, and the persistent storage 124 may be used as a primary storage that is accessible to (e.g., addressable by) the processor. In still other embodiments, the non-persistent storage 122 may be deployed as a primary storage (e.g., main memory) and the persistent storage may be deployed as a secondary or backing storage, but the persistent storage may be accessible to (e.g., addressable by) the processor.
The processor 102 has an instruction set 104. 
The instruction set is part of the instruction set architecture (ISA) of the processor and includes the native instructions that the processor is operable to execute. The instructions of the instruction set represent macroinstructions, assembly language instructions, or machine-level instructions that are provided to the processor for execution, as opposed to microinstructions or other instructions that have been decoded from such instructions of the instruction set. As shown, the instruction set may include one or more load instructions 106 to load or read data from the non-persistent and/or persistent storage. The instruction set also includes one or more store instructions 108 to move, write, or otherwise store data in the non-persistent and/or persistent storage. The processor has a pipeline 112 to process the instructions of the instruction set. By way of example, the pipeline may include an instruction fetch unit to fetch instructions, a decode unit to decode the instructions, one or more execution units to execute the decoded instructions, etc. Various different processor pipeline designs known in the arts are suitable. The scope of the invention is not limited to any known pipeline design. The processor also has a memory subsystem 114 to interface with the non-persistent and/or persistent storage. The memory subsystem may include one or more caches 118 (e.g., one or more levels of cache). For example, certain processors have a combined level 1 (L1) instruction and data cache relatively closer to the pipeline and/or farther from the persistent storage, and a level 2 (L2) data cache relatively farther from the pipeline and/or closer to the persistent storage. Other processors may have a single level of cache, or three or more different levels of cache. Each cache may hold instructions and/or data as desired for the particular implementation.
One reason for the cache(s) 118 is to help reduce the latency of accesses by the processor to data in the non-persistent and/or persistent storage. Accesses to data in the non-persistent and/or persistent storage generally tend to be significantly slower than accesses to data in the cache(s). For example, accesses to data in the cache(s) commonly take no more than a few processor clock cycles, whereas accesses to data in the primary storage may representatively take from tens to hundreds of clock cycles. Consequently, in order to help improve performance, the processor may bring certain data (e.g., data with spatial and/or temporal locality) into the cache(s) from the non-persistent and/or persistent storage so that if that same data is needed again in the near future it can be accessed quickly from the cache(s) instead of more slowly from the non-persistent and/or persistent storage.
In addition, the store instruction(s) 108 may not store data directly and/or immediately from the processor to the non-persistent and/or persistent storage. Rather, the data may initially be cached or stored in the cache(s) 118. Again, this may help to keep the data close to the processor in case it is needed again in the near future and/or may help to avoid a longer latency access to the storage. The memory sub-system of the processor may have a cache coherency mechanism or module 116 to help ensure that the data is coherently stored to the non-persistent and/or persistent storage at appropriate times so that all entities in the system (e.g., another processor) view correct and current versions of the data. 
By way of example, the cache coherency mechanism or module may help to implement a MESI protocol in which each cache line is in one of the four states modified, exclusive, shared, or invalid.
One advantage to storing data in the persistent storage 124 (e.g., non-volatile memory) is persistency or durability of the data. Persistency or durability generally means that the data stored is not lost in the event of a power loss, operating system failure, system crash, processor failure, or most other types of errors (e.g., in which the computer system needs to be rebooted). Once the data is stored in the persistent storage, it is typically retained even if there is a loss of power, operating system failure, or the like. Moreover, even if the processor fails or the computer system otherwise fails due to a hardware failure, as long as the persistent storage survives, it may generally be possible to recover the data. In contrast, data stored in the non-persistent storage 122 (e.g., in volatile memory) is generally not regarded as being persistent or durable. Similarly, data stored in the cache(s) 118 as well as load/store buffers and/or various other temporary caching and/or buffering structures of the processor (not shown in the illustration for simplicity) is generally also not regarded as being persistent or durable. Such data stored in the non-persistent storage, the cache(s), and the like, may be lost in the event of a loss of power, operating system failure, system crash, processor failure, and certain other types of errors.
In addition, certain applications and/or implementations need data to be stored persistently or durably. For example, in certain database applications and/or data transactions it is very important not to lose data. Also, in some applications and/or implementations it may be useful to store data persistently and/or durably in a particular order (e.g., store one piece of data persistently and/or durably before another piece of data). By way of example, this may be the case in an implementation of write-ahead logging, other serial store algorithms, and the like. In some embodiments, the instruction set 104 of the processor may include an embodiment of a persistent store fence instruction 110 to cause or ensure that an associated store of data is performed to the persistent storage 124 before a subsequent store of data is performed to the persistent storage 124.
Figure 2 is a block diagram of an embodiment of a processor 202 that is operable to perform an embodiment of a persistent store fence instruction 210. In some embodiments, the processor may be a general-purpose processor (e.g., a general-purpose microprocessor or central processing unit (CPU) of the type used in desktop, laptop, or other computers). Alternatively, the processor may be a special-purpose processor. Examples of suitable special-purpose processors include, but are not limited to, network processors, communications processors, cryptographic processors, graphics processors, co-processors, embedded processors, digital signal processors (DSPs), and controllers (e.g., microcontrollers). 
The processor may have any of various complex instruction set computing (CISC) architectures, reduced instruction set computing (RISC) architectures, very long instruction word (VLIW) architectures, hybrid architectures, other types of architectures, or have a combination of different architectures (e.g., different cores may have different architectures).During operation, the processor 202 may execute, run, or perform code 230 (e.g., a program). For example, the code may be fetched, loaded, or otherwise received into the processor from persistent storage 224 and/or an optional non-persistent memory (not shown). The persistent storage 224 is shown in dashed lines to indicate it is not generally part of the processor. The code may include various different types of instructions. Among those instructions, the code includes the persistent store fence instruction 210. In some embodiments, the persistent store fence instruction may itself optionally be a persistent store instruction to move, write, or otherwise store data to persistent storage 224 (e.g., the instruction 210 may be a persistent store and persistent store fence instruction). Such a persistent store and persistent store fence instruction 210 may have an optional associated persistent store operation 228 to store associated data to the persistent storage 224. In such embodiments, the instruction 210 may explicitly specify (e.g., through one or more fields or a set of bits), or otherwise indicate (e.g., implicitly indicate), a source operand that has data to be stored to the persistent storage. The instruction 210 may explicitly specify (e.g., through one or more fields or a set of bits), or otherwise indicate (e.g., implicitly indicate), an address or other location in the persistent storage 224 where the data is to be stored. Notice that in some embodiments the persistent storage 224 may be addressable by instructions of an instruction set of the processor. Alternatively, in other embodiments, the persistent store fence instruction may not have the associated persistent store operation 228. For example, the persistent store fence instruction may be designed or intended to work with a separate but related persistent store instruction 208E that is operative to store data to the persistent storage 224. For example, the separate persistent store instruction 208E may be designed or implicitly understood to be (e.g., immediately) before (or alternatively (e.g., immediately) after) the persistent store fence instruction 210 in original program or code order. The code may also include a set of one or more persistent store instructions 208L that occur later than and/or after the persistent store fence instruction 210 in program order. The earlier persistent store instruction 208E also occurs earlier than and/or before all of the later persistent store instruction(s) 208L in original program or code order.Referring again to Figure 2, the processor includes a decode unit or decoder 226. The decode unit may receive and decode the persistent store fence instruction 210. The persistent store fence instruction may represent a macroinstruction, assembly language instruction, machine code instruction, or other instruction or control signal of an instruction set of the processor. 
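Purely for illustration, the two forms of the instruction described above might be exposed to software roughly as follows. The intrinsic names are invented here, and a plain store plus a compiler barrier stands in for the actual instruction so that the sketch compiles on today's hardware; it is a model under stated assumptions, not an actual implementation.

#include <stdint.h>

/* Variant 1: a combined "persistent store and persistent store fence"
   (instruction 210 carrying store operation 228): the intrinsic takes
   both the source operand and the persistent-storage destination. */
static inline void pstore_and_fence_u64(volatile uint64_t *dst, uint64_t v)
{
    *dst = v;                               /* the fenced store */
    __asm__ __volatile__("" ::: "memory");  /* stand-in barrier */
}

/* Variant 2: a bare persistent store fence paired with a separate,
   immediately preceding ordinary store (instruction 208E). */
static inline void pstore_fence(void)
{
    __asm__ __volatile__("" ::: "memory");  /* stand-in barrier */
}

void example(volatile uint64_t *slot)
{
    pstore_and_fence_u64(&slot[0], 42);  /* variant 1 */

    slot[1] = 43;                        /* store 208E, then ...      */
    pstore_fence();                      /* ... the fence covering it */
}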
The decode unit may output one or more relatively lower-level instructions or control signals (e.g., one or more microinstructions, micro-operations, micro-code entry points, decoded instructions or control signals, etc.), which reflect, represent, and/or are derived from the relatively higher-level persistent store fence instruction. In some embodiments, the decode unit may include one or more input structures (e.g., port(s), interconnect(s), an interface) to receive the instruction, an instruction recognition and decode logic coupled therewith to recognize and decode the instruction, and one or more output structures (e.g., port(s), interconnect(s), an interface) coupled therewith to output the lower-level instruction(s) or control signal(s). The decode unit may be implemented using various different mechanisms including, but not limited to, microcode read only memories (ROMs), look-up tables, hardware implementations, programmable logic arrays (PLAs), and other mechanisms used to implement decode units known in the art.
In some embodiments, instead of the persistent store fence instruction being provided directly to the decode unit, an instruction emulator, translator, morpher, interpreter, or other instruction conversion module may optionally be used. Various types of instruction conversion modules are known in the arts and may be implemented in software, hardware, firmware, or a combination thereof. In some embodiments, the instruction conversion module may be located outside the processor, such as, for example, on a separate die and/or in a memory (e.g., as a static, dynamic, or runtime emulation module). By way of example, the instruction conversion module may receive the persistent store fence instruction, which may be of a first instruction set, and may emulate, translate, morph, interpret, or otherwise convert the persistent store fence instruction into one or more corresponding intermediate instructions or control signals, which may be of a second different instruction set. The one or more intermediate instructions or control signals of the second instruction set may be provided to a decode unit (e.g., decode unit 226), which may decode them into one or more lower-level instructions or control signals executable by native hardware of the processor (e.g., a memory sub-system module).
Referring again to Figure 2, a memory sub-system module 214 is coupled with the decode unit 226. The memory sub-system module may receive the one or more decoded or otherwise converted instructions or control signals that represent and/or are derived from the persistent store fence instruction. In embodiments in which the persistent store fence instruction is a persistent store and persistent store fence instruction, the memory sub-system module may also receive data pertaining to the source operand specified or indicated by the instruction 210 and an indication of the address or location in the persistent storage 224 specified or indicated by the instruction 210 where the data is to be stored. 
The memory sub-system module is operative in response to and/or as a result of the persistent store fence instruction (e.g., in response to one or more instructions or control signals decoded from the instruction) to cause and/or ensure that data of a given store operation (e.g., store operation 228 or store instruction 208E) corresponding to the persistent store fence instruction is stored persistently and/or durably in the persistent storage 224 before data from all later or subsequent store operations and/or instructions (i.e., those which occur after the given store operation in original program order) is stored persistently and/or durably in the persistent storage. In some embodiments, the persistent store fence instruction may not cause and/or ensure that data of all preceding store operations and/or instructions is stored persistently and/or durably in the persistent storage before data from all later or subsequent store operations and/or instructions; rather, this fencing may be performed selectively for only the given store operation. That is, there is no need to fence all preceding store instructions and/or operations, but rather only the given store instruction and/or operation. This may help to avoid a higher performance cost to fence all the preceding store instructions and/or operations. In some embodiments, the data from these other non-fenced store instructions and/or operations may be stored in the processor cache(s) whereas the data from the given fenced store instruction and/or operation may be non-temporal and may bypass the cache(s) and be stored in a different persistent store fence buffer (e.g., buffer 446).
In some embodiments, the persistent store fence instruction is a persistent store and persistent store fence instruction having the given store operation (e.g., store operation 228). In such embodiments, in some cases the persistent store and persistent store fence instruction may be a non-temporal instruction whose execution is operative to cause the data to be stored to the persistent storage 224 bypassing and without being stored in one or more caches (not shown) of the processor. In other embodiments, the given store operation may correspond to a separate but related instruction (e.g., immediately) before or after the persistent store fence instruction (e.g., store instruction 208E). In some embodiments, the persistent store fence instruction causes the corresponding data of the given store operation to be stored in a new dedicated persistent store fence buffer (e.g., buffer 446 in Figure 4). In some embodiments, the buffer may optionally be write only and/or may not implement a cache coherency protocol used by one or more cache(s) of the processor (e.g., may not use a MESI protocol implemented by the processor). In some embodiments, as will be described further below, the persistent store fence buffer may implement write combining to allow data corresponding to different persistent store fence instructions to be stored or combined in a same cache line. 
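The selective ordering just described can be made concrete with a short, compilable C sketch; as before, the intrinsic name is invented and a compiler barrier merely stands in for the hardware fence, which would additionally enforce the ordering of persistence.

#include <stdint.h>

static inline void pstore_and_fence_u64(volatile uint64_t *dst, uint64_t v)
{
    *dst = v;
    __asm__ __volatile__("" ::: "memory");  /* stand-in for the fence */
}

static volatile uint64_t X, A, B;

/* Only the fenced store to A is ordered: A must become persistent
   before B (or any later store) does. The earlier store to X is
   deliberately NOT fenced, unlike with SFENCE, which would serialize
   every preceding store at a higher performance cost. */
void ordering_example(void)
{
    X = 1;                        /* earlier store: not fenced */
    pstore_and_fence_u64(&A, 2);  /* A persists before B below */
    B = 3;                        /* subsequent store: after A */
}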
In some embodiments, as will be described further below, the persistent store fence instruction may be used to store data to a write-ahead log in order to improve the performance of write-ahead logging.
The memory sub-system module and/or the processor may include specific or particular logic (e.g., transistors, integrated circuitry, or other hardware potentially combined with firmware (e.g., instructions stored in non-volatile memory) and/or software) that is operable to perform the persistent store fence instruction and/or store the result in response to and/or as a result of the persistent store fence instruction (e.g., in response to one or more instructions or control signals decoded from the persistent store fence instruction). In one aspect, the memory sub-system module may also be regarded generally as an execution unit to execute the decoded persistent store fence instruction and/or as a unit to perform the decoded persistent store fence instruction. In some embodiments, the memory sub-system module may include the circuitry or logic shown and described for one or more of Figures 4-5, which are illustrative examples of suitable implementations, although the scope of the invention is not so limited.
Advantageously, the persistent store fence operation may be used to cause, ensure, or guarantee that data from a given store operation is stored in the persistent storage before data from all subsequent store operations. Once the data is in the persistent storage it is persistent and/or durable. This may offer certain advantages in certain implementations. For example, this may help to increase the efficiency of performing write-ahead logging, as will be discussed further below, although the scope of the invention is not so limited. In other instances, this may be used to serialize persistent stores for various other types of algorithms and/or for other reasons.
In contrast, the SFENCE instruction discussed in the background section does not serialize stores to persistent storage and/or does not serialize persistency or durability. Rather, the SFENCE instruction may be used to fence or serialize global visibility of stores to main memory (e.g., DRAM or other volatile memory), but such data may be lost in the event of certain conditions (e.g., a power failure, an operating system failure, a processor failure, a system crash, etc.). As a result, such instructions are not able to serialize the persistency or durability of data storage operations. In addition, the SFENCE instruction fences or serializes all preceding store instructions relative to all following store instructions, whereas in some embodiments, the persistent store fence instruction may only serialize a single given corresponding store instruction and/or operation relative to all following store instructions and/or operations.
To avoid obscuring the description, a relatively simple processor 202 has been shown and described. However, the processor may optionally include other well-known processor components. 
Possible examples of such components include, but are not limited to, general-purpose registers, a status register (sometimes called a flags register), system control registers, an instruction fetch unit, prefetch buffers, an instruction translation lookaside buffer (TLB), a data TLB, a branch prediction unit, a floating-point execution unit, a SIMD or vector execution unit, out-of-order execution support units (e.g., an instruction scheduling unit, a register rename and/or allocation unit, an instruction dispatch unit, a reorder buffer (ROB), a reservation station, a memory order buffer, a retirement unit, etc.), a bus interface unit, an address generation unit, a debug unit, a performance monitor unit, a power management unit, other components included in processors, and various combinations thereof. Such components may be coupled together in various different suitable combinations and/or configurations known in the arts. Embodiments are not limited to any known such combination or configuration. Moreover, embodiments may be included in processors having multiple cores, at least one of which is operative to perform a persistent store fence instruction.
Figure 3 is a block flow diagram of an embodiment of a method 340 of performing an embodiment of a persistent store fence instruction. In various embodiments, the method may be performed by a processor, instruction processing apparatus, or other digital logic device. In some embodiments, the method 340 may be performed by and/or within the processor 102 of Figure 1 and/or the processor 202 of Figure 2. The components, features, and specific optional details described herein for the processors 102, 202 also optionally apply to the method 340. Alternatively, the method 340 may be performed by and/or within a similar or different processor or apparatus. Moreover, the processors 102, 202 may perform methods the same as, similar to, or different than the method 340.
The method includes receiving the persistent store fence instruction, at block 341. In various aspects, the instruction may be received at a processor or a portion thereof (e.g., an instruction fetch unit, a decode unit, a bus interface unit, etc.). In various aspects, the instruction may be received from an off-processor and/or off-die source (e.g., from memory, interconnect, etc.), or from an on-processor and/or on-die source (e.g., from an instruction cache, instruction queue, etc.).
The method includes guaranteeing that, ensuring that, enforcing, or otherwise causing given data corresponding or related to the persistent store fence instruction to be stored persistently in a persistent storage before data from all subsequent store instructions (i.e., which are subsequent to the persistent store fence instruction in original program order) is stored persistently in the persistent storage, at block 342. In some embodiments, the method may also include storing the given data responsive to the persistent store fence instruction (e.g., in the case of a persistent store and persistent store fence instruction), although this is not required. In some embodiments, the instruction may cause the given data to be stored non-temporally bypassing processor caches to a persistent store fence buffer (e.g., buffer 446), although the scope of the invention is not so limited.
Figure 4 is a block diagram of an example embodiment of a memory sub-system module 414 having an example embodiment of a persistent store fence buffer 446. A persistent storage 424 is coupled with the memory sub-system module. 
The persistent storage may be similar to or the same as those previously described.
A set of one or more decoded persistent store fence instructions and/or operations 411 may be provided to the memory sub-system module 414. In this example, for simplicity, it is assumed that the persistent store fence instruction that was decoded incorporated a persistent store operation (e.g., store operation 228), although the scope of the invention is not so limited. The memory sub-system module includes the persistent store fence buffer 446 and a corresponding persistent store fence buffer management unit 444. The buffer management unit is operative to manage the persistent store fence buffer, for example, to manage storage of data in, and flushing or other removal of data from, the buffer. The management unit may be implemented in hardware (e.g., integrated circuitry, transistors or other circuit elements, etc.), firmware (e.g., ROM, EPROM, flash memory, or other persistent or non-volatile memory and microcode, microinstructions, or other lower-level instructions stored therein), software (e.g., higher-level instructions stored in memory), or a combination thereof (e.g., hardware potentially combined with one or more of firmware and/or software).
The persistent store fence buffer 446 is operative to temporarily buffer or store data associated with the persistent store fence instruction (e.g., data from store operation 228 or store instruction 208E). The scope of the invention is not limited to any particular type of memory for the persistent store fence buffer. Various types of volatile memory are suitable, such as, for example, static random access memory (SRAM), types of memory used to implement processor caches, and the like. Virtually any type of memory or data storage device that can be fabricated on a die with a processor is potentially suitable. In some embodiments, the persistent store fence buffer may optionally be organized similarly to a processor cache and may have a plurality of cache lines 448. As shown, the persistent store fence buffer may have a cache line 0 448-0, a cache line L 448-L, through a cache line N 448-N, where N may represent any desired number suitable for the particular implementation. In some embodiments, there may be on the order of from about four to about several hundred cache lines, or from about eight to about one hundred twenty-eight cache lines, although the scope of the invention is not so limited.
In some embodiments, in contrast to the processor cache(s), the persistent store fence buffer may optionally be write-only and not ordinarily readable. For example, the processor (e.g., a core) may not ordinarily be able to perform a regular user-level load from memory instruction to load or read data from the persistent store fence buffer. It is to be appreciated that the processor, under certain limited circumstances, may be able to read the contents of the persistent store fence buffer, for example, during debugging or testing (e.g., during a built-in self test (BIST)). In some embodiments, cache coherency may not be maintained in the persistent store fence buffer, apart from those operations related to maintaining cache coherency in the cache(s) 418 that may be used to implement the persistent store fence. 
For example, the cache(s) may implement a MESI protocol (e.g., the cache lines of the caches may each have two MESI bits) but the persistent store fence buffer may not (e.g., the cache lines of the buffer may not have the two MESI bits).
The cache coherency module 416 is coupled with the persistent store fence buffer management unit 444. In some embodiments, when the cache coherency module determines to evict, flush, or otherwise remove a cache line from one or more caches 418 of the processor, the cache coherency module may provide an indication, notification, or other signal 450 (e.g., an intent to flush cache line signal) to the persistent store fence buffer management unit, before actually flushing or removing the cache line from the cache(s). The signal 450 may indicate, notify, communicate, or otherwise signal to the management unit that a cache line is about to be flushed or otherwise removed from the cache(s), and may help to allow the management unit to flush or otherwise remove or store one or more cache line(s) from the buffer to the persistent memory before the cache line from the cache(s) is flushed and becomes persistent. In some embodiments, in order to maintain the persistent store fence, the persistent store fence buffer management unit may perform a buffer flush, eviction, or other removal operation 452 to flush, evict, or otherwise remove or store a cache line (e.g., cache line L 448-L) from the persistent store fence buffer to the persistent storage. In some embodiments, the processor and/or the memory sub-system module may guarantee and/or ensure and/or cause this to occur, responsive to the associated persistent store fence instruction, before a cache flush or other cache line removal operation 454 associated with the signal 450 is performed to flush the cache line from the cache(s) 418 to the persistent storage. The buffer may flush to persistent memory transparently in the background based on signals from the cache coherency module that cache lines are going to be evicted or flushed. In some embodiments, the entire persistent store fence buffer may optionally be flushed to the persistent storage when any cache line is flushed from the cache(s) to the persistent storage. This may help to provide a relatively simpler implementation. In other embodiments, additional information may optionally be stored in the persistent store fence buffer to allow individual cache lines in the buffer to be selectively flushed to the persistent storage based on individual corresponding cache lines being flushed from the cache(s).
In some embodiments, the data in the persistent store fence buffer may not need to be flushed or removed to the persistent storage until right before a cache line is about to be flushed or removed from the cache(s) to the persistent storage and/or a subsequent store operation is about to become persistently stored in the persistent storage. Generally avoiding flushing the buffer except when needed helps to avoid relatively long latency memory accesses. Advantageously, the persistent store fence buffer may help to avoid needing to wait for the data corresponding to the persistent store fence instruction to be stored to the persistent storage and become persistent. If such data were stored directly to the persistent storage, a much longer latency operation would generally be needed (e.g., storing data to the persistent memory often takes on the order of tens to hundreds of clock cycles). 
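A simplified software model of this buffer and of the management unit's reaction to the intent-to-flush signal 450 follows; the sizes, names, and the whole-buffer drain policy are illustrative assumptions, and real hardware would of course implement this in circuitry rather than C.

#include <stdint.h>
#include <stdbool.h>

#define PSF_LINE_BYTES 64  /* typical cache line size; an assumption */
#define PSF_NUM_LINES  16  /* within the ~4 to several-hundred range */

/* One buffer line. Note there are no MESI state bits, reflecting the
   point above that the buffer does not take part in cache coherency. */
struct psf_line {
    uint64_t tag;                  /* persistent-storage address */
    uint8_t  data[PSF_LINE_BYTES];
    bool     valid;
};

struct psf_buffer {
    struct psf_line lines[PSF_NUM_LINES];
};

/* Placeholder for the hardware's line write to persistent storage. */
static void write_line_to_persistent_storage(const struct psf_line *l)
{
    (void)l;  /* modeled as a no-op in this sketch */
}

/* Model of management unit 444 on receipt of signal 450: drain the
   whole buffer (removal operation 452) before the cache line removal
   operation 454 is allowed to make any later data persistent. This is
   the "relatively simpler implementation" mentioned above. */
void on_intent_to_flush_cache_line(struct psf_buffer *buf)
{
    for (int i = 0; i < PSF_NUM_LINES; i++) {
        if (buf->lines[i].valid) {
            write_line_to_persistent_storage(&buf->lines[i]);
            buf->lines[i].valid = false;
        }
    }
    /* ... cache line flush 454 may proceed now ... */
}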
In some embodiments, the data may be stored in the persistent store fence buffer in no more than several clock cycles (e.g., no more than about five clock cycles).
In some embodiments, there may optionally be no persistent order requirement between different persistent store fence instructions. In some embodiments, this may optionally help to allow an even more efficient implementation of the persistent store fence instructions by allowing data corresponding to multiple different persistent store fence instructions to be stored in the same cache line in a persistent store fence buffer.
Figure 5 is a block diagram of an example embodiment of a cache line 548 for a persistent store fence buffer that has data 560-1, 560-2 corresponding to different persistent store fence instructions 511-1, 511-2, and an example embodiment of a cache line storage operation 552 of the cache line to persistent storage 524 in the same signal or cycle on one or more interconnects 520. A first persistent store fence instruction 511-1 may have a first associated or corresponding data 560-1 that may be stored in the cache line. Likewise, a second persistent store fence instruction 511-2 may have a second associated or corresponding data 560-2 that may be stored in the same cache line and at the same time as the data 560-1. In some embodiments, this may be performed through a write-combining operation in the persistent store fence buffer. That is, the persistent store fence buffer may represent a write-combining buffer.
Later, at an appropriate time (e.g., based on an intent to flush a cache line signal received from a cache coherency module), the cache line 548 may be flushed, evicted, or otherwise removed or stored to the persistent storage 524 through a cache line storage operation 552. The cache line storage operation may store the cache line having the first data 560-1 and the second data 560-2 corresponding to the different persistent store fence instructions 511-1, 511-2. In some embodiments, the cache line storage operation may be performed in a single and/or a common set of one or more cycles or signals on the one or more interconnects 520 (e.g., both the data 560-1 and 560-2 may go in the same set of one or more bus cycles). That is, data corresponding to multiple different persistent store fence instructions may be written or otherwise stored to the persistent memory in the same bus or interconnect cycle. For simplicity, data from only two different persistent store fence instructions is described in this example, but in some cases data from three or more different persistent store fence instructions may potentially be combined in the same cache line. Advantageously, such ability to combine data corresponding to different persistent store fence instructions in the same cache line and perform a single cache line write to the persistent memory may help to avoid or eliminate one or more relatively long latency stores to the persistent memory. In addition, this may also help to reduce the amount of bandwidth on the one or more interconnects leading to the persistent memory.
The processor and/or the memory sub-system unit may perform in-order stores of the data to the persistent store fence buffer, and when the data is subsequently flushed or removed from the persistent store fence buffer all the data in the same cache line may be atomically written to the persistent storage. 
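Before continuing, the write-combining step just described can be pictured with a minimal, self-contained C model; the line size, record layout, and function names are assumptions made only for this sketch.

#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define LINE 64  /* assumed cache line size in bytes */

/* A single write-combining line of the buffer. */
struct wc_line {
    uint64_t base;        /* aligned persistent address of the line */
    uint8_t  data[LINE];
    bool     valid;
};

/* Try to merge a fenced store (addr, src, len) into line l. Stores
   from two different persistent store fence instructions that fall in
   the same 64-byte line are combined, so one later line write
   (operation 552) covers both. Assumes addr+len stays within the
   line. */
static bool wc_combine(struct wc_line *l, uint64_t addr,
                       const void *src, size_t len)
{
    uint64_t base = addr & ~(uint64_t)(LINE - 1);
    if (l->valid && l->base != base)
        return false;                        /* different line */
    if (!l->valid) {
        l->base = base;
        memset(l->data, 0, LINE);
        l->valid = true;
    }
    memcpy(&l->data[addr - base], src, len); /* merge into the line */
    return true;
}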
By in-order it is meant that the data may be stored in the persistent store fence buffer in the same order as the original program order of the corresponding persistent store fence instructions. In some embodiments, different cache lines may be flushed or removed from the persistent store fence buffer out-of-order to fully exploit the memory parallelism in the underlying persistent memory system. By out-of-order it is meant that the data may be flushed or removed from the persistent store fence buffer in a different order than the original program order of the corresponding persistent store fence instructions.
In some embodiments, the instructions and processors disclosed herein may be used to improve the efficiency of write-ahead logging. Write-ahead logging is a known technique to achieve atomicity and durability/persistency when modifying data. Figure 6 is a block diagram of an embodiment of a persistent memory 624 having data 664 and a write-ahead log 662. The persistent memory may represent any of the previously described types of persistent memory. The data may represent various different types of data used in computer systems, databases, or the like. Examples of suitable data include, but are not limited to, files, records, data structures, tables, database records, images, videos, and the like. The write-ahead log is generally located in a different region of the persistent memory than the data. In the illustration, a dashed line is used to indicate that the write-ahead log may optionally be located or stored on a different persistent storage device (e.g., a different disk) than the data. This may further help to ensure data durability/persistency (e.g., in the event of a disk failure), but is not required.
In write-ahead logging, the data and/or modifications to the data may be written to the write-ahead log before the modifications to the data are actually stored over the data in the persistent memory. For example, before a given piece of data 670 is changed or modified, an unmodified copy of the given piece of data 668 may be stored in the write-ahead log 662. In this way, even if a loss of power or other event occurs that could cause the given piece of data to be lost from a non-persistent (e.g., volatile) memory (e.g., a processor cache) while the given piece of data is being modified within a processor, the copy of the given piece of data may be recovered from the write-ahead log after the event has occurred. Advantageously, this may help to prevent the given piece of data from being lost while being modified, even in the face of power failures or various other potentially disastrous errors. To further illustrate, suppose a program is in the middle of performing an operation that modifies a set of data when the computer system experiences power loss or a disastrous error. Upon restart and reboot, the program generally needs to know whether the operation fully completed, partially completed, or failed entirely. If write-ahead logging were used, the program could examine the write-ahead log to determine what portions of the operation had actually been completed before the error occurred. The program may use this information to decide how to proceed and/or how to continue or restart the operation. For example, the program may reattempt the operation starting with the first uncompleted modification as determined from the write-ahead log.
Write-ahead logging is often implemented as a transaction in which multiple different pieces of data are modified within the same transaction. 
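The per-datum copy-before-modify step described above can be pictured with a short C sketch before returning to the transaction discussion; the function and parameter names are invented, and the persistence ordering itself (the subject of the fence) is deliberately elided here.

#include <stdint.h>
#include <string.h>

/* One write-ahead logging step: preserve the unmodified copy (668) in
   the log before the data itself (670) is overwritten in place. */
void wal_update(uint8_t *log_slot, uint8_t *data,
                const uint8_t *new_value, size_t n)
{
    memcpy(log_slot, data, n);   /* 1. log the original data first   */
    /* ... the log entry must become persistent at this point ...    */
    memcpy(data, new_value, n);  /* 2. then modify the data in place */
}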
Only after all of the different pieces of data have been successfully logged and modified, and the modified pieces of data have been stored in the persistent memory, is the transaction successfully completed. Generally, only when the transaction is entirely successfully completed is the transaction "committed." Committing the transaction basically declares that the entire transaction has completed successfully and/or indicates that all of the attempted modifications have completed successfully and have been stored in the persistent memory. At this point, the data stored or preserved in the write-ahead log are no longer needed, since even if a disastrous event occurs, all of the modified data is already stored in the persistent memory. The write-ahead logging provides persistency and/or durability to the given set of data throughout the change or modification, since a copy of the given set of data is stored in the persistent memory before any change is made to the given set of data. In addition, the write-ahead logging provides atomicity, since a given set of data is either entirely updated or not updated during the transaction by either committing or not committing the entire transaction.
In write-ahead logging, two persistency orders should generally be maintained. Firstly, a log persistency order should generally be maintained. According to the log persistency order, the original data that are to be modified should be persistently stored in the write-ahead log in persistent storage before the corresponding modified data are stored in the persistent storage. Otherwise, if the modified data are stored in the persistent storage over the original data, and the original data to be modified is in the cache(s) and not yet stored to the write-ahead log in the persistent storage, then if a disastrous event (e.g., a power failure) occurs, the original data to be modified is not preserved and may be lost, thereby preventing recovery in the event of an unsuccessful completion of the transaction. A second persistency order that should generally be maintained is a commit persistency order. According to the commit persistency order, all modified data in the transaction should be persistently stored to the persistent storage before the commit indication is persistently stored to the write-ahead log in the persistent storage. Otherwise, if the commit indication is persistently stored to the write-ahead log in the persistent storage while some modified data is stored in the cache(s), this modified data may be lost during a disastrous event even though the commit indication in the write-ahead log would indicate the transaction completed successfully. One challenge is that caching of data in one or more processor caches may violate one or more of these two persistency orders if the proper precautions are not taken. The caches are generally implemented in volatile or otherwise non-persistent storage and are susceptible to disastrous events. 
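The two persistency orders can be summarized in a compact transaction outline; the helper persist() below is a hypothetical marker for "this range must reach persistent storage here," and all names are invented for this sketch.

#include <stdint.h>
#include <string.h>

/* Hypothetical marker for the point where a range must be durable;
   modeled as a no-op so the sketch compiles. */
static void persist(const void *p, size_t n) { (void)p; (void)n; }

/* A two-item transaction showing both orders: (1) log persistency:
   each original value persists in the log before its modified value
   may persist; (2) commit persistency: every modified value persists
   before the commit record does. */
void transaction(uint8_t *log, uint8_t *a, uint8_t *b,
                 const uint8_t *na, const uint8_t *nb, size_t n,
                 uint64_t *commit_rec)
{
    memcpy(log, a, n);       persist(log, n);     /* order 1, item a */
    memcpy(a, na, n);
    memcpy(log + n, b, n);   persist(log + n, n); /* order 1, item b */
    memcpy(b, nb, n);
    persist(a, n);
    persist(b, n);                                /* order 2          */
    *commit_rec = 1;
    persist(commit_rec, sizeof *commit_rec);      /* commit goes last */
}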
These processor caches represent non-persistent storage and may lose their contents when certain events occur (e.g., a loss of power, etc.).The data to be modified is removed (e.g., flushed) from the one or more caches to the persistent storage, at block 774. For example, this may be performed with a cache line flush type of instruction. This is generally needed in order to satisfy the log persistency order. One drawback with this approach is that it generally takes a lot of time and/or has a high latency due to the time needed to write or store to the persistent memory (e.g., on the order of tens to hundreds of clock cycles.Then, the data that is to be modified may actually be modified at block 775. Notice that the modification of the data at block 775 takes place after the data to be modified has been removed from the one or more caches to the persistent storage at block 774 thereby ensuring that a copy of the data to be modified is persistently stored in the persistent storage instead of in the non-persistent processor caches. This helps to ensure data persistency/durability, as previously described.At block 776, a determination is made whether or not there is more data to be modified. If there is more data to be modified (i.e., "yes" is the determination at block 776), the method may revisit blocks 773-775. Conversely, if there is no more data to be modified in this transaction (i.e., "no" is the determination at block 776), the method may advance to block 777. Notice that for each piece of data modified, between the time the data to be modified is stored to the write - ahead log at block 773, and the time the data is actually modified at block 775, the data to be modified needs to be flushed from the cache(s) to the persistent storage (i.e., actually stored in the write-ahead log in the persistent storage instead of in the caches) at block 775. A drawback with all of these flushes is that they take a lot of time to perform.At block 777, all modified data is removed (e.g., flushed) from the cache(s) to the persistent storage. This is performed because the modification of the data at block 775 may not actually store the modified data in the persistent storage but rather in the cache(s). This generally needs to be done before the commit indication is stored in the write-ahead log in the persistent storage in order to satisfy the commit persistency order.Then, a commit indication may be stored to the write-ahead log, at block 778. The commit indication may indicate that the transaction has completed successfully, as previously described. At block 779, the commit indication may be removed (e.g., flushed) from the cache(s) to the persistent storage.If a disastrous event had occurred before the commit indication was stored in the write- ahead log, all the partial data updates of the transaction may be recovered back to their original data using the original data in the write-ahead log. Conversely, if a disastrous event occurs after commit indication is stored in the write-ahead log, there is no need to for a recovery, since all the data updates have completed successfully.As previously described, the removal (e.g., flushing) of the data to be modified from the cache(s) at block 774 before each data update at block 775 tends to take an excessive amount of time. 
In some embodiments, since the updated data typically stays in the caches, there may be no need to remove (e.g., flush) the data to be modified from the cache(s) to the persistent storage until the modified data are actually stored back from the cache(s) to the persistent storage, which in many implementations is relatively infrequent (e.g., due to data locality in the program). In such implementations, significantly more efficient write-ahead logging may be achieved by omitting such removal (e.g., flushing) of the data at block 774. Unfortunately, the write-back of the modified data from the cache(s) to the persistent storage is generally performed by hardware (e.g., by a cache coherency module) and is therefore not under the control of software in many implementations. It is noted that some suitable implementations may alternatively perform software-controlled cache coherency.

Figure 8 is a block flow diagram of an example embodiment of a method 880 of write-ahead logging performed with an embodiment of a persistent store fence instruction. In some embodiments, data in a persistent storage which is to be modified or changed may be stored to a write-ahead log in the persistent storage with (or in conjunction with) an embodiment of a persistent store fence instruction, at block 881. Either the persistent store fence instruction itself may store the data, or a corresponding separate store instruction may store the data, as previously described. In some embodiments, the data may initially and/or temporarily be stored in a persistent store fence buffer (e.g., buffer 446). In some embodiments, the persistent store fence instruction may be a non-temporal instruction, and the data may bypass the processor cache(s). This may help to avoid the data taking up space in and/or polluting the cache(s).

Then, the data that is to be modified may actually be modified, at block 882. This modified data may be initially and/or temporarily cached in the processor cache(s).

Significantly, at block 883, the processor responsive to the persistent store fence instruction may ensure, guarantee, or enforce that the data to be modified is removed (e.g., flushed, evicted, etc.) and persistently stored to persistent storage before the modified data is removed (e.g., flushed, evicted, etc.) from the cache(s) and persistently stored to the persistent storage. Advantageously, there is no need to flush or otherwise remove the data to be modified from the cache(s) to the persistent storage, as was performed at block 774 in Figure 7. Significantly, this may help to avoid a relatively high latency memory access operation (e.g., from tens to hundreds of clock cycles) for each piece of data modified. The persistent store fence instruction may ensure that the log persistency order is still maintained. It is worth noting that in many cases the data to be modified may not actually be persistently stored to the persistent storage unless or until just before the modified data is persistently stored to the persistent storage. If the modified data is not stored to the persistent storage, the instruction does not guarantee that the data to be modified is stored in the persistent storage.

At block 884, a determination is made whether or not there is more data to be modified. If there is more data to be modified (i.e., "yes" is the determination at block 884), the method may revisit blocks 881-883 (a sketch of this per-update loop appears below). 
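The per-update loop of blocks 881-882 may be sketched as follows, continuing the earlier example. Since the persistent store fence instruction described herein has no standard compiler intrinsic, the pstore_fence_store function below is a hypothetical placeholder for it (or for a separate store instruction paired with the fence), and the pmem pointers remain illustrative assumptions.

    #include <string.h>

    /* Hypothetical intrinsic for the persistent store fence instruction
     * described herein (illustrative only): stores n bytes to the
     * write-ahead log and fences, so that the logged bytes must become
     * persistent before the data of any subsequent store instruction
     * (in program order) becomes persistent. */
    extern void pstore_fence_store(void *log_dst, const void *src, size_t n);

    extern char *pmem_log;   /* write-ahead log (illustrative) */
    extern char *pmem_data;  /* data region (illustrative)     */

    /* One update in the Figure 8 flow (blocks 881-882). Unlike the
     * conventional flow, no per-update cache line flush is needed. */
    void wal_update_psf(size_t off, const void *new_val, size_t n) {
        /* Block 881: log the old data with the persistent store fence;
         * the data may bypass the caches (non-temporal behavior) and sit
         * in the persistent store fence buffer. */
        pstore_fence_store(pmem_log + off, pmem_data + off, n);

        /* Block 882: modify the data in place; the new value may simply
         * stay in the cache until hardware writes it back, at which point
         * the fence semantics force the log entry out first. */
        memcpy(pmem_data + off, new_val, n);
    }

The commit phase (blocks 885-887, described next) is unchanged from the conventional flow: all modified data is flushed once, followed by the commit indication.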
Conversely, if there is no more data to be modified in this transaction (i.e., "no" is the determination at block 884), the method may advance to block 885. At block 885, all modified data is removed (e.g., flushed) from the cache(s) to the persistent storage. This is performed because the modification of the data at block 882 may not actually store the modified data in the persistent storage but rather in the cache(s). Then, a commit indication may be stored to the write-ahead log, at block 886. At block 887, the commit indication may be removed (e.g., flushed) from the cache(s) to the persistent storage.

Advantageously, the use of the persistent store fence instruction may help to avoid relatively high latency memory access operations for each piece of data modified. In some embodiments, if all of the modified data is able to fit or be stored in the cache(s), the algorithm may only flush or remove the data from the persistent store fence buffer to the persistent storage once, before flushing all the modified data from the cache(s) to the persistent storage at the commit time. Further, in some embodiments, if the persistent store fence buffer is able to write combine data corresponding to different persistent store fence instructions in the same cache line, this may further help to avoid some long latency data writes to the persistent storage.

In some embodiments, the software may implement write-ahead logs carrying sequence numbers in cache line units. In case of a system crash during the flush or removal of cache lines from the persistent store fence buffer, only consecutive logs with correct sequence numbers may be used to recover the data. For example, sequence numbers 1, 2, 3, and 5 may be present, but sequence number 4 may be missing. During recovery, the sequence numbers provide information about which log entries may be used for recovery and which may not.

Although the description above has emphasized write-ahead logging, it is to be appreciated that the scope of the invention is not so limited. The persistent store fence instructions described herein are general-purpose instructions and may be used for various different purposes. In addition, techniques similar or related to write-ahead logging may also benefit from the persistent store fence instructions described herein. For example, other techniques that store a copy of data to a different persistent memory location before data is modified, other techniques that provide atomicity and durability of data during updates, and the like, may potentially benefit. Examples of other techniques that may also benefit include, but are not limited to, shadow paging, journaling in file system updates, and the like.

Figure 9 is a block diagram illustrating various examples of suitable locations for an embodiment of a persistent store fence buffer. Computer systems typically have multiple different types of components that a store of data goes through on its way to persistent storage. In the illustrated example, these components include a store buffer 992, one or more levels of cache or a cache hierarchy 918 (e.g., including an L1 cache 993 and an L2 cache 994), a memory controller 996, and finally the persistent storage 924. 
A store may potentially be cached or buffered at any of these or other components or hardware structures between the processor pipeline and the persistent storage. A persistent store fence buffer may be variously located among these components or hardware structures and/or at various different distances between the processor pipeline and the persistent storage. Depending on the particular location, data flushed or removed from a given hardware structure may induce a flush or removal of data from the persistent store fence buffer. Generally, the closer the persistent store fence buffer is to the processor pipeline, the lower the persistent store fence instruction latency, since the data need only be stored to the persistent store fence buffer before a subsequent non-persistent-store-fence instruction in program order is able to store its data to the cache. On the other hand, the closer the persistent store fence buffer is to the processor pipeline, the more frequent persistent store fence buffer flush operations will be (e.g., since there is less caching before the buffer), and the higher the latency of such persistent store fence buffer flush operations (e.g., since there is a longer path from the persistent store fence buffer to the persistent storage).

In some embodiments, as shown at reference A, the persistent store fence buffer may be located or disposed at various places between an output of the store buffer 992 and an input to the persistent storage 924. In some embodiments, as shown at reference B, the persistent store fence buffer may optionally be located or disposed at various places between an output of a first level cache closest to the processor pipeline (e.g., the L1 cache 993) and an output of a memory controller 996. In some embodiments, as shown at reference C, the persistent store fence buffer may optionally be located or disposed between an output of a last level cache (e.g., an L2 cache 994 or alternatively an L3 cache) and an input of the memory controller. In some embodiments, as shown at reference D, the persistent store fence buffer may optionally be located or disposed between two different levels of cache (e.g., between the L1 cache and the L2 cache). In one aspect, the L1 cache may be dedicated to a first core 990-1, whereas the L2 cache may be shared by the first core and a second core 990-2. In some embodiments, as shown at reference E, the persistent store fence buffer may optionally be located or disposed within the memory controller. The scope of the invention is not limited to any particular location of the persistent store fence buffer. The desired location of the persistent store fence buffer may be determined without undue experimentation by a person skilled in the art having the benefit of the present disclosure, to satisfy the needs of the particular implementation based on the relative tradeoffs among persistent store fence instruction latency, persistent store fence buffer flush overhead, and other considerations.

On multi-core systems, another design choice is whether to place the persistent store fence buffer in a shared component or hardware structure or in a per-core private or dedicated component or hardware structure. The private/dedicated hardware structures are closer to the processor pipeline, and the shared hardware structures are closer to the persistent storage. Placing the persistent store fence buffer at a shared hardware structure may tend to introduce more persistent store fence buffer flushes due to data updates from different software threads. 
On the other hand, placing the persistent store fence buffer at a private hardware structure may tend to involve flushing the persistent store fence buffer at operating system context switches of a software thread to a different core. That may require hardware to flush the persistent store fence buffer on all hardware interrupts and/or exceptions that may lead to an operating system context switch. In some embodiments, the persistent store fence buffer may optionally be partitioned into a plurality of slices based on cache line address hashing. This may allow the persistent store fence buffer to be flushed across all cache slices in case of eviction of cache data in any cache slice.

Exemplary Core Architectures, Processors, and Computer Architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architectures

In-order and out-of-order core block diagram

Figure 10A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 10B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 10A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. 
Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Figure 10A, a processor pipeline 1000 includes a fetch stage 1002, a length decode stage 1004, a decode stage 1006, an allocation stage 1008, a renaming stage 1010, a scheduling (also known as a dispatch or issue) stage 1012, a register read/memory read stage 1014, an execute stage 1016, a write back/memory write stage 1018, an exception handling stage 1022, and a commit stage 1024.

Figure 10B shows processor core 1090 including a front end unit 1030 coupled to an execution engine unit 1050, and both are coupled to a memory unit 1070. The core 1090 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1090 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 1030 includes a branch prediction unit 1032 coupled to an instruction cache unit 1034, which is coupled to an instruction translation lookaside buffer (TLB) 1036, which is coupled to an instruction fetch unit 1038, which is coupled to a decode unit 1040. The decode unit 1040 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1040 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 1090 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 1040 or otherwise within the front end unit 1030). The decode unit 1040 is coupled to a rename/allocator unit 1052 in the execution engine unit 1050.

The execution engine unit 1050 includes the rename/allocator unit 1052 coupled to a retirement unit 1054 and a set of one or more scheduler unit(s) 1056. The scheduler unit(s) 1056 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 1056 is coupled to the physical register file(s) unit(s) 1058. Each of the physical register file(s) units 1058 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1058 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. 
The physical register file(s) unit(s) 1058 is overlapped by the retirement unit 1054 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1054 and the physical register file(s) unit(s) 1058 are coupled to the execution cluster(s) 1060. The execution cluster(s) 1060 includes a set of one or more execution units 1062 and a set of one or more memory access units 1064. The execution units 1062 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1056, physical register file(s) unit(s) 1058, and execution cluster(s) 1060 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1064). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 1064 is coupled to the memory unit 1070, which includes a data TLB unit 1072 coupled to a data cache unit 1074 coupled to a level 2 (L2) cache unit 1076. In one exemplary embodiment, the memory access units 1064 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1072 in the memory unit 1070. The instruction cache unit 1034 is further coupled to the level 2 (L2) cache unit 1076 in the memory unit 1070. 
The L2 cache unit 1076 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1000 as follows: 1) the instruction fetch unit 1038 performs the fetch and length decoding stages 1002 and 1004; 2) the decode unit 1040 performs the decode stage 1006; 3) the rename/allocator unit 1052 performs the allocation stage 1008 and renaming stage 1010; 4) the scheduler unit(s) 1056 performs the schedule stage 1012; 5) the physical register file(s) unit(s) 1058 and the memory unit 1070 perform the register read/memory read stage 1014; 6) the execution cluster 1060 performs the execute stage 1016; 7) the memory unit 1070 and the physical register file(s) unit(s) 1058 perform the write back/memory write stage 1018; 8) various units may be involved in the exception handling stage 1022; and 9) the retirement unit 1054 and the physical register file(s) unit(s) 1058 perform the commit stage 1024.

The core 1090 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 1090 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1034/1074 and a shared L2 cache unit 1076, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific Exemplary In-Order Core Architecture

Figures 11A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. 
The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

Figure 11A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1102 and with its local subset of the Level 2 (L2) cache 1104, according to embodiments of the invention. In one embodiment, an instruction decoder 1100 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1106 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design) a scalar unit 1108 and a vector unit 1110 use separate register sets (respectively, scalar registers 1112 and vector registers 1114) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1106, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 1104 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1104. Data read by a processor core is stored in its L2 cache subset 1104 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1104 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring datapath is 1012-bits wide per direction.

Figure 11B is an expanded view of part of the processor core in Figure 11A according to embodiments of the invention. Figure 11B includes an L1 data cache 1106A, part of the L1 cache 1106, as well as more detail regarding the vector unit 1110 and the vector registers 1114. Specifically, the vector unit 1110 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1128), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 1120, numeric conversion with numeric convert units 1122A-B, and replication with replication unit 1124 on the memory input. Write mask registers 1126 allow predicating resulting vector writes.

Processor with integrated memory controller and graphics

Figure 12 is a block diagram of a processor 1200 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. 
The solid lined boxes in Figure 12 illustrate a processor 1200 with a single core 1202A, a system agent 1210, and a set of one or more bus controller units 1216, while the optional addition of the dashed lined boxes illustrates an alternative processor 1200 with multiple cores 1202A-N, a set of one or more integrated memory controller unit(s) 1214 in the system agent unit 1210, and special purpose logic 1208.

Thus, different implementations of the processor 1200 may include: 1) a CPU with the special purpose logic 1208 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1202A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 1202A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 1202A-N being a large number of general purpose in-order cores. Thus, the processor 1200 may be a general-purpose processor, coprocessor, or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1200 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1206, and external memory (not shown) coupled to the set of integrated memory controller units 1214. The set of shared cache units 1206 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1212 interconnects the special purpose logic 1208 (e.g., integrated graphics logic), the set of shared cache units 1206, and the system agent unit 1210/integrated memory controller unit(s) 1214, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1206 and cores 1202A-N.

In some embodiments, one or more of the cores 1202A-N are capable of multi-threading. The system agent 1210 includes those components coordinating and operating cores 1202A-N. The system agent unit 1210 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1202A-N and the special purpose logic 1208. The display unit is for driving one or more externally connected displays. The cores 1202A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1202A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

Figures 13-16 are block diagrams of exemplary computer architectures. 
Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to Figure 13, shown is a block diagram of a system 1300 in accordance with one embodiment of the present invention. The system 1300 may include one or more processors 1310, 1315, which are coupled to a controller hub 1320. In one embodiment the controller hub 1320 includes a graphics memory controller hub (GMCH) 1390 and an Input/Output Hub (IOH) 1350 (which may be on separate chips); the GMCH 1390 includes memory and graphics controllers to which are coupled memory 1340 and a coprocessor 1345; the IOH 1350 couples input/output (I/O) devices 1360 to the GMCH 1390. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1340 and the coprocessor 1345 are coupled directly to the processor 1310, and the controller hub 1320 is in a single chip with the IOH 1350.

The optional nature of additional processors 1315 is denoted in Figure 13 with broken lines. Each processor 1310, 1315 may include one or more of the processing cores described herein and may be some version of the processor 1200.

The memory 1340 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1320 communicates with the processor(s) 1310, 1315 via a multi-drop bus such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1395.

In one embodiment, the coprocessor 1345 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 1320 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1310, 1315 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 1310 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1310 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1345. Accordingly, the processor 1310 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to coprocessor 1345. Coprocessor(s) 1345 accept and execute the received coprocessor instructions.

Referring now to Figure 14, shown is a block diagram of a first more specific exemplary system 1400 in accordance with an embodiment of the present invention. 
As shown in Figure 14, multiprocessor system 1400 is a point-to-point interconnect system, and includes a first processor 1470 and a second processor 1480 coupled via a point-to-point interconnect 1450. Each of processors 1470 and 1480 may be some version of the processor 1200. In one embodiment of the invention, processors 1470 and 1480 are respectively processors 1310 and 1315, while coprocessor 1438 is coprocessor 1345. In another embodiment, processors 1470 and 1480 are respectively processor 1310 and coprocessor 1345.

Processors 1470 and 1480 are shown including integrated memory controller (IMC) units 1472 and 1482, respectively. Processor 1470 also includes as part of its bus controller units point-to-point (P-P) interfaces 1476 and 1478; similarly, second processor 1480 includes P-P interfaces 1486 and 1488. Processors 1470, 1480 may exchange information via a point-to-point (P-P) interface 1450 using P-P interface circuits 1478, 1488. As shown in Figure 14, IMCs 1472 and 1482 couple the processors to respective memories, namely a memory 1432 and a memory 1434, which may be portions of main memory locally attached to the respective processors.

Processors 1470, 1480 may each exchange information with a chipset 1490 via individual P-P interfaces 1452, 1454 using point-to-point interface circuits 1476, 1494, 1486, 1498. Chipset 1490 may optionally exchange information with the coprocessor 1438 via a high-performance interface 1439. In one embodiment, the coprocessor 1438 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 1490 may be coupled to a first bus 1416 via an interface 1496. In one embodiment, first bus 1416 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in Figure 14, various I/O devices 1414 may be coupled to first bus 1416, along with a bus bridge 1418 which couples first bus 1416 to a second bus 1420. In one embodiment, one or more additional processor(s) 1415, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1416. In one embodiment, second bus 1420 may be a low pin count (LPC) bus.

Various devices may be coupled to the second bus 1420 including, for example, a keyboard and/or mouse 1422, communication devices 1427, and a storage unit 1428 such as a disk drive or other mass storage device which may include instructions/code and data 1430, in one embodiment. Further, an audio I/O 1424 may be coupled to the second bus 1420. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 14, a system may implement a multi-drop bus or other such architecture.

Referring now to Figure 15, shown is a block diagram of a second more specific exemplary system 1500 in accordance with an embodiment of the present invention. 
Like elements in Figures 14 and 15 bear like reference numerals, and certain aspects of Figure 14 have been omitted from Figure 15 in order to avoid obscuring other aspects of Figure 15.

Figure 15 illustrates that the processors 1470, 1480 may include integrated memory and I/O control logic ("CL") 1472 and 1482, respectively. Thus, the CL 1472, 1482 include integrated memory controller units and include I/O control logic. Figure 15 illustrates that not only are the memories 1432, 1434 coupled to the CL 1472, 1482, but also that I/O devices 1514 are coupled to the control logic 1472, 1482. Legacy I/O devices 1515 are coupled to the chipset 1490.

Referring now to Figure 16, shown is a block diagram of a SoC 1600 in accordance with an embodiment of the present invention. Similar elements in Figure 12 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 16, an interconnect unit(s) 1602 is coupled to: an application processor 1610 which includes a set of one or more cores 1202A-N and shared cache unit(s) 1206; a system agent unit 1210; a bus controller unit(s) 1216; an integrated memory controller unit(s) 1214; a set of one or more coprocessors 1620 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1630; a direct memory access (DMA) unit 1632; and a display unit 1640 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1620 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as code 1430 illustrated in Figure 14, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. 
Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

Figure 17 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 17 shows that a program in a high level language 1702 may be compiled using an x86 compiler 1704 to generate x86 binary code 1706 that may be natively executed by a processor with at least one x86 instruction set core 1716. The processor with at least one x86 instruction set core 1716 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. 
The x86 compiler 1704 represents a compiler that is operable to generate x86 binary code 1706 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1716. Similarly, Figure 17 shows that the program in the high level language 1702 may be compiled using an alternative instruction set compiler 1708 to generate alternative instruction set binary code 1710 that may be natively executed by a processor without at least one x86 instruction set core 1714 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1712 is used to convert the x86 binary code 1706 into code that may be natively executed by the processor without an x86 instruction set core 1714. This converted code is not likely to be the same as the alternative instruction set binary code 1710, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1712 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1706.

Components, features, and details described for any of Figures 1, 4-6, and 9 may also optionally apply to any of Figures 2-3. Moreover, components, features, and details described for any of the apparatus may also optionally apply to any of the methods, which in embodiments may be performed by and/or with such apparatus. Any of the processors described herein may be included in any of the computer systems disclosed herein.

In the description and claims, the terms "coupled" and/or "connected," along with their derivatives, may have been used. These terms are not intended as synonyms for each other. Rather, in embodiments, "connected" may be used to indicate that two or more elements are in direct physical and/or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical and/or electrical contact with each other. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. For example, a unit may be coupled with a decode unit through one or more intervening components. In the figures, arrows are used to show connections and couplings.

The term "and/or" may have been used. As used herein, the term "and/or" means one or the other or both (e.g., A and/or B means A or B or both A and B).

In the description above, specific details have been set forth in order to provide a thorough understanding of the embodiments. However, other embodiments may be practiced without some of these specific details. The scope of the invention is not to be determined by the specific examples provided above, but only by the claims below. In other instances, well-known circuits, structures, devices, and operations have been shown in block diagram form and/or without detail in order to avoid obscuring the understanding of the description. 
Where considered appropriate, reference numerals, or terminal portions of reference numerals, have been repeated among the figures to indicate corresponding or analogous elements, which may optionally have similar or the same characteristics, unless specified or clearly apparent otherwise.

Certain operations may be performed by hardware components, or may be embodied in machine-executable or circuit-executable instructions, that may be used to cause and/or result in a machine, circuit, or hardware component (e.g., a processor, portion of a processor, circuit, etc.) programmed with the instructions performing the operations. The operations may also optionally be performed by a combination of hardware and software. A processor, machine, circuit, or hardware may include specific or particular circuitry or other logic (e.g., hardware potentially combined with firmware and/or software) that is operable to execute and/or process the instruction and store a result in response to the instruction.

Some embodiments include an article of manufacture (e.g., a computer program product) that includes a machine-readable medium. The medium may include a mechanism that provides, for example stores, information in a form that is readable by the machine. The machine-readable medium may provide, or have stored thereon, an instruction or sequence of instructions, that if and/or when executed by a machine are operable to cause the machine to perform and/or result in the machine performing one or more operations, methods, or techniques disclosed herein.

In some embodiments, the machine-readable medium may include a non-transitory machine-readable storage medium. For example, the non-transitory machine-readable storage medium may include a floppy diskette, an optical storage medium, an optical disk, an optical data storage device, a CD-ROM, a magnetic disk, a magneto-optical disk, a read only memory (ROM), a programmable ROM (PROM), an erasable-and-programmable ROM (EPROM), an electrically-erasable-and-programmable ROM (EEPROM), a random access memory (RAM), a static-RAM (SRAM), a dynamic-RAM (DRAM), a flash memory, a phase-change memory, a phase-change data storage material, a non-volatile memory, a non-volatile data storage device, a non-transitory memory, a non-transitory data storage device, or the like. The non-transitory machine-readable storage medium does not consist of a transitory propagated signal. In some embodiments, the storage medium may include a tangible medium that includes solid matter.

Examples of suitable machines include, but are not limited to, a general-purpose processor, a special-purpose processor, a digital logic circuit, an integrated circuit, or the like. Still other examples of suitable machines include a computer system or other electronic device that includes a processor, a digital logic circuit, or an integrated circuit. Examples of such computer systems or electronic devices include, but are not limited to, desktop computers, laptop computers, notebook computers, tablet computers, netbooks, smartphones, cellular phones, servers, network devices (e.g., routers and switches), mobile Internet devices (MIDs), media players, smart televisions, nettops, set-top boxes, and video game controllers.

Reference throughout this specification to "one embodiment," "an embodiment," "one or more embodiments," or "some embodiments," for example, indicates that a particular feature may be included in the practice of the invention but is not necessarily required to be. 
Similarly, in the description various features are sometimes grouped together in a single embodiment, Figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of the invention.

EXAMPLE EMBODIMENTS

The following examples pertain to further embodiments. Specifics in the examples may be used anywhere in one or more embodiments.

Example 1 is a processor or other apparatus that includes a decode unit to decode a persistent store fence instruction. The apparatus also includes a memory subsystem module coupled with the decode unit. The memory subsystem module, in response to the persistent store fence instruction, is to ensure that a given data corresponding to the persistent store fence instruction is stored persistently in a persistent storage before data of all subsequent store instructions, which occur after the persistent store fence instruction in original program order, is stored persistently in the persistent storage.

Example 2 includes the processor of Example 1, optionally in which the persistent store fence instruction includes a store and persistent store fence instruction that is to indicate a source operand having the given data and that is to indicate a location in the persistent storage where the given data is to be stored.

Example 3 includes the processor of Example 1, optionally in which the given data is to be included in a source operand of a store instruction that is implicit to the persistent store fence instruction and is to be one of immediately before and immediately after the persistent store fence instruction in the original program order.

Example 4 includes the processor of any one of Examples 1 to 3, optionally in which the memory subsystem module, in response to the persistent store fence instruction, is not to ensure that data of all previous store instructions, which occur before the persistent store fence instruction in the original program order, is stored persistently in the persistent storage before the data of the subsequent store instructions.

Example 5 includes the processor of any one of Examples 1 to 3, further including a set of one or more caches. 
Also, optionally in which the memory subsystem module, in response to the persistent store fence instruction, is to cause the given data to bypass the set of the one or more caches.

Example 6 includes the processor of any one of Examples 1 to 5, further including a persistent store fence buffer, and optionally in which the memory subsystem module, in response to the persistent store fence instruction, is to cause the given data to be stored in the persistent store fence buffer.

Example 7 includes the processor of Example 6, further including a persistent store fence buffer management unit to store at least one cache line from the persistent store fence buffer to the persistent storage based on a signal indicative of an intent to remove a cache line from a cache before the cache line is removed from the cache.

Example 8 includes the processor of any one of Examples 6 to 7, optionally in which the persistent store fence buffer includes a write combining buffer that is to allow a second data corresponding to a second persistent store fence instruction to be stored in a same cache line of the persistent store fence buffer as the given data.

Example 9 includes the processor of any one of Examples 6 to 8, optionally in which an instruction set of the processor does not include a user-level load instruction to read data from the persistent store fence buffer.

Example 10 includes the processor of any one of Examples 6 to 9, optionally in which the persistent store fence buffer does not implement a cache coherency protocol.

Example 11 includes the processor of any one of Examples 1 to 6, optionally in which the processor is to store a cache line having the given data and a second data corresponding to a second persistent store fence instruction to the persistent storage in a common set of one or more cycles to be transmitted on an interconnect that is to be used to couple the processor with the persistent storage.

Example 12 is a method in a processor that includes receiving a persistent store fence instruction. 
The method also includes ensuring, responsive to the persistent store fence instruction, that a given data corresponding to the persistent store fence instruction is stored persistently in a persistent storage before data of all subsequent store instructions, which occur after the persistent store fence instruction in original program order, is stored persistently in the persistent storage.

Example 13 includes the method of Example 12, optionally in which receiving the instruction includes receiving a store and persistent store fence instruction that indicates a source operand having the given data and that indicates a location in the persistent storage where the given data is to be stored.

Example 14 includes the method of Example 12, further including receiving a store instruction indicating a source operand having the given data, optionally in which the store instruction is one of immediately before and immediately after the persistent store fence instruction in the original program order.

Example 15 includes the method of any one of Examples 12 to 14, further including causing the given data to bypass a set of one or more caches of the processor responsive to the persistent store fence instruction.

Example 16 includes the method of any one of Examples 12 to 15, optionally in which ensuring includes ensuring that the given data is stored persistently in the persistent storage before the data of the subsequent store instructions is stored persistently in the persistent storage, without ensuring that data of all previous store instructions is stored persistently in the persistent storage before the data of said all subsequent store instructions is stored persistently in the persistent storage. The previous store instructions occur before the persistent store fence instruction in the original program order.

Example 17 includes the method of any one of Examples 12 to 16, further including storing a cache line having the given data and a second data corresponding to a second persistent store fence instruction to the persistent storage in a common set of one or more cycles transmitted on an interconnect.

Example 18 includes the method of any one of Examples 12 to 17, further including storing the given data in a persistent store fence buffer responsive to the persistent store fence instruction. Also, optionally in which an instruction set of the processor does not include a user-level load instruction to load data from the persistent store fence buffer.

Example 19 includes the method of Example 18, further including receiving a signal indicating an intent to remove a cache line from a cache, and storing at least one cache line from the persistent store fence buffer to the persistent storage, after receiving the signal and before the cache line is removed from the cache to the persistent storage.

Example 20 includes the method of any one of Examples 18 to 19, optionally in which storing the given data in the persistent store fence buffer includes storing the given data in a cache line of the persistent store fence buffer that has second data corresponding to a second persistent store fence instruction.

Example 21 includes the method of any one of Examples 12 to 20, further including storing the given data to a write-ahead log in the persistent memory.

Example 22 is a system to process instructions that includes an interconnect, and a persistent storage coupled with the interconnect. The persistent storage stores a set of instructions of a write-ahead logging algorithm. 
The set of instructions includes a store and persistent store fence instruction that indicates a location in the persistent storage and that is used by the write-ahead logging algorithm to store a given data to a write-ahead log in the persistent storage. The system also includes a processor coupled with the interconnect. The processor is to receive the store and persistent store fence instruction. The processor, in response to the store and persistent store fence instruction, is to ensure that the given data is stored persistently in the persistent storage before data of all subsequent store instructions, which occur after the store and persistent store fence instruction in the write-ahead logging algorithm in original program order, is stored persistently in the persistent storage. Example 23 includes the system of Example 22, optionally in which the store and persistent store fence instruction includes a non-temporal instruction that is to cause the given data to bypass a set of one or more caches of the processor. Example 24 is an article of manufacture that includes a non-transitory machine-readable storage medium. The non-transitory machine-readable storage medium stores a store and persistent store fence instruction. The store and persistent store fence instruction is to indicate a source operand that is to have a given data and to indicate a location in a persistent storage where the given data is to be stored. The store and persistent store fence instruction, if executed by a machine, is to cause the machine to perform operations including ensuring that the given data is stored persistently in the persistent storage before data of all subsequent store instructions, which occur after the persistent store fence instruction in original program order, is stored persistently in the persistent storage. Example 25 includes the article of manufacture of Example 24, optionally in which the store and persistent store fence instruction, if executed by the machine, is not to cause the machine to ensure that data of all previous store instructions, which occur before the store and persistent store fence instruction in the original program order, is stored persistently in the persistent storage before the data of the subsequent store instructions. Example 26 is a processor or other apparatus that is operative to perform the method of any one of Examples 12 to 21. Example 27 is a processor or other apparatus that includes means for performing the method of any one of Examples 12 to 21. Example 28 is a processor or other apparatus that includes modules to perform the method of any one of Examples 12 to 21. Example 29 is a processor that includes any combination of modules and/or units and/or logic and/or circuitry and/or means for performing the method of any one of Examples 12 to 21. Example 30 is an article of manufacture that includes an optionally non-transitory machine-readable medium, which optionally stores or otherwise provides an instruction, which if and/or when executed by a processor, computer system, electronic device, or other machine, is operative to cause the machine to perform the method of any one of Examples 12 to 21. Example 31 is a computer system, other electronic device, or other apparatus including a bus or other interconnect, the processor of any one of Examples 1 to 11 coupled with the interconnect, and at least one component coupled with the interconnect that is selected from a dynamic random access memory (DRAM), a network interface, a graphics chip, a wireless
communications chip, a Global System for Mobile Communications (GSM) antenna, a phase change memory, and a video camera. Example 32 is a processor or other apparatus substantially as described herein. Example 33 is a processor or other apparatus that is operative to perform any method substantially as described herein. Example 34 is a processor or other apparatus that is operative to perform any persistent store fence instruction substantially as described herein. |
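To make the ordering described in the Examples above concrete, the following minimal C sketch shows how a write-ahead logging routine might use a store and persistent store fence instruction of the kind described in Examples 21 and 22. The pstore_fence_u64 intrinsic, the wal_record layout, and the function names are hypothetical illustrations introduced only for this sketch; they are not part of any real instruction set or API.

#include <stdint.h>

/* Hypothetical intrinsic modeling the store and persistent store fence
 * instruction of the Examples above: it stores val to the persistent
 * location dst and guarantees that val becomes persistent before the
 * data of any subsequent store instruction does. This is not a real
 * ISA intrinsic; it only illustrates the intended ordering. */
extern void pstore_fence_u64(uint64_t *dst, uint64_t val);

typedef struct {
    uint64_t addr; /* address of the word being modified */
    uint64_t old;  /* prior value, kept for recovery     */
} wal_record;

/* Next free log slot; assumed to reside in persistent memory. */
static wal_record *wal_tail;

/* Update one persistent 64-bit word under write-ahead logging. */
void wal_update(uint64_t *data, uint64_t new_val)
{
    /* Log the old value first, using the store+fence instruction.
     * The fence ensures the log record is persistent before any
     * later store, notably the in-place update below, persists. */
    pstore_fence_u64(&wal_tail->addr, (uint64_t)(uintptr_t)data);
    pstore_fence_u64(&wal_tail->old, *data);
    wal_tail++;

    /* Only now modify the data in place. If power is lost before
     * this store persists, recovery can replay the log. */
    *data = new_val;
}

Recovery logic that scans the log is omitted; the point is only that, per Examples 12 and 16, the guarantee is one-directional: the log record must persist before later stores, and no guarantee is required for stores that precede the fence.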
Methods of forming display structures, and structures formed thereby, are described. The display structures formed may include a display device comprising an emissive layer that includes an array of pixels, wherein each of the individual pixels of the pixel array is capable of emitting light in at least two directions. A controllable opacity layer may be disposed on the emissive layer, wherein the controllable opacity layer is capable of at least partially blocking light emission from the array of pixels. |
1. A display device comprising:
an emissive layer, the emissive layer comprising a pixel array, wherein each of the individual pixels of the pixel array is capable of emitting light in at least two directions; and
a controllable opacity layer disposed on the emissive layer, wherein the controllable opacity layer is capable of at least partially blocking light emission from the array of pixels.
2. The display device of claim 1, wherein the display device comprises a first viewing side and a second viewing side.
3. The display device of claim 1, wherein the emissive layer comprises one of: an organic light emitting diode (OLED) structure, a quantum dot LED structure, or a micro LED structure.
4. The display device of claim 1, wherein the controllable opacity layer comprises at least one of: a liquid crystal material, an electronic ink structure, an electrochromic structure, or a shutter structure.
5. The display device of claim 1, wherein a second controllable opacity layer is disposed on a second side of the emissive layer.
6. The display device of claim 5, wherein said display device is electrically and physically coupled to a computing device, and wherein an image generated by said computing device is capable of being viewed from at least one of said first viewing side or said second viewing side of said display device.
7. The display device of claim 1, wherein at least one of the emissive layer or the controllable opacity layer comprises an integrated touch or stylus function.
8. The display device of claim 6, wherein said controllable opacity layer is capable of modulating an opacity level in response to a control mechanism electrical signal received from said computing device.
9. A display structure comprising:
an emissive layer, the emissive layer comprising a pixel array, wherein the pixel array is capable of emitting light in at least two directions;
a first controllable opacity layer disposed on a first side of the emissive layer, wherein the first controllable opacity layer is capable of at least partially blocking light emission from the pixel array; and
a second controllable opacity layer on a second side of the emissive layer, wherein the second controllable opacity layer is capable of at least partially blocking light emission from the pixel array.
10. The display structure of claim 9, wherein said display device comprises at least one of a foldable display device or a rollable display device.
11. The display structure of claim 9, wherein said controllable opacity layer is optically transparent and comprises a controllable opacity level.
12. The display structure of claim 9, wherein said display structure is included in a display screen of a computing device, and wherein images generated by said computing device are capable of being viewed from said first side and said second side of said display screen, wherein the first side and the second side are opposite each other.
13. The display structure of claim 9, wherein said controllable opacity layer is capable of changing opacity by blocks of pixels of said array or by individual pixels.
14. The display structure of claim 9, wherein said opacity controllable structure is capable of changing from a transparent level to an opaque level in response to an electrical signal received from a computing device coupled to said display structure.
15. The display structure of claim 14, wherein the image generated by said computing device is viewable from a first side of a display screen of said computing device and from a second side
of said display screen of said computing device.
16. The display structure of claim 15, wherein said computing device comprises a foldable mobile device, and wherein said second side comprises a back side of the foldable mobile device that is capable of being viewed in a closed position of said foldable mobile device.
17. A system comprising:
a processor for processing data;
a memory for storing data; and
a display device comprising:
an emissive layer, the emissive layer comprising a pixel array, wherein the pixel array is capable of emitting light in at least two directions;
a first controllable opacity layer disposed on a first side of the emissive layer, wherein the first controllable opacity layer is capable of at least partially blocking light emission from the pixel array; and
a second controllable opacity layer disposed on a second side of the emissive layer, wherein the second controllable opacity layer is capable of at least partially blocking light emission from the pixel array.
18. The system of claim 17, wherein said first controllable opacity layer is capable of blocking viewing of an image generated by the system from one of a first side of a display screen of said display device or a second side of said display screen of said display device.
19. The system of claim 18, wherein said second controllable opacity layer is capable of blocking viewing from one of said first side of said display screen or said second side of said display screen.
20. The system of claim 17, wherein said system comprises one of: a laptop, a notebook, a 2-in-1 device, a mobile device, a foldable device, or a rollable display device.
21. The system of claim 17, wherein said display device comprises a display screen, wherein said display device is configured to allow an image to be displayed in a first portion or portions of said display screen, and wherein at least a portion of the display screen is configured to block an image from the display screen.
22. The system of claim 21, further comprising a split screen in a horizontal portion or a vertical portion of the display screen configured to display the first portion or portions of the image.
23. The system of claim 21, wherein said first portion or portions of said image are displayed in a central portion of said display screen.
24. The system of claim 17, wherein said emissive layer comprises one of: an organic light emitting diode (OLED) structure, a quantum dot LED structure, or a micro LED structure.
25. The system of claim 17, wherein said controllable opacity layer comprises at least one of: a liquid crystal material, an electronic ink structure, an electrochromic structure, or a shutter structure. |
Multi-side viewable stacked display

BACKGROUND

Common computing systems that use displays, such as mobile phones and laptops, for example, utilize display screens that are viewable in one direction (i.e., from either the front side or the back side of such display screens). In the case of flat panel liquid crystal displays (LCDs), a backlight can be employed in which bright light is passed through the LCD display structure to see an image on the display screen.

BRIEF DESCRIPTION OF THE DRAWINGS

While the specification concludes with claims particularly pointing out the specific embodiments, it is believed that the advantages of the embodiments may be more readily ascertained from the following description when read in conjunction with the accompanying drawings, in which: Figures 1a-1g show cross-sectional views of a display structure in accordance with an embodiment; Figures 2a-2c show cross-sectional views of a controllable opacity structure included in an embodiment; Figures 3a-3c show a controllable opacity structure included in an embodiment; Figures 4a-4c illustrate a configuration of a user view in accordance with an embodiment; Figures 5a-5b illustrate the configuration of a user view in accordance with an embodiment; Figures 6a-6d illustrate the configuration of a user view in accordance with an embodiment; Figures 7a-7e illustrate the configuration of a user view in accordance with an embodiment; Figures 8a-8g illustrate the configuration of a user view in accordance with an embodiment; Figure 9 illustrates a method in accordance with an embodiment; and Figure 10 shows a schematic diagram of a computing device in accordance with an embodiment.

DETAILED DESCRIPTION

In the following detailed description, reference is made to the accompanying drawings, which show, by way of illustration, specific embodiments in which the claimed subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. It will be understood that the various embodiments, although different, are not necessarily mutually exclusive. For example, the particular features, structures, or characteristics described herein together with one embodiment can be implemented in other embodiments without departing from the spirit and scope of the embodiments. In addition, it is to be understood that the position or arrangement of the individual elements in each of the disclosed embodiments can be modified without departing from the spirit and scope of the embodiments. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of the embodiments is defined only by the appended claims and their full scope of equivalents. In the figures, the same reference numerals may refer to the same or similar elements. The terms "above", "to", "between" and "on" are used herein to refer to the relative position of one layer relative to another. A layer "on top of" or "on" another layer may be in direct contact with that layer or may have one or more intervening layers. A layer "between" layers may be in direct contact with those layers or may have one or more intervening layers. Layers and/or structures that are "adjacent" to one another may or may not have intervening structures/layers therebetween. A layer/structure located directly on, and in direct contact with, another layer/structure may have no intervening layers/structures between them. Embodiments of a method of forming a display structure, such as a multi-directional viewable display structure, are described.
Those methods/structures can include a display device that includes an emissive layer that includes an array of pixels, wherein each of the individual pixels of the array of pixels is capable of emitting light in at least two directions. A controllable opacity layer can be disposed on the emissive layer, wherein the controllable opacity layer is capable of at least partially blocking light emission from the pixel array. Embodiments herein enable the manufacture of a display that allows a viewer to see the displayed image on the display of a computing device from either the front side or the back side of the display screen, or by simultaneously viewing both the front side and the back side, where the viewing direction is controllable. Embodiments may utilize an emissive layer, such as an organic light emitting diode (OLED) display that includes dual transmission properties, such that a user may view an image, for example, from either direction of the display screen. In an embodiment, one or more controlled/controllable opacity layers (which may include liquid crystal (LC) layers, electronic ink structures, shutters, or other types of materials capable of varying the degree of opacity) may be placed on the emissive layer. The emissive layer can include an OLED, or can include any other suitable emissive layer material capable of transmitting images in multiple directions. Combining the light emissive layer with the opacity controllable layer enables the display structures included herein to be controllably viewable in multiple directions. Figure 10 is a block diagram illustrating an example computing device/system. The computing device 1000 can be, for example, a laptop computer, a desktop computer, a tablet computer, a mobile device, or a server, and the like. Computing device 1000 can include a central processing unit (CPU) 1002 configured to execute the stored instructions, and a memory device 1004 that stores instructions executable by CPU 1002. CPU 1002 can be coupled to memory device 1004 via bus 1006. Additionally, CPU 1002 may also be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations. Further, computing device 1000 can include more than one CPU 1002. Memory device 1004 can include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory system. For example, memory device 1004 can include dynamic random access memory (DRAM). Computing device 1000 can also include a graphics processing unit (GPU) 1008. As shown, CPU 1002 can be coupled to GPU 1008 via bus 1006. In some cases, GPU 1008 is embedded in CPU 1002. In some cases, GPU 1008 may be a discrete component with respect to CPU 1002. GPU 1008 can include a cache and can be configured to perform any number of graphics operations within computing device 1000. For example, GPU 1008 can be configured to render or manipulate graphical images, graphics frames, video, etc. for display to a user of computing device 1000. Display of image data may be performed by one or more engines 1009 of GPU 1008, display driver 1015, display interface 1016, and the like. Memory device 1004 can include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory system. For example, memory device 1004 can include dynamic random access memory (DRAM). The memory device 1004 can include a device driver 1010 that is configured to execute instructions for device discovery.
Device driver 1010 can be software, applications, application code, and the like. The CPU 1002 can also be coupled via bus 1006 to an input/output (I/O) device interface 1012 that is configured to connect the computing device 1000 to one or more I/O devices 1014. I/O devices 1014 can include, for example, a keyboard and a pointing device, where the pointing device can include a touch pad or touch screen, or the like. I/O device 1014 may be a built-in component of computing device 1000 or may be a device that is externally connected to computing device 1000. In some examples, memory 1004 can be communicatively coupled to I/O device 1014 by direct memory access (DMA). CPU 1002 may also be linked to display interface 1016 via bus 1006, which is configured to connect the computing device 1000 to display device 1018, where display device 1018 may include, for example, one or more of the display structure embodiments included herein, such as, for example, portions of display structure 100 of Figures 1a-1g. Display device 1018 can include a display screen that may or may not be a built-in component of computing device 1000. Display device 1018 may also include a computer monitor, television, or projector, etc., internal to computing device 1000 or externally connected to computing device 1000. In some examples, display device 1018 includes a timing controller that can include an internal clock oscillator. The oscillator can be used to manage the refresh of video data on the display device. In some examples, display device 1018 can also include a sink interface controller that includes a FIFO for receiving video data to be displayed. For example, the FIFO can be of any suitable size, such as from about four kilobytes to ten megabytes or more in size. The computing device also includes a storage device 1020. Storage device 1020 is a physical memory such as a hard drive, an optical drive, a thumb drive, a drive array, or any combination thereof. Storage device 1020 can also include a remote storage drive. Computing device 1000 can also include a network interface controller (NIC) 1026. The NIC 1026 can be configured to connect the computing device 1000 to the network 1028 over the bus 1006. Network 1028 can be a wide area network (WAN), a local area network (LAN), or the Internet, and the like. In some examples, the device can communicate with other devices via wireless technology. For example, Bluetooth® or similar technology can be used to connect with other devices. Computing device 1000 can also include a display controller 1022. Display controller 1022 can be implemented as logic, at least in part, including hardware logic. In other cases, display controller 1022 can be implemented as part of software stored in memory device 1004, implemented as display driver 1015, display interface 1016, engine 1009 of GPU 1008, CPU 1002, or any other suitable controller, as software or firmware instructions, or any combination thereof. In still other cases, display controller 1022 can be implemented as electronic logic that at least partially includes hardware logic executed by an electronic circuit, circuitry executed by an integrated circuit, and the like. Display controller 1022 can be configured to operate independently, in parallel, distributed, or as part of a broader process. In still other cases, display controller 1022 can be implemented as a combination of software, firmware, hardware logic, and the like.
In some examples, display controller 1022 can be used to receive a video transfer request packet and send an acknowledgment response packet to sink interface controller 1024. In some examples, sink interface controller 1024 can be included within display device 1018. Display controller 1022 can transmit a video burst and receive a second acknowledgment response packet from sink interface controller 1024. The sink interface controller 1024 can be used to send a video transfer request packet to the display controller 1022. The sink interface controller 1024 can receive an acknowledgment response and a video burst in response to the request packet. The sink interface controller 1024 can also send an acknowledgment response to the video burst. The block diagram of FIG. 10 is not intended to indicate that computing device 1000 includes all of the components shown in FIG. 10. Rather, computing device 1000 may include fewer or additional components not illustrated in FIG. 10, such as sensors, power management integrated circuits, additional network devices, and the like. Computing device 1000 may include any number of additional components not shown in FIG. 10, depending on the particular implementation. Moreover, any of the functions of CPU 1002 may be implemented partially or completely in hardware and/or in a processor. The various figures included herein, for example, illustrate embodiments that make and utilize display structures that enable multi-sided viewing by a user, such as in a handheld mobile device. The display structure of an embodiment can be incorporated into a display/display device of a computing device, such as the computing system depicted in FIG. 10. The display structures herein can include an emissive layer disposed on one or more controllable opacity layers that can be integrated into a display device (such as display device 1018 of FIG. 10). In an embodiment, display structure 100 is depicted as including an emissive layer 102 that can be disposed on/attached to controllable opacity layer 104 (FIG. 1a). In an embodiment, the controllable opacity layer 104 can include a separate layer or can be included within a portion of the emissive layer 102. In an embodiment, the emissive layer 102 can include a layer that allows light to pass through both directions of the emissive layer (from the first side 103 and the second side 105). In some cases, light can also pass through the sides of the emissive layer 102. The emissive layer 102 can include an array of pixels that can transmit/display content, such as images/video generated from computing devices that can be coupled to the display structure 100. In some embodiments, the emissive layer 102 can include multiple sub-layers. Emissive layer 102 can include, for example, a portion of an organic light emitting diode (OLED), a portion of a quantum light emitting diode (QLED), a quantum dot LED, and/or a variation of a micro LED display. An example of an OLED structure that can be used as the emissive layer 102 is shown in Figures 2a-2c. In an embodiment, OLED structure 200 may comprise an active matrix OLED (AMOLED) and/or a passive matrix OLED (PMOLED) (Fig. 2a, side view).
The OLED structure 200 can include a cathode 201 (which can be a reflective or transparent cathode), an electron transport layer (ETL) 203, a barrier layer (BL) 205, an emissive layer 207 (which can include a host and a phosphorescent OLED emitter (PHOLED)), a hole transport layer (HTL) 209, a hole injection layer (HIL) 211, an anode 213 (which may be reflective or transparent), and a substrate 215 (which may be transparent and may provide support for the OLED structure 200) (Fig. 2a). The OLED structure 200 can include a barrier material (not shown) that can provide coverage disposed on the OLED structure 200 and can be used to protect the OLED 200 from, for example, oxygen, humidity, and physical damage. The OLED structure 200 can generate images/videos that can be displayed on the display screen (via an array of pixels located in/on the emissive layer 207) depending on the requests and inputs received from the computing device. In Figure 2b (side cross-sectional view), a voltage 220 can be applied between the cathode 201 and the anode 213 of the OLED 200, wherein holes and electrons are injected from the HIL and ETL layers 211, 203, respectively, into the emitter 207 (such as an organic emitter within the emissive layer 207). For example, the organic emitter of emissive layer 207 can include pixels 208, such as red, green, and blue (RGB) pixels. OLED pixel 208 can scatter light in all directions and is also known as a Lambertian device. The holes and electrons can recombine within the organic emitter/pixel 208 of the emissive layer 207, and once recombined, a light pulse 221 can be created based on the energy pulses generated by the recombination, which can include, for example, RGB colors. Figure 2c depicts a cross-sectional view of an OLED structure 200 including a cover (such as a sealing cover glass 219) and a buffer layer 217 (including a desiccant, etc.) disposed on the cathode 201. The electron injection layer 202 is disposed on the ETL 203. The emissive layer 207 (including pixels) may be disposed on the HTL 209, and the HTL 209 may be disposed on the HIL layer 211. A thin film transistor (TFT) layer 212 can be disposed over the substrate 215 and can be between the anode 213 and the substrate 215. In an embodiment, the anode 213 may comprise an indium tin oxide (ITO) material. The TFT structure 212 can be used to drive an OLED structure in which the image displayed on the display screen is dependent on the current that each pixel of the emissive layer 207 can receive. The OLED structure 200 can include a solid state semiconductor including a carbon-based emitter material that emits light when power is applied. The pixel array of emissive layer 207 can include an array of individual pixels that emit/scatter light in all directions. Referring back to FIG. 1a, the controllable opacity layer 104 can be disposed on one side, such as the second side 105 of the emissive layer 102, or in other embodiments can be disposed on the first side 103. The controlled/controllable opacity layer 104 can include an optically transparent material/structure, wherein the opacity of the controllable opacity layer 104 can be controlled/changed based on a particular viewing direction/image requirement of a particular computing device and/or a particular viewer/user demand.
The controllable opacity layer/structure 104 can include layers that can change opacity to selectively enable different viewing directions, and the opacity can be altered by electrical means, such as by selection of an appropriate current or voltage level. The opacity changeable layer 104 can be made from a variety of structures/materials, such as liquid crystal (LC) materials, electronic ink structures, shutter structures, electrochromic structures, and the like. The controllable opacity layer 104 can include transparency that can be optimized, and can include transparency at non-binary levels (other than full or complete blocking). For example, the controllable opacity layer 104 can be designed to change the degree of opacity at the pixel level or at a larger block size level, depending on the material selected and the desired function. The controllable opacity layer can modulate the level of opacity in response to any suitable control mechanism, such as mechanical and/or electrical control mechanism inputs received from the computing device. In an embodiment, the controllable opacity layer 104 can include portions of a liquid crystal display (LCD) structure. Figures 3a-3c depict an example of an LCD structure that can be used, for example, as the controllable opacity layer 104 in the display structure 100 of Figure 1a. In FIG. 3a (side view), LCD structure 300 can include a first polarizer 320, an LC material 322, a substrate 324 (which can include a TFT layer and can also include electrodes), a second substrate 324' (which can include electrodes), and a second polarizer 320'. In an embodiment, the LCD structure 300 does not include a color filter or a backlight unit. In an embodiment, light 325 can pass through LCD structure 300 with no voltage applied between electrodes 324, 324' (Fig. 3b). In Figure 3c, voltage 327 is applied between electrodes 324, 324' and light 325 is not allowed to pass through LCD structure 300. Thus, by selecting the appropriate voltage 327 level to control LC material 322, light can be allowed or blocked from passing through LCD structure 300. The opacity state of LCD structure 300 (which may be incorporated into display structure 100, such as in Figure 1a) may be controlled by a TFT layer, which may be included in either of electrodes 324, 324'. The exact structure and layout of the LCD structure 300 can be varied depending on the design requirements for a particular application, as well as the level of opacity control required. For example, individual pixels, sub-pixels, or larger regions may be controlled by TFT circuitry. In an embodiment, the controllable opacity layer 104 of FIG. 1a can be designed to change the opacity at the pixel level, or in other embodiments can be configured to change the opacity at the block size level, depending on material selection and the desired function. For example, when the LC material of the controllable opacity layer 104 includes a 640x480 resolution and the OLED layer of the emissive layer 102 includes a 4096x2160 resolution, the controllable opacity layer 104 can be controlled in blocks that are larger than the high resolution OLED display pixels, thereby simplifying the LC layer electronics. In some embodiments, the controllable opacity layer 104 can be controlled to produce an intermediate opacity level (such as a translucent state) rather than a transparent or opaque binary state opacity level.
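As a rough illustration of the block-level control just described, the following C sketch maps high-resolution OLED pixel coordinates onto the coarser blocks of an LC opacity layer, using the 640x480 and 4096x2160 resolutions from the example above. The lc_set_block_opacity driver hook and the function names are hypothetical, introduced only for this sketch.

#include <stdint.h>

/* Resolutions taken from the example above: a coarse LC opacity layer
 * over a high-resolution OLED emissive layer. */
#define OLED_W 4096
#define OLED_H 2160
#define LC_W    640
#define LC_H    480

/* Hypothetical driver hook: set one LC block to an opacity level,
 * where 0 is fully transparent and 255 is fully opaque. Non-binary
 * values model the translucent states described above. */
extern void lc_set_block_opacity(int bx, int by, uint8_t level);

/* Map an OLED pixel coordinate to the LC block covering it. Each LC
 * block spans OLED_W/LC_W by OLED_H/LC_H OLED pixels (6.4 by 4.5
 * here, so block boundaries only approximate pixel boundaries). */
static void oled_to_lc_block(int px, int py, int *bx, int *by)
{
    *bx = (px * LC_W) / OLED_W; /* 0 .. LC_W - 1 */
    *by = (py * LC_H) / OLED_H; /* 0 .. LC_H - 1 */
}

/* Shade the region of the opacity layer covering a rectangle of OLED
 * pixels, e.g. to dim part of one viewing side of the display. */
void shade_region(int x0, int y0, int x1, int y1, uint8_t level)
{
    int bx0, by0, bx1, by1;
    oled_to_lc_block(x0, y0, &bx0, &by0);
    oled_to_lc_block(x1, y1, &bx1, &by1);
    for (int by = by0; by <= by1; by++)
        for (int bx = bx0; bx <= bx1; bx++)
            lc_set_block_opacity(bx, by, level);
}

The coarse mapping is what simplifies the LC layer electronics: the opacity layer needs only 640x480 control elements regardless of the OLED panel resolution.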
For example, the controllable opacity layer 104 can allow/select a blurred, low-light image to be displayed on the viewable side of the display screen, or the controllable opacity layer 104 can allow/select a darkened or shaded effect to be displayed on the display screen. Various effects can be employed by allowing/selecting portions of the image to be transmitted from the controllable opacity layer 104. Such effects can be utilized, for example, in security applications. In FIG. 1a, a primary/first viewing direction 107 of the display structure 100 is illustrated, wherein the controllable opacity layer 104 is configured to block images generated from/through the display structure 100 from being viewed from the backside (secondary) viewing direction 109. The primary and secondary viewing directions 107, 109 may correspond to, for example, the front side of the display screen (such as the front side of a laptop, or of any computing device including the display) and the back side of the laptop (see Figures 4a and 4c, respectively showing a front side view 407 and a back side view 409). In other embodiments, the primary and secondary views 107, 109 may correspond to other directions, such as first and second directions. In FIG. 1b, the controllable opacity layer 104 can be configured to allow viewing of an image from the backside (secondary) viewing direction 109 of the display structure 100 by allowing light to pass through the sides of the emissive layer 102. In another embodiment, the emissive layer 102 can be disposed between the first controllable opacity layer 104 and the second controllable opacity layer 104' (Fig. 1c). Simultaneous viewing from the front side view 107 and the back side view 109 of the display device is enabled when both the first and second controllable opacity layers 104, 104' are selected to be transparent, such as via a control mechanism, such as a voltage. In another embodiment, the second viewing direction 109 (FIG. 1d) or the first viewing direction 107 (FIG. 1e) may be selected by blocking the first controllable opacity layer 104 or the second controllable opacity layer 104', respectively; a sketch of these configurations follows below. In an embodiment, a cover lens can be included within at least a portion of one of the first controllable opacity layer 104 or the second controllable opacity layer 104'. In another embodiment, a cover lens can be included in both the first and second controllable opacity layers 104, 104'. In an embodiment, display structure 100 can include a lens substrate or a cover lens. The substrate/cover lens may also comprise a plastic material or any other material used to protect the display. In an embodiment, the cover lenses 110, 110' can be incorporated into at least one of the controllable opacity layers 104, 104' (Fig. 1f). The substrate/cover lenses 110, 110' can be used to protect the display 100 from scratches and/or other types of physical damage. In another embodiment, display structure 100 can include at least one separate substrate/cover lens 110, 110' disposed on at least one of the opacity-controlled layers 104, 104' (Fig. 1g). In other embodiments, at least one of the controllable opacity layers 104, 104' can include a touch and/or stylus layer. In another embodiment, the touch and/or stylus layer can be disposed on the surface of the cover lens/protective substrate 110, 110', or in another embodiment, the touch and/or stylus layer/structure can be incorporated into the emissive layer 102.
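Returning to the viewing-direction selection of Figures 1c-1e above, the following C sketch enumerates the three opacity-layer configurations. All identifiers are hypothetical; they stand in for whatever electrical control mechanism (e.g. a voltage, as described above) a given implementation exposes.

/* Viewing-mode selection corresponding to Figures 1c-1e: with both
 * opacity layers transparent the image is viewable from both sides
 * (Fig. 1c); blocking layer 104 leaves only the second viewing
 * direction 109 (Fig. 1d); blocking layer 104' leaves only the first
 * viewing direction 107 (Fig. 1e). */
typedef enum { LAYER_104, LAYER_104_PRIME } layer_t;
typedef enum { TRANSPARENT, OPAQUE } opacity_t;
typedef enum { VIEW_BOTH, VIEW_DIR_107_ONLY, VIEW_DIR_109_ONLY } view_mode_t;

/* Hypothetical hook that drives one opacity layer's control input. */
extern void set_layer_opacity(layer_t layer, opacity_t state);

void select_view_mode(view_mode_t mode)
{
    switch (mode) {
    case VIEW_BOTH:          /* Fig. 1c: both layers transparent */
        set_layer_opacity(LAYER_104, TRANSPARENT);
        set_layer_opacity(LAYER_104_PRIME, TRANSPARENT);
        break;
    case VIEW_DIR_109_ONLY:  /* Fig. 1d: block layer 104 */
        set_layer_opacity(LAYER_104, OPAQUE);
        set_layer_opacity(LAYER_104_PRIME, TRANSPARENT);
        break;
    case VIEW_DIR_107_ONLY:  /* Fig. 1e: block layer 104' */
        set_layer_opacity(LAYER_104, TRANSPARENT);
        set_layer_opacity(LAYER_104_PRIME, OPAQUE);
        break;
    }
}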
In other embodiments, at least one of the controllable opacity layers 104, 104' can be integrated into the emissive layer 102. In an embodiment (see Figures 4a-4c), a user may view the display of image 440 from a first side 407 (which may include the front side of display screen 405 of computing device 400, such as, but not limited to, a laptop computer), wherein the display structure (e.g., such as any of the display structures in FIGS. 1a-1g) can be incorporated into the display/computing device, and the opacity of the controllable opacity layer can be controlled such that the controllable opacity layer can block light from being emitted from the back side 409 of the display screen 405 of the computing device 400 (Fig. 4a) or can block light from being emitted from the front side of the display screen (Fig. 4b). In an embodiment, computing device 400 can be folded/closed, and display image 440 can be viewed on the back side 409 of the computing device display screen when the computing device display screen is closed (Fig. 4c). In another embodiment, the controllable opacity layer can simultaneously allow a front side view 407 and a back side view 409 of the image 440. In an embodiment, the front side and the back side may be opposite each other. In another embodiment, the opacity controlled layer 104 may allow a portion of the light to be emitted from the first or second side 507, 509 of the display screen 505 and displayed (Figs. 5a-5b). In an embodiment, a partial view of image 540 may be located at a central portion of the display device/screen of computing device 500, or may be located at any other portion of the display/device. In an embodiment, the image may be viewed in multiple partial views that may be present within one display device/screen. In another embodiment, a partial view may be located in a vertical or horizontal portion of display screen 605 of computing device 600, and image 640 may be displayed on the front side (Figs. 6a, 6c) or back side (Figs. 6b, 6d) of display screen 605 of computing device 600. Thus, the display structure of Figures 1a-1g enables a split screen display in a computing device. Figures 7a-7e depict an embodiment that can be employed with computing device 700. In FIG. 7a, computing device 700 can display an image on back side 709. A display structure 100 that can be incorporated into computing device 700 is depicted in FIG. 7d and can include a first controllable opacity layer 104 disposed on emissive layer 102. Display structure 100 may block the emission of images/light from the front side 707 of computing device 700. The second controllable opacity layer 104' may allow the light/image 740 to be viewed from the back side 709 of the computing device 700. In Figure 7b, computing device 700 (such as a laptop computer) can display image 740 through the front side 707 of display screen 705 of computing device 700, and in Figure 7c, computing device 701 (such as a mobile phone/handheld device) may also display the image through at least a portion of the front side 707 of the display screen 705. In an embodiment, both computing devices 700, 701 can block light from the back side 709 of the computing device.
Figure 7e depicts a display structure 100 that is incorporated into a display screen of computing devices 700, 701, which may include a first controllable opacity layer 104 that allows images/light 740 to be emitted from the first side 707 of the display screen of the computing device 700, 701, while the second controllable opacity layer 104' can block light/images from being viewed from the back side 709 of the display screen of the computing device 700, 701. Figures 8a-8g further depict embodiments in which the display structure 100 of Figures 1a-1g can be employed, for example. In an embodiment, the back side 809 of the display screen 805 of the computing device 800 (Fig. 8a) including the hinge 830 can have a first portion of the back side 809 that incorporates the display structure 100 in accordance with the present embodiments, and a second portion of the back side 809' that does not merge/include the display structure 100. Only a portion of the back side of display 805 includes a viewable image 840 (Fig. 8b) when it is folded. In another embodiment, the display screen 805 (Fig. 8c) of the computing device 800 can include two portions separated by a hinge mechanism 830, such as the back sides 809, 809' of the display screen 805 separated by a hinge 830, both of which may include display structure 100, such that an image/video or the like 840 may be seen from the first portion and the second portion 809, 809' (Figs. 8d, 8e) when device 800 is closed/folded, regardless of the orientation of computing device 800. In other embodiments, the viewable portion can be employed with respect to the front side of display device 800. In another embodiment, a display structure, such as the display structure of Figures 1a-1g, can be incorporated into a rollable display device 800 (such as depicted in Figures 8f-8g). In an embodiment, when the rollable display is unfolded in the first orientation 807 (FIG. 8f), the sensors incorporated into the display 800 can detect the orientation/direction of the display screen 805 so that the image 840 can be viewed on the first side 807 of the display screen 805. In another embodiment, the sensor can detect that the rollable display device 800 is oriented in the second direction 809 (Fig. 8g) so that the image can be displayed on the second side 809 of the rollable display device screen. In other embodiments, the display device can be configured to allow a user to select a first viewable side, a second viewable side, or both first and second viewable sides to view an image on the rollable display side. Embodiments of the display system/structure herein describe a new way of constructing a display device that includes multiple viewable sides. When an OLED is used in a display structure, such as display structure 100, light is typically directed to the user to improve performance. Embodiments herein may utilize the capabilities of such a Lambertian device to emit light in all directions. For example, the display structures herein may include various viewable embodiments when incorporated into a computing system, such as a laptop or mobile phone. In an embodiment, the display is viewable when closed (i.e., in a flat view). Embodiments herein can be incorporated into foldable displays and devices. In a typical folded display, the device is folded inward and cannot be seen without opening the device. Embodiments herein enable an entire display or a portion of the display to be viewed from the outside.
For example, this is very useful for notifications or other displayed content. This feature avoids adding a second display to allow viewing while the device is closed, or having to open the device to retrieve information. Embodiments may be utilized in notebooks, two-in-one devices, tablet devices, point of sale devices, and/or any foldable devices. No backlight is needed, so the device can be thinner. For example, the device can be fabricated to a thickness of approximately 0.79 mm or less. FIG. 9 depicts a method 900 of viewing an image on a display screen in accordance with an embodiment described herein. At step 902, a display device is provided that includes an emissive layer disposed between a first controllable opacity layer and a second controllable opacity layer, wherein the display device includes a portion of a display screen of the computing device. At step 904, at least one of a first viewable direction or a second viewable direction is selected. For example, at least one of the opacity controlled layers may allow images to be viewed from the front or back side of the display screen depending on the desired selection. In other embodiments, both the first and second viewable sides can be viewed simultaneously. At step 906, light is at least partially blocked from being emitted from one of the first and second controllable opacity layers in response to the selection, wherein the blocked one of the first and second controllable opacity layers is disposed on the side of the emissive layer opposite the one of the first and second controllable opacity layers that is not blocked. In an embodiment, at least one of the opacity controlled layers allows for emission at a non-binary opacity level from either the front side or the back side of the display screen. Embodiments of the display structures included herein can be used in system-on-a-chip (SOC) products, and can find application in devices such as smart phones, notebooks, tablets, wearable devices, and other electronic mobile devices. In various implementations, the package structure can be included in a laptop, ultrabook, personal digital assistant (PDA), ultra mobile PC, mobile phone, desktop computer, server, printer, scanner, monitor, set top box, entertainment control unit, digital camera, portable music player, or digital video recorder.
In further implementations, the display structures described herein can be included in any other type of electronic device, such as those that process data.

EXAMPLES

Example 1 is a display device including an emissive layer, the emissive layer including a pixel array, wherein each of the individual pixels of the pixel array is capable of emitting light in at least two directions; and a controllable opacity layer disposed on the emissive layer, wherein the controllable opacity layer is capable of at least partially blocking light emission from the pixel array. Example 2 includes the display device of Example 1, wherein the display device includes a first viewing side and a second viewing side. Example 3 includes the display device of Example 1, wherein the emissive layer comprises one of: an organic light emitting diode (OLED) structure, a quantum dot LED structure, or a micro LED structure. Example 4 includes the display device of Example 1, wherein the controllable opacity layer comprises at least one of: a liquid crystal material, an electronic ink structure, an electrochromic structure, or a shutter structure. Example 5 includes the display device of Example 1, wherein a second controlled opacity layer is disposed on a second side of the emissive layer. Example 6 includes the display device of Example 5, wherein the display device is electrically and physically coupled to a computing device, and wherein an image generated by the computing device is viewable from at least one of a first viewing side or a second viewing side of the display device. Example 7 includes the display device of Example 1, wherein the controllable opacity layer comprises an integrated touch or stylus function. Example 8 includes the display device of Example 6, wherein the controllable opacity layer is capable of modulating an opacity level in response to an electrical signal received from the computing device. Example 9 is a display structure comprising: an emissive layer including a pixel array, wherein the pixel array is capable of emitting light in at least two directions; a first controllable opacity layer disposed on a first side of the emissive layer, wherein the first controllable opacity layer is capable of at least partially blocking light emission from the pixel array; and a second controlled opacity layer on a second side of the emissive layer, wherein the second controllable opacity layer is capable of at least partially blocking light emission from the pixel array. Example 10 includes the display structure of Example 9, wherein the display device comprises at least one of a foldable display device or a rollable display device. Example 11 includes the display structure of Example 9, wherein the controllable opacity layer is optically transparent and includes a controllable opacity level. Example 12 includes the display structure of Example 9, wherein the display structure is included in a display screen of a computing device, and wherein the images generated by the computing device are viewable from the first side and the second side of the display screen, wherein the first side and the second side are opposite each other. Example 13 includes the display structure of Example 9, wherein the controllable opacity layer is capable of changing the opacity by pixel blocks of the array or by individual pixels. Example 14 includes the display structure of Example 9, wherein the opacity controllable structure is capable of changing from a transparent level to an
opaque level in response to an electrical signal received from a computing device coupled to the display structure. Example 15 includes the display structure of Example 14, wherein the image generated by the computing device is viewable from a first side of a display screen of the computing device and from a second side of the display screen of the computing device. Example 16 includes the display structure of Example 15, wherein the computing device comprises a foldable laptop, and wherein the second side comprises a back side of the foldable laptop that is viewable in a closed position of the foldable laptop. Example 17 is a system comprising: a processor for processing data; a memory for storing data; and a display device comprising: an emissive layer, the emissive layer comprising a pixel array, wherein the pixel array is capable of emitting light in at least two directions; a first controllable opacity layer disposed on a first side of the emissive layer, wherein the first controllable opacity layer is capable of at least partially blocking light emission from the pixel array; and a second controllable opacity layer disposed on a second side of the emissive layer, wherein the second controllable opacity layer is capable of at least partially blocking light emission from the pixel array. Example 18 includes the system of Example 17, wherein the first controllable opacity layer is capable of blocking viewing of the image generated by the system from one of a first side of a display screen of the display device or a second side of the display screen of the display device. Example 19 includes the system of Example 18, wherein the second controllable opacity layer is capable of blocking viewing from one of the first side of the display screen or the second side of the display screen. Example 20 includes the system of Example 17, wherein the system comprises one of: a laptop, a notebook, a 2-in-1 device, a mobile device, a foldable device, or a rollable display device. Example 21 includes the system of Example 17, wherein the display device comprises a display screen, wherein the display device is configured to allow an image to be displayed in a first portion or portions of the display screen, and wherein at least a portion of the display screen is configured to block an image from the display screen. Example 22 includes the system of Example 21, further comprising a split screen in a horizontal portion or a vertical portion of the display screen in which the first portion or portions are configured to display an image. Example 23 includes the system of Example 21, wherein the first portion or portions configured to display an image are located in a central portion of the display screen. Example 24 includes the system of Example 17, wherein the emissive layer comprises one of: an organic light emitting diode (OLED) structure, a quantum dot LED structure, or a micro LED structure. Example 25 includes the system of Example 17, wherein the controllable opacity layer comprises at least one of: a liquid crystal material, an electronic ink structure, an electrochromic structure, or a shutter structure. Example 26 is a method of displaying an image on a display screen, comprising: providing a display device including an emissive layer disposed between a first controllable opacity layer and a second controllable opacity layer, wherein the display device comprises a portion of a display screen of a computing device; selecting at least one of a first
viewable direction or a second viewable direction; and, responsive to the selecting, at least partially blocking light emission from one of the first and second controllable opacity layers, wherein the blocked one of the first and second controllable opacity layers is disposed on a side of the emissive layer opposite the one of the first and second controllable opacity layers that is not blocked. Example 27 includes the method of Example 26, wherein the first and second controllable opacity layers comprise at least one of: a liquid crystal material, an electronic ink structure, an electrochromic structure, or a shutter structure. Example 28 includes the method of Example 26, wherein the emissive layer comprises one of: an organic light emitting diode (OLED) structure, a quantum dot LED structure, or a micro LED structure. Example 29 is at least one computer readable medium for selecting a viewable direction of a display screen of a computing device, the at least one computer readable medium having instructions stored thereon that, in response to being executed on a computing device, enable the computing device to: select, via a processor, at least one of a first viewable direction or a second viewable direction of the display structure of Example 26, and, in response to the selecting, at least partially block light from being emitted from one of the first and second controllable opacity layers, wherein the blocked one of the first and second controllable opacity layers is disposed on a side of the emissive layer opposite the one of the first and second controllable opacity layers that is not blocked. While the foregoing description has specified certain steps and materials that can be used in the methods of the various embodiments, those skilled in the art will understand that many modifications and alternatives are possible. Therefore, all such modifications, variations and substitutions are intended to be within the spirit and scope of the embodiments as defined by the appended claims. Additionally, the figures provided herein show only certain portions of the exemplary microelectronic devices and associated package structures that are involved in the implementation of the various embodiments. As such, the various embodiments are not limited to the structures described herein. |
A computer system having multiple components capable of being in either a wake or sleep state includes a controller and a voltage regulator. The controller may generate a power state status signal indicating the power states of the components, and this signal may be provided to the voltage regulator. In response, the voltage regulator increases its output voltage level to the components when the power state status signal indicates that the components enter a sleep state. |
CLAIMS What is claimed is: 1. A computer system comprising : a controller to generate a power state status signal indicating the power states of a first plurality of components of the computer system; and a voltage regulator coupled to the plurality of components to increase an output voltage level to the first plurality of components when the first plurality of components enter a sleep state, as indicated by the power state status signal. |
METHOD AND APPARATUS FOR REGULATING THE VOLTAGE SUPPLIED TO A COMPUTER SYSTEM

The present invention relates to computer systems and more particularly to reducing overall transient voltage ranges of a supply voltage from a voltage regulator resulting from variations in the supply current from the voltage regulator.

BACKGROUND

Computer systems, from small handheld electronic devices to medium-sized mobile and desktop systems to large servers and workstations, are becoming increasingly pervasive in our society. Computer systems typically include one or more processors. A processor manipulates and controls the flow of data in a computer by executing instructions. To provide more powerful computer systems for consumers, processor designers strive to continually increase the operating speed of the processor. Unfortunately, as processor speed increases, the power consumed by the processor tends to increase as well. Historically, the power consumed by a computer system has been limited by two factors. First, as power consumption increases, the computer tends to run hotter, leading to thermal dissipation problems. Second, the power consumed by a computer system may tax the limits of the power supply used to keep the system operational, reducing battery life in mobile systems and diminishing reliability while increasing cost in larger systems. One method of reducing the amount of electric power drawn by a computer system is to design the system such that it is capable of operating in two different modes. In a first mode of operation, only the most vital functions of the system, such as those dedicated to monitoring for user input, are active. This may be referred to as a "sleep mode." During the sleep mode, the computer system draws very little power from the voltage regulator (alternatively referred to as the power/voltage/Vcc supply or power/voltage/Vcc source). In a second mode of operation, the computer system is busy executing instructions to accomplish a particular task. This is referred to as the "wake mode." During the wake mode, the computer system consumes a significant amount of power from the power supply. Unfortunately, there is a side effect associated with switching a computer system between sleep and wake modes. The rapid change in current drawn from the power supply when the computer switches between modes causes fluctuations in the voltage supplied to the computer by the voltage regulator. Going from a wake mode to a sleep mode may cause a rapid decrease in current, resulting in an upwardly spiking voltage transient. Similarly, going from a sleep mode to a wake mode may cause a rapid increase in current, resulting in a downwardly spiking voltage transient. The present invention addresses this and other issues associated with the prior art.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example and not limitation in the accompanying figures, in which like references indicate similar elements and in which: Figure 1 includes a computer system formed in accordance with an embodiment of the present invention; Figure 2 includes a timing diagram in accordance with an embodiment of the present invention; Figure 3 includes a circuit in accordance with an embodiment of the present invention; Figure 4 includes a timing diagram in accordance with another embodiment of the present invention; and Figure 5 includes a flow chart showing a method of the present invention.
DETAILED DESCRIPTION

In accordance with an embodiment of the present invention, a voltage regulator supplies power (alternatively referred to as a voltage level or Vcc level) to multiple devices within a computer system. The voltage level from the voltage regulator may be maintained at a first voltage level that is below the nominal voltage level of the regulator while the devices are in a wake state. In doing so, less power is consumed by the computer system because power consumption is proportional to the square of the voltage level. The voltage level from the voltage regulator may then be increased to a second voltage level when the devices switch to a sleep state. The output of the voltage regulator may be set to an intermediate voltage level, between the first and second voltage levels, when some of the devices are in a wake state and some of the devices are in a sleep state. Wake and sleep states of the devices are indicated by a power state status signal provided to the voltage regulator. The associated output voltage levels from the voltage regulator are predetermined to be values that will maintain the voltage levels within an appropriate tolerance range despite voltage transients. These voltage transients are the expected result of current fluctuations associated with transitions between wake and sleep states of the devices. A more detailed description of embodiments of the present invention, including various configurations and implementations, is provided below.

As used herein, the terms "wake" and "sleep" are relative indications of the power state of a device. A device in a wake state may generally consume more power, on average, than the same device in a sleep state. In accordance with one embodiment of the present invention, a device in a wake state is either in an operational state or is ready for operation (i.e., receiving, transmitting, or accessing data, or ready to receive, transmit, or access data). A device in a sleep state is in a non-operational state. For example, a hard drive, floppy drive, or DVD may be considered to be in a wake state while its storage medium is spinning and in a sleep state while its storage medium is not spinning (or is spinning at a speed that is less than a predetermined speed). For one embodiment of the present invention, the terms "wake" and "sleep" may be interpreted in accordance with the ACPI specification (Advanced Configuration and Power Interface Specification, Rev. 2.0, published July 27, 2000, by Compaq, Intel, Microsoft, Phoenix, and Toshiba), but are not to be so limited. Note that what is referred to herein as a sleep state may alternatively be referred to as an inactive, power-down, deep power-down, deep sleep, low-power, or idle state.

In accordance with one embodiment of the present invention, the power state status signal provided to the voltage regulator to indicate wake and sleep states of the devices in the computer system may be a signal defined by the ACPI specification. For example, the power state status signal may be the SLP_S3# signal, as described in the ACPI specification. Alternatively, the power state status signal may be any signal generated by any controller within the computer system to indicate the power state of individual or multiple devices within the system.
This controller may reside centrally within a hub or bridge (often contained in a chipset) of a computer system (as described in more detail below), or, alternatively, it may reside centrally within another device of the computer system, or as a discrete component. In accordance with an alternate embodiment, it may be distributed across multiple devices or discrete components of the computer system. For example, each device coupled to a voltage regulator may send its own power state status signal separately to the voltage regulator to indicate its power state. It is to be noted that the power state status signal provided to the voltage regulator may indicate a power state change of an associated device (or of multiple devices) before, after, or during the power state transition of the device. As used herein, the term "when" is used to indicate the temporal nature of any of these power state transitions. For example, the phrase "a signal is sent to the voltage regulator when the device enters the sleep state" is to be interpreted to mean that the signal may be sent before, after, or during the transition into the sleep state, but is nonetheless associated with that transition into the sleep state.

Figure 1 includes a computer system formed in accordance with an embodiment of the present invention. Processor 101 is coupled to Hub A 105 to communicate with memory 107, graphics device 106, and Hub B. Hub B is, in turn, coupled to several peripheral input/output devices, including, for example, keyboard 110, modem 111, audio device 112, floppy disk drive 113, hard disk drive 114, and DVD 115. The computer system of Figure 1 additionally includes multiple voltage regulators (VRs) to supply power at different voltage levels to the various components of the system. For example, VR1 102 supplies power to processor 101. VR2 103 supplies power to both processor 101 and to Hub A 105. VR3 104 supplies power to graphics device 106. VR4 105 supplies power to Hub A 105, memory 107, and to Hub B 109. VR5 116 also supplies power to Hub B 109 as well as to keyboard 110, modem 111, audio device 112, floppy disk drive 113, hard disk drive 114, and DVD 115. Note that some voltage regulators supply power to a single component while other voltage regulators supply power to multiple components. In addition, some components receive a voltage supply from only a single voltage regulator while other components receive multiple voltage supplies from multiple voltage regulators. It is to be appreciated that in accordance with alternate embodiments of the present invention, alternate couplings of voltage regulators to these and other components of a computer system may be implemented.

Multiple components of the computer system of Figure 1 may be capable of entering wake and sleep states. For example, as described above, hard disk drive 114 and DVD 115 may be considered to be in a sleep state when their respective storage mediums are not spinning. Other components, such as processor 101, may have various wake and sleep states. For example, processor 101 may have a fully operational wake state, a partially operational wake state, a partial sleep state, a regular sleep state, a deeper sleep state, etc. These different levels of wake and sleep states may have various current consumption levels associated with them.
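Where each device reports its power state separately, as in the distributed alternative above, the collection of status lines behaves like a status word. The following C fragment is a minimal sketch of that bookkeeping; the bit assignments, names, and the idea of counting sleeping devices are illustrative assumptions, not taken from the patent.

    /* Sketch: per-device power states packed into a status word.
     * Bit positions are illustrative, not the ACPI encoding. */
    #include <stdint.h>

    enum {
        DEV_KEYBOARD = 1u << 0, DEV_MODEM = 1u << 1, DEV_AUDIO = 1u << 2,
        DEV_FLOPPY   = 1u << 3, DEV_HDD   = 1u << 4, DEV_DVD   = 1u << 5
    };

    /* Count how many devices report a sleep state, so the regulator can
     * choose among its predetermined output levels. */
    static int sleeping_count(uint32_t sleep_mask)
    {
        int n = 0;
        while (sleep_mask) {
            n += (int)(sleep_mask & 1u);
            sleep_mask >>= 1;
        }
        return n;
    }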
In accordance with an embodiment of the present invention, the power state status of the various components of the computer system of Figure 1 may be indicated to one or more voltage regulators to appropriately set the output voltage levels supplied to the components. The power state status may be provided to a voltage regulator by a power state status signal via a power state status signal line. For example, the power state of peripheral devices 110-115 of Figure 1 may be indicated to VR5 116 by a power state status signal via power state status signal line 117.

In accordance with one embodiment of the present invention, the power state status signal provided to VR5 116 via signal line 117 of Figure 1 indicates the power state of multiple ones of peripheral devices 110-115. For example, in accordance with an embodiment in which the power state status signal is the SLP_S3# signal in an ACPI-compliant computer system, the power state status signal may indicate the power state of drives 113-115, collectively. In accordance with an alternate embodiment of the present invention, the power state status signal is a serial or parallel signal that indicates the power state of various components independently, collectively, or in any grouping.

Figure 2 includes a timing diagram of the current and voltage from a voltage regulator to a plurality of components of a computer system in accordance with an embodiment of the present invention. The current from the voltage regulator is shown in timing diagram 200. As shown, during period of time 210 in diagram 200, the components powered by the voltage regulator are in a wake state and consume a first amount of current. During period of time 211, the components are in a sleep state and consume a lesser amount of current. During period of time 212, the components reenter the wake state and again consume the first amount of current.

Conventionally, the voltage level output of the voltage regulator is set to the nominal voltage, as shown in timing diagram 201 of Figure 2. This nominal voltage may be, for example, five volts, but may alternatively be any target voltage sufficient to power the components supplied by the voltage regulator. Typically, there is a tolerance range within which the voltage level output of the voltage regulator may fluctuate while still enabling the associated components to operate properly and allowing the computer system to perform within established electrical, thermal, and other specified limits. As shown in Figure 2, this tolerance range may be +/-5% of the nominal voltage level. This tolerance range may vary in alternate embodiments of the present invention, and may not necessarily be symmetrical about the nominal voltage level.

When the power state of components of the computer system transitions from a wake state 210 to a sleep state 211, an upwardly spiking voltage transient occurs as shown in timing diagram 201 of Figure 2. The computer system is designed such that this upwardly spiking voltage transient remains within the tolerance range. When the power state of components of the computer system transitions from a sleep state 211 to a wake state 212, a downwardly spiking voltage transient occurs as shown in timing diagram 201. The computer system is designed such that this downwardly spiking voltage transient remains within the tolerance range.
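As a worked example using the figures above (the 5 V nominal and +/-5% tolerance are the specification's examples; the arithmetic is added here only for illustration):

$$5\,\mathrm{V} \pm 5\% = [4.75\,\mathrm{V},\; 5.25\,\mathrm{V}],$$

and because power consumption is proportional to the square of the voltage level, holding the wake-state output near the lower edge rather than at nominal yields

$$\left(\frac{4.75}{5.0}\right)^2 \approx 0.90,$$

i.e., roughly a 10% reduction in power for the same load.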
In accordance with an embodiment of the present invention, a power state status signal is used to regulate the target voltage level from the voltage regulator in a manner that reduces the power consumption of the computer system. For example, as shown in timing diagram 202 of Figure 2, the voltage level output from the voltage regulator may be set at or near the lower end of the tolerance range while the components are in a wake state during period of time 210, as indicated to the voltage regulator by a power state status signal. For an alternate embodiment of the present invention, this voltage level may be set to any intermediate value between the lower end of the tolerance range and the nominal voltage level during period of time 210.

When the components transition to the sleep state during period of time 211, the voltage level output from the voltage regulator may spike up. This transition is indicated to the voltage regulator by the power state status signal. Instead of dropping back to the initial voltage level, the voltage level target may be reset to a higher value during period of time 211, as shown in timing diagram 202 of Figure 2. In accordance with one embodiment of the present invention, this higher value may be at or near the nominal voltage level. In accordance with an alternate embodiment of the present invention, this voltage level may be at or near a voltage level that can accommodate a downwardly spiking voltage transient without allowing the voltage level to fall below the lower end of the tolerance range.

As shown in timing diagram 202 of Figure 2, when the components transition back to the wake state during period of time 212, as indicated by a power state status signal to the voltage regulator, the voltage level output from the voltage regulator may spike down, but remains within the tolerance range. Instead of dropping back to the voltage level set during period of time 211, the voltage level target may be reset to a different value during period of time 212. As shown in timing diagram 202, this value may be the initial voltage level set during wake period 210, near the lower end of the tolerance range.

Figure 3 includes a circuit in accordance with an embodiment of the present invention. The circuit of Figure 3 is the voltage regulator that supplies power to a computer system formed in accordance with an embodiment of the present invention. The power state status signal may be coupled to the gate of n-channel transistor 330. The source of transistor 330 is coupled to ground while its drain is coupled to one end of resistor 331. The opposite end of resistor 331 is coupled to the inverting input of comparator 334. The non-inverting input to comparator 334 is coupled to a constant reference voltage Vref. The output of comparator 334 is coupled to an input of voltage regulator 335. The output of voltage regulator 335 is fed back to the inverting input of comparator 334 through resistor 333. The inverting input to comparator 334 is coupled to ground through resistor 332.

When the inverting input to comparator 334 of Figure 3 falls below the reference voltage Vref, the output of comparator 334 sends a signal to voltage regulator 335 to increase the voltage V at its output. Conversely, when the inverting input to comparator 334 is raised above the voltage value Vref, comparator 334 sends a signal to voltage regulator 335 to reduce the voltage V at its output.
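Although the specification gives no component values, the two regulation setpoints implied by Figure 3 can be estimated by treating comparator 334 and voltage regulator 335 as an ideal feedback loop that servos the inverting input to Vref (a simplifying assumption made here for illustration):

$$V_{wake} = V_{ref}\left(1 + \frac{R_{333}}{R_{332}}\right) \qquad \text{(transistor 330 off)},$$

$$V_{sleep} = V_{ref}\left(1 + \frac{R_{333}}{R_{332}} + \frac{R_{333}}{R_{331}}\right) \qquad \text{(transistor 330 on)}.$$

Driving the power state status signal high switches resistor 331 into the divider, pulls the inverting input down, and thus raises the regulation target, consistent with the higher sleep-state level of timing diagram 202.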
In this manner, the voltage V supplied by the voltage regulator to the components of the computer system is regulated and held at a relatively constant value. Note that in the interest of clarity, voltage regulator 335 is merely represented in block form. The boundaries of voltage regulator block 335 have been arbitrarily selected in the manner shown to highlight the relationship between the power state status signal and its effect on the output voltage level of the voltage regulator. The voltage regulator of the computer system may alternatively be defined to include any or all of components 330-334. For this reason, it is understood that sending the power state status signal to the gate of transistor 330 is equivalent to stating that the signal is simply sent to the voltage regulator itself.

In accordance with one embodiment of the present invention, Vref is set at or near the lower end of the tolerance range of the voltage regulator. The power state status signal may be driven high to indicate that the associated components of the computer system are in a sleep state, and driven low to indicate that the associated components of the computer system are in a wake state. Alternatively, the circuit of Figure 3 may be redesigned to accommodate a power state status signal that is driven high to indicate a wake state and driven low to indicate a sleep state. Alternatively, the circuit of Figure 3 may be modified to include a storage element, such as a latch, to store one or more bits indicating power state statuses of associated components of the computer system. For another embodiment of the present invention, the circuit of Figure 3 may be redesigned to accept additional power state status signals associated with other components, the power to which is supplied by voltage regulator 335. For this embodiment, more than two different output voltage levels may be generated by the voltage regulator, as described in more detail below.

Figure 4 includes a timing diagram of the current and voltage from a voltage regulator to a plurality of components of a computer system in accordance with another embodiment of the present invention. The current from the voltage regulator is shown in timing diagram 400. As shown, during period of time 402 in diagram 400, the components powered by the voltage regulator are in a wake state and consume a first amount of current. During period of time 403, some components enter a sleep state and consume a lesser amount of current while other components remain in a wake state, resulting in a total consumption of a second amount of current that is less than the first amount of current. During period of time 404, more components enter the sleep state, resulting in a further decrease in the amount of current consumed. During period of time 405, the components reenter the wake state and again consume the first amount of current.
In accordance with an embodiment of the present invention, a power state status signal is used to regulate the target voltage level from the voltage regulator in a manner that reduces the power consumption of the computer system. For example, as shown in timing diagram 401 of Figure 4, the voltage level output from the voltage regulator may be set at or near the lower end of the tolerance range while the components are in a wake state during period of time 402, as indicated to the voltage regulator by a power state status signal. This assumes most of the high current drawing components are in the wake state during period of time 402. In accordance with an alternate embodiment of the present invention, this voltage level may be at or near a voltage level that can accommodate a downwardly spiking voltage transient without allowing the voltage level to fall below the lower end of the tolerance range.

When some of the components transition to the sleep state during period of time 403, the voltage level output from the voltage regulator may spike up as shown in timing diagram 401. This transition is indicated to the voltage regulator by the power state status signal. Instead of dropping back to the initial voltage level, the voltage level target may be reset to a higher value during period of time 403, as shown in timing diagram 401 of Figure 4. In accordance with one embodiment of the present invention, this higher value may be at some intermediate position between the lower end of the tolerance range and the nominal voltage level. In accordance with an alternate embodiment of the present invention, this voltage level may be at or near a voltage level that can accommodate a downwardly spiking voltage transient without allowing the voltage level to fall below the lower end of the tolerance range.

When additional components transition to the sleep state during period of time 404, the voltage level output from the voltage regulator may spike up again, as shown in timing diagram 401. This transition is also indicated to the voltage regulator by the power state status signal. The voltage level target may again be reset to an even higher value during period of time 404, as shown in timing diagram 401 of Figure 4. In accordance with one embodiment of the present invention, this higher value may be at or near the nominal voltage level, assuming most of the high current drawing components are in the sleep state during period of time 404. In accordance with an alternate embodiment of the present invention, this voltage level may be at or near a voltage level that can accommodate a downwardly spiking voltage transient without allowing the voltage level to fall below the lower end of the tolerance range.

As shown in timing diagram 401 of Figure 4, when the components transition back to the wake state during period of time 405, as indicated by a power state status signal to the voltage regulator, the voltage level output from the voltage regulator may spike down, but remains within the tolerance range. Instead of dropping back to the voltage level set during period of time 404, the voltage level target may again be reset to a different value during period of time 405. As shown in timing diagram 401, this value may be the initial voltage level set during wake period 402, near the lower end of the tolerance range, assuming most of the high current drawing components are in the wake state during period of time 405. In accordance with an alternate embodiment of the present invention, this voltage level may be at or near a voltage level that can accommodate a downwardly spiking voltage transient without allowing the voltage level to fall below the lower end of the tolerance range.

Figure 5 includes a flow chart showing a method of the present invention. At step 501, the output of the voltage regulator is set to a voltage level within the tolerance range and is provided to multiple components of a computer system.
In accordance with one embodiment of the present invention, this voltage level may be at or near a voltage level that can accommodate a downwardly spiking voltage transient without allowing the voltage level to fall below the lower end of the tolerance range. For one embodiment of the present invention, to reduce power consumption, this voltage level is set below the nominal voltage level.

At step 502, a power state status signal is sent to the voltage regulator indicating that the power state status of one or more components of the computer system has changed. At step 503, the output voltage level of the voltage regulator is adjusted accordingly. That is, in accordance with one embodiment of the present invention, the target output voltage level of the voltage regulator may be raised if the power state status signal indicates that one or more components of the computer system have entered a sleep mode. Similarly, the target output voltage level of the voltage regulator may be lowered if the power state status signal indicates that one or more components of the computer system have entered a wake mode.

This invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident to persons having the benefit of this disclosure that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
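The Figure 5 flow maps naturally onto a small control loop. Below is a minimal C sketch of steps 501-503 under stated assumptions: set_regulator_target() and wait_for_status_change() are hypothetical platform hooks, the voltage constants are illustrative, and sleeping_count() is the helper from the earlier sketch.

    /* Sketch of the Figure 5 method: set an initial level (step 501),
     * wait for a power state status change (step 502), adjust (step 503).
     * Names, hooks, and voltage values are illustrative assumptions. */
    #define V_WAKE  4.75  /* near the lower end of a 5 V +/-5% band */
    #define V_SLEEP 5.00  /* near the nominal level                 */

    extern void     set_regulator_target(double volts);
    extern unsigned wait_for_status_change(void); /* returns sleep mask */

    static void regulate(int n_devices)
    {
        set_regulator_target(V_WAKE);                    /* step 501 */
        for (;;) {
            unsigned mask = wait_for_status_change();    /* step 502 */
            int asleep = sleeping_count(mask);           /* earlier sketch */
            /* Step 503: raise the target toward nominal as devices sleep,
             * interpolating for mixed wake/sleep populations. */
            set_regulator_target(V_WAKE +
                (V_SLEEP - V_WAKE) * asleep / n_devices);
        }
    }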
The present invention provides a haptics control system that may include a driver to generate a continuous drive signal and to output the drive signal to a mechanical system on an electrical signal line, wherein the continuous drive signal causes the mechanical system to vibrate to produce a haptic effect. The haptics control system may further include a monitor, coupled to the electrical signal line, to capture a Back Electromotive Force (BEMF) signal generated by the mechanical system in the electrical signal line, to measure a BEMF signal attribute, and to transmit an adjustment signal to the driver based on the BEMF signal attribute. The driver is further configured to adjust the continuous drive signal according to the adjustment signal. |
1. A haptics control system, comprising: a driver to generate a continuous drive signal to an output pin; and a monitor, coupled to the output pin, to capture a Back Electromotive Force (BEMF) signal generated thereon, to measure a BEMF signal attribute, and to transmit an adjustment signal to the driver based on the BEMF signal attribute, wherein the driver is configured to adjust the continuous drive signal generation according to the adjustment signal.
2. The haptics control system of claim 1, wherein the haptics control system is an integrated circuit.
3. The haptics control system of claim 1, wherein the BEMF signal attribute is a frequency of the BEMF signal.
4. The haptics control system of claim 1, wherein the BEMF signal attribute is an amplitude of the BEMF signal.
5. The haptics control system of claim 1, wherein the monitor comprises: a DC canceller element to remove a DC offset corresponding to the drive signal from the captured signal; an amplifier; and an analog to digital converter.
6. The haptics control system of claim 5, wherein the monitor further comprises: a rectifier to invert negative phases of the captured signal.
7. The haptics control system of claim 5, wherein the monitor further comprises: a pair of resistors that mirror a resistance in the mechanical system.
8. The haptics control system of claim 5, wherein the monitor further comprises: a rectifier to invert negative phases of the captured signal.
9. The haptics control system of claim 5, wherein the DC canceller element comprises: a current source to produce a DC canceling current.
10. The haptics control system of claim 5, wherein the DC canceller element comprises: a voltage source to produce a DC canceling current.
11. The haptics control system of claim 5, wherein the DC canceller element is implemented digitally.
12. The haptics control system of claim 1, wherein the BEMF signal attribute is a BEMF signal frequency, and the frequency is measured by capturing reference points corresponding to BEMF signal zero crossings.
13. The haptics control system of claim 1, wherein the BEMF signal attribute is the BEMF signal frequency, and the frequency is measured by capturing reference points corresponding to a BEMF signal peak value.
14. The haptics control system of claim 1, wherein the BEMF signal attribute is the BEMF signal amplitude and the amplitude is measured by monitoring BEMF signal peak values.
15. The haptics control system of claim 1, wherein the driver is configured to operate in two modes, a switched drive mode to generate a switched drive signal and a linear drive mode to generate a linear drive signal.
16. The haptics control system of claim 15, wherein the driver is configured to operate in linear mode when the monitor is capturing the BEMF signal.
17. The haptics control system of claim 1, wherein the continuous drive signal is a current signal and the captured signal is a voltage signal.
18. The haptics control system of claim 1, wherein the continuous drive signal is a voltage signal and the captured signal is a current signal.
19. The haptics control system of claim 18, further comprising a sensing resistor.
20. The haptics control system of claim 1, wherein the continuous drive signal is a square wave drive signal.
21. The haptics control system of claim 1, wherein the continuous drive signal is a rhombic shaped drive signal.
22. The haptics control system of claim 1, wherein the continuous drive signal is a sinusoidal drive signal, whether saturated or not.
23. The haptics control system of claim 1, wherein the continuous drive signal is a multilevel pseudo sinusoidal drive signal.
24. The haptics control system of claim 22, wherein the monitor measures the BEMF signal when the sinusoidal drive signal's rate of current change is zero.
25. The haptics control system of claim 22, wherein the sinusoidal drive signal is a current signal.
26. The haptics control system of claim 22, wherein the sinusoidal drive signal is a voltage signal.
27. The haptics control system of claim 1, wherein the driver outputs the drive signal to a plurality of mechanical systems from the output pin or pins.
28. The haptics control system of claim 27, wherein the drive signal causes each mechanical system to vibrate to produce a haptic effect.
29. The haptics control system of claim 27, wherein the monitor captures a sum of all the BEMF signals generated by the plurality of mechanical systems.
30. The haptics control system of claim 1, wherein the output pin includes a pair of pins for a differential continuous drive signal.
31. A method to generate a haptic effect, comprising: generating a continuous drive signal; outputting the continuous drive signal to an actuator via a signal line, wherein the continuous drive signal vibrates the actuator to generate a haptic effect; capturing a BEMF signal generated by the actuator on the signal line during the application of the continuous drive signal; measuring a BEMF signal property from the BEMF signal; and adjusting a corresponding continuous drive signal property based on the measured BEMF signal property.
32. The method of claim 31, wherein the BEMF signal property is a BEMF signal frequency.
33. The method of claim 31, wherein the BEMF signal property is a BEMF signal amplitude.
34. The method of claim 31, wherein capturing the BEMF signal comprises removing a DC offset in a captured signal, wherein the DC offset corresponds to the drive signal.
35. The method of claim 31, further comprising amplifying the BEMF signal and converting the BEMF signal to digital values.
36. The method of claim 31, further comprising rectifying the captured signal to invert negative phases.
37. The method of claim 31, wherein the DC offset is removed in an analog domain.
38. The method of claim 31, wherein the DC offset is removed digitally.
39. The method of claim 31, wherein the BEMF signal property is the BEMF signal frequency and the frequency is measured by capturing reference points corresponding to BEMF signal zero crossings.
40. The method of claim 31, wherein the BEMF signal property is the BEMF signal frequency and the frequency is measured by capturing reference points corresponding to a BEMF signal peak value.
41. The method of claim 31, wherein the BEMF signal property is the BEMF signal amplitude and the amplitude is measured by monitoring BEMF signal peak values.
42. The method of claim 31, further comprising: generating the continuous drive signal as a switched drive signal in one mode and a linear drive signal in another mode.
43. The method of claim 42, wherein the method generates a linear drive signal when capturing the BEMF signal.
44. The method of claim 31, wherein the continuous drive signal is a current signal and the captured signal is a voltage signal.
45. The method of claim 31, wherein the continuous drive signal is a voltage signal and the captured signal is a current signal.
46. The method of claim 31, wherein the continuous drive signal is a square wave drive signal.
47. The method of claim 31, wherein the continuous drive signal is a rhombic shaped drive signal.
48. The method of claim 31, wherein the continuous drive signal is a sinusoidal drive signal.
49. The method of claim 31, wherein the continuous drive signal is a multilevel pseudo sinusoidal drive signal.
50. The method of claim 48, wherein the sinusoidal drive signal is saturated.
51. The method of claim 48, wherein the BEMF signal is captured when the sinusoidal drive signal's rate of current change is zero.
52. The method of claim 31, wherein the sinusoidal drive signal is a current signal.
53. The method of claim 31, wherein the sinusoidal drive signal is a voltage signal.
54. The method of claim 31, further comprising applying the continuous drive signal to a plurality of actuators via the signal line.
55. The method of claim 54, further comprising capturing a sum of all BEMF signals generated by the plurality of actuators on the signal line.
56. A haptics control system, comprising: a driver to generate a continuous drive signal to an output pin; and a monitor comprising: an input coupled to the output pin; a DC canceling element to separate a BEMF signal from the continuous drive signal; an amplifier; an analog to digital converter; and an output to transmit an adjustment signal; wherein the driver is configured to adjust the continuous drive signal generation according to the adjustment signal.
57. The haptics control system of claim 56, wherein the monitor measures a BEMF signal property and generates the adjustment signal based on the BEMF signal property.
58. The haptics control system of claim 57, wherein the BEMF signal property is a frequency.
59. The haptics control system of claim 57, wherein the BEMF signal property is an amplitude.
60. The haptics control system of claim 56, wherein the haptics control system is an integrated circuit.
61. The haptics control system of claim 56, wherein the DC canceling element is implemented using analog circuitry.
62. The haptics control system of claim 56, wherein the DC canceling element is implemented digitally.
63. The haptics control system of claim 56, wherein the driver is configured to operate in two modes, a switched drive mode to generate a switched drive signal and a linear drive mode to generate a linear drive signal.
64. The haptics control system of claim 56, wherein the continuous drive signal is a current signal.
65. The haptics control system of claim 56, wherein the continuous drive signal is a voltage signal.
66. An electronic device comprising: a haptics controller to generate instructions based on a desired haptic effect; a driver to receive the instructions and generate a continuous drive signal; a linear resonant actuator, coupled to the driver, to receive the continuous drive signal from the driver via a signal line and to vibrate a mass within the linear resonant actuator, thereby generating the desired haptic effect; and a monitor to capture a BEMF signal produced by the vibration on the signal line and to measure a BEMF signal property, wherein the driver is configured to adjust generation of the continuous drive signal based on the measured BEMF signal property.
67. A method of estimating a resonant period of an actuator, comprising: in a first iteration, supplying a drive current to the actuator in a first direction, measuring, a predetermined time after supplying the first direction drive current, a reference BEMF value, and after the BEMF value deviates from the reference BEMF value, searching for a first time when the BEMF value returns to the reference BEMF value; in a second iteration, supplying the drive current to the actuator in a second direction, measuring, a predetermined time after supplying the second direction drive current, a reference BEMF value, and after the BEMF value deviates from the reference BEMF value, searching for a second time when the BEMF value returns to the reference BEMF value; and calculating the resonant period of the actuator based on the first and second times. |
SMART LINEAR RESONANT ACTUATOR CONTROL

BACKGROUND

[01] This application benefits from the priority of provisional application S.N. 61/450,824, filed March 9, 2011.

[02] The present invention relates to generating haptics effects.

[03] Haptics refers to the sense of touch. In electronic devices, haptics relates to providing a touch sensory feedback to the user. Electronic devices incorporating haptics may include cell phones, PDAs, gaming devices, etc. The user interacts with electronic devices through a user interface, such as a touch screen; however, without some kind of feedback, the user often does not know if the user's desired function was recognized or is being performed by the electronic device. Thus, electronic devices may generate an audio or haptic feedback in the form of a vibro-tactile sensation (e.g., a "simulated click") to alert the user of the electronic device's performance. Stated differently, haptic feedback lets the user know what is going on with the electronic device. In a gaming electronic device, for example, haptics can provide sensory stimuli according to game interactions.

[04] Haptic feedback can be generated by electro-mechanical systems. An electrical system produces a drive signal that will then cause a mechanical system to produce the haptic effect. For example, an actuator incorporating a moving mass can be used to generate haptic effects. A linear resonant actuator (LRA) is an example of one such actuator in which a moving mass is spring loaded. For optimal and efficient haptic generation using an LRA, the spring loaded mass should be driven at its mechanical resonant frequency, which is the natural vibration frequency of the spring loaded mass. Also, the "volume" of the haptic effect may be controlled by the amplitude of the actuator driving signal.

[05] BEMF (Back Electromotive Force) can be used to optimally program the drive signal at the mechanical resonant frequency and at the desired amplitude. BEMF is an electrical signal that is induced into the electrical connections of the motor by the movement of a permanent magnet (which has a mass) relative to a stationary wire wound coil. Since the mass will vibrate at the natural resonant frequency, the BEMF signal induced will also propagate at this resonant frequency.

[06] In some conventional systems, a separate coil is used to capture the BEMF. The BEMF coil, which is not part of the driving coil that energizes the mass, captures the BEMF produced by the mass. However, these systems require extra designated parts, such as the BEMF coil specifically for capturing the BEMF, which result in larger electronic devices. Some other conventional systems use a discontinuous drive signal to capture the BEMF signal. These systems' drive signals are stopped at predetermined times so there is no current being applied to the motor during these times. The BEMF can then be captured from the drive signal coil at those predetermined times. Therefore, there is a constant switching between applying the drive signal and measuring the BEMF signal. The constant switching results in less energy being applied to the mechanical system, reducing the quality of the overall haptic effect. The predetermined times also limit the range of frequencies that the drive electronics can tolerate.
[07] Accordingly, the inventors recognized a need in the art for adaptive haptic effect generation that could capture the BEMF signal with a continuous drive signal application, without the requirement for extraneous parts.

BRIEF DESCRIPTION OF THE DRAWINGS

[08] FIG. 1(a) is a simplified block diagram of a smart LRA drive system according to an embodiment of the present invention.
[09] FIG. 1(b) is a simplified diagram of an electric-magnetic motor according to an embodiment of the present invention.
[10] FIG. 1(c) is an electrical model of a motor according to an embodiment of the present invention.
[11] FIG. 2 is a simplified process flow method to generate a haptic effect according to an embodiment of the present invention.
[12] FIG. 3 is a simplified block diagram of a BEMF monitor according to an embodiment of the present invention.
[13] FIG. 4 is a simplified circuit of a BEMF monitor according to an embodiment of the present invention.
[14] FIG. 5 is a simplified circuit of a BEMF monitor according to an embodiment of the present invention.
[15] FIG. 6 is a simplified circuit of a BEMF monitor according to an embodiment of the present invention.
[16] FIG. 7 is a simplified circuit of a BEMF monitor according to an embodiment of the present invention.
[17] FIG. 8 illustrates a timing graph.
[18] FIG. 9(a) illustrates a timing graph.
[19] FIG. 9(b) illustrates a timing graph.
[20] FIG. 10 illustrates a timing graph.
[21] FIG. 11(a) is a simplified circuit diagram of a dual mode driver according to an embodiment of the present invention.
[22] FIG. 11(b) is a simplified circuit diagram of a dual mode driver according to an embodiment of the present invention.
[23] FIG. 11(c) is a simplified circuit diagram of a dual mode driver according to an embodiment of the present invention.
[24] FIG. 12 is a simplified circuit diagram of a dual mode driver according to an embodiment of the present invention.
[25] FIG. 13 illustrates a timing graph.
[26] FIG. 14 is a simplified diagram of a smart LRA driver input system according to an embodiment of the present invention.
[27] FIG. 15 illustrates drive signal profiles according to embodiments of the present invention.
[28] FIG. 16 illustrates a drive signal profile according to an embodiment of the present invention.
[29] FIG. 17 illustrates a drive signal profile according to an embodiment of the present invention.
[30] FIG. 18 is a simplified block diagram of a smart LRA drive system according to an embodiment of the present invention.
[31] FIG. 19(a) is a simplified block diagram of an LRA drive system according to another embodiment of the present invention.
[32] FIGS. 19(b) and (c) are graphs showing AC transfer functions of an LRA drive system with and without bridging capacitors.
[33] FIG. 20 is a simplified block diagram of a BEMF monitor system according to another embodiment of the present invention.
[34] FIG. 21 illustrates a method according to an embodiment of the present invention.

DETAILED DESCRIPTION
[35] Embodiments of the present invention provide a haptics control system that includes a driver to generate a continuous drive signal to an output pin. The haptics control system also includes a monitor, coupled to the output pin, to capture a Back Electromotive Force (BEMF) signal generated thereon, to measure a BEMF signal attribute, and to transmit an adjustment signal to the driver based on the BEMF signal attribute. The driver is configured to adjust the continuous drive signal generation according to the adjustment signal.

[36] Embodiments of the present invention also provide a method to generate a haptic effect. The method may include generating a continuous drive signal; applying the continuous drive signal to an actuator via a signal line, wherein the continuous drive signal vibrates the actuator to generate a haptic effect; capturing a BEMF signal generated by the actuator on the signal line during the application of the continuous drive signal; measuring a BEMF signal property from the BEMF signal; and adjusting a corresponding continuous drive signal property based on the measured BEMF signal property.

[37] Embodiments of the present invention further provide an electronic device including a haptics controller to generate instructions based on a desired haptic effect, and a driver to receive the instructions and generate a continuous drive signal. The electronic device also includes a linear resonant actuator, coupled to the driver, to receive the continuous drive signal from the driver via a signal line and to vibrate a mass within the linear resonant actuator, thereby generating the desired haptic effect. A monitor captures a BEMF signal produced by the vibration on the signal line and measures a BEMF signal property. The driver is configured to adjust the generation of the continuous drive signal based on the measured BEMF signal property.

[38] The invention provides a smart linear resonant actuator (LRA) drive scheme for haptic generation that applies a continuous drive signal. The continuous drive signal is applied to a motor that mechanically generates the desired haptic effect. This drive scheme also allows for the monitoring of a BEMF signal induced by the motor while the continuous drive signal is applied. In other words, the drive signal is applied and the BEMF is monitored simultaneously. The resonant frequency and/or the amplitude of the motor's vibration may be measured from the BEMF signal. Based on the measurements, the continuous output drive signal may be adjusted accordingly.

[39] FIG. 1(a) is a simplified block diagram of a smart LRA drive system 100 according to an embodiment of the present invention. The system 100 may include a haptics controller 110, a continuous LRA driver 120, and a BEMF monitor 130. The continuous LRA driver 120 may be coupled to a motor via a signal line. The continuous LRA driver 120 may include an output pin to which the motor is coupled. The signal line may be a pair of electrical lines, and the output pin may include a pair of pins for differential signals. The BEMF monitor 130 may also be coupled to the signal line.

[40] According to a haptic effect request, the haptics controller 110 may generate a corresponding control signal output to the continuous LRA driver 120. For example, a user may select an icon on a touch screen, and the haptics controller 110 may generate a control signal corresponding to a haptic effect, such as a clicking vibration, for feedback stimulation to the user for his/her selection. The haptics controller 110 may provide a plurality of different haptic effects. The continuous LRA driver 120 may receive the control signal from the haptics controller 110 and may generate a drive signal accordingly. The drive signal may be continuous. Further, the drive signal may vary by A, where A is the drive signal's amplitude.
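To make the feedback described above concrete, here is a minimal C sketch of the adjustment step; the structure, field names, and step gains are illustrative assumptions rather than anything specified by the patent.

    /* Sketch: nudge the continuous drive signal toward the resonance and
     * amplitude reported by the BEMF monitor. Gains are illustrative. */
    typedef struct {
        double period_s;    /* drive period, tracks mechanical resonance */
        double amplitude;   /* drive strength, sets the haptic "volume"  */
    } drive_t;

    static void adjust_drive(drive_t *d, double bemf_period_s,
                             double bemf_amp, double target_amp)
    {
        /* Move the drive period halfway toward the measured BEMF period. */
        d->period_s += 0.5 * (bemf_period_s - d->period_s);

        /* Scale the drive amplitude toward the requested haptic level. */
        if (bemf_amp > 0.0)
            d->amplitude += 0.1 * d->amplitude * (target_amp / bemf_amp - 1.0);
    }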
[41] The continuous LRA driver 120 may output the generated drive signal to the motor, where the drive signal may cause the motor to vibrate, thus generating the haptic effect. The drive signal may be outputted to an output pin by the continuous LRA driver 120, and the motor may also be coupled to the output pin.

[42] The motor may include a coil motor with a spring loaded mass. The motor may include a permanent magnet. The motor may cause the spring loaded mass to vibrate to generate the haptic effect. The motor may also include magnetic coils to generate the motion. Moreover, the vibration by the motor may induce a BEMF signal to be produced in the electrical signal lines coupled to the motor. The BEMF signal's frequency may correspond to the mechanical system's resonant frequency, and the BEMF signal's amplitude may correspond to the mechanical system's vibration magnitude.

[43] FIG. 1(b) is a simplified block diagram of an electro-magnetic motor 190 that may be used in the present invention. The motor may include a coil 191, a permanent magnet 192, a spring 193, and a mass 194. The coil 191 may be coupled to the drive signal output.

[44] Returning to FIG. 1(a), the BEMF monitor 130 may capture the BEMF signal from the electrical signal lines that are used to apply the drive signal to the motor. The BEMF monitor 130 may be coupled to the same output pin to where the continuous LRA driver 120 outputs the drive signal and to where the motor is coupled. Since the drive signal may be a continuous signal, the BEMF monitor 130 may separate the BEMF signal from the drive signal. After separating the BEMF signal, the BEMF monitor 130 may measure the BEMF signal's frequency and/or amplitude. The BEMF monitor 130, according to the measurement values, may transmit an adjustment signal to the LRA driver 120. The LRA driver 120 may then adjust the drive signal's frequency and/or amplitude in order to produce an optimum drive signal.

[45] Unlike some prior art systems, the system 100 does not pause or stop the drive signal when it captures the BEMF signal on the same signal lines as the drive signal. Moreover, the system 100 does not include separate coils or lines to capture the BEMF signal, but captures the BEMF signal on the same line as the drive signal, simultaneously with drive signal application.

[46] The haptics controller 110, continuous LRA driver 120, and BEMF monitor 130 may be fabricated on separate integrated circuits or may be combined in a common integrated circuit. For example, the continuous LRA driver 120 and the BEMF monitor 130 may be fabricated on a single integrated circuit. The integrated circuit(s) may be placed on a circuit board, for example a printed circuit board (PCB).

[47] To understand the operation of the present invention, consider FIG. 1(c), which illustrates an electrical model of the motor. The motor may be represented by three electrical components. A resistive component R represents a resistance in the motor. An inductive component L represents an inductance in the motor. A BEMF component represents an electrical signal generated by the motor's motion. Thus, the voltage seen at the motor may be characterized by:

$$V = Ri + L\frac{di}{dt} + \frac{d\lambda}{dt},$$

where R is the resistance component in the motor, i is the current, L is the inductance component in the motor, di/dt is the rate of change of the current, λ is the magnetic flux, and dλ/dt is the BEMF.
The BEMF may be further defined as:

$$e = \frac{d\lambda}{dt} = K_e v,$$

where K_e is an EMF constant and v is the velocity.

[48] FIG. 2 illustrates a method 200 to generate a haptic effect according to an embodiment of the present invention. Initially, a haptic control signal may be received (Block 210). The haptic control signal may include information regarding the characteristics of the desired haptic effect. Characteristics may include the type of haptic effect, duration of haptic effect, etc. Next, a drive signal may be generated according to the haptic control signal (Block 220). The drive signal may be a continuous signal. For example, the drive signal may be a pulse modulated signal, which is a continuous signal.

[49] The generated drive signal may be outputted to a motor (Block 230). The drive signal may excite the motor into motion, which will cause a mass in the motor to vibrate according to the drive signal's profile. The vibration of the mass causes the haptic effect that is felt by the user. The vibration also may induce a BEMF signal in the electrical lines that applied the drive signal to the motor.

[50] During the continuous drive signal application to the motor, the BEMF signal may be measured (Block 240). The BEMF signal may be captured in the electrical lines that applied the drive signal. The BEMF signal may be separated from the drive signal, because the drive signal is also captured with the BEMF in the electrical lines, since the drive signal is continuous. The BEMF signal is usually a low frequency signal. Upon separation of the BEMF signal, the frequency and/or amplitude of the BEMF signal is measured. The BEMF signal's frequency may correspond to the mechanical system's resonant frequency, and the BEMF signal's amplitude may correspond to the mechanical system's vibration magnitude.

[51] The drive signal's frequency and/or amplitude may be adjusted (Block 250). In a feedback manner, the drive signal's profile may be adjusted according to the BEMF measurement. In an optimal system, the drive signal's frequency will be at the mechanical system's resonant frequency, and its amplitude will be at the desired haptic effect magnitude.

[52] FIG. 3 is a simplified block diagram of a BEMF monitor 300 according to an embodiment of the present invention. The BEMF monitor 300 may receive an input signal from a connected actuator/motor. Also, the BEMF monitor 300 may capture the BEMF signal while a continuous drive signal is applied to the connected motor. The BEMF monitor 300 may include a rectifier 310, a DC canceller 320, an amplifier 330, and an analog-to-digital converter (ADC) 340.

[53] The rectifier 310 may invert the negative phases of the input signal. Thus, the rectified signal may always be a positive voltage. The DC canceller 320 may remove the DC offset in the rectified input signal. The DC offset may correspond to the drive signal. The amplifier 330 may amplify the BEMF signal to exaggerate the BEMF signal's profile. The ADC 340 may then convert the amplified signal into a digital signal. The converted digital signal may then be used to measure the frequency and/or amplitude of the BEMF signal as described in further detail below.
[54] In one embodiment, the rectifier 310 and DC canceller 320 may be integrated together. In another embodiment, the rectifier 310 may not be needed because the DC canceller 320 may supply a DC canceling signal only in the positive semi-cycles of the input signal.

[55] FIG. 4 is a circuit level implementation of a BEMF monitor 400 according to an embodiment of the present invention. The BEMF monitor 400 may receive input signals from a connected motor/actuator. The input signals from the motor may have both positive and negative phases. The BEMF monitor 400 may include mixers 410, resistors 420.1 and 420.2, a current source 430, an amplifier 440, gain resistors 450.1 and 450.2, and an ADC 460.

[56] The mixers 410 may receive the input signal from the motor and may rectify the input signal to invert all negative phases. Consequently, the mixers 410 may produce an all-positive phased signal. The mixers 410 may switch from semi-cycle to semi-cycle to produce an all-positive signal.

[57] The resistors 420.1, 420.2 may be coupled to the mixers' 410 outputs. The resistors 420.1, 420.2 may mirror the resistance in the motor. The current source 430 may produce a DC canceling current to cancel the DC offset in the input signal. The DC offset may correspond to the drive signal that excited the motor into a vibratory motion. The current source's output may be coupled to the amplifier's input. The amplifier 440 may be a differential amplifier. The current source 430, for example, may be coupled to the summing node of the amplifier's 440 inverting input. The gain resistors 450.1, 450.2 may set the gain for the amplifier 440.

[58] The ADC 460 may convert the analog input signal into a digital signal, which may then be processed to measure the resonant frequency and/or the amplitude of vibration. In an embodiment, the BEMF monitor 400 may also include low pass filters following the amplifier to further narrow the signal of interest of the ADC 460. The low pass filter, for example, may be an RC low pass filter.

[59] FIG. 5 is a circuit level implementation of a BEMF monitor 500 according to another embodiment of the present invention. The BEMF monitor 500 may receive input signals from a connected motor/actuator. The input signals from the motor may have both positive and negative phases. The BEMF monitor 500 may include resistors 510.1 and 510.2, a current source 520, mixers 530, an amplifier 540, gain resistors 550.1 and 550.2, and an ADC 560.

[60] The resistors 510.1, 510.2 may mirror the resistance in the LRA. The current source 520 may produce a DC canceling current. The mixers 530 may be switches or act as switches to apply the current from the current source 520 at only the positive cycles of the input signal. Thus, in BEMF monitor 500, the DC canceling and rectifying operation may be integrated.

[61] The mixers' 530 output may be coupled to the amplifier 540's input. The mixers 530 may couple the current source 520 to a different summing node of the amplifier 540 from semi-cycle to semi-cycle. The amplifier 540 may be a differential amplifier. The gain resistors 550.1, 550.2 may set the gain for the amplifier 540.

[62] The ADC 560 may convert the analog input signal into a digital signal, which may then be processed to measure the resonant frequency and/or the amplitude of vibration. In an embodiment, the BEMF monitor 500 may also include low pass filters following the amplifier to further narrow the signal of interest of the ADC 560. The low pass filter, for example, may be an RC low pass filter.
[63] FIG. 6 is a circuit level implementation of a BEMF monitor 600 according to another embodiment of the present invention. BEMF monitor 600 may use a voltage source as the DC canceling source. The BEMF monitor 600 may receive input signals from a connected motor/actuator. The input signals from the motor may have both positive and negative phases. The BEMF monitor 600 may include resistors 610.1 and 610.2, a voltage source 620, two pairs of mixers 630.1 and 630.2, matching resistors 640.1 and 640.2, an amplifier 650, gain resistors 660.1 and 660.2, and an ADC 670.

[64] The resistors 610.1, 610.2 may mirror the resistance in the LRA. The voltage source VDAC 620 may produce a DC canceling voltage. In some implementations, such as a string DAC, a voltage source may be preferable to a current source to cancel the DC offset. The mixers 630.1, 630.2 may be or may act as switches to apply the voltage from VDAC 620 at only the positive cycles of the input signal. Thus, in BEMF monitor 600, the DC canceling and rectifying operation may be integrated. The matching resistors 640.1, 640.2 may match the resistance of the voltage source 620.

[65] The amplifier 650 may be a differential amplifier. The mixers 630.1, 630.2 couple the VDAC 620 to a different summing node of the amplifier 650 from semi-cycle to semi-cycle. The gain resistors 660.1, 660.2 may set the gain for the amplifier 650.

[66] The ADC 670 may convert the analog input signal into a digital signal, which may then be processed to measure the resonant frequency and/or the amplitude of vibration. In an embodiment, the BEMF monitor 600 may also include low pass filters following the amplifier to further narrow the signal of interest of the ADC 670. The low pass filter, for example, may be an RC low pass filter.

[67] In an embodiment of the present invention, the BEMF monitor may be implemented primarily using digital circuitry. A digital implementation may reduce the analog circuitry components and, consequently, may reduce the BEMF monitor size. Also, a primarily digital implementation may be reconfigurable and programmable. FIG. 7 is a circuit level implementation of a BEMF monitor 700 that monitors the BEMF primarily using digital circuitry according to an embodiment of the present invention.

[68] The BEMF monitor 700 may receive input signals from a connected motor/actuator. The input signals from the motor may have both positive and negative phases. The BEMF monitor 700 may include resistors 710.1 and 710.2, an amplifier 720, gain resistors 730.1 and 730.2, an ADC 740, and a digital controller 750.

[69] The resistors 710.1, 710.2 may mirror the resistance in the LRA. The resistors' 710.1, 710.2 output may be coupled to the amplifier's 720 input. The gain resistors 730.1, 730.2 may set the gain for the amplifier 720. The ADC 740 may convert the analog signal into a digital signal, which will then be processed to establish the resonant frequency and/or the amplitude of vibration. The ADC 740 may be a high resolution ADC to measure accurately the BEMF component in a wide dynamic range with the DC offset still remaining in the analog input signal. The digital controller 750 may digitally remove the DC component of the digitized signal.
[71] FIG. 8 is a timing graph simulating a drive signal, a voltage signal seen across a motor, a BEMF signal, and the motor's displacement. The first (top) graph shows a drive signal. The drive signal may be a current signal, and the drive signal may be a rectangular wave signal such as the square wave shown. [72] The second graph shows the voltage signal generated across the terminals of the motor. The voltage signal is generated by the current flowing through the resistance of the motor, which produces a DC change in the voltage level. The voltage signal may also include a transient level, which is generated by the sudden change in the current signal applied to the inductance element of the motor and is shown as the spikes in voltage on the second graph. The voltage signal may also incorporate the BEMF signal superimposed on the DC level. [73] The third graph shows the BEMF signal with the DC and transient levels removed. When driving at the mechanical resonant frequency of the motor, the zero crossings of the BEMF signal should optimally correspond to the rising and falling edges of the drive signal. The fourth graph shows the motor's displacement (vibration). The maximum displacement should optimally correspond with the zero crossings of the BEMF signal and the rising/falling edges of the drive signal. [74] In an embodiment of the present invention, the BEMF frequency may be calculated by determining the zero crossings of the BEMF signal. FIG. 9(a) is a timing graph showing a method to measure the BEMF signal's zero crossing according to an embodiment of the present invention. The top graph shows a BEMF signal captured from a vibrating motor, and the bottom graph shows an input voltage into the ADC of a BEMF monitor. [75] The BEMF signal may be measured in transition window t1, which starts after a change in current direction of the drive signal. Transition window t1 may contain a spike in the ADC input voltage, which represents the transient level caused by the current change. A first reference point for determining the frequency of the BEMF signal may be measured at the end of transition window t1. At this point in time, the transient level has decayed sufficiently to begin reference point measurement. [76] During a time period t2, the ADC may continue to sample the BEMF signal or may suspend sampling for a time less than half of the resonant period. After time period t2, the BEMF signal may be monitored again to find a second reference point. The second reference point is the voltage which equates (within a tolerance) to the voltage level of the first reference point. The frequency of the BEMF signal may then be derived using the first and second reference points. The resonant period may be slightly above the time lapse between the first and second measured reference points. Further, the BEMF measurements may be performed iteratively to provide continuous adjustment of the drive signal output.
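The two-reference-point measurement of FIG. 9(a) can be sketched as follows. The Python below is a toy reconstruction under assumed timing constants t1 and t2: it captures a reference voltage at the end of the transient window, blanks for t2, then searches for the next sample that matches the reference on the same slope and converts the elapsed time into a period estimate.

```python
import numpy as np

fs = 100_000
f_res = 175.0                       # the "true" resonance the sketch recovers
t = np.arange(0, 0.05, 1 / fs)
bemf = np.sin(2 * np.pi * f_res * t)

t1 = 0.0010                         # transient window after current reversal (s), assumed
t2 = 0.0040                         # blanking time, under half the resonant period (s)
tol = 0.01                          # matching tolerance, assumed

i_ref = int(t1 * fs)
v_ref = bemf[i_ref]                 # first reference point, captured at the end of t1
ref_slope = bemf[i_ref + 1] - bemf[i_ref]

search_start = i_ref + int(t2 * fs)
for i in range(search_start, len(bemf) - 1):
    same_value = abs(bemf[i] - v_ref) < tol
    same_slope = (bemf[i + 1] - bemf[i]) * ref_slope > 0
    if same_value and same_slope:   # second reference point found
        period_est = (i - i_ref) / fs
        print(f"estimated period {period_est * 1e3:.3f} ms -> {1 / period_est:.1f} Hz")
        break
```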
[77] The transition windows may be synchronized with a single capture reference value or multiple capture reference values, which are controlled by the ADC clock. Multiple reference values may provide a more accurate measurement but may also use more resources as compared to a single capture reference. [78] In one embodiment, the reference values may correspond to the BEMF signal's peak value. The estimated transition windows for peak value measurements may be preprogrammed using prior knowledge of the system or may be a coarse estimate. The coarse estimate may be updated and/or reconfigured. [79] In another embodiment of the present invention, the BEMF frequency may also be calculated by determining the peak voltage of the BEMF signal. FIG. 9(b) is a timing graph showing a method to determine the BEMF signal's frequency using peak voltage measurements according to an embodiment of the present invention. The top graph shows a BEMF signal captured from a vibrating motor, and the bottom graph shows an input voltage into the ADC of a BEMF monitor. [80] Following a change in current direction at time T0, the ADC may continue or suspend sampling for a time period T1, which is approximately less than a quarter of the resonant period. After time period T1, the BEMF signal may be monitored for time period T2, a sampling period, to find a first reference point. The first reference point may be the peak voltage of the BEMF signal, designated by peak time Tp, where Tp is the time from the beginning of the sampling period to when the first reference point of the peak voltage is measured. [81] The frequency of the BEMF signal may then be derived. The resonant period may be approximately four times the time period between the change in current direction (T0) and the first measured reference point at the peak time (T1 + Tp). After detecting the first reference point, a change in the current direction is then applied after a time T3, which is approximately equal to the sum of T1 and Tp. The BEMF measurements may be performed iteratively to provide continuous adjustment of the drive signal output. [82] In an embodiment of the present invention, the BEMF signal magnitude may be measured by monitoring the maximum amplitude of the BEMF signal. FIG. 10 is a timing graph used to determine the BEMF signal's amplitude using peak voltage measurements according to an embodiment of the present invention. The top graph shows a BEMF signal captured from a vibrating motor, and the bottom graph shows an input voltage into the ADC of a BEMF monitor. [83] The maximum amplitude of the BEMF signal usually will occur at the midpoint of the current pulse. The ADC clock may set reference values to determine the window for when the BEMF signal will peak. Based on the reference values, a window for maximum amplitude may be set by the ADC. The peak value measured in this window may correspond to the maximum amplitude.
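The windowed peak measurement of FIG. 10 reduces to taking the maximum sample inside a window centered on the midpoint of the current pulse. The sketch below assumes a half-sine BEMF shape and a window width of plus or minus 10% of the half-cycle; both are illustrative choices.

```python
import numpy as np

fs = 100_000
T_res = 1 / 175.0                              # assumed resonant period (s)
t = np.arange(0, T_res / 2, 1 / fs)            # one half-cycle of the drive pulse
bemf = 0.12 * np.sin(np.pi * t / (T_res / 2))  # assumed shape, peaking mid-pulse

mid = len(t) // 2                              # midpoint of the current pulse
half_window = int(0.1 * len(t))                # +/-10% of the half-cycle, assumed
window = bemf[mid - half_window: mid + half_window]

amplitude = np.max(window)                     # peak in the window = BEMF magnitude
print(f"measured BEMF amplitude: {amplitude:.3f} V")  # ~0.120 V
```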
[84] According to an embodiment of the present invention, a dual mode driver may be provided in a haptic generation system with a continuous drive signal. The two modes may be a linear drive mode and a switched drive mode. The switched drive mode may have lower power consumption but may generate higher electrical noise than the linear drive mode. Also, the dual mode driver may be in linear drive mode when measuring the BEMF signal. [85] FIG. 11(a) is a simplified diagram of a dual mode driver 1100 according to an embodiment of the present invention. The dual mode driver 1100 may include a current source 1110, a DAC 1120, an op-amp 1130, a switch 1140, a pulse width modulator 1150, a switch 1160, and a pair of drive transistors 1170 and 1180. The dual mode driver 1100 may be coupled to an LRA/motor 1190. The LRA 1190 may be represented by electrical components of a resistor element and an inductor element as described above with reference to FIG. 1(b). [86] The op-amp 1130 may control the magnitude of the drive signal in either mode. The op-amp 1130 may amplify a regulated voltage according to the current source IREF 1110. The drive transistors may be complementary transistors (one is a p type and the other is an n type transistor). The transistors selectively switch on/off at the same time according to the switched mode signals that are coupled to the transistors' gates. The outputs of the transistors may be coupled to generate the current drive signal IOUT. The LRA 1190 may receive the current drive signal IOUT and generate the reference voltage which is used to regulate the motor current. [87] The switches 1140 and 1160 may control the mode of the dual mode driver 1100. In linear mode, switch 1140 may be closed, and switch 1160 may be open. In switched mode, switch 1140 may be open, and switch 1160 may be closed. [88] FIG. 11(b) is a simplified diagram of the dual mode driver of FIG. 11(a) in linear mode 1101 according to an embodiment of the present invention. The switch 1140 may be closed to provide the linear mode path, and switch 1160 (not shown) may be open. The dual mode driver in linear mode 1101 may include the current source 1110, the DAC 1120, the op-amp 1130, the switch 1140, and the drive transistor 1170. The op-amp 1130 may control the magnitude of the drive signal in either mode. The op-amp 1130 may amplify a regulated voltage according to the current source IREF 1110. The output of the transistor 1170 may generate the current drive signal IOUT. The LRA 1190 may receive the current drive signal IOUT and generate the reference voltage which is used to regulate the motor current. Further, a sensing resistor R may sense the voltage across the LRA 1190 to control the drive output. [89] FIG. 11(c) is a simplified diagram of the dual mode driver of FIG. 11(a) in switched mode 1102 according to an embodiment of the present invention. The switch 1160 may be closed to provide the switched mode path, and switch 1140 (not shown) may be open. The dual mode driver in switched mode 1102 may include the pulse width modulator 1150, the switch 1160, and the pair of drive transistors 1170 and 1180. The op-amp 1130 may control the magnitude of the drive signal in either mode. The op-amp 1130 may amplify a regulated voltage according to the current source IREF 1110. The pulse width modulator 1150 may include a comparator that receives the op-amp 1130 output as one input and a saw waveform as the other input. The pulse width modulator 1150 may output a pulsed mode signal. The drive transistors may be complementary transistors (one is a p type and the other is an n type transistor). The transistors selectively switch on/off at the same time according to the switched mode signals that are coupled to the transistors' gates.
The outputs of the transistors may be coupled to generate the current drive signal IOUT. The LRA 1190 may receive the current drive signal IOUT and generate the reference voltage which is used to regulate the motor current. Further, a sensing resistor R may sense the voltage across the LRA 1190 to control the drive output. [90] Bi-directional current may be achieved by placing the drive transistors in an H-bridge configuration. FIG. 12 is a simplified diagram of a system 1200 with drive transistors in an H-bridge configuration and shows the direction of current flow for both linear and switched mode configurations. The solid lines represent linear mode, and the dotted lines represent switched mode. The system 1200 may include a first set of drive transistors 1210.1, 1210.2, a second set of drive transistors 1220.1, 1220.2, a third set of drive transistors 1230.1, 1230.2, and a sensing resistor 1240. [91] The first set of transistors 1210.1, 1210.2 may be pmos type transistors. The second set of transistors 1220.1, 1220.2 may be nmos type transistors. The third set of transistors 1230.1, 1230.2 may be nmos type transistors. [92] In linear mode during a positive current pulse, current may flow through transistors 1210.1 and 1230.2, and all other transistors may be off. In linear mode during the negative current pulse, current may flow through transistors 1210.2 and 1230.1, and all other transistors may be off. Voltage may be sensed at the sensing resistor 1240. According to the sensed voltage, driving voltages at the gates of the transistors may be adjusted to regulate the motor's current. [93] In switched mode during a positive current pulse, current may flow through transistors 1210.1 and 1230.2 during the first part of the cycle. The current flow may charge the inductor component in the motor. [94] Further in switched mode during the positive current pulse, transistor 1210.1 may turn off and transistor 1220.1 may turn on; the charge built in the inductor during the first part of the cycle may keep the current flowing through 1220.1 and 1230.2, as shown by the current flow diagram. In switched mode during a negative current pulse, current may flow through transistors 1210.2 and 1230.1 during the first part of the cycle. The current flow may charge the inductor component in the motor. Further in switched mode during the negative current pulse, transistor 1210.2 may turn off and transistor 1220.2 may turn on; the charge built in the inductor during the first part of the cycle may keep the current flowing through 1220.2 and 1230.1, as shown by the current flow diagram. Voltage may be sensed at the sensing resistor 1240. According to the sensed voltage, driving voltages at the gates of the transistors may be adjusted to regulate the motor's current. For example, the duty cycle of the gate voltages may be adjusted depending on the sensed voltage level.
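As a reading aid for paragraphs [92]-[94], the conduction states of the FIG. 12 H-bridge can be tabulated. The dictionary below simply encodes which transistors conduct in each mode and phase as described above; it is not a register map from the disclosure.

```python
# Conduction states of the H-bridge transistors of FIG. 12, per paragraphs [92]-[94].
H_BRIDGE_STATES = {
    ("linear",   "positive", "full"):   ["1210.1", "1230.2"],
    ("linear",   "negative", "full"):   ["1210.2", "1230.1"],
    ("switched", "positive", "charge"): ["1210.1", "1230.2"],  # inductor charging
    ("switched", "positive", "recirc"): ["1220.1", "1230.2"],  # inductor keeps current flowing
    ("switched", "negative", "charge"): ["1210.2", "1230.1"],
    ("switched", "negative", "recirc"): ["1220.2", "1230.1"],
}

def conducting(mode: str, pulse: str, phase: str) -> list:
    """Return the transistors that conduct for a given drive state."""
    return H_BRIDGE_STATES[(mode, pulse, phase)]

print(conducting("switched", "positive", "recirc"))  # ['1220.1', '1230.2']
```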
[95] The embodiments of the present invention described above show a current drive signal and a voltage sensed signal from which the BEMF signal is measured. In another embodiment of the present invention, a voltage drive signal may be utilized and a current signal may be monitored to determine the BEMF signal. According to the BEMF signal properties, the voltage drive signal's frequency and/or amplitude may be adjusted. A voltage drive signal may reduce the current flowing through the motor. [96] FIG. 13 is a timing graph simulating a drive signal, a sensed current signal in the motor, a BEMF signal, and the motor's displacement. The first (top) graph shows a drive signal. The drive signal may be a voltage signal, and the drive signal may be a rectangular wave signal such as the square wave shown. [97] The second graph shows the current signal generated in the motor. The BEMF signal may be superimposed on the current signal. In the second graph, the BEMF is illustrated as a "trough" on top of the sensed current signal. Thus, the BEMF may reduce the current supplied to the motor. [98] The third graph shows the motor's displacement (vibration). The fourth graph shows the BEMF signal with the DC current removed. The maximum displacement should optimally correspond with the zero crossings of the BEMF and the rising/falling edges of the drive signal. [99] To measure the current signal, the sensed current may be supplied to a sensing resistor in the BEMF monitor. FIG. 14 is a simplified block diagram of a sensing resistor 1410 coupled to a motor with a voltage drive signal according to an embodiment of the present invention. The voltage across the sensing resistor may then be processed similarly to the voltage input signal as described herein with reference to FIGS. 3-7. Furthermore, the same methods of detecting the resonance frequency and vibration amplitude may apply to the voltage drive/sensed current embodiments described herein with reference to the current drive/sensed voltage embodiments. [100] Moreover, embodiments of the present invention may be practiced using different drive profiles. While square waves may provide the most energy to the motor because square waves have the greatest area under the curve, square waves also may include harmonics that are in the audible range. The harmonics, consequently, may generate unwanted buzzing or echoing sounds during the haptic effect. Thus, there may be a trade-off between effect intensity and harmonic side effects. [101] One alternative to a square wave drive signal may be a rhombic shaped drive signal. FIG. 15 illustrates a square wave drive signal and a rhombic shaped drive signal. The square wave drive signal, as illustrated in FIG. 15(a), may provide the most intense drive signal; however, the square wave drive signal may produce undesirable audible-range harmonics. Additionally, the square wave is not the most energy efficient driving signal, as the energy in the harmonics does not turn into motion. In addition, the square wave drive signal may not be perfectly "square" but may also be "rectangular" in an embodiment of the present invention. [102] The rhombic shaped drive signal, as illustrated in FIG. 15(b), may provide less energy to the motor than a similar magnitude square wave, but the slope of the signal may not produce audible-range harmonics and may be more efficient in the sense that most of the energy lies at the natural resonance frequency. The drive signal may also be shaped as a triangular wave signal in an embodiment of the present invention. For a rhombic or other non-rectangular shaped drive signal, the reference point for the change in current may be the top of the ramp, when the current reaches its highest value. [103] In another embodiment of the present invention, a sinusoidal drive signal may be provided to drive the motor. FIG. 16 illustrates a sinusoidal drive signal.
The sinusoidal drive signal may provide less energy to the motor than a similar magnitude square wave, but the sinusoidal signal may also not produce audible-range harmonics and may be the most efficient option because 100% of the energy is applied at the resonant frequency. Alternatively, a multi-level symmetric drive signal that generates a pseudo sine wave may be provided as a trade-off between harmonic performance (audible noise) and implementation complexity. [104] In one embodiment, a saturated sinusoidal signal may be generated to provide greater energy efficiency. For example, instead of a sinusoidal signal with peaks at +1 and -1, a sinusoidal signal with peaks at +2 and -2 that is saturated between +1 and -1 may be generated. Thus, the saturated sinusoidal signal may be more energy efficient while reducing audible-range harmonics. [105] In another embodiment, a higher magnitude sinusoidal signal may be provided to compensate for any loss in energy. FIG. 17 illustrates a sinusoidal drive signal. A sinusoidal drive signal with magnitude 1.27*A may apply approximately the same energy at the resonant frequency as a square wave drive signal with magnitude A while being more efficient. The sinusoidal drive signal may become saturated at magnitude A. [106] The use of sinusoidal drive signals may affect the time when the BEMF can be measured. The voltage seen at the motor may be characterized as:

$V = Ri + L\frac{di}{dt} + V_{BEMF}$

where R is the resistance component in the motor, i is the current, L is the inductor component in the motor, $\frac{di}{dt}$ is the rate of change of the current, and $V_{BEMF}$ is the BEMF. Stated differently, the voltage at the motor is the sum of the voltages seen at the resistor and inductor components and the BEMF. Thus, the BEMF may be characterized as:

$V_{BEMF} = V - Ri - L\frac{di}{dt}$

[107] The BEMF may be measured when the rate of change of the current is zero because of the inductor component. The drive current and sensed voltage peaks may occur when the change in current is zero because of the inductor component; hence, the BEMF may be measured at this time. Therefore, the BEMF voltage may be simplified to:

$V_{BEMF} = V - Ri$

[108] If the drive signal is not at the resonance frequency, a frequency error may be detected between the drive signal and the BEMF signal. In this situation, the peaks of the drive current and sensed voltage may not occur at the same time. The frequency error may be measured, and the drive signal may be adjusted until the peaks of the drive current and sensed voltage are synchronized. [109] The sinusoidal drive signal may be a current drive signal or may be a voltage drive signal according to embodiments of the present invention. Also, the present invention may sense either the voltage across the LRA or the current through the LRA to detect the generated BEMF, respectively, according to the drive signal properties. As the equation above indicates, the BEMF voltage may be the difference of the sensed voltage and the voltage seen across the resistor component of the LRA. The detected BEMF current, accordingly, may be the difference between the sensed current and the current at the resistor component of the LRA. [110] Adjustment of the sinusoidal drive signal may be executed in two stages. First, the drive frequency may be adjusted by aligning the peak current and peak voltage of the sensed signal. This may correspond to the motor vibrating at its resonance frequency. Second, the amplitude of the drive signal may be adjusted according to the desired vibration strength, because the amplitude of the peak voltage/current may be equal to the BEMF, which is proportional to the vibration strength.
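The simplification in paragraphs [106]-[107] is easy to verify numerically: with a sinusoidal drive, di/dt is zero at the current peaks, so the inductive term vanishes and V - R*i recovers the BEMF at those instants. The component values in the sketch below are assumptions, not values from the disclosure.

```python
import numpy as np

fs = 200_000
f_drive = 175.0                      # drive at the assumed resonant frequency
R, L = 8.0, 0.0008                   # assumed LRA resistance (ohm) and inductance (H)
t = np.arange(0, 0.02, 1 / fs)

i_drive = 0.10 * np.sin(2 * np.pi * f_drive * t)      # sinusoidal drive current (A)
bemf_true = 0.06 * np.sin(2 * np.pi * f_drive * t)    # in phase with current at resonance
di_dt = np.gradient(i_drive, 1 / fs)
v_sensed = R * i_drive + L * di_dt + bemf_true        # terminal voltage V = Ri + L di/dt + BEMF

peak = np.argmax(i_drive)            # at the current peak, di/dt = 0
bemf_at_peak = v_sensed[peak] - R * i_drive[peak]     # V - R*i recovers the BEMF
print(f"BEMF at current peak: {bemf_at_peak:.4f} V (true value 0.0600 V)")
```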
[111] Adjustment may be accomplished by a closed loop control system provided within the BEMF monitor. Various types of control loops may be employed, including proportional loop controls ("P-Loops"), proportional integrative loop controls ("PI-Loops"), or full proportional derivative integrative controls ("PDI-Loops"). A P-Loop is likely to be the simplest one to implement and responds faster than PI-Loops, without the instability that PDI-Loops may involve. The P-Loop may be configured to reduce its proportional gain depending on a difference between a desired BEMF amplitude (vibration) and the measured BEMF amplitude. When the vibration is within a programmable limit, the P-Loop gain may be set to one; otherwise, it may be set to a value selected from a locally-stored register map. The gain values may be programmable and may be set as a percentage of the maximum BEMF.
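A hedged sketch of the P-Loop gain selection just described: unity gain inside the programmable limit, otherwise a gain looked up from a locally stored register map. The map contents, limit, and function names below are invented for illustration.

```python
# Hypothetical register map: error band (as a fraction of max BEMF) -> proportional gain.
GAIN_REGISTER_MAP = {
    0.10: 1.5,
    0.25: 2.0,
    0.50: 3.0,
}
LIMIT = 0.05  # programmable "close enough" band, assumed

def p_loop_gain(desired_bemf: float, measured_bemf: float, max_bemf: float) -> float:
    """Select the proportional gain from the error expressed as a fraction of max BEMF."""
    error = abs(desired_bemf - measured_bemf) / max_bemf
    if error <= LIMIT:
        return 1.0                               # vibration within the programmable limit
    for band in sorted(GAIN_REGISTER_MAP):
        if error <= band:
            return GAIN_REGISTER_MAP[band]       # gain from the register map
    return GAIN_REGISTER_MAP[max(GAIN_REGISTER_MAP)]

# Example: amplitude correction proportional to the BEMF error.
desired, measured, max_bemf = 0.10, 0.07, 0.12
gain = p_loop_gain(desired, measured, max_bemf)
print(f"gain={gain}, amplitude correction={gain * (desired - measured):+.4f}")
```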
[112] In an embodiment of the present invention, multiple LRAs may be arranged in parallel to generate multiple haptic effects. Thus, the user may feel one type of vibration at one part of the device and another type of vibration at another part of the device. FIG. 18 is a simplified block diagram of a multiple LRA system 1800 according to an embodiment of the present invention. [113] The system 1800 may include a haptics controller 1810, a continuous LRA driver 1820, and a BEMF monitor 1830. The continuous LRA driver 1820 may be coupled to a plurality of motors/LRAs 1-n via signal lines from a common output. The signal lines may be pairs of electrical lines. The BEMF monitor 1830 may also be coupled to the signal lines. [114] According to a haptic effect request, the haptics controller 1810 may generate a corresponding control signal to output to the continuous LRA driver 1820. The haptic effect request may include a request for multiple haptic effects. The continuous LRA driver 1820 may receive the control signal from the haptics controller 1810 and may generate a drive signal accordingly. The drive signal may be continuous. The continuous LRA driver 1820 may output the generated drive signal to the plurality of motors 1-n, where the drive signal may cause each motor to vibrate, thereby generating the desired haptic effect(s). The drive signal may be outputted to an output pin by the continuous LRA driver 1820, and the motors 1-n may also be coupled to the output pin. [115] Each motor may include a coil motor with a spring loaded mass. The motor may include a permanent magnet. The motor may cause the spring loaded mass to vibrate to generate the haptic effect. The motor may also include magnetic coils to generate the motion. Moreover, the vibration by the motor may induce a BEMF signal to be produced in the electrical signal lines coupled to the motor. The BEMF signal's frequency may correspond to the mechanical system's resonant frequency, and the BEMF signal's amplitude may correspond to the mechanical system's vibration magnitude. [116] The BEMF monitor 1830 may capture the BEMF signal from the electrical signal lines that are used to apply the drive signal to the motors. The BEMF monitor 1830 may be coupled to the same output pin to which the continuous LRA driver 1820 outputs the drive signal and to which the motors are coupled. The captured BEMF signal may be the sum of all the BEMF signals. Since the drive signal may be a continuous signal, the BEMF monitor 1830 may separate the BEMF signal from the drive signal. After separating the BEMF signal, the BEMF monitor 1830 may measure the BEMF signal's frequency and/or amplitude. The BEMF monitor 1830, according to the measurement values, may transmit an adjustment signal to the LRA driver 1820. The LRA driver 1820 may then adjust the drive signal's frequency and/or amplitude in order to produce an optimum drive signal. [117] The haptics controller 1810, continuous LRA driver 1820, and BEMF monitor 1830 may be fabricated on separate integrated circuits or may be combined in a common integrated circuit. For example, the continuous LRA driver 1820 and the BEMF monitor 1830 may be fabricated on a single integrated circuit. The integrated circuit(s) may be placed on a circuit board, for example a printed circuit board (PCB). [118] In an embodiment of the present invention, the LRA system may be a multi-functional actuator. The multi-functional actuator may include a vibration element as described in the above-mentioned embodiments and a speaker element for audio generation. The generated audio may be synchronized with the vibration generation to provide the user a multi-sensory feedback system. [119] Other enhancements may be provided to deliver robust performance in non-ideal operating environments where sources of interference may be present. Interference may arise from several different sources. For example, transient spikes on power supplies potentially can couple through to the output driver and could introduce an error in the magnitude of the desired output drive signal. A small change in the output drive signal could cause a larger change in the induced BEMF signal. This could lead to a large error in the feedback and significantly reduce the performance of the driver, leading to a poor and inconsistent haptic effect for an end user. [120] As another example, mechanical shock induced in the haptic system, for example if a user dropped a handset in which the system resides onto a hard surface, can create interference. Such shocks could induce an undesired BEMF signal, which could in turn cause the driver to compensate and drive the actuator at a time when it was not required to. Or, a similar mechanical shock induced during a haptic effect could lead to a distorted BEMF signal, which could in turn lead to a distorted haptic effect. Embodiments of the present invention may include interference rejection features to support robust and consistent performance, even in the presence of such interferers. [121] Power supply rejection may be accomplished in a variety of ways. In a first embodiment, the output driver may be designed with good power supply rejection. A variety of techniques can be employed to achieve power supply rejection, many of which ensure constant Vgs and Vds within the driver transistors so that the current generated by the driver is independent of interference at any frequency. Another technique is to decouple the driver output, which can be done using a buffer. This increases the output driver bandwidth, so the driver control loop can react and correct the current value faster, making it independent of power supply interference. [122] Yet another embodiment is illustrated in FIG. 19(a).
In this embodiment, a haptics system 1900 may include a haptics controller 1910, an LRA driver 1920, and a BEMF monitor 1930, as in prior embodiments, and also may include a capacitor 1940 bridging the terminals of the actuator motor. The added capacitor 1940 of this embodiment may add a pole in the overall transfer function, which may help to increase power supply rejection. The capacitor 1940 may be sized appropriately in conjunction with the impedance of the actuator to place the pole at desired locations in the frequency response to tailor power supply rejection to individual needs. FIG. 19(b) illustrates a frequency plot of an AC transfer function for an exemplary driver system constructed in accordance with FIG. 19(a). For comparative purposes, FIG. 19(c) illustrates a second plot of an AC transfer function for the same driver system but without the bridging capacitor. [123] FIG. 20 illustrates another embodiment for power supply rejection. In this embodiment, a BEMF monitor 2000 may include a rectifier 2010, a DC canceller 2020, an amplifier 2030, a low pass filter 2040, and an ADC 2050. This embodiment operates in a manner similar to the embodiment of FIG. 3, but the low pass filter 2040 protects against transient signals propagating to the ADC 2050. Characteristics of the low pass filter 2040 may be tailored to suit individual design needs. [124] FIG. 21 illustrates a method 2100 of calculating a resonant frequency of a haptic actuator according to an embodiment of the present invention. The method may begin in box 2102, where an initial estimate of a resonant period Tres is defined. The estimated resonant period Tres may be input to an integrated circuit or stored in a register within the integrated circuit. The method 2100 may apply a driving current to the actuator in a first direction (box 2104). Thereafter, the method 2100 may wait for a predetermined period of time and measure a BEMF level, taking the measured level to be a reference BEMF value (box 2106). The method 2100 may sample the BEMF value continuously thereafter and store maximum and minimum BEMF values until a predetermined amount of time has elapsed (box 2108). In an embodiment, the predetermined amount of time may be set to 5/8ths of the estimate of Tres. After the predetermined amount of time has elapsed, the method 2100 may continue to sample the BEMF values and search until time 1/2 Tres to detect a BEMF sample value that matches the reference BEMF value (boxes 2110, 2112). If a BEMF sample value is identified that matches the reference value, then at box 2114 the method 2100 may estimate a half period of the actuator's resonant period (1/2 Tres) based on the amount of time that elapses between detection of the reference BEMF value at box 2106 and re-detection of the reference BEMF value at box 2110. If the search at box 2110 does not detect a BEMF value that matches the reference BEMF value before time 1/2 Tres, the search may continue for an additional amount of time, after which it times out (boxes 2116, 2118). If the search succeeds and a BEMF sample value is identified that matches the reference value, the method 2100 may advance to box 2114. If not, the method 2100 may enter a reset state (box 2120).
[125] At box 2114, after the method 2100 has calculated 1/2 Tres based on measurements obtained during boxes 2106-2118, the method may advance to box 2122 and apply a driving current to the actuator in a second direction, opposite to the direction of the current applied in box 2104. By reversing the drive current, the method 2100 drives the actuator in a second half cycle of operation. The method 2100 may repeat the operation of boxes 2106-2118 for the second half-cycle, shown as boxes 2124-2136, respectively. [126] Specifically, after application of the driving current in box 2122, the method 2100 may wait for a predetermined period of time and measure a BEMF level, taking the measured level to be a reference BEMF value (box 2124). The method 2100 may sample the BEMF value continuously thereafter and store maximum and minimum BEMF values until a predetermined amount of time has elapsed (box 2126). Again, the predetermined amount of time may be set to 5/8ths of the estimate of Tres. After the predetermined amount of time has elapsed, the method 2100 may continue to sample the BEMF values and search until time 1/2 Tres to detect a BEMF sample value that matches the reference BEMF value (boxes 2128, 2130). If a BEMF sample value is identified that matches the reference value, the method 2100 may estimate a half period of the actuator's resonant period (1/2 Tres) based on the amount of time that elapses between detection of the reference BEMF value at box 2124 and re-detection of the reference BEMF value at box 2128 (box 2132). If the search at box 2128 does not detect a BEMF value that matches the reference BEMF value before time 1/2 Tres, the search may continue for an additional amount of time, after which it times out (boxes 2134, 2136). If the search succeeds and a BEMF sample value is identified that matches the reference value, the method 2100 may advance to box 2132. If not, the method 2100 may enter the reset state at box 2120. [127] At box 2132, after the method 2100 has calculated 1/2 Tres based on measurements obtained during boxes 2124-2136, the method may advance to box 2138 and calculate a final estimated value of Tres based on the calculations obtained at boxes 2114 and 2132. For example, the final Tres estimate may be calculated as a moving average of prior Tres estimates, if any, obtained through prior iterations of the method 2100 and the new calculations. In one embodiment, the final Tres estimate may be calculated as:

$$\text{Actual } T_{res} = \frac{\text{History\_weight} \times \text{Previous } T_{res} \;+\; \text{New\_sample\_weight} \times 2 \times \dfrac{\left.\frac{T_{res}}{2}\right|_{\text{Direction A}} + \left.\frac{T_{res}}{2}\right|_{\text{Direction B}}}{2}}{\text{History\_weight} + \text{New\_sample\_weight}}$$

where the values History_weight and New_sample_weight may be programmed by system designers and/or users. Thus, the method 2100 may provide programmable flexibility in determining the relative contributions of new Tres estimates and prior Tres estimates.
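The update of paragraph [127] transcribes directly into code. In the sketch below the two half-period measurements from the opposite drive directions form the new sample, and the programmable weights blend it with the prior estimate; the weights and timings in the example call are arbitrary.

```python
def update_tres(previous_tres: float,
                half_tres_dir_a: float,
                half_tres_dir_b: float,
                history_weight: float,
                new_sample_weight: float) -> float:
    """Blend the prior Tres estimate with a new full period formed from the
    two half-period measurements (one per drive direction)."""
    new_sample = 2 * (half_tres_dir_a + half_tres_dir_b) / 2
    return ((history_weight * previous_tres + new_sample_weight * new_sample)
            / (history_weight + new_sample_weight))

# Example: prior estimate 5.70 ms, new half-periods 2.88 ms and 2.86 ms.
print(update_tres(5.70e-3, 2.88e-3, 2.86e-3, 3.0, 1.0))  # ~5.71 ms
```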
[128] In the reset state, the method 2100 may put the driver into a high impedance state, and the method 2100 may measure the BEMF in the absence of active drive signals. The method 2100 may estimate a resonant period based on zero crossings of the BEMF signal with respect to ground. For example, the method 2100 may ensure that a predetermined number (say, 3) of zero crossings are detected, each of which should correspond to a half-period of the actuator's resonant period. The method 2100 may calculate Tres from the zero crossings and conclude the reset operation, whereupon the method 2100 may advance to box 2102. [129] In an embodiment, the method 2100 may compare values of the BEMF signal to predetermined thresholds to determine whether to alter Tres estimates based on new calculations. For example, the method may compare the maximum BEMF values obtained at boxes 2108 and/or 2126 to a predetermined threshold and suspend operation of the method 2100 if these values do not exceed a predetermined minimum value. Suspending the method 2100 in this embodiment prevents the Tres estimate from changing when the BEMF values are too small to provide the basis for new estimates. [130] Several embodiments of the present invention are specifically illustrated and described herein. However, it will be appreciated that modifications and variations of the present invention are covered by the above teachings. In other instances, well-known operations, components, and circuits have not been described in detail so as not to obscure the embodiments. It can be appreciated that the specific structural and functional details disclosed herein may be representative and do not necessarily limit the scope of the embodiments. [131] Those skilled in the art may appreciate from the foregoing description that the present invention may be implemented in a variety of forms, and that the various embodiments may be implemented alone or in combination. Therefore, while the embodiments of the present invention have been described in connection with particular examples thereof, the true scope of the embodiments and/or methods of the present invention should not be so limited, since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims. [132] Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints. [133] Some embodiments may be implemented, for example, using a computer-readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments.
Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software. The computer-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disc Read Only Memory (CD-ROM), Compact Disc Recordable (CD-R), Compact Disc Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disc (DVD), a tape, a cassette, or the like. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language. |
An apparatus and a method for grouping rays based on quantized ray directions. For example, one embodiment of an apparatus comprises: a ray generator to generate a plurality of rays; ray direction evaluation circuitry/logic to generate approximate ray direction data for each of the plurality of rays; and ray sorting circuitry/logic to sort the rays into a plurality of ray queues based, at least in part, on the approximate ray direction data. |
1. An apparatus comprising: a ray generator to generate a plurality of rays; ray direction evaluation circuitry/logic to generate approximate ray direction data for each of the plurality of rays; and ray sorting circuitry/logic to sort the rays into a plurality of ray queues based, at least in part, on the approximate ray direction data. 2. The apparatus of claim 1, wherein the approximate ray direction data comprises a quantized direction value associated with each ray of the plurality of rays. 3. The apparatus of claim 2, wherein the quantized direction value of each ray comprises: first data indicating the side of a volume intersected by the ray; and second data comprising quantized intersection coordinates of the intersection between the ray and the side of the volume. 4. The apparatus of claim 2, wherein the ray sorting circuitry/logic is to group one or more rays of the plurality of rays into the plurality of ray queues based on a combination of the quantized direction value associated with each ray and a shader record key. 5. The apparatus of claim 4, wherein the ray sorting circuitry/logic is to first attempt to match a ray with a ray queue using both the quantized ray direction value and the shader record key, and only when no match is found, to attempt to match the ray with a ray queue using only the shader record key. 6. The apparatus of claim 5, wherein, when no match is found using the quantized ray direction value and the shader record key, the ray sorting circuitry/logic is to attempt to allocate a new ray queue to contain the ray. 7. The apparatus of claim 6, wherein the sorting circuitry/logic is to attempt to match the ray with a ray queue using only the shader record key only after determining that the new ray queue cannot be allocated. 8. The apparatus of any one of claims 1 to 7, further comprising: a ray dispatcher to dispatch the plurality of rays in groups, the groups being defined by the ray queues in which the rays are stored. 9. The apparatus of any one of claims 1 to 8, further comprising: ray traversal circuitry to traverse one or more of the plurality of rays through a bounding volume hierarchy; and ray intersection circuitry to determine intersections between one or more of the plurality of rays and one or more objects in a scene. 10. A method comprising: generating a plurality of rays; determining approximate ray direction data for each of the plurality of rays; and sorting the rays into a plurality of ray queues based, at least in part, on the approximate ray direction data. 11. The method of claim 10, wherein the approximate ray direction data comprises a quantized direction value associated with each ray of the plurality of rays. 12. The method of claim 11, wherein the quantized direction value of each ray comprises: first data indicating the side of a volume intersected by the ray; and second data comprising quantized intersection coordinates of the intersection between the ray and the side of the volume. 13. The method of claim 11, wherein the sorting further comprises: grouping the plurality of rays into the plurality of ray queues based on a combination of the quantized direction value associated with each ray and a shader record key. 14. The method of claim 13, further comprising: initially attempting to match a ray with a ray queue using both the quantized ray direction value and the shader record key; and only when no match is found, attempting to match the ray with a ray queue using only the shader record key. 15. The method of claim 14, further comprising: when no match is found using the quantized ray direction value and the shader record key, attempting to allocate a new ray queue to contain the ray. 16. The method of claim 15, wherein attempting to match the ray with a ray queue using only the shader record key is performed only after determining that the new ray queue cannot be allocated. 17. The method of any one of claims 10 to 16, further comprising: dispatching the plurality of rays in groups, the groups being defined by the ray queues in which the rays are stored. 18. The method of any one of claims 10 to 17, further comprising: traversing one or more of the plurality of rays through a bounding volume hierarchy; and determining intersections between one or more of the plurality of rays and one or more objects in a scene. 19. A machine-readable medium having program code stored thereon which, when executed by a machine, causes the machine to perform the operations of: generating a plurality of rays; determining approximate ray direction data for each of the plurality of rays; and sorting the rays into a plurality of ray queues based, at least in part, on the approximate ray direction data. 20. The machine-readable medium of claim 19, wherein the approximate ray direction data comprises a quantized direction value associated with each ray of the plurality of rays. 21. The machine-readable medium of claim 20, wherein the quantized direction value of each ray comprises: first data indicating the side of a volume intersected by the ray; and second data comprising quantized intersection coordinates of the intersection between the ray and the side of the volume. 22. The machine-readable medium of claim 20, wherein the sorting further comprises: grouping the plurality of rays into the plurality of ray queues based on a combination of the quantized direction value associated with each ray and a shader record key. 23. The machine-readable medium of claim 22, further comprising program code to cause the machine to perform the operations of: initially attempting to match a ray with a ray queue using both the quantized ray direction value and the shader record key; and only when no match is found, attempting to match the ray with a ray queue using only the shader record key. 24. The machine-readable medium of claim 23, further comprising program code to cause the machine to perform the operation of: when no match is found using the quantized ray direction value and the shader record key, attempting to allocate a new ray queue to contain the ray. 25. The machine-readable medium of claim 24, wherein attempting to match the ray with a ray queue using only the shader record key is performed only after determining that the new ray queue cannot be allocated. |
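Read together, claims 2 through 7 describe a quantize-then-match pipeline. The Python sketch below is one plausible reading rather than the patent's encoding: the "side of the volume" is taken to be the dominant-axis face of an axis-aligned cube, the exit coordinates are quantized to 4 bits, and the matching cascade of claims 5 through 7 is applied per ray. All names, the bit width, and the queue limit are assumptions.

```python
from collections import defaultdict

def quantize_direction(dx: float, dy: float, dz: float, bits: int = 4) -> tuple:
    """Reduce a ray direction to (face, qu, qv): the cube face the ray exits
    through (first data of claim 3) and quantized exit coordinates (second data)."""
    ax, ay, az = abs(dx), abs(dy), abs(dz)
    if ax >= ay and ax >= az:
        face, u, v, m = (0 if dx > 0 else 1), dy, dz, ax
    elif ay >= az:
        face, u, v, m = (2 if dy > 0 else 3), dx, dz, ay
    else:
        face, u, v, m = (4 if dz > 0 else 5), dx, dy, az
    scale = (1 << bits) - 1
    qu = int((u / m * 0.5 + 0.5) * scale)  # project onto the face, then quantize
    qv = int((v / m * 0.5 + 0.5) * scale)
    return (face, qu, qv)

queues = defaultdict(list)
MAX_QUEUES = 8  # assumed capacity

def enqueue(ray, shader_key, direction):
    qdir = quantize_direction(*direction)
    # 1) Match on quantized direction + shader record key (claim 5),
    # 2) or allocate a new queue for that combination if room exists (claim 6).
    if (shader_key, qdir) in queues or len(queues) < MAX_QUEUES:
        queues[(shader_key, qdir)].append(ray)
        return
    # 3) Fall back to shader-record-key-only matching (claim 7); the fallback
    # queue is assumed to be preallocated in this sketch.
    queues[(shader_key, None)].append(ray)

enqueue("ray0", "hit_shader_A", (0.9, 0.1, -0.2))
enqueue("ray1", "hit_shader_A", (0.88, 0.12, -0.18))  # lands in the same queue as ray0
print({k: len(v) for k, v in queues.items()})
```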
Apparatus and method for ray sorting based on quantized ray directions

Technical Field

The present invention generally relates to the field of graphics processors. More specifically, the present invention relates to an apparatus and method for sorting rays based on quantized ray directions.

Background

Ray tracing is a technique that simulates light transport through physically-based rendering. Although widely used in film rendering, it was considered too resource-intensive for real-time performance until only a few years ago. One of the key operations in ray tracing is processing the visibility queries for ray-scene intersections, known as "ray traversal", which computes ray-scene intersections by traversing and intersecting nodes in a bounding volume hierarchy (BVH).

Description of the Drawings

A better understanding of the present invention can be obtained from the following detailed description in conjunction with the accompanying drawings, in which:

Figure 1 is a block diagram of an embodiment of a computer system with a processor having one or more processor cores and a graphics processor;

Figure 2 is a block diagram of one embodiment of a processor having one or more processor cores, an integrated memory controller, and an integrated graphics processor;

Figure 3 is a block diagram of an embodiment of a graphics processor, which may be a discrete graphics processing unit or a graphics processor integrated with multiple processing cores;

Figure 4 is a block diagram of an embodiment of a graphics processing engine for a graphics processor;

Figure 5 is a block diagram of another embodiment of a graphics processor;

Figure 6 shows an example of execution circuitry and logic;

Figure 7 shows a graphics processor execution unit instruction format according to an embodiment;

Figure 8 is a block diagram of another embodiment of a graphics processor, which includes a graphics pipeline, a media pipeline, a display engine, thread execution logic, and a rendering output pipeline;

Figure 9A is a block diagram showing a graphics processor command format according to an embodiment;

Figure 9B is a block diagram showing a graphics processor command sequence according to an embodiment;

Figure 10 shows an exemplary graphics software architecture for a data processing system according to an embodiment;

Figures 11A-D show an exemplary IP core development system that can be used to manufacture integrated circuits, and exemplary package assemblies;

Figure 12 shows an exemplary system-on-chip integrated circuit that can be manufactured using one or more IP cores according to an embodiment;

Figure
13 shows an exemplary graphics processor of a system-on-chip integrated circuit that can be manufactured using one or more IP cores;

Figure 14 shows an exemplary graphics processor architecture;

Figure 15 shows an example of a processing architecture including a ray tracing core and a tensor core;

Figure 16 shows a ray tracing cluster of nodes;

Figure 17 shows additional details of an example ray tracing node;

Figure 18 shows the ray compression/decompression used in one embodiment;

Figure 19 shows an embodiment of a hybrid ray tracing architecture;

Figure 20 shows an example call stack reference;

Figure 21 shows an example shader record pointer set;

Figure 22 shows an example of a bounding volume hierarchy;

Figure 23 shows an embodiment of the call stack and the associated traversal state;

Figure 24 shows an embodiment of the present invention for sorting rays;

Figure 25 shows an example set of rays intersecting a volume;

Figure 26 shows a sort key according to an embodiment of the present invention; and

Figure 27 shows a method according to an embodiment of the present invention.

Detailed Description

In the following description, for the purpose of explanation, many specific details are set forth in order to provide a thorough understanding of the embodiments of the present invention described below. However, it will be obvious to those skilled in the art that the embodiments of the present invention can be practiced without some of these specific details. In other cases, well-known structures and devices are shown in block diagram form to avoid obscuring the basic principles of the embodiments of the present invention.

Exemplary Graphics Processor Architectures and Data Types

System Overview

FIG. 1 is a block diagram of a processing system 100 according to an embodiment. The system 100 can be used in a single-processor desktop system, a multi-processor workstation system, or a server system with a large number of processors 102 or processor cores 107. In one embodiment, the system 100 is a processing platform incorporated in a system-on-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices, such as within Internet of Things (IoT) devices with wired or wireless connectivity to a local area network or a wide area network.

In one embodiment, the system 100 may include, be coupled with, or be integrated within: server-based gaming platforms; game consoles, including game and media consoles; mobile gaming consoles, handheld game consoles, or online game consoles. In some embodiments, the system 100 is part of a mobile phone, smart phone, tablet computing device, or mobile Internet-connected device such as a notebook computer with low internal storage capacity. The processing system 100 may also include, be coupled with, or be integrated within: wearable devices, such as smart watch wearable devices; smart glasses or clothing enhanced with augmented reality (AR) or virtual reality (VR) features to provide visual, audio, or tactile output to supplement real-world visual, audio, or tactile experiences, or to otherwise provide text, audio, graphics, video, holographic images or video, or haptic feedback; other augmented reality (AR) devices; or other virtual reality (VR) devices. In some embodiments, the processing system 100 includes or is part of a television or set-top box device.
In one embodiment, the system 100 may include, be coupled with, or be integrated within an autonomous vehicle such as a bus, a tractor trailer, a car, a motorcycle or electric bicycle, an airplane, or a glider (or any combination thereof). An autonomous vehicle may use the system 100 to process the environment sensed around the vehicle.

In some embodiments, the one or more processors 102 each include one or more processor cores 107 to process instructions that, when executed, perform operations for system or user software. In some embodiments, at least one of the one or more processor cores 107 is configured to process a specific instruction set 109. In some embodiments, the instruction set 109 may facilitate complex instruction set computing (CISC), reduced instruction set computing (RISC), or computing via a very long instruction word (VLIW). One or more processor cores 107 may process different instruction sets 109, which may include instructions for facilitating emulation of other instruction sets. The processor core 107 may also include other processing devices, such as a digital signal processor (DSP).

In some embodiments, the processor 102 includes a cache memory 104. Depending on the architecture, the processor 102 may have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among the various components of the processor 102. In some embodiments, the processor 102 also uses an external cache (e.g., a level three (L3) cache or last level cache (LLC)) (not shown), which may be shared among the processor cores 107 using known cache coherency techniques. A register file 106 may additionally be included in the processor 102 and may include different types of registers for storing different types of data (for example, integer registers, floating point registers, status registers, and instruction pointer registers). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 102.

In some embodiments, one or more processors 102 are coupled with one or more interface buses 110 to transmit communication signals, such as address, data, or control signals, between the processor 102 and other components in the system 100. In one embodiment, the interface bus 110 may be a processor bus, such as a version of the direct media interface (DMI) bus. However, the processor bus is not limited to the DMI bus, and may include one or more peripheral component interconnect buses (e.g., PCI, PCI Express), memory buses, or other types of interface buses. In one embodiment, the processor 102 includes an integrated memory controller 116 and a platform controller hub 130. The memory controller 116 facilitates communication between the memory device and other components of the system 100, while the platform controller hub (PCH) 130 provides connections to I/O devices via a local I/O bus.

The memory device 120 may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, a phase change memory device, or some other memory device with suitable performance to serve as process memory. In one embodiment, the memory device 120 may serve as the system memory of the system 100 to store data 122 and instructions 121 for use when one or more processors 102 execute applications or processes.
The memory controller 116 is also coupled with an optional external graphics processor 118, which can communicate with the one or more graphics processors 108 in the processor 102 to perform graphics and media operations. In some embodiments, graphics, media, and/or computing operations may be assisted by an accelerator 112, which is a co-processor that may be configured to perform a specialized set of graphics, media, or computing operations. For example, in one embodiment, the accelerator 112 is a matrix multiplication accelerator for optimizing machine learning or computing operations. In one embodiment, the accelerator 112 is a ray tracing accelerator, which can be used to perform ray tracing operations together with the graphics processor 108. In one embodiment, an external accelerator 119 may be used instead of the accelerator 112 or together with the accelerator 112.

In some embodiments, a display device 111 may be connected to the processor 102. The display device 111 may be one or more of an internal display device, as in a mobile electronic device or a laptop computer device, or an external display device attached via a display interface (e.g., DisplayPort, etc.). In one embodiment, the display device 111 may be a head-mounted display (HMD), such as a stereoscopic display device used in virtual reality (VR) applications or augmented reality (AR) applications.

In some embodiments, the platform controller hub 130 enables peripheral devices to be connected to the memory device 120 and the processor 102 via a high-speed I/O bus. I/O peripheral devices include, but are not limited to, an audio controller 146, a network controller 134, a firmware interface 128, a wireless transceiver 126, a touch sensor 125, and a data storage device 124 (for example, non-volatile memory, volatile memory, a hard disk drive, flash memory, NAND, 3D NAND, 3D XPoint, etc.). The data storage device 124 may be connected via a storage interface (e.g., SATA) or via a peripheral bus (e.g., a peripheral component interconnect bus such as PCI or PCI Express). The touch sensor 125 may include a touch screen sensor, a pressure sensor, or a fingerprint sensor. The wireless transceiver 126 may be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, 5G, or Long Term Evolution (LTE) transceiver. The firmware interface 128 enables communication with system firmware and may be, for example, a unified extensible firmware interface (UEFI). The network controller 134 may implement a network connection to a wired network. In some embodiments, a high-performance network controller (not shown) is coupled with the interface bus 110. In one embodiment, the audio controller 146 is a multi-channel high-definition audio controller. In one embodiment, the system 100 includes an optional legacy I/O controller 140 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. The platform controller hub 130 may also be connected to one or more universal serial bus (USB) controllers 142, which connect input devices such as a keyboard and mouse 143 combination, a camera 144, or other USB input devices.

It should be understood that the system 100 shown is exemplary and not restrictive, as other types of data processing systems that are configured differently may also be used.
For example, an instance of the memory controller 116 and platform controller hub 130 may be integrated into a discrete external graphics processor, such as the external graphics processor 118. In one embodiment, the platform controller hub 130 and/or memory controller 116 may be external to the one or more processors 102. For example, the system 100 can include an external memory controller 116 and platform controller hub 130, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with the processor(s) 102.

For example, circuit boards ("sleds") can be used on which components such as CPUs, memory, and other components are placed, and which are designed for increased thermal performance. In some examples, processing components such as the processors are located on a top side of a sled, while near memory, such as DIMMs, are located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in a rack, thereby enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.

A data center can utilize a single network architecture ("fabric") that supports multiple other network architectures, including Ethernet and Omni-Path. The sleds can be coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high-bandwidth, low-latency interconnections and network architecture, the data center may, in use, pool resources such as memory, accelerators (e.g., GPUs, graphics accelerators, FPGAs, ASICs, neural network and/or artificial intelligence accelerators, etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as-needed basis, enabling the compute resources to access the pooled resources as if they were local.

A power supply or power source can provide voltage and/or current to the system 100 or to any component or system described herein. In one example, the power supply includes an AC-to-DC (alternating current to direct current) adapter to plug into a wall outlet. Such an AC power source can be a renewable energy (e.g., solar power) source. In one example, the power source includes a DC power source, such as an external AC-to-DC converter. In one example, the power source or power supply includes wireless charging hardware to charge via proximity to a charging field. In one example, the power source can include an internal battery, an alternating current supply, a motion-based power supply, a solar power supply, or a fuel cell source.

Figures 2A-2D illustrate computing systems and graphics processors provided by embodiments described herein.
The elements of FIGS. 2A-2D having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

FIG. 2A is a block diagram of an embodiment of a processor 200 having one or more processor cores 202A-202N, an integrated memory controller 214, and an integrated graphics processor 208. The processor 200 can include additional cores up to and including the additional core 202N represented by the dashed-line box. Each of the processor cores 202A-202N includes one or more internal cache units 204A-204N. In some embodiments, each processor core also has access to one or more shared cache units 206. The internal cache units 204A-204N and shared cache units 206 represent a cache memory hierarchy within the processor 200. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 206 and 204A-204N.

In some embodiments, the processor 200 may also include a set of one or more bus controller units 216 and a system agent core 210. The one or more bus controller units 216 manage a set of peripheral buses, such as one or more PCI or PCI express buses. The system agent core 210 provides management functionality for the various processor components. In some embodiments, the system agent core 210 includes one or more integrated memory controllers 214 to manage access to various external memory devices (not shown).

In some embodiments, one or more of the processor cores 202A-202N include support for simultaneous multi-threading. In such embodiments, the system agent core 210 includes components for coordinating and operating the cores 202A-202N during multi-threaded processing. The system agent core 210 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of the processor cores 202A-202N and the graphics processor 208.

In some embodiments, the processor 200 additionally includes a graphics processor 208 to execute graphics processing operations. In some embodiments, the graphics processor 208 couples with the set of shared cache units 206 and the system agent core 210, including the one or more integrated memory controllers 214. In some embodiments, the system agent core 210 also includes a display controller 211 to drive graphics processor output to one or more coupled displays. The display controller 211 may also be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 208.

In some embodiments, a ring-based interconnect unit 212 is used to couple the internal components of the processor 200. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art.
In some embodiments, the graphics processor 208 couples with the ring interconnect 212 via an I/O link 213.

The exemplary I/O link 213 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 218, such as an eDRAM module. In some embodiments, each of the processor cores 202A-202N and the graphics processor 208 can use the embedded memory module 218 as a shared Last Level Cache.

In some embodiments, the processor cores 202A-202N are homogeneous cores executing the same instruction set architecture. In another embodiment, the processor cores 202A-202N are heterogeneous in terms of instruction set architecture (ISA), where one or more of the processor cores 202A-202N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment, the processor cores 202A-202N are heterogeneous in terms of microarchitecture, where one or more cores having relatively higher power consumption couple with one or more power cores having lower power consumption. In one embodiment, the processor cores 202A-202N are heterogeneous in terms of computational capability. Additionally, the processor 200 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.

FIG. 2B is a block diagram of hardware logic of a graphics processor core 219, according to some embodiments described herein. Elements of FIG. 2B having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. The graphics processor core 219, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. The graphics processor core 219 is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. Each graphics processor core 219 can include a fixed function block 230 coupled with multiple sub-cores 221A-221F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic.

In some embodiments, the fixed function block 230 includes a geometry/fixed function pipeline 231 that can be shared by all sub-cores in the graphics processor core 219, for example, in lower-performance and/or lower-power graphics processor implementations. In various embodiments, the geometry/fixed function pipeline 231 includes a 3D fixed function pipeline (e.g., the 3D pipeline 312 as in FIG. 3 and FIG. 4, described below), a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers (e.g., the unified return buffer 418 in FIG. 4, as described below).

In one embodiment, the fixed function block 230 also includes a graphics SoC interface 232, a graphics microcontroller 233, and a media pipeline 234. The graphics SoC interface 232 provides an interface between the graphics processor core 219 and other processor cores within a system on a chip integrated circuit.
The graphics microcontroller 233 is a programmable sub-processor that is configurable to manage various functions of the graphics processor core 219, including thread dispatch, scheduling, and preemption. The media pipeline 234 (e.g., the media pipeline 316 of FIG. 3 and FIG. 4) includes logic to facilitate the decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data. The media pipeline 234 implements media operations via requests to compute or sampling logic within the sub-cores 221A-221F.

In one embodiment, the SoC interface 232 enables the graphics processor core 219 to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, the system RAM, and/or embedded on-chip or on-package DRAM. The SoC interface 232 can also enable communication with fixed function devices within the SoC, such as camera imaging pipelines, and enables the use of and/or implements global memory atomics that may be shared between the graphics processor core 219 and CPUs within the SoC. The SoC interface 232 can also implement power management controls for the graphics processor core 219 and enable an interface between a clock domain of the graphics core 219 and other clock domains within the SoC. In one embodiment, the SoC interface 232 enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. The commands and instructions can be dispatched to the media pipeline 234 when media operations are to be performed, or to a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline 231, geometry and fixed function pipeline 237) when graphics processing operations are to be performed.

The graphics microcontroller 233 can be configured to perform various scheduling and management tasks for the graphics processor core 219. In one embodiment, the graphics microcontroller 233 can perform graphics and/or compute workload scheduling on the various graphics parallel engines within the execution unit (EU) arrays 222A-222F, 224A-224F within the sub-cores 221A-221F. In this scheduling model, host software executing on a CPU core of an SoC including the graphics processor core 219 can submit workloads to one of multiple graphics processor doorbells, which invokes a scheduling operation on the appropriate graphics engine. Scheduling operations include determining which workload to run next, submitting a workload to a command streamer, preempting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. In one embodiment, the graphics microcontroller 233 can also facilitate low-power or idle states for the graphics processor core 219, providing the graphics processor core 219 with the ability to save and restore registers within the graphics processor core 219 across low-power state transitions, independently of the operating system and/or graphics driver software on the system.
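The doorbell-driven scheduling flow just described can be sketched in C++. The sketch below is purely illustrative: the type and member names (Workload, EngineScheduler, ringDoorbell) and the priority-based preemption policy are assumptions made for this example and do not describe actual graphics microcontroller firmware.

#include <deque>
#include <functional>

struct Workload {
    int priority;                      // used to decide preemption
    bool complete = false;
    std::function<void()> notifyHost;  // callback to the host graphics driver
};

class EngineScheduler {
    std::deque<Workload*> doorbellQueue;  // workloads submitted via doorbells
    Workload* running = nullptr;

public:
    // Host software rings a doorbell to submit a workload.
    void ringDoorbell(Workload* w) { doorbellQueue.push_back(w); }

    // One iteration of the scheduling loop: notify the host when a workload
    // completes, pick the next workload, and preempt the running workload
    // if a newly submitted one has higher priority.
    void schedule() {
        if (running && running->complete) {
            running->notifyHost();     // workload finished: tell the host
            running = nullptr;
        }
        if (!doorbellQueue.empty()) {
            Workload* next = doorbellQueue.front();
            if (!running) {
                doorbellQueue.pop_front();
                running = next;        // submit to the engine's command streamer
            } else if (next->priority > running->priority) {
                doorbellQueue.pop_front();
                doorbellQueue.push_back(running);  // preempt: requeue current
                running = next;
            }
        }
    }
};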
The graphics processor core 219 may have more or fewer than the illustrated sub-cores 221A-221F, up to N modular sub-cores. For each set of N sub-cores, the graphics processor core 219 can also include shared function logic 235, shared and/or cache memory 236, a geometry/fixed function pipeline 237, as well as additional fixed function logic 238 to accelerate various graphics and compute processing operations. The shared function logic 235 can include logic units associated with the shared function logic 420 of FIG. 4 (e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each of the N sub-cores within the graphics processor core 219. The shared and/or cache memory 236 can be a last-level cache for the set of N sub-cores 221A-221F within the graphics processor core 219, and can also serve as shared memory that is accessible by multiple sub-cores. The geometry/fixed function pipeline 237 can be included instead of the geometry/fixed function pipeline 231 within the fixed function block 230 and can include the same or similar logic units.

In one embodiment, the graphics processor core 219 includes additional fixed function logic 238 that can include various fixed function acceleration logic for use by the graphics processor core 219. In one embodiment, the additional fixed function logic 238 includes an additional geometry pipeline for use in position-only shading. In position-only shading, two geometry pipelines exist: the full geometry pipeline within the geometry/fixed function pipelines 231, 237, and a cull pipeline, which is an additional geometry pipeline that may be included within the additional fixed function logic 238. In one embodiment, the cull pipeline is a trimmed-down version of the full geometry pipeline. The full pipeline and the cull pipeline can execute different instances of the same application, each instance having a separate context. Position-only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances. For example, and in one embodiment, the cull pipeline logic within the additional fixed function logic 238 can execute position shaders in parallel with the main application and generally generates critical results faster than the full pipeline, as the cull pipeline fetches and shades only the position attribute of the vertices, without performing rasterization and rendering of the pixels to the frame buffer. The cull pipeline can use the generated critical results to compute visibility information for all the triangles, regardless of whether those triangles are culled. The full pipeline (which in this instance may be termed a replay pipeline) can consume the visibility information to skip the culled triangles and shade only the visible triangles that are finally passed to the rasterization phase.

In one embodiment, the additional fixed function logic 238 can also include machine-learning acceleration logic, such as fixed function matrix multiplication logic, for implementations that include optimizations for machine learning training or inference.
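As a rough illustration of the cull-and-replay scheme described above, consider the following C++ sketch. It is a minimal software model, not the hardware pipeline: the visibility test shown (a signed-area back-face check on position-only shaded vertices, assuming w > 0) and all type names are assumptions chosen for brevity.

#include <array>
#include <vector>

struct Vec4 { float x, y, z, w; };
using Triangle = std::array<Vec4, 3>;  // clip-space vertex positions only

// Cull phase: fetch and shade only vertex positions, then record a
// per-triangle visibility bit. Back-facing triangles are marked invisible.
std::vector<bool> cullPass(const std::vector<Triangle>& tris) {
    std::vector<bool> visible(tris.size());
    for (size_t i = 0; i < tris.size(); ++i) {
        const Triangle& t = tris[i];
        // Signed area in screen space after the perspective divide.
        float ax = t[1].x / t[1].w - t[0].x / t[0].w;
        float ay = t[1].y / t[1].w - t[0].y / t[0].w;
        float bx = t[2].x / t[2].w - t[0].x / t[0].w;
        float by = t[2].y / t[2].w - t[0].y / t[0].w;
        visible[i] = (ax * by - ay * bx) > 0.0f;  // keep front-facing only
    }
    return visible;
}

// Replay phase: the full pipeline consumes the visibility bits and shades
// only the triangles that survived culling.
void replayPass(const std::vector<Triangle>& tris,
                const std::vector<bool>& visible) {
    for (size_t i = 0; i < tris.size(); ++i) {
        if (!visible[i]) continue;  // skip a culled triangle entirely
        // ... full vertex shading, rasterization, and pixel shading ...
    }
}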
Each graphics sub-core 221A-221F includes a set of execution resources that may be used to perform graphics, media, and compute operations in response to requests by a graphics pipeline, media pipeline, or shader programs. The graphics sub-cores 221A-221F include multiple EU arrays 222A-222F, 224A-224F, thread dispatch and inter-thread communication (TD/IC) logic 223A-223F, a 3D (e.g., texture) sampler 225A-225F, a media sampler 206A-206F, a shader processor 227A-227F, and shared local memory (SLM) 228A-228F. The EU arrays 222A-222F, 224A-224F each include multiple execution units, which are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader programs. The TD/IC logic 223A-223F performs local thread dispatch and thread control operations for the execution units within a sub-core and facilitates communication between threads executing on the execution units of the sub-core. The 3D samplers 225A-225F can read texture or other 3D graphics related data into memory. The 3D samplers can read texture data differently based on a configured sample state and the texture format associated with a given texture. The media samplers 206A-206F can perform similar read operations based on the type and format associated with media data. In one embodiment, each graphics sub-core 221A-221F can alternately include a unified 3D and media sampler. Threads executing on the execution units within each of the sub-cores 221A-221F can make use of the shared local memory 228A-228F within each sub-core, enabling threads executing within a thread group to execute using a common pool of on-chip memory.

FIG. 2C illustrates a graphics processing unit (GPU) 239 that includes dedicated sets of graphics processing resources arranged into multi-core groups 240A-240N. While the details of only a single multi-core group 240A are provided, it will be appreciated that the other multi-core groups 240B-240N may be equipped with the same or similar sets of graphics processing resources.

As illustrated, a multi-core group 240A may include a set of graphics cores 243, a set of tensor cores 244, and a set of ray tracing cores 245. A scheduler/dispatcher 241 schedules and dispatches the graphics threads for execution on the various cores 243, 244, 245. A set of register files 242 store operand values used by the cores 243, 244, 245 when executing the graphics threads. These may include, for example, integer registers for storing integer values, floating point registers for storing floating point values, vector registers for storing packed data elements (integer and/or floating point data elements), and tile registers for storing tensor/matrix values. In one embodiment, the tile registers are implemented as combined sets of vector registers.

One or more combined level 1 (L1) caches and shared memory units 247 store graphics data such as texture data, vertex data, pixel data, ray data, bounding volume data, etc., locally within each multi-core group 240A. One or more texture units 247 can also be used to perform texturing operations, such as texture mapping and sampling. A Level 2 (L2) cache 253 shared by all or a subset of the multi-core groups 240A-240N stores graphics data and/or instructions for multiple concurrent graphics threads. As illustrated, the L2 cache 253 may be shared across a plurality of multi-core groups 240A-240N. One or more memory controllers 248 couple the GPU 239 to a memory 249, which may be a system memory (e.g., DRAM) and/or a dedicated graphics memory (e.g., GDDR6 memory).

Input/output (I/O) circuitry 250 couples the GPU 239 to one or more I/O devices 252, such as digital signal processors (DSPs), network controllers, or user input devices. An on-chip interconnect may be used to couple the I/O devices 252 to the GPU 239 and memory 249.
One or more I/O memory management units (IOMMUs) 251 of the I/O circuitry 250 couple the I/O devices 252 directly to the system memory 249. In one embodiment, the IOMMU 251 manages multiple sets of page tables to map virtual addresses to physical addresses in the system memory 249. In this embodiment, the I/O devices 252, CPU(s) 246, and GPU 239 may share the same virtual address space.

In one implementation, the IOMMU 251 supports virtualization. In this case, it may manage a first set of page tables to map guest/graphics virtual addresses to guest/graphics physical addresses and a second set of page tables to map the guest/graphics physical addresses to system/host physical addresses (e.g., within the system memory 249). The base addresses of each of the first and second sets of page tables may be stored in control registers and swapped out on a context switch (e.g., so that the new context is provided with access to the relevant set of page tables). While not illustrated in FIG. 2C, each of the cores 243, 244, 245 and/or multi-core groups 240A-240N may include translation lookaside buffers (TLBs) to cache guest virtual to guest physical translations, guest physical to host physical translations, and guest virtual to host physical translations.

In one embodiment, the CPUs 246, GPU 239, and I/O devices 252 are integrated on a single semiconductor chip and/or chip package. The illustrated memory 249 may be integrated on the same chip or may be coupled to the memory controllers 248 via an off-chip interface. In one implementation, the memory 249 comprises GDDR6 memory which shares the same virtual address space as other physical system-level memories, although the underlying principles of the invention are not limited to this specific implementation.

In one embodiment, the tensor cores 244 include a plurality of execution units specifically designed to perform matrix operations, which are the fundamental compute operations used to perform deep learning operations. For example, matrix multiplication operations may be used for neural network training and inferencing. The tensor cores 244 may perform matrix processing using a variety of operand precisions, including single precision floating-point (e.g., 32 bits), half-precision floating point (e.g., 16 bits), integer words (16 bits), bytes (8 bits), and half-bytes (4 bits). In one embodiment, a neural network implementation extracts features of each rendered scene, potentially combining details from multiple frames, to construct a high-quality final image.

In deep learning implementations, parallel matrix multiplication work may be scheduled for execution on the tensor cores 244. The training of neural networks, in particular, requires a significant number of matrix dot product operations. In order to process an inner-product formulation of an N x N x N matrix multiply, the tensor cores 244 may include at least N dot-product processing elements. Before the matrix multiply begins, one entire matrix is loaded into tile registers and at least one column of a second matrix is loaded on each cycle for N cycles. On each cycle, there are N dot products that are processed.

Matrix elements may be stored at different precisions depending on the particular implementation, including 16-bit words, 8-bit bytes (e.g., INT8), and 4-bit half-bytes (e.g., INT4). Different precision modes may be specified for the tensor cores 244 to ensure that the most efficient precision is used for different workloads (e.g., inferencing workloads which can tolerate quantization to bytes and half-bytes).
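A minimal C++ sketch of the inner-product formulation described above is shown below, assuming INT8 operands with 32-bit accumulation. The loop structure mirrors the description (one column of the second matrix streamed per cycle, N dot products processed per cycle), but this is a scalar software model, not a description of the actual tensor core hardware.

#include <cstdint>

constexpr int N = 4;  // illustrative tile size

void tileMatmul(const int8_t A[N][N],   // full matrix held in tile registers
                const int8_t B[N][N],
                int32_t C[N][N]) {      // accumulate at wider precision
    for (int cycle = 0; cycle < N; ++cycle) {
        // Stream in one column of the second matrix this cycle.
        int8_t bcol[N];
        for (int k = 0; k < N; ++k) bcol[k] = B[k][cycle];

        // N dot-product elements operate in parallel, one per row of A.
        for (int row = 0; row < N; ++row) {
            int32_t acc = 0;
            for (int k = 0; k < N; ++k)
                acc += int32_t(A[row][k]) * int32_t(bcol[k]);
            C[row][cycle] = acc;
        }
    }
}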
In one embodiment, the ray tracing cores 245 accelerate ray tracing operations for both real-time ray tracing and non-real-time ray tracing implementations. In particular, the ray tracing cores 245 include ray traversal/intersection circuitry for performing ray traversal using bounding volume hierarchies (BVHs) and identifying intersections between rays and primitives enclosed within the BVH volumes. The ray tracing cores 245 may also include circuitry for performing depth testing and culling (e.g., using a Z buffer or similar arrangement). In one implementation, the ray tracing cores 245 perform traversal and intersection operations in concert with the image denoising techniques described herein, at least a portion of which may be executed on the tensor cores 244. For example, in one embodiment, the tensor cores 244 implement a deep learning neural network to perform denoising of frames generated by the ray tracing cores 245. However, the CPU(s) 246, graphics cores 243, and/or ray tracing cores 245 may also implement all or a portion of the denoising and/or deep learning algorithms.

In addition, as described above, a distributed approach to denoising may be employed in which the GPU 239 is in a computing device coupled to other computing devices over a network or high-speed interconnect. In this embodiment, the interconnected computing devices share neural network learning/training data to improve the speed with which the overall system learns to perform denoising for different types of image frames and/or different graphics applications.

In one embodiment, the ray tracing cores 245 process all BVH traversal and ray-primitive intersections, saving the graphics cores 243 from being overloaded with thousands of instructions per ray. In one embodiment, each ray tracing core 245 includes a first set of specialized circuitry for performing bounding box tests (e.g., for traversal operations) and a second set of specialized circuitry for performing the ray-triangle intersection tests (e.g., intersecting rays which have been traversed). Thus, in one embodiment, the multi-core group 240A can simply launch a ray probe, and the ray tracing cores 245 independently perform ray traversal and intersection and return hit data (e.g., a hit, no hit, multiple hits, etc.) to the thread context. The other cores 243, 244 are freed to perform other graphics or compute tasks while the ray tracing cores 245 perform the traversal and intersection operations.

In one embodiment, each ray tracing core 245 includes a traversal unit to perform BVH testing operations and an intersection unit which performs ray-primitive intersection tests. The intersection unit generates a "hit", "no hit", or "multiple hit" response, which it provides to the appropriate thread. During the traversal and intersection operations, the execution resources of the other cores (e.g., graphics cores 243 and tensor cores 244) are freed to perform other forms of graphics work.
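One common form of the bounding box test such a traversal unit performs is the slab method, sketched below in C++ under assumed data layouts. The Ray, AABB, and reciprocal-direction representation are illustrative choices, not the hardware's, and the sketch assumes non-degenerate direction components.

#include <algorithm>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, invDir; };  // invDir = 1 / direction, per axis
struct AABB { Vec3 lo, hi; };          // a BVH node's bounding box

// Slab method: intersect the ray against the three axis-aligned slab pairs
// and check that the entry distance does not exceed the exit distance.
bool rayHitsBox(const Ray& r, const AABB& b, float tMax) {
    float t0 = 0.0f, t1 = tMax;
    const float o[3]  = { r.origin.x, r.origin.y, r.origin.z };
    const float id[3] = { r.invDir.x, r.invDir.y, r.invDir.z };
    const float lo[3] = { b.lo.x, b.lo.y, b.lo.z };
    const float hi[3] = { b.hi.x, b.hi.y, b.hi.z };
    for (int axis = 0; axis < 3; ++axis) {
        float tNear = (lo[axis] - o[axis]) * id[axis];
        float tFar  = (hi[axis] - o[axis]) * id[axis];
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;  // ray misses the box: prune this node
    }
    return true;                    // "hit": descend into this BVH node
}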
In one particular embodiment described below, a hybrid rasterization/ray tracing approach is used in which work is distributed between the graphics cores 243 and ray tracing cores 245.

In one embodiment, the ray tracing cores 245 (and/or other cores 243, 244) include hardware support for a ray tracing instruction set such as Microsoft's DirectX Ray Tracing (DXR), which includes a DispatchRays command, as well as ray-generation, closest-hit, any-hit, and miss shaders, which enable the assignment of unique sets of shaders and textures for each object. Another ray tracing platform which may be supported by the ray tracing cores 245, graphics cores 243, and tensor cores 244 is Vulkan 1.1.85. Note, however, that the underlying principles of the invention are not limited to any particular ray tracing ISA.

In general, the various cores 245, 244, 243 may support a ray tracing instruction set that includes instructions/functions for ray generation, closest hit, any hit, ray-primitive intersection, per-primitive and hierarchical bounding box construction, miss, visit, and exceptions. More specifically, one embodiment includes ray tracing instructions to perform the following functions:

Ray Generation - Ray generation instructions may be executed for each pixel, sample, or other user-defined work assignment.
Closest Hit - A closest hit instruction may be executed to locate the closest intersection point of a ray with primitives within a scene.
Any Hit - An any hit instruction identifies multiple intersections between a ray and primitives within a scene, potentially to identify a new closest intersection point.
Intersection - An intersection instruction performs a ray-primitive intersection test and outputs a result.
Per-primitive Bounding Box Construction - This instruction builds a bounding box around a given primitive or group of primitives (e.g., when building a new BVH or other acceleration data structure).
Miss - Indicates that a ray misses all geometry within a scene, or a specified region of a scene.
Visit - Indicates the child volumes a ray will traverse.
Exceptions - Includes various types of exception handlers (e.g., invoked for various error conditions).

FIG. 2D is a block diagram of a general-purpose graphics processing unit (GPGPU) 270 that can be configured as a graphics processor and/or compute accelerator, according to embodiments described herein. The GPGPU 270 can interconnect with host processors (e.g., one or more CPU(s) 246) and memory 271, 272 via one or more system and/or memory buses. In one embodiment, the memory 271 is system memory that may be shared with the one or more CPU(s) 246, while the memory 272 is device memory that is dedicated to the GPGPU 270. In one embodiment, components within the GPGPU 270 and device memory 272 may be mapped into memory addresses that are accessible to the one or more CPU(s) 246. Access to the memory 271 and 272 may be facilitated via a memory controller 268.
In one embodiment, the memory controller 268 includes an internal direct memory access (DMA) controller 269, or can include logic to perform operations that would otherwise be performed by a DMA controller.

The GPGPU 270 includes multiple cache memories, including an L2 cache 253, an L1 cache 254, an instruction cache 255, and shared memory 256, at least a portion of which may also be partitioned as cache memory. The GPGPU 270 also includes multiple compute units 260A-260N. Each compute unit 260A-260N includes a set of vector registers 261, scalar registers 262, vector logic units 263, and scalar logic units 264. The compute units 260A-260N can also include local shared memory 265 and a program counter 266. The compute units 260A-260N can couple with a constant cache 267, which can be used to store constant data, which is data that will not change during the run of a kernel or shader program that executes on the GPGPU 270. In one embodiment, the constant cache 267 is a scalar data cache, and cached data can be fetched directly into the scalar registers 262.

During operation, the one or more CPU(s) 246 can write commands into registers or memory in the GPGPU 270 that has been mapped into an accessible address space. The command processor 257 can read the commands from the registers or memory and determine how those commands will be processed within the GPGPU 270. A thread dispatcher 258 can then be used to dispatch threads to the compute units 260A-260N to perform those commands. Each compute unit 260A-260N can execute threads independently of the other compute units. Additionally, each compute unit 260A-260N can be independently configured for conditional computation and can conditionally output the results of computation to memory. The command processor 257 can interrupt the one or more CPU(s) 246 when the submitted commands are complete.
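The command submission and completion flow just described can be modeled, very loosely, from the host side in C++. The register layout below (MappedRegs) is entirely hypothetical, and the completion interrupt is modeled as polling a register; a real driver would sleep until the command processor interrupts the CPU.

#include <atomic>
#include <cstdint>

// Hypothetical memory-mapped device registers; the layout is illustrative
// only and does not correspond to any real device.
struct MappedRegs {
    std::atomic<uint64_t> commandAddr;  // where the command lives in memory
    std::atomic<uint32_t> submit;       // host writes 1 to submit
    std::atomic<uint32_t> done;         // device raises this on completion
};

// Host side: write a command address into the mapped address space, ring
// the submit register, then wait for completion (here, by polling).
void submitCommand(MappedRegs* regs, uint64_t cmdAddr) {
    regs->commandAddr.store(cmdAddr, std::memory_order_release);
    regs->submit.store(1, std::memory_order_release);
    while (regs->done.load(std::memory_order_acquire) == 0) {
        // A real driver would block here until the command processor
        // interrupts the CPU, rather than spinning.
    }
    regs->done.store(0, std::memory_order_release);  // acknowledge
}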
Figures 3A-3C illustrate block diagrams of additional graphics processor and compute accelerator architectures provided by embodiments described herein. The elements of FIGS. 3A-3C having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

FIG. 3A is a block diagram of a graphics processor 300, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores, or other semiconductor devices such as, but not limited to, memory devices or network interfaces. In some embodiments, the graphics processor communicates via a memory-mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. In some embodiments, the graphics processor 300 includes a memory interface 314 to access memory. The memory interface 314 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.

In some embodiments, the graphics processor 300 also includes a display controller 302 to drive display output data to a display device 318. The display controller 302 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. The display device 318 can be an internal or external display device. In one embodiment, the display device 318 is a head-mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device.

In some embodiments, the graphics processor 300 includes a video codec engine 306 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, H.265/HEVC, Alliance for Open Media (AOMedia) VP8, VP9, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG, and Motion JPEG (MJPEG) formats.

In some embodiments, the graphics processor 300 includes a block image transfer (BLIT) engine 304 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of a graphics processing engine (GPE) 310. In some embodiments, the GPE 310 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.

In some embodiments, the GPE 310 includes a 3D pipeline 312 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline 312 includes programmable and fixed function elements that perform various tasks within the elements and/or spawn execution threads to a 3D/media subsystem 315. While the 3D pipeline 312 can be used to perform media operations, an embodiment of the GPE 310 also includes a media pipeline 316 that is specifically used to perform media operations, such as video post-processing and image enhancement.

In some embodiments, the media pipeline 316 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of, the video codec engine 306. In some embodiments, the media pipeline 316 additionally includes a thread spawning unit to spawn threads for execution on the 3D/media subsystem 315. The spawned threads perform computations for the media operations on one or more graphics execution units included in the 3D/media subsystem 315.

In some embodiments, the 3D/media subsystem 315 includes logic for executing threads spawned by the 3D pipeline 312 and media pipeline 316. In one embodiment, the pipelines send thread execution requests to the 3D/media subsystem 315, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, the 3D/media subsystem 315 includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.

FIG. 3B illustrates a graphics processor 320 with a tiled architecture, according to embodiments described herein. In one embodiment, the graphics processor 320 includes a graphics processing engine cluster 322 having multiple instances of the graphics processing engine 310 of FIG. 3A within graphics engine tiles 310A-310D.
Each graphics engine tile 310A-310D can be interconnected via a set of tile interconnects 323A-323F. Each graphics engine tile 310A-310D can also be connected to a memory module or memory device 326A-326D via memory interconnects 325A-325D. The memory devices 326A-326D can use any graphics memory technology. For example, the memory devices 326A-326D may be graphics double data rate (GDDR) memory. The memory devices 326A-326D, in one embodiment, are high-bandwidth memory (HBM) modules that can be on-die with their respective graphics engine tiles 310A-310D. In one embodiment, the memory devices 326A-326D are stacked memory devices that can be stacked on top of their respective graphics engine tiles 310A-310D. In one embodiment, each graphics engine tile 310A-310D and associated memory 326A-326D reside on separate chiplets, which are bonded to a base die or base substrate, as described in further detail in FIGS. 11B-11D.

The graphics processing engine cluster 322 can connect with an on-chip or on-package fabric interconnect 324. The fabric interconnect 324 can enable communication between the graphics engine tiles 310A-310D and components such as the video codec engine 306 and one or more copy engines 304. The copy engines 304 can be used to move data out of, into, and between the memory devices 326A-326D and memory that is external to the graphics processor 320 (e.g., system memory). The fabric interconnect 324 can also be used to interconnect the graphics engine tiles 310A-310D. The graphics processor 320 can optionally include a display controller 302 to enable a connection with an external display device 318. The graphics processor can also be configured as a graphics or compute accelerator. In the accelerator configuration, the display controller 302 and display device 318 may be omitted.

The graphics processor 320 can connect to a host system via a host interface 328. The host interface 328 can enable communication between the graphics processor 320, system memory, and/or other system components. The host interface 328 can be, for example, a PCI express bus or another type of host system interface.

FIG. 3C illustrates a compute accelerator 330, according to embodiments described herein. The compute accelerator 330 can include architectural similarities with the graphics processor 320 of FIG. 3B and is optimized for compute acceleration. A compute engine cluster 332 can include a set of compute engine tiles 340A-340D that include execution logic that is optimized for parallel or vector-based general-purpose compute operations. In some embodiments, the compute engine tiles 340A-340D do not include fixed function graphics processing logic, although in one embodiment one or more of the compute engine tiles 340A-340D can include logic to perform media acceleration. The compute engine tiles 340A-340D can connect to memory 326A-326D via memory interconnects 325A-325D. The memory 326A-326D and memory interconnects 325A-325D may be similar technology as in the graphics processor 320, or can be different. The compute engine tiles 340A-340D can also be interconnected via a set of tile interconnects 323A-323F and may be connected with and/or interconnected by the fabric interconnect 324. In one embodiment, the compute accelerator 330 includes a large L3 cache 336 that can be configured as a device-wide cache.
The compute accelerator 330 can also connect to a host processor and memory via a host interface 328, in a similar manner as the graphics processor 320 of FIG. 3B.

Graphics Processing Engine

FIG. 4 is a block diagram of a graphics processing engine 410 of a graphics processor in accordance with some embodiments. In one embodiment, the graphics processing engine (GPE) 410 is a version of the GPE 310 shown in FIG. 3A, and may also represent a graphics engine tile 310A-310D of FIG. 3B. The elements of FIG. 4 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. For example, the 3D pipeline 312 and media pipeline 316 of FIG. 3A are illustrated. The media pipeline 316 is optional in some embodiments of the GPE 410 and may not be explicitly included within the GPE 410. For example, and in at least one embodiment, a separate media and/or image processor is coupled to the GPE 410.

In some embodiments, the GPE 410 couples with or includes a command streamer 403, which provides a command stream to the 3D pipeline 312 and/or media pipeline 316. In some embodiments, the command streamer 403 is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, the command streamer 403 receives commands from the memory and sends the commands to the 3D pipeline 312 and/or media pipeline 316. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline 312 and media pipeline 316. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline 312 can also include references to data stored in memory, such as, but not limited to, vertex and geometry data for the 3D pipeline 312 and/or image data and memory objects for the media pipeline 316. The 3D pipeline 312 and media pipeline 316 process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to a graphics core array 414. In one embodiment, the graphics core array 414 includes one or more blocks of graphics cores (e.g., graphics core(s) 415A, graphics core(s) 415B), each block including one or more graphics cores. Each graphics core includes a set of graphics execution resources that includes general-purpose and graphics-specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic.

In various embodiments, the 3D pipeline 312 can include fixed function and programmable logic to process one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing the instructions and dispatching execution threads to the graphics core array 414. The graphics core array 414 provides a unified block of execution resources for use in processing these shader programs.
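The ring buffer of directives described above, including the indirection into batch command buffers, can be modeled with a short C++ sketch. The Command encoding, the OP_BATCH opcode, and the head/tail bookkeeping are assumptions for illustration only and do not reflect any actual command format.

#include <cstdint>
#include <vector>

// Hypothetical command encoding: an opcode plus a payload that, for batch
// commands, indexes a secondary buffer holding multiple commands.
struct Command { uint32_t opcode; uint64_t payload; };

struct RingBuffer {
    explicit RingBuffer(size_t capacity) : slots(capacity) {}
    std::vector<Command> slots;
    size_t head = 0;  // consumer (command streamer) position
    size_t tail = 0;  // producer (host software) position

    bool push(const Command& c) {
        size_t next = (tail + 1) % slots.size();
        if (next == head) return false;    // ring is full
        slots[tail] = c;
        tail = next;
        return true;
    }
    bool pop(Command& out) {
        if (head == tail) return false;    // ring is empty
        out = slots[head];
        head = (head + 1) % slots.size();  // wrap around the ring
        return true;
    }
};

constexpr uint32_t OP_BATCH = 0xB0;  // hypothetical "execute batch" opcode

// Command-streamer loop: fetch directives from the ring; a batch command
// redirects fetching into a batch buffer before resuming the ring.
void streamCommands(RingBuffer& ring,
                    const std::vector<std::vector<Command>>& batchBuffers) {
    Command c;
    while (ring.pop(c)) {
        if (c.opcode == OP_BATCH) {
            for (const Command& bc : batchBuffers[c.payload]) {
                (void)bc;  // dispatch each batched command to a pipeline
            }
        } else {
            // dispatch c directly to the 3D pipeline or media pipeline
        }
    }
}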
Multi-purpose execution logic (e.g., execution units) within the graphics core(s) 415A-415B of the graphics core array 414 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.

In some embodiments, the graphics core array 414 includes execution logic to perform media functions, such as video and/or image processing. In one embodiment, the execution units include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations. The general-purpose logic can perform processing operations in parallel or in conjunction with the general-purpose logic within the processor core(s) 107 of FIG. 1 or the cores 202A-202N as in FIG. 2A.

Output data generated by threads executing on the graphics core array 414 can be output to memory in a unified return buffer (URB) 418. The URB 418 can store data for multiple threads. In some embodiments, the URB 418 may be used to send data between different threads executing on the graphics core array 414. In some embodiments, the URB 418 may additionally be used for synchronization between threads on the graphics core array and fixed function logic within the shared function logic 420.

In some embodiments, the graphics core array 414 is scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of the GPE 410. In one embodiment, the execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed.

The graphics core array 414 couples with shared function logic 420 that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within the shared function logic 420 are hardware logic units that provide specialized supplemental functionality to the graphics core array 414. In various embodiments, the shared function logic 420 includes, but is not limited to, sampler 421, math 422, and inter-thread communication (ITC) 423 logic. Additionally, some embodiments implement one or more cache(s) 425 within the shared function logic 420.

A shared function is implemented at least in a case where the demand for a given specialized function is insufficient for inclusion within the graphics core array 414. Instead, a single instantiation of that specialized function is implemented as a stand-alone entity in the shared function logic 420 and shared among the execution resources within the graphics core array 414. The precise set of functions that are shared between the graphics core array 414 and included within the graphics core array 414 varies across embodiments. In some embodiments, specific shared functions within the shared function logic 420 that are used extensively by the graphics core array 414 may be included within shared function logic 416 within the graphics core array 414. In various embodiments, the shared function logic 416 within the graphics core array 414 can include some or all of the logic within the shared function logic 420. In one embodiment, all logic elements within the shared function logic 420 may be duplicated within the shared function logic 416 of the graphics core array 414.
In one embodiment, the shared function logic 420 is excluded in favor of the shared function logic 416 within the graphics core array 414.

Execution Units

Figures 5A-5B illustrate thread execution logic 500, including an array of processing elements employed in a graphics processor core, according to embodiments described herein. The elements of FIGS. 5A-5B having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. Figures 5A-5B illustrate an overview of the thread execution logic 500, which may be representative of hardware logic illustrated with each sub-core 221A-221F of FIG. 2B. FIG. 5A is representative of an execution unit within a general-purpose graphics processor, while FIG. 5B is representative of an execution unit that may be used within a compute accelerator.

As illustrated in FIG. 5A, in some embodiments the thread execution logic 500 includes a shader processor 502, a thread dispatcher 504, an instruction cache 506, a scalable execution unit array including a plurality of execution units 508A-508N, a sampler 510, shared local memory 511, a data cache 512, and a data port 514. In one embodiment, the scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of execution units 508A, 508B, 508C, 508D, through 508N-1 and 508N) based on the computational requirements of a workload. In one embodiment, the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, the thread execution logic 500 includes one or more connections to memory (e.g., system memory or cache memory) through one or more of the instruction cache 506, the data port 514, the sampler 510, and the execution units 508A-508N. In some embodiments, each execution unit (e.g., 508A) is a stand-alone programmable general-purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In various embodiments, the array of execution units 508A-508N is scalable to include any number of individual execution units.

In some embodiments, the execution units 508A-508N are primarily used to execute shader programs. A shader processor 502 can process the various shader programs and dispatch execution threads associated with the shader programs via a thread dispatcher 504. In one embodiment, the thread dispatcher includes logic to arbitrate thread initiation requests from the graphics and media pipelines and instantiate the requested threads on one or more of the execution units 508A-508N. For example, a geometry pipeline can dispatch vertex, tessellation, or geometry shaders to the thread execution logic for processing. In some embodiments, the thread dispatcher 504 can also process runtime thread spawning requests from the executing shader programs.

In some embodiments, the execution units 508A-508N support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders), and general-purpose processing (e.g., compute and media shaders).
Each of the execution units 508A-508N is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment in the face of higher-latency memory accesses. Each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread state. Execution is multi-issue per clock to pipelines capable of integer, single-precision and double-precision floating-point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. While waiting for data from memory or one of the shared functions, dependency logic within the execution units 508A-508N causes a waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader. Various embodiments can apply to use of execution using Single Instruction Multiple Thread (SIMT) as an alternative to, or in addition to, use of SIMD. Reference to a SIMD core or operation can apply also to SIMT, or apply to SIMD in combination with SIMT.

Each of the execution units 508A-508N operates on arrays of data elements. The number of data elements is the "execution size," or the number of channels for the instruction. An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical arithmetic logic units (ALUs) or floating point units (FPUs) for a particular graphics processor. In some embodiments, the execution units 508A-508N support integer and floating-point data types.

The execution unit instruction set includes SIMD instructions. The various data elements can be stored as a packed data type in a register, and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register, and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.
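The packed-element views described above can be illustrated with a small C++ example. The union-based reinterpretation below relies on a widely supported compiler extension (type punning through unions) and is a conceptual model of a 256-bit register, not actual execution unit state.

#include <cstdint>
#include <cstdio>

// A 256-bit register viewed as differently sized packed data elements.
union Reg256 {
    uint64_t qw[4];  // four 64-bit quad-word (QW) elements
    uint32_t dw[8];  // eight 32-bit double-word (DW) elements
    uint16_t w[16];  // sixteen 16-bit word (W) elements
    uint8_t  b[32];  // thirty-two 8-bit byte (B) elements
};

int main() {
    Reg256 r{};
    r.dw[0] = 0x11223344u;  // write one 32-bit element...
    std::printf("%02x %02x %02x %02x\n",
                unsigned(r.b[0]), unsigned(r.b[1]),
                unsigned(r.b[2]), unsigned(r.b[3]));  // ...view it as bytes
    // On a little-endian host this prints: 44 33 22 11
    return 0;
}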
In one embodiment, one or more execution units can be combined into a fused execution unit 509A-509N having thread control logic (507A-507N) that is common to the fused EUs. Multiple EUs can be fused into an EU group. Each EU in the fused EU group can be configured to execute a separate SIMD hardware thread. The number of EUs in a fused EU group can vary according to embodiments. Additionally, various SIMD widths can be supported per EU, including, but not limited to, SIMD8, SIMD16, and SIMD32. Each fused graphics execution unit 509A-509N includes at least two execution units. For example, the fused execution unit 509A includes a first EU 508A, a second EU 508B, and thread control logic 507A that is common to the first EU 508A and the second EU 508B. The thread control logic 507A controls threads executed on the fused graphics execution unit 509A, allowing each EU within the fused execution units 509A-509N to execute using a common instruction pointer register.

The thread execution logic 500 includes one or more internal instruction caches (e.g., 506) to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g., 512) are included to cache thread data during thread execution. Threads executing on the execution logic 500 can also store explicitly managed data in the shared local memory 511. In some embodiments, a sampler 510 is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, the sampler 510 includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.

During execution, the graphics and media pipelines send thread initiation requests to the thread execution logic 500 via thread spawning and dispatch logic. Once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within the shader processor 502 is invoked to further compute output information and cause the results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, a pixel shader or fragment shader calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, pixel processor logic within the shader processor 502 then executes an application programming interface (API)-supplied pixel or fragment shader program. To execute the shader program, the shader processor 502 dispatches threads to an execution unit (e.g., 508A) via the thread dispatcher 504. In some embodiments, the shader processor 502 uses texture sampling logic in the sampler 510 to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.

In some embodiments, the data port 514 provides a memory access mechanism for the thread execution logic 500 to output processed data to memory for further processing on a graphics processor output pipeline. In some embodiments, the data port 514 includes or couples to one or more cache memories (e.g., data cache 512) to cache data for memory access via the data port.

In one embodiment, the execution logic 500 can also include a ray tracer 505 that can provide ray tracing acceleration functionality. The ray tracer 505 can support a ray tracing instruction set that includes instructions/functions for ray generation. The ray tracing instruction set can be similar to, or different from, the ray tracing instruction set supported by the ray tracing cores 245 in FIG. 2C.

FIG. 5B illustrates exemplary internal details of an execution unit 508, according to embodiments. A graphics execution unit 508 can include an instruction fetch unit 537, a general register file array (GRF) 524, an architectural register file array (ARF) 526, a thread arbiter 522, a send unit 530, a branch unit 532, a set of SIMD floating point units (FPUs) 534, and, in one embodiment, a set of dedicated integer SIMD ALUs 535.
The GRF 524 and ARF 526 include the set of general register files and architectural register files associated with each simultaneous hardware thread that may be active in the graphics execution unit 508. In one embodiment, per-thread architectural state is maintained in the ARF 526, while data used during thread execution is stored in the GRF 524. The execution state of each thread, including the instruction pointers for each thread, can be held in thread-specific registers in the ARF 526.

In one embodiment, the graphics execution unit 508 has an architecture that is a combination of simultaneous multi-threading (SMT) and fine-grained interleaved multi-threading (IMT). The architecture has a modular configuration that can be fine-tuned at design time based on the target number of simultaneous threads and the number of registers per execution unit, where execution unit resources are divided across the logic used to execute multiple simultaneous threads. The number of logical threads that may be executed by the graphics execution unit 508 is not limited to the number of hardware threads, and multiple logical threads can be assigned to each hardware thread.

In one embodiment, the graphics execution unit 508 can co-issue multiple instructions, which may each be different instructions. The thread arbiter 522 of the graphics execution unit 508 can dispatch the instructions to one of the send unit 530, the branch unit 532, or the SIMD FPU(s) 534 for execution. Each execution thread can access 128 general-purpose registers within the GRF 524, where each register can store 32 bytes, accessible as a SIMD 8-element vector of 32-bit data elements. In one embodiment, each execution unit thread has access to 4 KB within the GRF 524, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments. In one embodiment, the graphics execution unit 508 is partitioned into seven hardware threads that can independently perform computational operations, although the number of threads per execution unit can also vary according to embodiments. For example, in one embodiment, up to 16 hardware threads are supported. In an embodiment in which seven threads may access 4 KB, the GRF 524 can store a total of 28 KB. Where 16 threads may access 4 KB, the GRF 524 can store a total of 64 KB. Flexible addressing modes can permit registers to be addressed together to build effectively wider registers or to represent strided rectangular block data structures.

In one embodiment, memory operations, sampler operations, and other longer-latency system communications are dispatched via "send" instructions that are executed by the message-passing send unit 530. In one embodiment, branch instructions are dispatched to a dedicated branch unit 532 to facilitate SIMD divergence and eventual convergence.

In one embodiment, the graphics execution unit 508 includes one or more SIMD floating point units (FPUs) 534 to perform floating-point operations. In one embodiment, the FPU(s) 534 also support integer computation. In one embodiment, the FPU(s) 534 can SIMD execute up to M 32-bit floating-point (or integer) operations, or SIMD execute up to 2M 16-bit integer or 16-bit floating-point operations. In one embodiment, at least one of the FPUs provides extended math capability to support high-throughput transcendental math functions and double-precision 64-bit floating point.
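The register file capacities given above follow from simple arithmetic. The following sketch reproduces the worked numbers under the stated assumptions (128 registers of 32 bytes per thread); it is purely illustrative of the sizing relationship, not of any hardware structure.

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    const unsigned bytes_per_reg   = 32;   // each register stores 32 bytes
    const unsigned regs_per_thread = 128;  // 128 general-purpose registers
    const unsigned per_thread = bytes_per_reg * regs_per_thread;  // 4096 B = 4 KB

    for (unsigned threads : {7u, 16u})
        std::printf("%2u threads x %u KB = %u KB total GRF\n",
                    threads, per_thread / 1024, threads * per_thread / 1024);
    // 7 threads  -> 28 KB
    // 16 threads -> 64 KB
    return 0;
}
```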
In some embodiments, a set of 8-bit integer SIMD ALUs 535 is also present, and may be specifically optimized to perform operations associated with machine learning computations.

In one embodiment, arrays of multiple instances of the graphics execution unit 508 can be instantiated in a graphics sub-core grouping (e.g., a sub-slice). For scalability, product architects can choose the exact number of execution units per sub-core grouping. In one embodiment, the execution unit 508 can execute instructions across a plurality of execution channels. In another embodiment, each thread executed on the graphics execution unit 508 is executed on a different channel.

FIG. 6 illustrates an additional execution unit 600, according to an embodiment. The execution unit 600 may be, for example, a compute-optimized execution unit for use in the compute engine tiles 340A-340D of FIG. 3C, but is not limited as such. Variants of the execution unit 600 may also be used in the graphics engine tiles 310A-310D of FIG. 3B. In one embodiment, the execution unit 600 includes a thread control unit 601, a thread state unit 602, an instruction fetch/prefetch unit 603, and an instruction decode unit 604. The execution unit 600 additionally includes a register file 606 that stores registers that can be assigned to hardware threads within the execution unit. The execution unit 600 additionally includes a send unit 607 and a branch unit 608. In one embodiment, the send unit 607 and the branch unit 608 can operate similarly to the send unit 530 and the branch unit 532 of the graphics execution unit 508 of FIG. 5B.

The execution unit 600 also includes a compute unit 610 that includes multiple different types of functional units. In one embodiment, the compute unit 610 includes an ALU unit 611 that includes an array of arithmetic logic units. The ALU unit 611 can be configured to perform 64-bit, 32-bit, and 16-bit integer and floating-point operations. Integer and floating-point operations may be performed simultaneously. The compute unit 610 can also include a systolic array 612 and a math unit 613. The systolic array 612 includes a W-wide and D-deep network of data processing units that can be used to perform vector or other data-parallel operations in a systolic manner. In one embodiment, the systolic array 612 can be configured to perform matrix operations, such as matrix dot product operations. In one embodiment, the systolic array 612 supports 16-bit floating-point operations, as well as 8-bit and 4-bit integer operations. In one embodiment, the systolic array 612 can be configured to accelerate machine learning operations. In such embodiments, the systolic array 612 can be configured with support for the bfloat16 floating-point format. In one embodiment, a math unit 613 can be included to perform a specific subset of mathematical operations in an efficient and lower-power manner than the ALU unit 611. The math unit 613 can include a variant of math logic that may be found in shared function logic of a graphics processing engine provided by other embodiments (e.g., the math logic 422 of the shared function logic 420 of FIG. 4). In one embodiment, the math unit 613 can be configured to perform 32-bit and 64-bit floating-point operations.

The thread control unit 601 includes logic to control the execution of threads within the execution unit.
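Before detailing the thread control unit 601 further, the matrix dot-product operation accelerated by the systolic array 612 described above can be illustrated with a scalar reference model. This sketch is a functional model only (the actual array computes these dot products in a pipelined, systolic manner), and the bfloat16 conversion shown assumes a simple truncating implementation.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Scalar reference for the matrix dot-product operation a W-wide, D-deep
// systolic array accelerates: every element of C is a K-element dot product.
std::vector<float> matmul(const std::vector<float>& A,  // N x K, row-major
                          const std::vector<float>& B,  // K x M, row-major
                          int N, int K, int M) {
    std::vector<float> C(static_cast<size_t>(N) * M, 0.0f);
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < M; ++j) {
            float acc = 0.0f;
            for (int k = 0; k < K; ++k)
                acc += A[i * K + k] * B[k * M + j];
            C[i * M + j] = acc;
        }
    return C;
}

// bfloat16 storage as the upper 16 bits of an IEEE-754 float (truncating
// conversion assumed here for illustration).
uint16_t to_bf16(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    return static_cast<uint16_t>(bits >> 16);
}
```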
The thread control unit 601 can include thread arbitration logic to start, stop, and preempt the execution of threads within the execution unit 600. The thread state unit 602 can be used to store thread state for threads assigned to execute on the execution unit 600. Storing the thread state within the execution unit 600 enables the rapid preemption of threads when those threads become blocked or idle. The instruction fetch/prefetch unit 603 can fetch instructions from an instruction cache of higher-level execution logic (e.g., the instruction cache 506 as in FIG. 5A). The instruction fetch/prefetch unit 603 can also issue prefetch requests for instructions to be loaded into the instruction cache based on an analysis of currently executing threads. The instruction decode unit 604 can be used to decode instructions to be executed by the compute units. In one embodiment, the instruction decode unit 604 can be used as a secondary decoder to decode complex instructions into constituent micro-operations.

The execution unit 600 additionally includes a register file 606 that can be used by hardware threads executing on the execution unit 600. Registers in the register file 606 can be divided across the logic used to execute multiple simultaneous threads within the compute unit 610 of the execution unit 600. The number of logical threads that may be executed by the graphics execution unit 600 is not limited to the number of hardware threads, and multiple logical threads can be assigned to each hardware thread. The size of the register file 606 can vary across embodiments based on the number of supported hardware threads. In one embodiment, register renaming may be used to dynamically allocate registers to hardware threads.

FIG. 7 is a block diagram illustrating graphics processor instruction formats 700, according to some embodiments. In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats. The solid-lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a subset of the instructions. In some embodiments, the instruction formats 700 described and illustrated are macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed.

In some embodiments, the graphics processor execution units natively support instructions in a 128-bit instruction format 710. A 64-bit compacted instruction format 730 is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit instruction format 710 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 730. The native instructions available in the 64-bit format 730 vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field 713. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format 710. Other sizes and formats of instruction can be used.

For each format, the instruction opcode 712 defines the operation that the execution unit is to perform.
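A minimal structural sketch of the native and compacted formats described above follows. The field widths, member layout, and compaction-table contents below are hypothetical placeholders, since the actual encodings are embodiment-specific; only the index-lookup expansion scheme reflects the text.

```cpp
#include <array>
#include <cstdint>

// Hypothetical field layout for the native 128-bit format 710; the widths
// are placeholders, not the real encoding.
struct Native128 {
    uint8_t  opcode;       // opcode 712
    uint8_t  control;      // instruction control field 714
    uint8_t  exec_size;    // execution size field 716
    uint8_t  access_mode;  // access/address mode field 726
    uint16_t dst;          // destination 718
    uint16_t src0, src1;   // source operands 720, 722
    uint16_t src2;         // optional third source 724
    // a real encoding would pad out to a full 128 bits
};

// A compacted 64-bit instruction carries index values (index field 713);
// hardware expands it via compaction tables into the native format.
struct Compact64 {
    uint8_t opcode;
    uint8_t ctrl_index;    // index into a control compaction table
    uint8_t src_index;     // index into an operand compaction table
};

Native128 expand(const Compact64& c,
                 const std::array<uint8_t, 32>& ctrl_table,
                 const std::array<uint16_t, 32>& src_table) {
    Native128 n{};
    n.opcode  = c.opcode;                       // opcode carried directly
    n.control = ctrl_table[c.ctrl_index & 31];  // table lookups reconstruct
    n.src0    = src_table[c.src_index & 31];    //   the full-width fields
    return n;
}
```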
The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction, the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, an instruction control field 714 enables control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For instructions in the 128-bit instruction format 710, an execution size field 716 limits the number of data channels that will be executed in parallel. In some embodiments, the execution size field 716 is not available for use in the 64-bit compact instruction format 730.

Some execution unit instructions have up to three operands, including two source operands, src0 720, src1 722, and one destination 718. In some embodiments, the execution units support dual-destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 724), where the instruction opcode 712 determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.

In some embodiments, the 128-bit instruction format 710 includes an access/address mode field 726 specifying, for example, whether a direct register addressing mode or an indirect register addressing mode is used. When a direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction.

In some embodiments, the 128-bit instruction format 710 includes an access/address mode field 726, which specifies an address mode and/or an access mode for the instruction. In one embodiment, the access mode is used to define a data access alignment for the instruction. Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction can use byte-aligned addressing for source and destination operands, and when in a second mode, the instruction can use 16-byte-aligned addressing for all source and destination operands.

In one embodiment, the address mode portion of the access/address mode field 726 determines whether the instruction is to use direct or indirect addressing. When a direct register addressing mode is used, bits in the instruction directly provide the register address of one or more operands. When an indirect register addressing mode is used, the register address of one or more operands can be computed based on an address register value and an address immediate field in the instruction.

In some embodiments, instructions are grouped based on the bit-field of the opcode 712 to simplify opcode decode 740. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. In some embodiments, a move and logic opcode group 742 includes data movement and logic instructions (e.g., move (mov), compare (cmp)).
In some embodiments, the move and logic group 742 shares the five most significant bits (MSBs), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group 744 (e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 746 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 748 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group 748 performs the arithmetic operations in parallel across data channels. A vector math group 750 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands. In one embodiment, the illustrated opcode decode 740 can be used to determine which portion of an execution unit will be used to execute a decoded instruction. For example, some instructions may be designated as systolic instructions that will be performed by a systolic array. Other instructions, such as ray tracing instructions (not shown), can be routed to ray tracing logic or a ray tracing core within a slice or partition of execution logic.

Graphics pipeline

FIG. 8 is a block diagram of another embodiment of a graphics processor 800. Elements of FIG. 8 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

In some embodiments, the graphics processor 800 includes a geometry pipeline 820, a media pipeline 830, a display engine 840, thread execution logic 850, and a render output pipeline 870. In some embodiments, the graphics processor 800 is a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to the graphics processor 800 via a ring interconnect 802. In some embodiments, the ring interconnect 802 couples the graphics processor 800 to other processing components, such as other graphics processors or general-purpose processors. Commands from the ring interconnect 802 are interpreted by a command streamer 803, which supplies instructions to individual components of the geometry pipeline 820 or the media pipeline 830.

In some embodiments, the command streamer 803 directs the operation of a vertex fetcher 805 that reads vertex data from memory and executes vertex-processing commands provided by the command streamer 803. In some embodiments, the vertex fetcher 805 provides vertex data to a vertex shader 807, which performs coordinate space transformation and lighting operations on each vertex. In some embodiments, the vertex fetcher 805 and the vertex shader 807 execute vertex-processing instructions by dispatching execution threads to execution units 852A-852B via a thread dispatcher 831.

In some embodiments, the execution units 852A-852B are an array of vector processors having an instruction set for performing graphics and media operations.
In some embodiments, the execution units 852A-852B have an attached L1 cache 851 that is specific to each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.

In some embodiments, the geometry pipeline 820 includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader 811 configures the tessellation operations. A programmable domain shader 817 provides back-end evaluation of the tessellation output. A tessellator 813 operates at the direction of the hull shader 811 and contains special-purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to the geometry pipeline 820. In some embodiments, if tessellation is not used, the tessellation components (e.g., hull shader 811, tessellator 813, and domain shader 817) can be bypassed.

In some embodiments, complete geometric objects can be processed by a geometry shader 819 via one or more threads dispatched to the execution units 852A-852B, or can proceed directly to the clipper 829. In some embodiments, the geometry shader operates on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If tessellation is disabled, the geometry shader 819 receives input from the vertex shader 807. In some embodiments, the geometry shader 819 is programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled.

Before rasterization, a clipper 829 processes vertex data. The clipper 829 may be a fixed-function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer and depth test component 873 in the render output pipeline 870 dispatches pixel shaders to convert the geometric objects into per-pixel representations. In some embodiments, pixel shader logic is included in the thread execution logic 850. In some embodiments, an application can bypass the rasterizer and depth test component 873 and access un-rasterized vertex data via a stream out unit 823.

The graphics processor 800 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing among the major components of the processor. In some embodiments, the execution units 852A-852B and associated logic units (e.g., L1 cache 851, sampler 854, texture cache 858, etc.) interconnect via a data port 856 to perform memory access and communicate with the render output pipeline components of the processor. In some embodiments, the sampler 854, caches 851, 858, and execution units 852A-852B each have separate memory access paths. In one embodiment, the texture cache 858 can also be configured as a sampler cache.

In some embodiments, the render output pipeline 870 contains a rasterizer and depth test component 873 that converts vertex-based objects into an associated pixel-based representation. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed-function triangle and line rasterization. An associated render cache 878 and depth cache 879 are also available in some embodiments.
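The conditional staging described above (the tessellation components and the geometry shader participating only when enabled) can be summarized in control-flow form. The sketch below is a software model with placeholder types and trivial stage stubs, not the hardware pipeline; the reference numerals in the comments tie the stubs back to the description.

```cpp
#include <vector>

// Placeholder primitive stream; the real pipeline operates on hardware state.
struct Prims { std::vector<float> verts; };

Prims vertex_stage(const Prims& in)   { return in; }  // vertex fetcher 805 / shader 807
Prims tessellate(const Prims& in)     { return in; }  // hull 811 / tessellator 813 / domain 817
Prims geometry_shade(const Prims& in) { return in; }  // geometry shader 819
void  clip_and_rasterize(const Prims&) {}             // clipper 829 onward

void run_geometry_pipeline(const Prims& input, bool use_tess, bool use_gs) {
    Prims prims = vertex_stage(input);
    if (use_tess) prims = tessellate(prims);      // otherwise these stages are bypassed
    if (use_gs)   prims = geometry_shade(prims);  // otherwise straight to the clipper
    clip_and_rasterize(prims);
}
```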
A pixel operations component 877 performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g., bit-block image transfers with blending) are performed by the 2D engine 841, or substituted at display time by the display controller 843 using overlay display planes. In some embodiments, a shared L3 cache 875 is available to all graphics components, allowing the sharing of data without the use of main system memory.

In some embodiments, the graphics processor media pipeline 830 includes a media engine 837 and a video front end 834. In some embodiments, the video front end 834 receives pipeline commands from the command streamer 803. In some embodiments, the media pipeline 830 includes a separate command streamer. In some embodiments, the video front end 834 processes media commands before sending the commands to the media engine 837. In some embodiments, the media engine 837 includes thread spawning functionality to spawn threads for dispatch to the thread execution logic 850 via the thread dispatcher 831.

In some embodiments, the graphics processor 800 includes a display engine 840. In some embodiments, the display engine 840 is external to the processor 800 and couples with the graphics processor via the ring interconnect 802, or some other interconnect bus or fabric. In some embodiments, the display engine 840 includes a 2D engine 841 and a display controller 843. In some embodiments, the display engine 840 contains special-purpose logic capable of operating independently of the 3D pipeline. In some embodiments, the display controller 843 couples with a display device (not shown), which may be a system-integrated display device, as in a laptop computer, or an external display device attached via a display device connector.

In some embodiments, the geometry pipeline 820 and the media pipeline 830 are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for the Open Graphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute APIs, all from the Khronos Group. In some embodiments, support may also be provided for the Direct3D library from Microsoft Corporation. In some embodiments, a combination of these libraries may be supported. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.

Graphics pipeline programming

FIG. 9A is a block diagram illustrating a graphics processor command format 900 according to some embodiments. FIG. 9B is a block diagram illustrating a graphics processor command sequence 910 according to an embodiment. The solid-lined boxes in FIG. 9A illustrate the components that are generally included in a graphics command, while the dashed lines include components that are optional or that are only included in a subset of the graphics commands. The exemplary graphics processor command format 900 of FIG. 9A includes data fields to identify a client 902, a command operation code (opcode) 904, and data 906 for the command.
A sub-opcode 905 and a command size 908 are also included in some commands.

In some embodiments, the client 902 specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode 904 and, if present, the sub-opcode 905 to determine the operation to perform. The client unit performs the command using information in the data field 906. For some commands, an explicit command size 908 is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments, commands are aligned via multiples of a double word. Other command formats can be used.

The flow diagram in FIG. 9B illustrates an exemplary graphics processor command sequence 910. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only, as embodiments are not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands in at least partially concurrent fashion.

In some embodiments, the graphics processor command sequence 910 may begin with a pipeline flush command 912 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline 922 and the media pipeline 924 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked "dirty" can be flushed to memory. In some embodiments, the pipeline flush command 912 can be used for pipeline synchronization or before placing the graphics processor into a low-power state.

In some embodiments, a pipeline select command 913 is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, a pipeline select command 913 is required only once within an execution context before issuing pipeline commands, unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command 912 is required immediately before a pipeline switch via the pipeline select command 913.

In some embodiments, a pipeline control command 914 configures a graphics pipeline for operation and is used to program the 3D pipeline 922 and the media pipeline 924. In some embodiments, the pipeline control command 914 configures the pipeline state for the active pipeline.
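Before continuing with the pipeline control command, the command header fields described above can be sketched as follows. The bit positions, the meaning of the length field, and the parsing convention below are assumptions for illustration only, not the actual encoding; only the field names and the double-word alignment reflect the text.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical packing of the header fields described above
// (client 902, opcode 904, sub-opcode 905, size 908).
struct CommandHeader {
    uint32_t raw;
    uint32_t client()     const { return (raw >> 29) & 0x7;  }
    uint32_t opcode()     const { return (raw >> 23) & 0x3F; }
    uint32_t sub_opcode() const { return (raw >> 16) & 0x7F; }
    uint32_t dword_len()  const { return  raw        & 0xFF; }  // explicit size 908
};

// Commands are aligned to multiples of a double word, so a parser can walk
// the buffer in 32-bit units, using the explicit size when present.
size_t parse_commands(const uint32_t* buf, size_t dwords) {
    size_t count = 0, pos = 0;
    while (pos < dwords) {
        CommandHeader h{buf[pos]};
        size_t len = h.dword_len() + 2;  // assumed: length excludes the header
        // a real parser would route to the client unit identified by h.client()
        pos += len;
        ++count;
    }
    return count;  // number of commands walked
}
```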
In one embodiment, the pipeline control command 914 is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.

In some embodiments, a return buffer state command 916 is used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross-thread communication. In some embodiments, the return buffer state 916 includes selecting the size and number of return buffers to use for a set of pipeline operations.

The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination 920, the command sequence is tailored to the 3D pipeline 922 beginning with a 3D pipeline state 930, or to the media pipeline 924 beginning at a media pipeline state 940.

The commands to configure the 3D pipeline state 930 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, the 3D pipeline state 930 commands are also able to selectively disable or bypass certain pipeline elements, if those elements will not be used.

In some embodiments, a 3D primitive 932 command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 932 command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 932 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, the 3D primitive 932 command is used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, the 3D pipeline 922 dispatches shader execution threads to graphics processor execution units.

In some embodiments, the 3D pipeline 922 is triggered via an execute 934 command or event. In some embodiments, a register write triggers command execution. In some embodiments, execution is triggered via a "go" or "kick" command in the command sequence. In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back-end operations may also be included for those operations.

In some embodiments, the graphics processor command sequence 910 follows the media pipeline 924 path when performing media operations. In general, the specific use and manner of programming for the media pipeline 924 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode.
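Before turning to the media path in more detail, the ordering of the 3D path of command sequence 910 described above can be summarized programmatically. The opcode values in this sketch are hypothetical placeholders; only the ordering of the commands reflects the description.

```cpp
#include <cstdint>
#include <vector>

// Placeholder opcodes; the real encodings are embodiment-specific.
enum Cmd : uint32_t {
    PIPELINE_FLUSH = 1, PIPELINE_SELECT, PIPELINE_CONTROL,
    RETURN_BUFFER_STATE, STATE_3D, PRIMITIVE_3D, EXECUTE,
};

std::vector<uint32_t> build_3d_sequence() {
    std::vector<uint32_t> seq;
    seq.push_back(PIPELINE_FLUSH);       // 912: complete pending commands
    seq.push_back(PIPELINE_SELECT);      // 913: once per context, unless both pipelines used
    seq.push_back(PIPELINE_CONTROL);     // 914: configure state for the active pipeline
    seq.push_back(RETURN_BUFFER_STATE);  // 916: size/number of return buffers
    seq.push_back(STATE_3D);             // 930: vertex buffer/element, depth buffer state, etc.
    seq.push_back(PRIMITIVE_3D);         // 932: submit primitives to the 3D pipeline
    seq.push_back(EXECUTE);              // 934: the "go"/"kick" trigger
    return seq;
}
```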
In some embodiments, the media pipeline can also be bypassed, and media decode can be performed in whole or in part using resources provided by one or more general-purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processing unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives.

In some embodiments, the media pipeline 924 is configured in a similar manner as the 3D pipeline 922. A set of commands to configure the media pipeline state 940 are dispatched or placed into a command queue before the media object commands 942. In some embodiments, the commands for the media pipeline state 940 include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. In some embodiments, the commands for the media pipeline state 940 also support the use of one or more pointers to "indirect" state elements that contain a batch of state settings.

In some embodiments, media object commands 942 supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. In some embodiments, all media pipeline states must be valid before issuing a media object command 942. Once the pipeline state is configured and the media object commands 942 are queued, the media pipeline 924 is triggered via an execute command 944 or an equivalent execute event (e.g., a register write). Output from the media pipeline 924 may then be post-processed by operations provided by the 3D pipeline 922 or the media pipeline 924. In some embodiments, GPGPU operations are configured and executed in a similar manner as media operations.

Graphics software architecture

FIG. 10 illustrates an exemplary graphics software architecture for a data processing system 1000 according to some embodiments. In some embodiments, the software architecture includes a 3D graphics application 1010, an operating system 1020, and at least one processor 1030. In some embodiments, the processor 1030 includes a graphics processor 1032 and one or more general-purpose processor core(s) 1034. The graphics application 1010 and the operating system 1020 each execute in the system memory 1050 of the data processing system.

In some embodiments, the 3D graphics application 1010 contains one or more shader programs including shader instructions 1012. The shader language instructions may be in a high-level shader language, such as the High-Level Shader Language (HLSL) of Direct3D, the OpenGL Shader Language (GLSL), and so forth. The application also includes executable instructions 1014 in a machine language suitable for execution by the general-purpose processor core 1034. The application also includes graphics objects 1016 defined by vertex data.

In some embodiments, the operating system 1020 is an operating system from Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. The operating system 1020 can support a graphics API 1022, such as the Direct3D API, the OpenGL API, or the Vulkan API.
When the Direct3D API is in use, the operating system 1020 uses a front-end shader compiler 1024 to compile any shader instructions 1012 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation, or the application can perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application 1010. In some embodiments, the shader instructions 1012 are provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API.

In some embodiments, a user-mode graphics driver 1026 contains a back-end shader compiler 1027 to convert the shader instructions 1012 into a hardware-specific representation. When the OpenGL API is in use, shader instructions 1012 in the GLSL high-level language are passed to the user-mode graphics driver 1026 for compilation. In some embodiments, the user-mode graphics driver 1026 uses operating system kernel-mode functions 1028 to communicate with a kernel-mode graphics driver 1029. In some embodiments, the kernel-mode graphics driver 1029 communicates with the graphics processor 1032 to dispatch commands and instructions.

IP core implementations

One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as "IP cores," are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein.

FIG. 11A is a block diagram illustrating an IP core development system 1100 that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system 1100 may be used to generate modular, re-usable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility 1130 can generate a software simulation 1110 of an IP core design in a high-level programming language (e.g., C/C++). The software simulation 1110 can be used to design, test, and verify the behavior of the IP core using a simulation model 1112. The simulation model 1112 may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design 1115 can then be created or synthesized from the simulation model 1112. The RTL design 1115 is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design 1115, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized.
Accordingly, the particular details of the initial design and simulation may vary.

The RTL design 1115 or equivalent may be further synthesized by the design facility into a hardware model 1120, which may be in a hardware description language (HDL) or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a third-party fabrication facility 1165 using non-volatile memory 1140 (e.g., a hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection 1150 or a wireless connection 1160. The fabrication facility 1165 may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.

FIG. 11B illustrates a cross-section side view of an integrated circuit package assembly 1170, according to some embodiments described herein. The integrated circuit package assembly 1170 illustrates an implementation of one or more processor or accelerator devices as described herein. The package assembly 1170 includes multiple units of hardware logic 1172, 1174 connected to a substrate 1180. The logic 1172, 1174 may be implemented at least partly in configurable logic or fixed-functionality logic hardware, and can include one or more portions of any of the processor core(s), graphics processor(s), or other accelerator devices described herein. Each unit of logic 1172, 1174 can be implemented within a semiconductor die and coupled with the substrate 1180 via an interconnect structure 1173. The interconnect structure 1173 may be configured to route electrical signals between the logic 1172, 1174 and the substrate 1180, and can include interconnects such as, but not limited to, bumps or pillars. In some embodiments, the interconnect structure 1173 may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic 1172, 1174. In some embodiments, the substrate 1180 is an epoxy-based laminate substrate. The substrate 1180 may include other suitable types of substrates in other embodiments. The package assembly 1170 can be connected to other electrical devices via a package interconnect 1183. The package interconnect 1183 may be coupled to a surface of the substrate 1180 to route electrical signals to other electrical devices, such as a motherboard, other chipset, or multi-chip module.

In some embodiments, the units of logic 1172, 1174 are electrically coupled with a bridge 1182 that is configured to route electrical signals between the logic 1172, 1174. The bridge 1182 may be a dense interconnect structure that provides a route for electrical signals. The bridge 1182 may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic 1172, 1174.

Although two units of logic 1172, 1174 and a bridge 1182 are illustrated, embodiments described herein may include more or fewer logic units on one or more dies. The one or more dies may be connected by zero or more bridges, as the bridge 1182 may be excluded when the logic is included on a single die.
Alternatively, multiple dies or units of logic can be connected by one or more bridges. Additionally, multiple logic units, dies, and bridges can be connected together in other possible configurations, including three-dimensional configurations.

FIG. 11C illustrates a package assembly 1190 that includes multiple units of hardware logic chiplets connected to a substrate 1180 (e.g., a base die). A graphics processing unit, parallel processor, and/or compute accelerator as described herein can be composed from diverse silicon chiplets that are separately manufactured. In this context, a chiplet is an at least partially packaged integrated circuit that includes distinct units of logic that can be assembled with other chiplets into a larger package. A diverse set of chiplets with different IP core logic can be assembled into a single device. Additionally, the chiplets can be integrated into a base die or base chiplet using active interposer technology. The concepts described herein enable the interconnection and communication between the different forms of IP within the GPU. IP cores can be manufactured using different process technologies and composed during manufacturing, which avoids the complexity of converging multiple IPs to the same manufacturing process, especially on a large SoC with several flavors of IP. Enabling the use of multiple process technologies improves time to market and provides a cost-effective way to create multiple product SKUs. Additionally, the disaggregated IPs are more amenable to independent power gating: components that are not in use on a given workload can be powered off, reducing overall power consumption.

The hardware logic chiplets can include special-purpose hardware logic chiplets 1172, logic or I/O chiplets 1174, and/or memory chiplets 1175. The hardware logic chiplets 1172 and logic or I/O chiplets 1174 may be implemented at least partly in configurable logic or fixed-functionality logic hardware, and can include one or more portions of any of the processor core(s), graphics processor(s), parallel processors, or other accelerator devices described herein. The memory chiplets 1175 can be DRAM (e.g., GDDR, HBM) memory or cache (SRAM) memory.

Each chiplet can be fabricated as a separate semiconductor die and coupled with the substrate 1180 via an interconnect structure 1173. The interconnect structure 1173 may be configured to route electrical signals between the various chiplets and the logic within the substrate 1180. The interconnect structure 1173 can include interconnects such as, but not limited to, bumps or pillars. In some embodiments, the interconnect structure 1173 may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic, I/O, and memory chiplets.

In some embodiments, the substrate 1180 is an epoxy-based laminate substrate. The substrate 1180 may include other suitable types of substrates in other embodiments. The package assembly 1190 can be connected to other electrical devices via a package interconnect 1183. The package interconnect 1183 may be coupled to a surface of the substrate 1180 to route electrical signals to other electrical devices, such as a motherboard, other chipset, or multi-chip module.

In some embodiments, a logic or I/O chiplet 1174 and a memory chiplet 1175 can be electrically coupled via a bridge 1187 that is configured to route electrical signals between the logic or I/O chiplet 1174 and the memory chiplet 1175.
The bridge 1187 may be a dense interconnect structure that provides a route for electrical signals. The bridge 1187 may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic or I/O chiplet 1174 and the memory chiplet 1175. The bridge 1187 may also be referred to as a silicon bridge or an interconnect bridge. For example, in some embodiments, the bridge 1187 is an Embedded Multi-die Interconnect Bridge (EMIB). In some embodiments, the bridge 1187 may simply be a direct connection from one chiplet to another chiplet.

The substrate 1180 can include hardware components for I/O 1191, cache memory 1192, and other hardware logic 1193. A fabric 1185 can be embedded in the substrate 1180 to enable communication between the various logic chiplets and the logic 1191, 1193 within the substrate 1180. In one embodiment, the I/O 1191, fabric 1185, cache, bridge, and other hardware logic 1193 can be integrated into a base die that is layered on top of the substrate 1180.

In various embodiments, the package assembly 1190 can include a fewer or greater number of components and chiplets that are interconnected by a fabric 1185 or one or more bridges 1187. The chiplets within the package assembly 1190 may be arranged in a 3D or 2.5D arrangement. In general, bridge structures 1187 may be used to facilitate a point-to-point interconnect between, for example, logic or I/O chiplets and memory chiplets. The fabric 1185 can be used to interconnect the various logic and/or I/O chiplets (e.g., chiplets 1172, 1174, 1191, 1193) with other logic and/or I/O chiplets. In one embodiment, the cache memory 1192 within the substrate can act as a global cache for the package assembly 1190, part of a distributed global cache, or as a dedicated cache for the fabric 1185.

FIG. 11D illustrates a package assembly 1194 including interchangeable chiplets 1195, according to an embodiment. The interchangeable chiplets 1195 can be assembled into standardized slots on one or more base chiplets 1196, 1198. The base chiplets 1196, 1198 can be coupled via a bridge interconnect 1197, which can be similar to the other bridge interconnects described herein and may be, for example, an EMIB. Memory chiplets can also be connected to logic or I/O chiplets via a bridge interconnect. I/O and logic chiplets can communicate via an interconnect fabric. The base chiplets can each support one or more slots in a standardized format for one of logic or I/O or memory/cache.

In one embodiment, SRAM and power delivery circuits can be fabricated into one or more of the base chiplets 1196, 1198, which can be fabricated using a different process technology relative to the interchangeable chiplets 1195 that are stacked on top of the base chiplets. For example, the base chiplets 1196, 1198 can be fabricated using a larger process technology, while the interchangeable chiplets can be manufactured using a smaller process technology. One or more of the interchangeable chiplets 1195 may be memory (e.g., DRAM) chiplets. Different memory densities can be selected for the package assembly 1194 based on the power and/or performance targeted for the product that uses the package assembly 1194. Additionally, logic chiplets with a different number or type of functional units can be selected at the time of assembly based on the power and/or performance targeted for the product.
Additionally, chiplets containing differing types of IP logic cores can be inserted into the interchangeable chiplet slots, enabling hybrid processor designs that can mix-and-match IP blocks of different technologies.

Exemplary System-on-Chip Integrated Circuit

FIGS. 12-13 illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores.

FIG. 12 is a block diagram illustrating an exemplary system-on-chip integrated circuit 1200 that may be fabricated using one or more IP cores, according to an embodiment. The exemplary integrated circuit 1200 includes one or more application processor(s) 1205 (e.g., CPUs), at least one graphics processor 1210, and may additionally include an image processor 1215 and/or a video processor 1220, any of which may be a modular IP core from the same or multiple different design facilities. The integrated circuit 1200 includes peripheral or bus logic including a USB controller 1225, a UART controller 1230, an SPI/SDIO controller 1235, and an I2S/I2C controller 1240. Additionally, the integrated circuit can include a display device 1245 coupled to one or more of a high-definition multimedia interface (HDMI) controller 1250 and a mobile industry processor interface (MIPI) display interface 1255. Storage may be provided by a flash memory subsystem 1260 including flash memory and a flash memory controller. A memory interface may be provided via a memory controller 1265 for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 1270.

FIGS. 13-14 are block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments described herein. FIG. 13 illustrates an exemplary graphics processor 1310 of a system-on-chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. FIG. 14 illustrates an additional exemplary graphics processor 1340 of a system-on-chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. The graphics processor 1310 of FIG. 13 is an example of a low-power graphics processor core. The graphics processor 1340 of FIG. 14 is an example of a higher-performance graphics processor core. Each of the graphics processors 1310, 1340 can be a variant of the graphics processor 1210 of FIG. 12.

As shown in FIG. 13, the graphics processor 1310 includes a vertex processor 1305 and one or more fragment processor(s) 1315A-1315N (e.g., 1315A, 1315B, 1315C, 1315D, through 1315N-1, and 1315N). The graphics processor 1310 can execute different shader programs via separate logic, such that the vertex processor 1305 is optimized to execute operations for vertex shader programs, while the one or more fragment processor(s) 1315A-1315N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. The vertex processor 1305 performs the vertex processing stage of the 3D graphics pipeline and generates primitives and vertex data. The fragment processor(s) 1315A-1315N use the primitive and vertex data generated by the vertex processor 1305 to produce a framebuffer that is displayed on a display device.
In one embodiment, the fragment processor(s) 1315A-1315N are optimized to execute fragment shader programs as provided for in the OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in the Direct 3D API.

The graphics processor 1310 additionally includes one or more memory management units (MMUs) 1320A-1320B, cache(s) 1325A-1325B, and circuit interconnect(s) 1330A-1330B. The one or more MMU(s) 1320A-1320B provide for virtual-to-physical address mapping for the graphics processor 1310, including for the vertex processor 1305 and/or fragment processor(s) 1315A-1315N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in the one or more cache(s) 1325A-1325B. In one embodiment, the one or more MMU(s) 1320A-1320B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s) 1205, image processor 1215, and/or video processor 1220 of FIG. 12, such that each processor 1205-1220 can participate in a shared or unified virtual memory system. According to embodiments, the one or more circuit interconnect(s) 1330A-1330B enable the graphics processor 1310 to interface with other IP cores within the SoC, either via an internal bus of the SoC or via a direct connection.

As shown in FIG. 14, the graphics processor 1340 includes the one or more MMU(s) 1320A-1320B, cache(s) 1325A-1325B, and circuit interconnect(s) 1330A-1330B of the graphics processor 1310 of FIG. 13. The graphics processor 1340 includes one or more shader cores 1355A-1355N (e.g., 1355A, 1355B, 1355C, 1355D, 1355E, 1355F, through 1355N-1, and 1355N), which provide for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. The exact number of shader cores present can vary among embodiments and implementations. Additionally, the graphics processor 1340 includes an inter-core task manager 1345, which acts as a thread dispatcher to dispatch execution threads to the one or more shader cores 1355A-1355N, and a tiling unit 1358 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize the use of internal caches.

As described above, in order to quantize a vertex component into an NV-bit signed space, the exponent of each vertex component is subtracted from the global exponent of the axis; the component values are then shifted down by the difference. This may, of course, sacrifice some precision in the lower bits of the mantissa. To capture this loss, the AABB is generated by rounding the minimum values down and the maximum values up after this shift. For simplicity, the vertices are quantized into the unit AABB even if there is no error in the quantization process.

Ray tracing architecture

In one embodiment, the graphics processor includes circuitry and/or program code to perform real-time ray tracing. In some embodiments, the graphics processor includes a set of dedicated ray tracing cores to perform the various ray tracing operations described herein, including ray traversal and/or ray intersection operations.
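Returning briefly to the vertex quantization described above before continuing with the ray tracing architecture: the conservative rounding behind the quantized AABB can be sketched as follows. This model substitutes an explicit rescale for the exponent-based shift of the actual scheme, and treats the bit width and scaling as assumptions; only the floor/ceil rounding directions reflect the text.

```cpp
#include <cmath>
#include <cstdint>

// Conservative quantization of one axis of an AABB into an N-bit signed
// range: the minimum is rounded down and the maximum rounded up, so the
// quantized box always contains the original extent. The rescale below
// stands in for the exponent-subtraction shift described above.
struct QuantAxis { int32_t lo, hi; };

QuantAxis quantize_axis(float vmin, float vmax,
                        float axis_min, float axis_max, int bits) {
    const float scale = ((1u << (bits - 1)) - 1) / (axis_max - axis_min);
    QuantAxis q;
    q.lo = static_cast<int32_t>(std::floor((vmin - axis_min) * scale));  // round down
    q.hi = static_cast<int32_t>(std::ceil ((vmax - axis_min) * scale));  // round up
    return q;  // [q.lo, q.hi] conservatively covers [vmin, vmax]
}
```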
In addition to the ray tracing cores, one embodiment includes multiple sets of graphics processing cores to perform programmable shading operations and multiple sets of tensor cores to perform matrix operations on tensor data.

FIG. 15 illustrates an exemplary portion of one such graphics processing unit (GPU) 1505, which includes dedicated sets of graphics processing resources arranged into multi-core groups 1500A-N. While the details of only a single multi-core group 1500A are provided, it will be appreciated that the other multi-core groups 1500B-N may be equipped with the same or similar sets of graphics processing resources.

As illustrated, a multi-core group 1500A may include a set of graphics cores 1530, a set of tensor cores 1540, and a set of ray tracing cores 1550. A scheduler/dispatcher 1510 schedules and dispatches the graphics threads for execution on the various cores 1530, 1540, 1550. A set of register files 1520 store operand values used by the cores 1530, 1540, 1550 when executing the graphics threads. These may include, for example, integer registers for storing integer values, floating-point registers for storing floating-point values, vector registers for storing packed data elements (integer and/or floating-point data elements), and tile registers for storing tensor/matrix values. In one embodiment, the tile registers are implemented as combined sets of vector registers.

One or more Level 1 (L1) caches and texture units 1560 store graphics data such as texture data, vertex data, pixel data, ray data, bounding volume data, etc., locally within each multi-core group 1500A. A Level 2 (L2) cache 1580, shared by all or a subset of the multi-core groups 1500A-N, stores instructions and/or graphics data for multiple concurrent graphics threads. As illustrated, the L2 cache 1580 may be shared across a plurality of multi-core groups 1500A-N. One or more memory controllers 1570 couple the GPU 1505 to a memory 1598, which may be a system memory (e.g., DRAM) and/or a dedicated graphics memory (e.g., GDDR6 memory).

Input/output (IO) circuitry 1595 couples the GPU 1505 to one or more IO devices 1590, such as digital signal processors (DSPs), network controllers, or user input devices. An on-chip interconnect may be used to couple the I/O devices 1590 to the GPU 1505 and memory 1598. One or more IO memory management units (IOMMUs) 1570 of the IO circuitry 1595 couple the IO devices 1590 directly to the system memory 1598. In one embodiment, the IOMMU 1570 manages multiple sets of page tables to map virtual addresses to physical addresses in the system memory 1598. In this embodiment, the IO devices 1590, CPU(s) 1599, and GPU(s) 1505 may share the same virtual address space.

In one embodiment, the IOMMU 1570 supports virtualization. In this case, it may manage a first set of page tables to map guest/graphics virtual addresses to guest/graphics physical addresses and a second set of page tables to map the guest/graphics physical addresses to system/host physical addresses (e.g., within the system memory 1598). The base addresses of each of the first and second sets of page tables may be stored in control registers and swapped out on a context switch (e.g., so that the new context is provided with access to the relevant set of page tables). While not illustrated in FIG. 15, each of the cores 1530, 1540, 1550 and/or multi-core groups 1500A-N may include translation lookaside buffers (TLBs) to cache guest virtual to guest physical translations, guest physical to host physical translations, and guest virtual to host physical translations.
Although not shown in FIG. 15, each of the cores 1530, 1540, 1550 and/or the multi-core groups 1500A-N may include translation lookaside buffers (TLBs) to cache guest-virtual to guest-physical translations, guest-physical to host-physical translations, and guest-virtual to host-physical translations.

In one embodiment, the CPU 1599, the GPU 1505, and the IO devices 1590 are integrated on a single semiconductor chip and/or chip package. The memory 1598 shown may be integrated on the same chip, or may be coupled to the memory controllers 1570 via an off-chip interface. In one embodiment, the memory 1598 comprises GDDR6 memory that shares the same virtual address space as other physical system-level memories, although the underlying principles of the invention are not limited to this specific embodiment.

In one embodiment, the tensor cores 1540 include a plurality of execution units specifically designed to perform matrix operations, which are the fundamental compute operations used to perform deep learning operations. For example, matrix multiplication operations may be used for neural network training and inference. The tensor cores 1540 may perform matrix processing using a variety of operand precisions, including single-precision floating point (e.g., 32 bits), half-precision floating point (e.g., 16 bits), integer words (16 bits), bytes (8 bits), and nibbles (4 bits). In one embodiment, a neural network implementation extracts features of each rendered scene, potentially combining details from multiple frames, to construct a high-quality final image.

In deep learning implementations, parallel matrix multiplication work may be scheduled for execution on the tensor cores 1540. The training of neural networks, in particular, requires a significant number of matrix dot-product operations. In order to process an inner-product formulation of an N×N×N matrix multiplication, the tensor cores 1540 may include at least N dot-product processing elements. Before the matrix multiplication begins, one complete matrix is loaded into the tile registers, and at least one column of the second matrix is loaded in each of N cycles. In each cycle, N dot products are processed, as illustrated in the sketch following this passage.

Depending on the particular implementation, matrix elements may be stored at different precisions, including 16-bit words, 8-bit bytes (e.g., INT8), and 4-bit nibbles (e.g., INT4). Different precision modes may be specified for the tensor cores 1540 to ensure that the most efficient precision is used for different workloads (for example, inference workloads that can tolerate quantization to bytes and nibbles).

In one embodiment, the ray tracing cores 1550 accelerate ray tracing operations for both real-time ray tracing and non-real-time ray tracing implementations. In particular, the ray tracing cores 1550 include ray traversal/intersection circuitry for performing ray traversal using bounding volume hierarchies (BVHs) and identifying intersections between rays and primitives enclosed within the BVH volumes. The ray tracing cores 1550 may also include circuitry for performing depth testing and culling (for example, using a Z buffer or similar arrangement). In one embodiment, the ray tracing cores 1550 perform traversal and intersection operations in concert with the image denoising techniques described herein, at least a portion of which may be executed on the tensor cores 1540. For example, in one embodiment, the tensor cores 1540 implement a deep learning neural network to perform denoising on frames generated by the ray tracing cores 1550.
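As a rough illustration of the N×N×N inner-product formulation mentioned above, the following sketch accumulates an output tile one column of B per step, with N dot products produced per step. This is a software illustration of the dataflow, not the tensor core hardware itself; the choice of N = 4 and of int8 operands with int32 accumulation (mirroring the byte-precision mode) is an assumption.

    #include <cstdint>

    constexpr int N = 4;  // illustrative tile size

    // Inner-product formulation: A is held resident (as if preloaded into
    // tile registers); one column of B is consumed per "cycle", and the N
    // dot products for that column are produced together.
    void matmulByColumns(const int8_t A[N][N], const int8_t B[N][N],
                         int32_t C[N][N]) {
        for (int col = 0; col < N; ++col) {      // one column of B per cycle
            for (int row = 0; row < N; ++row) {  // N dot products per cycle
                int32_t acc = 0;                 // widened accumulator
                for (int k = 0; k < N; ++k)
                    acc += int32_t(A[row][k]) * int32_t(B[k][col]);
                C[row][col] = acc;
            }
        }
    }

A lower-precision mode (INT4 nibbles, for instance) would change only the operand type and the quantization of the inputs; the accumulation pattern stays the same.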
However, the denoising and/or deep learning algorithms may also be implemented, in whole or in part, by the CPU 1599, the graphics cores 1530, and/or the ray tracing cores 1550.

In addition, as described above, a distributed denoising approach may be employed, in which the GPU 1505 is in a computing device coupled to other computing devices over a network or high-speed interconnect. In this embodiment, the interconnected computing devices share neural network learning/training data to improve the speed with which the overall system learns to perform denoising for different types of image frames and/or different graphics applications.

In one embodiment, the ray tracing cores 1550 handle all BVH traversal and ray-primitive intersections, so that the graphics cores 1530 are not overloaded with thousands of instructions per ray. In one embodiment, each ray tracing core 1550 includes a first set of specialized circuitry for performing bounding box tests (e.g., for traversal operations) and a second set of specialized circuitry for performing ray-triangle intersection tests (e.g., intersecting rays that have been traversed). Thus, in one embodiment, the multi-core group 1500A can simply launch a ray probe, and the ray tracing cores 1550 independently perform ray traversal and intersection and return hit data (e.g., a hit, no hit, multiple hits, etc.) to the thread context. While the ray tracing cores 1550 perform the traversal and intersection operations, the other cores 1530, 1540 are freed to perform other graphics or compute work.

In one embodiment, each ray tracing core 1550 includes a traversal unit for performing BVH test operations and an intersection unit for performing ray-primitive intersection tests. The intersection unit generates a "hit", "no hit", or "multiple hit" response, which it provides to the appropriate thread. During the traversal and intersection operations, the execution resources of the other cores (e.g., the graphics cores 1530 and the tensor cores 1540) are freed to perform other forms of graphics work.

In a particular embodiment described below, a hybrid rasterization/ray tracing approach is used, in which work is distributed between the graphics cores 1530 and the ray tracing cores 1550.

In one embodiment, the ray tracing cores 1550 (and/or the other cores 1530, 1540) include hardware support for a ray tracing instruction set such as Microsoft's DirectX Ray Tracing (DXR), which includes a DispatchRays command, as well as ray-generation, closest-hit, any-hit, and miss shaders, which enable the assignment of unique sets of shaders and textures to each object. Another ray tracing platform that may be supported by the ray tracing cores 1550, graphics cores 1530, and tensor cores 1540 is Vulkan 1.1.85. Note, however, that the underlying principles of the invention are not limited to any particular ray tracing ISA.

In general, the various cores 1550, 1540, 1530 may support a ray tracing instruction set that includes instructions/functions for ray generation, closest hit, any hit, ray-primitive intersection, per-primitive and hierarchical bounding box construction, miss, visit, and exceptions.
More specifically, one embodiment includes ray tracing instructions to perform the following functions:

Ray generation - Ray generation instructions may be executed for each pixel, sample, or other user-defined work assignment.

Closest hit - A closest hit instruction may be executed to locate the closest intersection of a ray with primitives within a scene.

Any hit - An any hit instruction identifies multiple intersections between a ray and primitives within a scene, potentially identifying a new closest intersection point.

Intersection - An intersection instruction performs a ray-primitive intersection test and outputs a result.

Per-primitive bounding box construction - This instruction builds a bounding box around a given primitive or group of primitives (for example, when building a new BVH or other acceleration data structure).

Miss - Indicates that a ray misses all geometry within a scene, or within a specified region of a scene.

Visit - Indicates the child volumes a ray will traverse.

Exceptions - Includes various types of exception handlers (for example, invoked for various error conditions).

Lossy and lossless packet compression in a distributed ray tracing system

In one embodiment, ray tracing operations are distributed across multiple compute nodes coupled together over a network. For example, Figure 16 shows a ray tracing cluster 1600 including a plurality of ray tracing nodes 1610-1613 that perform ray tracing operations in parallel, potentially combining the results on one of the nodes. In the architecture shown, the ray tracing nodes 1610-1613 are communicatively coupled to a client-side ray tracing application 1630 via a gateway.

One of the difficulties with a distributed architecture is the large amount of packetized data that must be transmitted between each of the ray tracing nodes 1610-1613. In one embodiment, both lossless compression techniques and lossy compression techniques are used to reduce the data transmitted between the ray tracing nodes 1610-1613.

To achieve lossless compression, rather than sending packets filled with the results of certain types of operations, data or commands are sent that allow the receiving node to reconstruct the results. For example, stochastically sampled area-light and ambient occlusion (AO) operations do not necessarily need directions. Consequently, in one embodiment, the sending node will simply send a random seed, which the receiving node then uses to perform the random sampling. For example, if a scene is distributed across nodes 1610-1612, to sample light 1 at points p1-p3, only the light ID and origins need to be sent to nodes 1610-1612. Each of the nodes may then sample the light stochastically and independently. In one embodiment, the random seed is generated by the receiving node. Similarly, for primary ray hit points, ambient occlusion (AO) and soft-shadow sampling can be computed on nodes 1610-1612 without waiting for the original points for successive frames. Additionally, if it is known that a group of rays will go to the same point light source, an instruction may be sent identifying the light source to the receiving node, which applies it to the group of rays. As another example, if there are N ambient occlusion rays transmitted from a single point, a command may be sent to generate N samples from that point (a sketch of this seed-replay idea follows this passage).

Various additional techniques may be applied for lossy compression.
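The seed-replay idea described above can be sketched as follows. This is an illustrative reconstruction rather than the patent's wire format; the message layout, field names, and the choice of RNG and sampling distribution are all assumptions.

    #include <cmath>
    #include <cstdint>
    #include <random>

    // Instead of shipping N fully specified sample rays, the sender
    // transmits only the light ID, the hit point, the sample count, and a
    // random seed; the receiver regenerates identical sample directions.
    struct SampleCommand {
        uint32_t lightId;
        float    origin[3];   // primary-ray hit point
        uint16_t numSamples;  // e.g., 64 shadow rays
        uint64_t seed;        // replayed by the receiver
    };

    void regenerateDirections(const SampleCommand& cmd, float (*dirs)[3]) {
        std::mt19937_64 rng(cmd.seed);  // same seed => same samples
        std::uniform_real_distribution<float> u(-1.0f, 1.0f);
        for (int i = 0; i < cmd.numSamples; ++i) {
            float x, y, z, len2;
            do {  // rejection-sample the unit ball, then normalize
                x = u(rng); y = u(rng); z = u(rng);
                len2 = x * x + y * y + z * z;
            } while (len2 > 1.0f || len2 == 0.0f);
            const float inv = 1.0f / std::sqrt(len2);
            dirs[i][0] = x * inv; dirs[i][1] = y * inv; dirs[i][2] = z * inv;
        }
    }

Because both sides run the same generator from the same seed, the receiver reconstructs exactly the rays the sender would have transmitted, at the cost of a few bytes of metadata.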
As an example of lossy compression, in one embodiment a quantization factor may be used to quantize all coordinate values associated with the BVH, primitives, and rays. In addition, 32-bit floating-point values used for data such as BVH nodes and primitives may be converted into 8-bit integer values. In one particular embodiment, the bounds of a group of rays are stored at full precision, but the individual ray points P1-P3 are sent as indexed offsets relative to those bounds. Similarly, a plurality of local coordinate systems may be generated which use 8-bit integer values as local coordinates. The location of the origin of each of these local coordinate systems may be encoded using full-precision (e.g., 32-bit floating-point) values, effectively connecting the global and local coordinate systems.

The following is an example of lossless compression employed in one embodiment of the invention. An example of a ray data format used internally in a ray tracing program is sketched following this passage.

Instead of sending the raw data for each node generated, the data can be compressed by grouping values and, where possible, creating implicit rays using applicable metadata.

Bundling and grouping ray data

One embodiment uses flags for common data or masks with modifiers. For example:

RayPacket.rays = ray_1 to ray_256

Origins are all shared

All ray data is packed, except that only a single origin is stored across all of the rays. RayPacket.flags is set to RAYPACKET_COMMON_ORIGIN. When the RayPacket is unpacked upon receipt, the origins are filled in from the single origin value.

Origins are shared only among some rays

All ray data is packed, except for rays that share origins. For each group of unique shared origins, an operator is packed that identifies the operation (shared origins), stores the origin, and masks which rays share the information. Such an operation can be done on any shared values among nodes, such as material IDs, primitive IDs, origins, directions, normals, etc.

Sending implicit rays

Often, ray data can be derived on the receiving end using minimal meta-information to generate it. A very common example is generating multiple secondary rays to stochastically sample an area. Instead of the sender generating a secondary ray, sending it, and the receiver operating on it, the sender can send a command that a ray needs to be generated, along with any dependent information, and the ray is generated on the receiving end. In the case where the ray needs to be first generated by the sender to determine which receiver to send it to, the ray is generated and a random seed can be sent to regenerate the exact same ray.

For example, to sample a hit point with 64 shadow rays sampling an area light source, all 64 rays intersect regions handled by the same compute node N4. A RayPacket with a common origin and normal is created. More data could be sent if one wished the receiver to shade the resulting pixel contribution, but for this example let us assume we wish only to return whether a ray hits another node's data. A RayOperation is created for a generate-shadow-ray operation and is assigned the lightID value to be sampled and the random number seed. When N4 receives the ray packet, it generates the fully filled ray data by filling in the shared origin data for all rays and setting the directions based on the lightID stochastically sampled with the random number seed, thereby generating the same rays the original sender generated.
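Fulfilling the forward reference above, here is a minimal sketch of what such a ray data format and common-origin packing might look like. The field names, sizes, and flag values are illustrative assumptions, not the format from the patent.

    #include <cstdint>
    #include <vector>

    constexpr uint32_t RAYPACKET_COMMON_ORIGIN = 1u << 0;

    // Full-precision ray record as it might exist inside the tracer.
    struct Ray {
        float origin[3];
        float direction[3];
        float tMin, tMax;
    };

    // Wire format: when all rays share an origin, it is stored once and
    // only per-ray fields are packed; the receiver re-expands on unpack.
    struct RayPacket {
        uint32_t flags = 0;
        float    commonOrigin[3] = {0, 0, 0};
        std::vector<float> directions;  // 3 floats per ray
    };

    RayPacket pack(const std::vector<Ray>& rays) {
        RayPacket p;
        p.flags = RAYPACKET_COMMON_ORIGIN;
        for (int k = 0; k < 3; ++k) p.commonOrigin[k] = rays[0].origin[k];
        for (const Ray& r : rays)
            p.directions.insert(p.directions.end(),
                                r.direction, r.direction + 3);
        return p;
    }

    std::vector<Ray> unpack(const RayPacket& p) {
        std::vector<Ray> rays(p.directions.size() / 3);
        for (size_t i = 0; i < rays.size(); ++i) {
            for (int k = 0; k < 3; ++k) {
                rays[i].origin[k]    = p.commonOrigin[k];  // from one value
                rays[i].direction[k] = p.directions[3 * i + k];
            }
            rays[i].tMin = 0.0f;
            rays[i].tMax = 1e30f;
        }
        return rays;
    }

In the shadow-ray example above, the RayOperation would additionally carry the lightID and seed, so that the directions need not be packed at all.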
When the results are returned, only a binary result for each ray needs to be sent back, which can be handed over to the rays through the mask.

In this example, sending the original 64 rays would use 104 bytes * 64 rays = 6656 bytes. If the returning rays were also sent in their raw form, this would double to 13,312 bytes. Using lossless compression, where only the common ray origin and normal, the ray generation operation, and the seed and ID are sent, only 29 bytes are sent, with 8 bytes returned for the intersection mask. This yields a data compression ratio of roughly 360:1 for what must be sent over the network. This does not include the overhead of processing the message itself, which must be identified in some way, but that is implementation dependent. Other operations may be performed, such as recomputing ray origins and directions from the pixelID of the primary ray, recomputing pixelIDs based on the ranges in the RayPacket, and many other possible recomputations of values. Similar operations can be used for any single ray or group of rays sent, including shadows, reflections, refraction, ambient occlusion, intersections, volume intersections, shading, bounce reflections in path tracing, etc.

Figure 17 shows additional details of two ray tracing nodes 1710-1711 that perform compression and decompression of ray tracing packets. In particular, in one embodiment, when a first ray tracing engine 1730 is ready to transmit data to a second ray tracing engine 1731, ray compression circuitry 1720 performs lossy and/or lossless compression of the ray tracing data as described herein (for example, converting 32-bit values to 8-bit values, substituting raw data with instructions for reconstructing the data, etc.). The compressed ray packets 1701 are transmitted from network interface 1725 to network interface 1726 over a local network (for example, 10 Gb/s or 100 Gb/s Ethernet). Ray decompression circuitry then decompresses the ray packets where appropriate. For example, it may execute commands to reconstruct the ray tracing data (e.g., using a random seed to perform stochastic sampling for lighting operations). The ray tracing engine 1731 then uses the received data to perform ray tracing operations.

In the reverse direction, ray compression circuitry 1741 compresses ray data, network interface 1726 transmits the compressed ray data over the network (e.g., using the techniques described herein), ray decompression circuitry 1740 decompresses the ray data when necessary, and the ray tracing engine 1730 uses the data in ray tracing operations. Although shown as separate units in FIG. 17, the ray compression/decompression circuitry 1740-1741 may be integrated within the ray tracing engines 1730-1731, respectively. For example, to the extent that the compressed ray data comprises commands to reconstruct the ray data, those commands may be executed by each respective ray tracing engine 1730-1731.

As shown in FIG. 18, the ray compression circuitry 1720 may include lossy compression circuitry 1801 for performing the lossy compression techniques described herein (for example, converting 32-bit floating-point coordinates to 8-bit integer coordinates) and lossless compression circuitry 1803 for performing lossless compression techniques (for example, transmitting commands and data to allow the ray recompression circuitry 1821 to reconstruct the data).
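The ~360:1 figure quoted earlier in this passage can be checked directly from the byte counts given; the split of the 37 transferred bytes into 29 sent and 8 returned follows the text, while treating the return mask as one bit per ray is an assumption.

    #include <cstdio>

    int main() {
        const int rawOut   = 104 * 64;          // 64 raw rays out: 6656 bytes
        const int rawBack  = rawOut;            // raw rays back: 6656 bytes
        const int rawTotal = rawOut + rawBack;  // 13,312 bytes round trip

        const int sent     = 29;  // common origin/normal, op, lightID, seed
        const int returned = 8;   // 64-bit intersection mask, 1 bit per ray

        std::printf("compression ratio ~%.0f:1\n",
                    double(rawTotal) / double(sent + returned));  // ~360:1
        return 0;
    }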
Correspondingly, the ray decompression circuitry 1721 includes lossy decompression circuitry 1802 and lossless decompression circuitry 1804 for performing lossless decompression.

A method in accordance with one embodiment is set out below. The method may be implemented on the ray tracing architectures described herein, but is not limited to any particular architecture. Ray data is received which is to be transmitted from a first ray tracing node to a second ray tracing node. Lossy compression circuitry performs lossy compression on first ray tracing data, and lossless compression circuitry performs lossless compression on second ray tracing data. The compressed ray tracing data is transmitted to the second ray tracing node, where lossy/lossless decompression circuitry performs lossy/lossless decompression of the ray tracing data, and the second ray tracing node performs ray tracing operations using the decompressed data.

Hybrid ray tracing graphics processor with hardware acceleration

One embodiment of the invention includes a hybrid rendering pipeline that performs rasterization on the graphics cores 1530 and ray tracing operations on the ray tracing cores 1550, the graphics cores 1530, and/or the CPU 1599 cores. For example, rasterization and depth testing may be performed on the graphics cores 1530 in place of a primary ray casting stage. The ray tracing cores 1550 may then generate secondary rays for reflections, refraction, and shadows. In addition, certain embodiments may select particular regions of a scene in which the ray tracing cores 1550 will perform ray tracing operations (for example, based on material property thresholds, such as high reflectivity levels), while other regions of the scene are rendered with rasterization on the graphics cores 1530. In one embodiment, this hybrid implementation is used for real-time ray tracing applications, where latency is a critical concern.

One embodiment of the ray traversal architecture described below performs programmable shading and control of ray traversal using existing single instruction multiple data (SIMD) and/or single instruction multiple thread (SIMT) graphics processors, while accelerating critical functions, such as BVH traversal and/or intersection, using dedicated hardware. In this embodiment, SIMD occupancy for incoherent paths is improved by regrouping spawned shaders at specific points during traversal and before shading. This is achieved using dedicated hardware that sorts shaders dynamically on-chip. Recursion is managed by splitting a function into continuations that execute upon return, and continuations are regrouped before execution in order to improve SIMD occupancy.

Programmable control of ray traversal/intersection is achieved by decomposing traversal functionality into an internal traversal, which can be implemented as fixed-function hardware, and an external traversal, which executes on the GPU processors and provides programmable control through user-defined traversal shaders. The cost of transferring the traversal context between hardware and software is reduced by conservatively truncating the internal traversal state during the transition between the internal and external traversals.

Programmable control of ray tracing can be expressed through the different shader types listed in Table A below. There can be multiple shaders of each type.
For example, each material can have a different hit shader.

Shader type      Function
Primary          Launches primary rays
Hit              Bidirectional reflectance distribution function (BRDF) sampling; launches secondary rays
Any hit          Computes transmittance for alpha-textured geometry
Miss             Computes radiance from a light source
Intersection     Intersects custom shapes
Traversal        Instance selection and transformation
Callable         A general-purpose function that can be called

Table A

In one embodiment, recursive ray tracing is initiated by an API function that commands the graphics processor to launch a set of primary shaders or intersection circuitry, which can spawn ray-scene intersections for primary rays. This in turn spawns other shaders, such as traversal shaders, hit shaders, or miss shaders. A shader that spawns a child shader can also receive a return value from that child shader. Callable shaders are general-purpose functions that can be directly spawned by another shader and can also return values to the calling shader.

FIG. 19 shows an embodiment of a graphics processing architecture, which includes shader execution circuitry 1900 and fixed-function circuitry 1910. The general-purpose execution hardware subsystem includes a plurality of single instruction multiple data (SIMD) and/or single instruction multiple thread (SIMT) cores/execution units (EUs) 1901 (i.e., each core may include multiple execution units), one or more samplers 1902, and a level 1 (L1) cache 1903 or other form of local storage. The fixed-function hardware subsystem 1910 includes a message unit 1904, a scheduler 1907, ray-BVH traversal/intersection circuitry 1905, classification circuitry 1908, and a local L1 cache 1906.

In operation, the primary dispatcher 1909 dispatches a set of primary rays to the scheduler 1907, which schedules work to shaders executed on the SIMD/SIMT cores/EUs 1901. The SIMD cores/EUs 1901 may be the ray tracing cores 1550 and/or the graphics cores 1530 described above. Execution of the primary shaders spawns additional work to be performed (for example, to be executed by one or more child shaders and/or by the fixed-function hardware). The message unit 1904 distributes work spawned by the SIMD cores/EUs 1901 to the scheduler 1907 (accessing the free stack pool as needed), the classification circuitry 1908, or the ray-BVH intersection circuitry 1905. If the additional work is sent to the scheduler 1907, it is scheduled for processing on the SIMD/SIMT cores/EUs 1901. Prior to scheduling, the classification circuitry 1908 may sort the rays into groups or bins as described herein (for example, grouping rays with similar characteristics). The ray-BVH intersection circuitry 1905 performs intersection testing of rays using BVH volumes. For example, the ray-BVH intersection circuitry 1905 may compare ray coordinates with each level of the BVH to identify the volumes intersected by a ray.

Shaders can be referenced using a shader record, a user-allocated structure that includes a pointer to the entry function, vendor-specific metadata, and global parameters for the shader executed by the SIMD cores/EUs 1901. Each executing instance of a shader is associated with a call stack, which may be used to store parameters passed between a parent shader and a child shader.
Call stacks may also store references to continuation functions that are executed when a call returns.

Figure 20 shows an example set of assigned stacks 2001, which includes a primary shader stack, a hit shader stack, a traversal shader stack, a continuation function stack, and a ray-BVH intersection stack (which, as described, may be executed by the fixed-function hardware 1910). New shader invocations may implement new stacks from a free stack pool 2002. Call stacks may be cached in the local L1 caches 1903, 1906 to reduce the latency of accesses.

In one embodiment, there are a finite number of call stacks, each with a fixed maximum size "Sstack" allocated in a contiguous region of memory. Therefore, the base address of a stack can be directly computed from a stack index (SID) as base address = SID * Sstack, as sketched following this passage. In one embodiment, stack IDs are allocated and deallocated by the scheduler 1907 when scheduling work to the SIMD cores/EUs 1901.

In one embodiment, the primary dispatcher 1909 comprises a graphics processor command processor that dispatches primary shaders in response to dispatch commands from the host (e.g., a CPU). The scheduler 1907 receives these dispatch requests and launches a primary shader on a SIMD processor thread if it can allocate a stack ID for each SIMD lane. Stack IDs are allocated from the free stack pool 2002, which is initialized at the beginning of the dispatch command.

An executing shader can spawn a child shader by sending a spawn message to the messaging unit 1904. This command includes the stack IDs associated with the shader, and also includes a pointer to the child shader record for each active SIMD lane. A parent shader can only issue this message once for an active lane. In one embodiment, the parent shader terminates after sending spawn messages for all relevant lanes.

Shaders executed on the SIMD cores/EUs 1901 can also spawn fixed-function tasks, such as ray-BVH intersections, using spawn messages with a shader record pointer reserved for the fixed-function hardware. As mentioned, the messaging unit 1904 sends spawned ray-BVH intersection work to the fixed-function ray-BVH intersection circuitry 1905 and sends callable shaders directly to the classification circuitry 1908. In one embodiment, the classification circuitry groups the shaders by their shader record pointers to derive SIMD batches with similar characteristics. Accordingly, stack IDs from different parent shaders may be grouped into the same batch by the classification circuitry 1908. The classification circuitry 1908 sends grouped batches to the scheduler 1907, which accesses the shader records from graphics memory 2511 or the last level cache (LLC) 1920 and launches the shader on a processor thread.

In one embodiment, continuations are treated as callable shaders and may also be referenced through shader records. When a child shader is spawned and returns a value to the parent shader, a pointer to the continuation shader record is pushed onto the call stack 2001. When the child shader returns, the continuation shader record is popped from the call stack 2001 and the continuation shader is spawned. Spawned continuations go through the classification unit similarly to callable shaders and are launched on processor threads.

As shown in FIG. 21, an embodiment of the classification circuitry 1908 groups spawned tasks by their shader record pointers 2101A, 2101B, 2101n to create SIMD batches for shading.
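Returning to the fixed-size call stacks described above, the base-address computation reduces to a single multiply-add. The region base and stack size constants in this sketch are arbitrary assumptions for illustration.

    #include <cstdint>

    // Each call stack occupies a fixed maximum size Sstack within a
    // contiguous memory region, so a stack's base address follows
    // directly from its stack ID: base address = SID * Sstack.
    constexpr uint64_t kStackRegionBase = 0x10000000;  // assumed region start
    constexpr uint64_t kSstack          = 4 * 1024;    // assumed max stack size

    uint64_t stackBase(uint32_t sid) {
        return kStackRegionBase + uint64_t(sid) * kSstack;
    }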
Within a sorted batch, the stack IDs or context IDs can be grouped from different dispatches and different input SIMD lanes. In one embodiment, grouping circuitry 2110 performs the sorting using a content addressable memory (CAM) structure 2101 comprising multiple entries, with each entry identified by a tag 2101. As mentioned, in one embodiment the tag 2101 is the corresponding shader record pointer 2101A, 2101B, 2101n. In one embodiment, the CAM structure 2101 stores a limited number of tags (e.g., 32, 64, 128, etc.), each associated with an incomplete SIMD batch corresponding to a shader record pointer.

For an incoming spawn command, each SIMD lane has a corresponding stack ID (shown as 16 context IDs 0-15 in each CAM entry) and a shader record pointer 2101A-B,...,n (acting as the tag value). In one embodiment, the grouping circuitry 2110 compares the shader record pointer of each lane against the tags 2101 in the CAM structure 2101 to find a matching batch. If a matching batch is found, the stack ID/context ID is added to that batch. Otherwise, a new entry with a new shader record pointer tag is created, possibly evicting an older entry with an incomplete batch. A software analogue of this CAM-based batching is sketched following this passage.

When its call stack is empty, an executing shader can deallocate the call stack by sending a deallocate message to the message unit. The deallocate message is relayed to the scheduler, which returns the stack IDs/context IDs of the active SIMD lanes to the free pool.

One embodiment of the invention implements a hybrid approach to ray traversal operations, using a combination of fixed-function ray traversal and software ray traversal. Consequently, it provides the flexibility of software traversal while maintaining the efficiency of fixed-function traversal. FIG. 22 shows an acceleration structure that may be used for hybrid traversal, namely a two-level tree with a single top-level BVH 2200 and several bottom-level BVHs 2201 and 2202. Graphical elements are shown on the right to indicate internal traversal paths 2203, external traversal paths 2204, traversal nodes 2205, leaf nodes 2206 with triangles, and leaf nodes 2207 with custom primitives.

The leaf nodes 2206 with triangles in the top-level BVH 2200 can reference triangles, intersection shader records for custom primitives, or traversal shader records. The leaf nodes 2206 with triangles of the bottom-level BVHs 2201-2202 can only reference triangles and intersection shader records for custom primitives. The type of reference is encoded within the leaf node 2206. Internal traversal 2203 refers to traversal within each BVH 2200-2202. Internal traversal operations include the computation of ray-BVH intersections, while traversal across the BVH structures 2200-2202 is known as external traversal. Internal traversal operations can be implemented efficiently in fixed-function hardware, while external traversal operations can be performed with acceptable performance by programmable shaders. Consequently, one embodiment of the invention performs internal traversal operations using the fixed-function circuitry 1910 and performs external traversal operations using the shader execution circuitry 1900, which includes the SIMD cores/EUs 1901 for executing programmable shaders.

In one embodiment, when a ray intersects a traversal node during an internal traversal, a traversal shader is spawned.
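The CAM-based batching of FIG. 21, referenced above, has a straightforward software analogue. The batch width of 16, the map-based tag lookup, and the omission of an eviction policy are simplifications for illustration, not the hardware design.

    #include <cstdint>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    constexpr size_t kSimdWidth = 16;  // 16 context IDs per batch, per FIG. 21

    struct Batch { std::vector<uint32_t> stackIds; };

    // Tag -> incomplete batch, keyed by shader record pointer as in the CAM.
    std::unordered_map<uint64_t, Batch> cam;

    // Add one lane; returns true (and fills *out) when a batch is complete
    // and ready for SIMD dispatch.
    bool addLane(uint64_t shaderRecordPtr, uint32_t stackId, Batch* out) {
        Batch& b = cam[shaderRecordPtr];  // match the tag, or make a new entry
        b.stackIds.push_back(stackId);    // stack IDs may come from different
                                          // parents and different dispatches
        if (b.stackIds.size() == kSimdWidth) {
            *out = std::move(b);
            cam.erase(shaderRecordPtr);
            return true;                  // dispatch this SIMD batch
        }
        return false;
    }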
The classification circuitry 1908 groups these traversal shaders by their shader record pointers 2101A-B, n to create SIMD batches, which are launched by the scheduler 1907 for SIMD execution on the graphics SIMD cores/EUs 1901. Traversal shaders can modify traversal in several ways, enabling a wide range of applications. For example, a traversal shader can select a BVH at a coarser level of detail (LOD), or transform a ray to enable rigid-body transformations. The traversal shader then spawns an internal traversal for the selected BVH.

The internal traversal computes ray-BVH intersections by traversing the BVH and computing ray-box and ray-triangle intersections. Internal traversals are spawned in the same manner as shaders, by sending a message to the messaging circuitry 1904, which relays the corresponding spawn message to the ray-BVH intersection circuitry 1905 that computes the ray-BVH intersections.

In one embodiment, the stack for the internal traversal is stored locally in the fixed-function circuitry 1910 (e.g., in the L1 cache 1906). When a ray intersects a leaf node corresponding to a traversal shader or an intersection shader, the internal traversal is terminated and the internal stack is truncated. The truncated stack, together with pointers to the ray and the BVH, is written to memory at a location specified by the calling shader, and the corresponding traversal shader or intersection shader is then spawned. If the ray intersects any triangles during the internal traversal, the corresponding hit information is provided as input parameters to these shaders, as sketched in the hypothetical example following this passage. These spawned shaders are grouped by the classification circuitry 1908 to create SIMD batches for execution.

Truncating the internal traversal stack reduces the cost of spilling it to memory. One embodiment of the invention uses the approach described in Restart Trail for Stackless BVH Traversal, High Performance Graphics (2010), pp. 107-111, to truncate the stack to a small number of entries at the top of the stack: a 42-bit restart trail and a 6-bit depth value. The restart trail indicates the branches that have already been taken inside the BVH, and the depth value indicates the depth of traversal corresponding to the last stack entry. This is sufficient information to resume the internal traversal at a later time.

The internal traversal is complete when the internal stack is empty and no more BVH nodes remain to be tested. In that case, an external stack handler is spawned, which pops the top of the external stack and resumes the traversal if the external stack is not empty.

In one embodiment, the external traversal executes the main traversal state machine and is implemented in program code executed by the shader execution circuitry 1900. It spawns an internal traversal query under the following conditions: (1) when a new ray is spawned by a hit shader or a primary shader; (2) when a traversal shader selects a BVH for traversal; and (3) when an external stack handler resumes an internal traversal for a BVH.

As shown in FIG. 23, before an internal traversal is spawned, space is allocated on the call stack 2305 for the fixed-function circuitry 1910 to store the truncated internal stack 2310. Offsets 2303-2304 to the top of the call stack and to the internal stack are maintained in the traversal state 2300, which is also stored in memory 2511.
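The original listing of hit-information parameters is not reproduced in the text above, so the following is a purely hypothetical sketch of what might be passed to a spawned intersection or any-hit shader; every field and function name here is an assumption.

    #include <cstdint>

    // Hypothetical hit information handed to a spawned shader when a ray
    // intersects a triangle during the internal traversal.
    struct HitInfo {
        float    tHit;             // parametric distance along the ray
        float    barycentrics[2];  // u, v at the hit point on the triangle
        uint32_t primitiveId;      // intersected triangle
        uint32_t instanceId;       // bottom-level BVH instance
    };

    // Hypothetical any-hit entry point: accept or reject the intersection
    // based on the hit parameters (an alpha test would go here).
    bool anyHitShader(const HitInfo& hit) {
        return hit.tHit > 0.0f;
    }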
In addition to the truncated stack, the traversal state 2300 maintained in memory also includes the hit information for the ray in world space 2301 and object space 2302, as well as the closest intersected primitive.

The traversal shader, the intersection shader, and the external stack handler are all spawned by the ray-BVH intersection circuitry 1905. The traversal shader allocates on the call stack 2305 before initiating a new internal traversal for the second-level BVH. The external stack handler is the shader responsible for updating the hit information and resuming any pending internal traversal tasks. The external stack handler is also responsible for spawning the hit or miss shader when the traversal is complete. The traversal is complete when there are no pending internal traversal queries to spawn. When the traversal completes and an intersection was found, a hit shader is spawned; otherwise, a miss shader is spawned.

Although the hybrid traversal scheme described above uses a two-level BVH hierarchy, the embodiments of the invention described herein may use any number of BVH levels, with a corresponding change in the external traversal implementation.

In addition, although fixed-function circuitry 1910 for performing ray-BVH intersections is described in the embodiments above, other system components may also be implemented in fixed-function circuitry. For example, the external stack handler described above may be an internal (not user-visible) shader that could be implemented in the fixed-function BVH traversal/intersection circuitry 1905. This embodiment may be used to reduce the number of dispatched shader stages and round trips between the fixed-function intersection hardware 1905 and the processor.

The embodiments of the invention described here enable programmable shading and ray traversal control using user-defined functions that can execute with greater SIMD efficiency on existing and future GPU processors. Programmable control of ray traversal enables several important features, such as procedural instancing, stochastic level-of-detail selection, custom primitive intersection, and lazy BVH updates.

Device and method for ray classification based on quantized convergence direction

In hardware ray tracing on a SIMD architecture, one of the fundamental problems is keeping all SIMD lanes effectively utilized. For example, the lanes may operate simultaneously on separate rays, and those rays may be entirely independent of one another. To be dispatched together, rays need to share common attributes, such as shader program code and texture resources.

In addition, it is desirable for the rays to share a common direction and to intersect the same object in the same general area, because such rays are most likely to use texture data located close together, which improves cache utilization.

The embodiments of the invention provide a power-efficient hardware solution for quickly determining the approximate incident direction of a ray, from the perspective of the intersected object, based on the BVH bounding box of that object. This approximation can then be used to group dispatched rays by their direction of incidence.

In particular, one embodiment groups rays based on both the estimated incident direction and the shader record ID, a user-allocated structure that includes pointers to the entry function, vendor-specific metadata, and global parameters for the shader executed by the SIMD cores/EUs.
For example, this embodiment will group together rays that converge at the same intersection. To accomplish this, the bounding box of the target object is used to determine rough intersection coordinates, which are appended to the shader record ID to create a composite classification key. Rays that intersect the bounding box at roughly the same position have an improved chance of being co-located in the texture space of the object.

In addition, the resulting secondary rays, such as reflection rays, shadow rays, and so forth, have an improved chance of sharing the same general direction. The same technique can therefore be used to group them together during secondary ray dispatch. Rays that intersect different instances of the same object may also be grouped.

In the embodiment shown in FIG. 24, a primary ray generation shader 2405 executed on one or more execution units (EUs) generates a set of primary rays. Ray traversal circuitry/logic 2420 traverses the rays through the constructed bounding volume hierarchy (BVH) to identify the volumes the rays pass through. Intersection circuitry 2430 performs intersection tests to identify objects within the volumes that the rays intersect.

One embodiment of the intersection circuitry 2430 includes ray direction evaluation circuitry/logic 2435 to produce an estimated direction of incidence 2436 for each ray, using the techniques described below. In one embodiment, the ray direction evaluator 2435 generates a ray direction classification key 2438 based on the estimated ray direction 2436.

Ray classification circuitry/logic 2440 classifies the rays based on the estimated ray direction 2436 and/or the ray direction classification key 2438, combined with the shader record ID 2437. In one embodiment, the rays are sorted into groups within a plurality of classification FIFO queues 2400-2403. A ray dispatcher 2430 then dispatches groups of rays from the classification FIFOs 2400-2403 to the EUs 2415 for further processing, traversal, and intersection operations.

As mentioned, one embodiment of the ray direction evaluator 2435 determines the approximate ray direction 2436 based on an efficient ray/bounding-box test. It then constructs a direction classification key 2438 based on the approximate ray direction, encoding the ray direction using a small number of bits. The classification circuitry/logic 2440 uses the direction classification key in combination with the shader record ID classification key (for example, a concatenation of the shader record ID and additional configurable fields) to group ray work into the different classification FIFOs 2400-2403.

In one embodiment, the ray direction evaluator 2435 generates the quantized ray direction using a bounding box around the object, based on the bounding box of the BVH leaf node. Because this BVH data already exists, obtaining the BVH leaf node requires no additional effort. In addition, a ray/leaf-node box test is already performed as part of normal ray tracing operation. One embodiment of the ray direction evaluator 2435 augments the ray/box intersection test to extract the intersected face and low-resolution intersection coordinates on that face.

In one embodiment, intersected nodes need not be uniquely identified in the classification key. This makes it possible to group rays that intersect different instances of the same object.
Requiring a match on the shader record ID prevents the grouping of completely unrelated rays that intersect unrelated objects. This is particularly advantageous for scenes that include many repeated structures.

Figure 25 shows an example volume intersected by rays 2501-2505, which has six walls A1-A2, B1-B2, and C1-C2. In one embodiment, while checking the bounding box for intersection, the ray direction evaluator 2435 detects which of the six walls A1-A2, B1-B2, and C1-C2 is intersected. In one embodiment, the ray direction evaluator 2435 assigns the same code to opposite sides of the volume, because rays usually intersect an object from roughly one side of the scene. There are thus only three unique side codes, A, B, and C, which can be encoded with 2 bits. It is very unlikely that rays arrive from completely opposite sides of the object; if this happens, the ray direction evaluator 2435 can gracefully fall back to grouping using the shader record ID 2437 alone.

In one embodiment, two-dimensional ray intersection coordinates are generated to identify the intersection point on the intersected wall. The precision of these coordinates is then reduced so that they fit within a classification key of the specified size. By way of example, and not limitation, the intersection computation may be performed in a reduced-precision floating-point format or a fixed-point format (e.g., Int4, Int8, Bfloat16, etc.). In one specific implementation, 3 bits of precision are used to encode each of the 2D coordinates. The 2-bit side code and the 3-bit values for each of U and V can therefore be packed into an 8-bit field of the classification key.

This low coordinate resolution has a number of benefits. First, an 8-bit value is a reasonable compromise, given that the size of the classification key should not grow unduly. The rays are only coarsely grouped by their intersection points, and the dispatcher 2430 is not left idle waiting for rays that are packed extremely tightly against rays already waiting. The circuitry required to produce the ray/bounding-box intersection described herein can make use of existing traversal/intersection circuitry and operations, so embodiments of the invention do not require the large number of additional gates normally associated with precise floating-point computation.

The creation of the quantized direction classification key is described with reference to FIG. 25. Rays 2501 and 2502 should be dispatched together, because they will intersect the objects contained in the bounding box at similar positions. Rays 2503 and 2504 are not co-located, because they intersect different walls of the bounding box (i.e., as indicated by their different side IDs). Ray 2505 hits the same wall as rays 2501 and 2502, but at a different location; ray 2505 will therefore have the same side ID but different U/V coordinates.

FIG. 26 shows an example of a classification key 2600 comprising a shader record key 2601 and an intersection key 2602. The intersection key of this embodiment includes the 8-bit value described above, i.e., 6 bits for the U and V coordinates (bits 39:34) and the 2-bit side ID (bits 33:32). The most frequently changing bits of the 8-bit intersection key 2602 (i.e., the U[0] and V[0] values) are encoded in the most significant bit positions of the classification key 2600. In this way, the bits are ordered according to their entropy.
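The 40-bit key layout just described can be written down directly from the bit positions given: shader record ID in bits 31:0, side code in bits 33:32, and the interleaved 3-bit U/V coordinates in bits 39:34, with U[0] and V[0] at the top. The sketch also anticipates the adjustable-precision matching described next with respect to FIG. 27; the helper names, the single-bit precision decrement, and the simple FIFO model are assumptions for illustration.

    #include <cstdint>
    #include <vector>

    // Build the 40-bit classification key per FIG. 26.
    uint64_t makeSortKey(uint32_t shaderRecordId, uint32_t sideCode,
                         float u, float v) {
        uint32_t uq = uint32_t(u * 8.0f) & 0x7;  // quantize [0,1) to 3 bits
        uint32_t vq = uint32_t(v * 8.0f) & 0x7;

        // Interleave so the most frequently changing bits (U[0], V[0])
        // land in the most significant positions: entropy ordering.
        uint64_t uv = 0;
        for (int i = 0; i < 3; ++i) {
            uv |= uint64_t((uq >> i) & 1) << (5 - 2 * i);  // U[i]
            uv |= uint64_t((vq >> i) & 1) << (4 - 2 * i);  // V[i]
        }

        uint64_t key = shaderRecordId;           // bits 31:0
        key |= uint64_t(sideCode & 0x3) << 32;   // bits 33:32
        key |= uv << 34;                         // bits 39:34
        return key;
    }

    struct SortFifo {
        uint64_t key;
        int precisionBits;
        std::vector<uint64_t> rays;
    };

    // Match the least significant `bits` of two keys: 32 bits is the
    // minimum precision (shader record ID only), 40 bits the maximum.
    bool keysMatch(uint64_t a, uint64_t b, int bits) {
        uint64_t mask = (1ull << bits) - 1;
        return (a & mask) == (b & mask);
    }

    void classify(std::vector<SortFifo>& fifos, size_t maxFifos,
                  uint64_t rayId, uint64_t key) {
        for (int p = 40; p >= 32; --p) {           // start at full precision
            for (SortFifo& f : fifos)
                if (keysMatch(f.key, key, p)) {    // match found
                    f.rays.push_back(rayId);       // commit to this FIFO
                    return;
                }
            if (fifos.size() < maxFifos) {         // a FIFO is available:
                fifos.push_back({key, p, {rayId}});  // form a new one
                return;
            }
            // All FIFOs allocated and no match: reduce the precision and
            // retry (fallback handling below 32 bits is omitted here).
        }
    }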
The reason for this particular arrangement is that the classification precision can easily be adjusted simply by changing the number of classification key bits that must match. In one embodiment, the lowest precision (1) is achieved by matching only the least significant 32 bits, which encode the shader record ID (i.e., bits 31:0). The highest precision (5) is achieved by matching all 40 bits.

In one embodiment, the classification circuitry/logic 2440 uses an adjustable classification key precision and fills the classification FIFOs 2400-2403 according to the following set of rules, described with respect to the flowchart in FIG. 27.

At 2701, when a new ray is received for classification, the precision P is initially set to the highest value (for example, 40 bits in one embodiment). If a match is found, as determined at 2702, then at 2706 the ray is submitted to the corresponding classification FIFO. If no match is found at 2702, and it is determined at 2703 that all classification FIFOs are allocated, then at 2705 the precision is reduced by a specified decrement. Another match is attempted at 2702 at the lower precision, and if a match is found, the ray is added to the classification FIFO at 2706. Otherwise, the precision may continue to be reduced at 2705 until a match is found at 2702.

If a classification FIFO is available at 2703 after it is determined at 2702 that there is no match, a new classification FIFO is formed at 2704 with the current precision P (for example, the highest precision). As mentioned, the new classification FIFO may have the same shader record key 2601 as an existing classification FIFO (but a different intersection key). The current ray is added to the new classification FIFO, and the next ray is selected at 2707.

Thus, in this embodiment, when all of the classification FIFOs 2400-2403 are allocated and there is no exact 40-bit classification key match, the precision is reduced until a match is found, or until the precision reaches its minimum value (i.e., the 32-bit shader record ID 2601). When some classification FIFOs are available and there is no exact 40-bit classification key match, a new classification FIFO is formed for the unmatched classification key. A shader record ID may therefore be replicated across multiple FIFOs. In one embodiment, during a forced eviction of a partially occupied classification FIFO, rays may be combined across different classification FIFOs as long as their shader record IDs 2601 match.

This approach improves the storage efficiency of hardware ray tracing, thereby improving performance and reducing power consumption. Ray tracing is expected to displace traditional rasterization techniques over time, and highly competitive performance is essential in the high-end graphics market segment.

In embodiments, the terms "engine," "module," and "logic" may refer to, be part of, or include an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. In embodiments, an engine, module, or logic may be implemented in firmware, hardware, software, or any combination of firmware, hardware, and software.

Examples

The following are example implementations of different embodiments of the invention.

Example 1.
A device comprising: a ray generator to generate a plurality of rays; ray direction evaluation circuitry/logic to generate approximate ray direction data for each of the plurality of rays; and ray classification circuitry/logic to sort the rays into a plurality of ray queues based, at least in part, on the approximate ray direction data.

Example 2. The device of example 1, wherein the approximate ray direction data comprises quantized direction values associated with each ray of the plurality of rays.

Example 3. The device of example 2, wherein the quantized direction value for each ray comprises: first data indicating a side of a volume intersected by the ray; and second data comprising quantized intersection coordinates of an intersection between the ray and the side of the volume.

Example 4. The device of example 2, wherein the ray classification circuitry/logic is to group one or more of the plurality of rays into the plurality of ray queues based on a combination of the quantized direction values associated with the rays and shader record keys.

Example 5. The device of example 4, wherein the ray classification circuitry/logic is to first attempt to match a ray to a ray queue using both the quantized ray direction value and the shader record key, and only if no match is found, to attempt to match the ray to a ray queue using the shader record key alone.

Example 6. The device of example 5, wherein, when no match is found using the quantized ray direction value and the shader record key, the ray classification circuitry/logic is to attempt to allocate a new ray queue to contain the ray.

Example 7. The device of example 6, wherein the classification circuitry/logic is to attempt to match the ray to a ray queue using the shader record key alone only after determining that the new ray queue cannot be allocated.

Example 8. The device of example 1, further comprising: a ray dispatcher to dispatch the plurality of rays in groups, the groups being defined by the ray queues in which the rays are stored.

Example 9. The device of example 1, further comprising: ray traversal circuitry to traverse one or more of the plurality of rays through a bounding volume hierarchy; and ray intersection circuitry to determine intersections between the one or more of the plurality of rays and one or more objects in a scene.

Example 10. A method comprising: generating a plurality of rays; determining approximate ray direction data for each ray of the plurality of rays; and sorting the rays into a plurality of ray queues based, at least in part, on the approximate ray direction data.

Example 11. The method of example 10, wherein the approximate ray direction data comprises quantized direction values associated with each ray of the plurality of rays.

Example 12. The method of example 11, wherein the quantized direction value for each ray comprises: first data indicating a side of a volume intersected by the ray; and second data comprising quantized intersection coordinates of an intersection between the ray and the side of the volume.

Example 13.
The method of example 11, wherein the sorting further comprises: grouping the plurality of rays into the plurality of ray queues based on a combination of the quantized direction values associated with the rays and shader record keys.

Example 14. The method of example 13, further comprising: initially attempting to match a ray to a ray queue using both the quantized ray direction value and the shader record key; and only if no match is found, attempting to match the ray to a ray queue using the shader record key alone.

Example 15. The method of example 14, further comprising: when no match is found using the quantized ray direction value and the shader record key, attempting to allocate a new ray queue to contain the ray.

Example 16. The method of example 15, wherein the attempt to match the ray to a ray queue using the shader record key alone is performed only after determining that the new ray queue cannot be allocated.

Example 17. The method of example 10, further comprising: dispatching the plurality of rays in groups, the groups being defined by the ray queues in which the rays are stored.

Example 18. The method of example 10, further comprising: traversing one or more of the plurality of rays through a bounding volume hierarchy; and determining intersections between the one or more of the plurality of rays and one or more objects in a scene.

Example 19. A machine-readable medium having program code stored thereon which, when executed by a machine, causes the machine to perform the operations of: generating a plurality of rays; determining approximate ray direction data for each ray of the plurality of rays; and sorting the rays into a plurality of ray queues based, at least in part, on the approximate ray direction data.

Example 20. The machine-readable medium of example 19, wherein the approximate ray direction data comprises quantized direction values associated with each ray of the plurality of rays.

Example 21. The machine-readable medium of example 20, wherein the quantized direction value for each ray comprises: first data indicating a side of a volume intersected by the ray; and second data comprising quantized intersection coordinates of an intersection between the ray and the side of the volume.

Example 22. The machine-readable medium of example 20, wherein the sorting further comprises: grouping the plurality of rays into the plurality of ray queues based on a combination of the quantized direction values associated with the rays and shader record keys.

Example 23. The machine-readable medium of example 22, further comprising program code to cause the machine to perform the operations of: initially attempting to match a ray to a ray queue using both the quantized ray direction value and the shader record key; and only if no match is found, attempting to match the ray to a ray queue using the shader record key alone.

Example 24. The machine-readable medium of example 23, further comprising: when no match is found using the quantized ray direction value and the shader record key, attempting to allocate a new ray queue to contain the ray.

Example 25. The machine-readable medium of example 24, wherein the attempt to match the ray to a ray queue using the shader record key alone is performed only after determining that the new ray queue cannot be allocated.

Example 26.
The machine-readable medium of example 19, further comprising program code to cause the machine to perform the operation of: dispatching the plurality of rays in groups, the groups being defined by the ray queues in which the rays are stored.

Example 27. The machine-readable medium of example 19, further comprising program code to cause the machine to perform the operations of: traversing one or more of the plurality of rays through a bounding volume hierarchy; and determining intersections between the one or more of the plurality of rays and one or more objects in a scene.

Embodiments of the invention may include the various steps described above. The steps may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor to perform the steps. Alternatively, these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.

As described herein, instructions may refer to specific configurations of hardware, such as application specific integrated circuits (ASICs) configured to perform certain operations or having predetermined functionality, or to software instructions stored in a memory embodied in a non-transitory computer-readable medium. Thus, the techniques shown in the figures can be implemented using code and data stored and executed on one or more electronic devices (e.g., end stations, network elements, etc.). Such electronic devices store and communicate (internally and/or with other electronic devices over a network) code and data using computer machine-readable media, such as non-transitory computer machine-readable storage media (e.g., magnetic disks; optical disks; random access memory; read-only memory; flash memory devices; phase-change memory) and transitory computer machine-readable communication media (e.g., electrical, optical, acoustical, or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc.).

In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more buses and bridges (also termed bus controllers). The storage devices and the signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device. Of course, one or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware. Throughout this detailed description, for purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the present invention. It will be apparent to one skilled in the art, however, that the invention may be practiced without some of these specific details.
In certain instances, well-known structures and functions were not described in detail in order to avoid obscuring the subject matter of the present invention. Accordingly, the scope and spirit of the invention should be judged in terms of the appended claims. |
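The queue-matching policy recited in examples 13 through 16 (and again in examples 22 through 25) above reduces to a short sketch. The following C fragment is a minimal illustration under stated assumptions; the structure and function names, the queue-table size, and the linear scans are hypothetical and are not drawn from the patent itself.

/* Sketch of the matching policy of examples 14-16: match on both the
 * quantized direction and the shader record key; if that fails, try
 * to allocate a new queue; only if allocation fails, match on the
 * shader record key alone. All names and sizes are hypothetical. */
#include <stdint.h>
#include <stddef.h>

#define NUM_QUEUES 32 /* assumed queue-table size */

typedef struct {
    int      in_use;        /* queue currently allocated? */
    uint32_t quantized_dir; /* quantized ray-direction value */
    uint64_t shader_key;    /* shader record key */
} ray_queue_t;

static ray_queue_t queues[NUM_QUEUES];

ray_queue_t *match_ray(uint32_t quantized_dir, uint64_t shader_key)
{
    size_t i;

    /* 1. Initial attempt: match on both direction and shader key. */
    for (i = 0; i < NUM_QUEUES; i++)
        if (queues[i].in_use &&
            queues[i].quantized_dir == quantized_dir &&
            queues[i].shader_key == shader_key)
            return &queues[i];

    /* 2. No match: attempt to allocate a new queue for this ray. */
    for (i = 0; i < NUM_QUEUES; i++)
        if (!queues[i].in_use) {
            queues[i].in_use = 1;
            queues[i].quantized_dir = quantized_dir;
            queues[i].shader_key = shader_key;
            return &queues[i];
        }

    /* 3. Allocation failed: fall back to the shader record key alone,
     * accepting a queue holding rays with other directions. */
    for (i = 0; i < NUM_QUEUES; i++)
        if (queues[i].in_use && queues[i].shader_key == shader_key)
            return &queues[i];

    return NULL; /* no queue available; caller must stall or flush */
}

A dispatcher would call match_ray for each generated ray, append the ray to the returned queue, and dispatch rays queue by queue as recited in example 17.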
A processor employs a first instruction cache, a second instruction cache, and a fetch unit coupled to the first instruction cache and the second instruction cache. The fetch unit generates a branch target address responsive to a branch instruction which includes a displacement. Additionally, the fetch unit selects one of the first instruction cache and the second instruction cache from which to fetch instructions stored at the branch target address responsive to a size of the displacement. |
What is claimed is: 1. A processor comprising:a first instruction cache configured to store instructions; a second instruction cache configured to store instructions; and a fetch unit coupled to the first instruction cache and the second instruction cache, wherein the fetch unit is configured to generate a branch target address responsive to a branch instruction which includes a displacement, and wherein the fetch unit is configured to select one of the first instruction cache and the second instruction cache from which to fetch instructions stored at the branch target address responsive to a size of the displacement. 2. The processor as recited in claim 1 wherein the fetch unit is configured to select the first instruction cache responsive to a first size of the displacement, and wherein the fetch unit is configured to select the second instruction cache responsive to a second size of the displacement, and wherein the first size is smaller than the second size.3. The processor as recited in claim 2 wherein the first size is 8 bits and the second size is 32 bits.4. The processor as recited in claim 2 wherein the first instruction cache has a first latency which is less than a second latency of the second instruction cache.5. The processor as recited in claim 2 wherein the first instruction cache has a first storage capacity which is less than a second storage capacity of the second instruction cache.6. The processor as recited in claim 2 wherein the fetch unit is further configured to generate a prefetch address for the second instruction cache if the first instruction cache is selected to receive the fetch address, and wherein the prefetch address is a sequential address to the branch target address.7. A method comprising:generating a branch target address responsive to a branch instruction having a displacement; selecting one of a first instruction cache and a second instruction cache as a selected instruction cache from which to fetch instructions corresponding to the branch target address responsive to a size of the displacement; and fetching the instructions from the selected instruction cache. 8. The method as recited in claim 7 wherein the selecting comprises:selecting the first instruction cache as the selected instruction cache responsive to a first size of the displacement; and selecting the second instruction cache as the selected instruction cache responsive to a second size of the displacement, wherein the first size is smaller than the second size. 9. The method as recited in claim 8 wherein the first size is 8 bits and the second size is 32 bits.10. The method as recited in claim 8 wherein the first instruction cache has a first latency which is less than a second latency of the second instruction cache.11. The method as recited in claim 8 wherein the first instruction cache has a first storage capacity which is less than a second storage capacity of the second instruction cache.12. The method as recited in claim 8 further comprising generating a prefetch address for the second instruction cache if the first instruction cache is selected in the selecting, and wherein the prefetch address is a sequential address to the branch target address.13. 
A computer system comprising:a processor including: a first instruction cache configured to store instructions; a second instruction cache configured to store instructions; and a fetch unit coupled to the first instruction cache and the second instruction cache, wherein the fetch unit is configured to generate a branch target address responsive to a branch instruction which includes a displacement, and wherein the fetch unit is configured to select one of the first instruction cache and the second instruction cache from which to fetch instructions stored at the branch target address responsive to a size of the displacement; and a peripheral device for communicating between the computer system and another computer system. 14. The computer system as recited in claim 13 wherein the peripheral device is a modem.15. The computer system as recited in claim 13 further comprising an audio peripheral device.16. The computer system as recited in claim 15 wherein the audio peripheral device includes a sound card.17. The computer system as recited in claim 13 further comprising a second processor including:a third instruction cache configured to store instructions; a fourth instruction cache configured to store instructions; and a second fetch unit coupled to the third instruction cache and the fourth instruction cache, wherein the second fetch unit is configured to generate a second branch target address responsive to a second branch instruction which includes a second displacement, and wherein the second fetch unit is configured to select one of the third instruction cache and the fourth instruction cache from which to fetch instructions stored at the second branch target address responsive to a size of the second displacement. 18. A processor comprising:a first instruction cache configured to store instructions; a second instruction cache configured to store instructions; and a fetch unit coupled to the first instruction cache and the second instruction cache, wherein the fetch unit is configured to select one of the first instruction cache and the second instruction cache from which to fetch instructions stored at a branch target address corresponding to a branch instruction including a displacement, wherein the fetch unit is configured to select one of the first instruction cache and the second instruction cache responsive to a size of the displacement. 19. The processor as recited in claim 18 wherein the fetch unit is configured to select the first instruction cache responsive to a first size of the displacement, and wherein the fetch unit is configured to select the second instruction cache responsive to a second size of the displacement, and wherein the first size is smaller than the second size.20. The processor as recited in claim 19 wherein the first size is 8 bits and the second size is 32 bits.21. The processor as recited in claim 19 wherein the first instruction cache has a first latency which is less than a second latency of the second instruction cache.22. The processor as recited in claim 19 wherein the first instruction cache has a first storage capacity which is less than a second storage capacity of the second instruction cache.23. The processor as recited in claim 18 wherein the fetch unit is further configured to select a sequential address to the branch target address for fetching from the second instruction cache if the first instruction cache is selected to receive the branch target address.24.
The processor as recited in claim 18 wherein the fetch unit comprises a branch scanner configured to generate the branch target address.25. The processor as recited in claim 24 further comprising a predecode unit configured to predecode the branch instruction prior to storage in one of the first instruction cache or the second instruction cache, and wherein the predecode unit is configured to replace the displacement field within the branch instruction with an encoding of the branch target address prior to storing the branch instruction in the one of the first instruction cache or the second instruction cache.26. The processor as recited in claim 25 wherein the branch scanner is configured to generate the branch target address by selecting the branch target address from the branch instruction. |
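The selection rule recited in claims 1 through 3 (and again in claims 18 through 20) above reduces to a small decision function. The following C sketch is illustrative only: the 8 bit and 32 bit sizes come from claims 3, 9, and 20, while the type names, function names, and the target-address helper are hypothetical and not part of the claimed processor.

/* Sketch of the displacement-size selection rule of claims 1-3.
 * The 8-bit/32-bit sizes are from the claims; all names here are
 * hypothetical. */
#include <stdint.h>

typedef enum { FIRST_ICACHE, SECOND_ICACHE } icache_sel_t;

/* Pick the cache to fetch the branch target from. */
icache_sel_t select_icache(unsigned displacement_bits)
{
    /* A small (8-bit) displacement keeps the target near the branch,
     * so it is likely to hit the small, low-latency first cache. */
    if (displacement_bits == 8)
        return FIRST_ICACHE;

    /* A large (32-bit) displacement is less likely to hit the small
     * cache, so fetch directly from the larger second cache. */
    return SECOND_ICACHE;
}

/* Relative branch target: address of the next instruction plus the
 * sign-extended displacement. */
uint32_t branch_target(uint32_t branch_addr, unsigned insn_len,
                       int32_t displacement)
{
    return branch_addr + insn_len + (uint32_t)displacement;
}

Under claim 6, a fetch steered to the first cache would additionally trigger a prefetch of the sequential address from the second cache.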
This Application is a continuation of U.S. patent application Ser. No. 09/099,984 filed Jun. 19, 1998, now issued U.S. Pat. No. 6,199,154, which claims benefit of priority to the Provisional Application serial No. 60/065,878, entitled "High Frequency, Wide Issue Microprocessor" filed on Nov. 17, 1997 by Witt. The Provisional Application is incorporated herein by reference in its entirety.

BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention is related to the field of processors and, more particularly, to instruction fetch mechanisms within processors.

2. Description of the Related Art

Superscalar processors attempt to achieve high performance by dispatching and executing multiple instructions per clock cycle, and by operating at the shortest possible clock cycle time consistent with the design. To the extent that a given processor is successful at dispatching and/or executing multiple instructions per clock cycle, high performance may be realized. In order to increase the average number of instructions dispatched per clock cycle, processor designers have been designing superscalar processors which employ wider issue rates. A "wide issue" superscalar processor is capable of dispatching (or issuing) a larger maximum number of instructions per clock cycle than a "narrow issue" superscalar processor is capable of dispatching. During clock cycles in which a number of dispatchable instructions is greater than the narrow issue processor can handle, the wide issue processor may dispatch more instructions, thereby achieving a greater average number of instructions dispatched per clock cycle.

In order to support wide issue rates, it is desirable for the superscalar processor to be capable of fetching a large number of instructions per clock cycle (on the average). For brevity, a processor capable of fetching a large number of instructions per clock cycle (on the average) will be referred to herein as having a "high fetch bandwidth". If the superscalar processor is unable to achieve a high fetch bandwidth, then the processor may be unable to take advantage of the wide issue hardware due to a lack of instructions being available for issue.

Several factors may impact the ability of a particular processor to achieve a high fetch bandwidth. For example, many code sequences have a high frequency of branch instructions, which may redirect the fetching of subsequent instructions within that code sequence to a branch target address specified by the branch instruction. Accordingly, the processor may identify the branch target address upon fetching the branch instruction. Subsequently, the next instructions within the code sequence may be fetched using the branch target address. Processors attempt to minimize the impact of branch instructions on the fetch bandwidth by employing highly accurate branch prediction mechanisms and by generating the subsequent fetch address (either branch target or sequential) as rapidly as possible.

Another factor which may impact the ability of a particular processor to achieve a high fetch bandwidth is the hit rate and latency of an instruction cache employed by the processor. Processors typically include an instruction cache to reduce the latency of instruction fetches (as compared to fetching from main memory external to the processor). By providing low latency access to instructions, instruction caches may help achieve a high fetch bandwidth.
Furthermore, the low latency of access to the instructions may allow branch instructions to be rapidly detected and corresponding branch target addresses to be rapidly generated for subsequent instruction fetches.

Modern processors have been attempting to achieve shorter clock cycle times in order to augment the performance gains which may be achieved with high issue rates. Unfortunately, the short clock cycle times being employed by modern processors tend to limit the size of an instruction cache which may be employed. Generally, larger instruction caches have a higher latency than smaller instruction caches. At some size, the instruction cache access time (i.e. latency from presenting a fetch address to the instruction cache and receiving the corresponding instructions therefrom) may even exceed the desired clock cycle time. On the other hand, larger instruction caches typically achieve higher hit rates than smaller instruction caches.

Both high hit rates in the instruction cache and low latency access to the instruction cache are important to achieving high fetch bandwidth. If hit rates are low, then the average latency for instruction access may increase due to the more frequent main memory accesses required to fetch the desired instructions. Because larger instruction caches are capable of storing more instructions, they are more likely to be storing the desired instructions (once the instructions have been accessed for the first time) than smaller caches (which replace the instructions stored therein with other instructions within the code sequence more frequently). On the other hand, if the latency of each cache access is increased (due to the larger size of the instruction cache), the average latency for fetching instructions increases as well. As mentioned above, low average latency is important to achieving high fetch bandwidth by allowing more instructions to be fetched per clock cycle at a desired clock cycle time and by aiding in the more rapid detection and prediction of branch instructions. Accordingly, an instruction fetch structure which can achieve both high hit rates and low latency access is desired to achieve short clock cycle times as well as high fetch bandwidth.

SUMMARY OF THE INVENTION

The problems outlined above are in large part solved by a processor in accordance with the present invention. The processor employs a first instruction cache, a second instruction cache, and a fetch unit employing a fetch/prefetch method among the first and second instruction caches designed to provide high fetch bandwidth. The fetch unit selects a fetch address based upon previously fetched instructions (e.g. the existence or lack thereof of branch instructions within the previously fetched instructions) from a variety of fetch address sources. Depending upon the source of the fetch address, the fetch address is presented to one of the first and second instruction caches for fetching the corresponding instructions. If the first cache is selected to receive the fetch address, the fetch unit may select a prefetch address for presentation to the second cache. The prefetch address is selected from a variety of prefetch address sources and is presented to the second instruction cache. Instructions prefetched in response to the prefetch address are provided to the first instruction cache for storage.

In one embodiment, the first instruction cache may be a low latency, relatively small cache while the second instruction cache may be a higher latency, relatively large cache.
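This fetch/prefetch interplay can be sketched compactly. The fragment below is a rough illustration only, with stub functions standing in for the two cache arrays; none of the names appear in the patent, and the policy shown (prefetch the next sequential line whenever the small cache serves the fetch) is the simplest reading of the summary above.

/* Rough sketch of the fetch/prefetch interplay described above.
 * The stubs stand in for real cache lookups; all names are
 * hypothetical. */
#include <stdint.h>
#include <stdbool.h>

#define LINE 64u /* assumed cache line size */

static bool l0_lookup(uint32_t addr) { (void)addr; return false; } /* stub */
static void l1_fetch(uint32_t addr)  { (void)addr; }               /* stub */
static void l0_fill(uint32_t addr)   { (void)addr; }               /* stub */

/* One fetch cycle; likely_l0_hit models the source-based guess
 * (small displacement, return address, sequential fetch, ...). */
void fetch_cycle(uint32_t fetch_addr, bool likely_l0_hit)
{
    if (likely_l0_hit && l0_lookup(fetch_addr)) {
        /* The small cache serves the fetch; use the idle large cache
         * to prefetch the next sequential line into the small cache. */
        uint32_t prefetch_addr = (fetch_addr & ~(LINE - 1)) + LINE;
        l1_fetch(prefetch_addr);
        l0_fill(prefetch_addr);
    } else {
        /* Unlikely-to-hit source (large displacement, indirect
         * target): bypass the small cache, fetch from the large
         * cache, and fill the small cache with the returned line. */
        l1_fetch(fetch_addr);
        l0_fill(fetch_addr);
    }
}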
Fetch addresses from many of the fetch address sources may be likely to hit in the first instruction cache. For example, branch target addresses corresponding to branch instructions having small displacements may be likely to hit in the first instruction cache, which stores the most recently accessed cache lines. Also, return addresses corresponding to return instructions may be likely to hit in the first instruction cache since the corresponding call instruction may have been recently executed. Other fetch addresses may be less likely to hit in the first instruction cache. For example, branch target addresses corresponding to branch instructions having large displacements or branch target addresses formed using an indirect method may be less likely to hit in the first instruction cache. Accordingly, these fetch addresses may be immediately fetched from the second instruction cache, instead of first attempting to fetch from the first instruction cache. The latency of attempting an access in the first instruction cache may thereby be avoided.

By generating prefetch addresses for the second instruction cache when the fetch address is conveyed to the first instruction cache, the fetch unit attempts to increase the likelihood that subsequent fetch addresses hit in the first instruction cache. Hits in the first instruction cache may provide the lowest latency, and hence may operate to improve the fetch bandwidth. Furthermore, in one embodiment the first instruction cache may provide multiple cache lines in response to fetch addresses. Accordingly, a relatively larger number of instructions may be provided per fetch than if only one cache line is provided. Fetch bandwidth may thereby be further improved.

Broadly speaking, the present invention contemplates a processor comprising a first instruction cache configured to store instructions; a second instruction cache configured to store instructions; and a fetch unit. Coupled to the first instruction cache and the second instruction cache, the fetch unit is configured to generate a fetch address responsive to previously fetched instructions. The fetch unit is configured to select one of the first instruction cache and the second instruction cache from which to fetch instructions stored at the fetch address. Additionally, the fetch unit is configured to select the one of the first instruction cache and the second instruction cache dependent upon a source of the fetch address.

The present invention further contemplates a method for fetching instructions in a processor. A fetch address is selected from a plurality of fetch address sources responsive to previously fetched instructions. One of a first instruction cache within the processor and a second instruction cache within the processor is selected to receive the fetch address dependent upon which one of the plurality of fetch address sources is selected. Instructions are fetched from the selected one of the first instruction cache and the second instruction cache.

Moreover, the present invention contemplates a computer system, comprising a processor, a memory, and an input/output (I/O) device. The processor is configured to select a fetch address from one of a plurality of fetch address sources within the processor. The processor is further configured to fetch instructions from one of a first instruction cache and a second instruction cache included within the processor dependent upon the one of the plurality of address sources from which the fetch address is selected.
Coupled to the processor, the memory is configured to store instructions. The processor is configured to fetch the instructions from the memory if the instructions miss in the first instruction cache and the second instruction cache. Coupled to the processor, the I/O device is configured to communicate between the computer system and a second computer system to which the I/O device is coupled.

BRIEF DESCRIPTION OF THE DRAWINGS

Other objects and advantages of the invention will become apparent upon reading the following detailed description and upon reference to the accompanying drawings in which:

FIG. 1 is a block diagram of one embodiment of a processor.
FIG. 2 is a block diagram of one embodiment of a fetch/scan unit shown in FIG. 1.
FIG. 3 is a block diagram of one embodiment of a lookahead/collapse unit shown in FIG. 1.
FIG. 4 is a block diagram of one embodiment of a fetch control unit shown in FIG. 2.
FIG. 5 is a flowchart illustrating selection of a fetch address for an L0 cache shown in FIG. 1 according to one embodiment of the fetch control unit shown in FIGS. 2 and 4.
FIG. 6 is a flowchart illustrating selection of a fetch address for an L1 cache shown in FIG. 1 according to one embodiment of the fetch control unit shown in FIGS. 2 and 4.
FIG. 7 is a block diagram of one embodiment of an L0 I-cache shown in FIG. 1.
FIG. 8 is a block diagram of one embodiment of a computer system including the processor shown in FIG. 1.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.

DETAILED DESCRIPTION OF THE INVENTION

Turning now to FIG. 1, a block diagram of one embodiment of a superscalar processor 10 is shown. Other embodiments are possible and contemplated. In the embodiment shown in FIG. 1, processor 10 includes a predecode unit 12, an L1 I-cache 14, an L0 I-cache 16, a fetch/scan unit 18, an instruction queue 20, an alignment unit 22, a lookahead/collapse unit 24, a future file 26, a reorder buffer/register file 28, a first instruction window 30A, a second instruction window 30B, a plurality of functional units 32A, 32B, 32C, and 32D, a plurality of address generation units 34A, 34B, 34C, and 34D, a load/store unit 36, an L1 D-cache 38, an FPU/multimedia unit 40, and an external interface unit 42. Elements referred to herein by a particular reference number followed by various letters will be collectively referred to using the reference number alone. For example, functional units 32A, 32B, 32C, and 32D will be collectively referred to as functional units 32.

In the embodiment of FIG. 1, external interface unit 42 is coupled to predecode unit 12, L1 D-cache 38, an L2 interface 44, and a bus interface 46. Predecode unit 12 is further coupled to L1 I-cache 14. L1 I-cache 14 is coupled to L0 I-cache 16 and to fetch/scan unit 18. Fetch/scan unit 18 is also coupled to L0 I-cache 16 and to instruction queue 20. Instruction queue 20 is coupled to alignment unit 22, which is further coupled to lookahead/collapse unit 24.
Lookahead/collapse unit 24 is further coupled to future file 26, reorder buffer/register file 28, load/store unit 36, first instruction window 30A, second instruction window 30B, and FPU/multimedia unit 40. FPU/multimedia unit 40 is coupled to load/store unit 36 and to reorder buffer/register file 28. Load/store unit 36 is coupled to L1 D-cache 38. First instruction window 30A is coupled to functional units 32A-32B and to address generation units 34A-34B. Similarly, second instruction window 30B is coupled to functional units 32C-32D and address generation units 34C-34D. Each of L1 D-cache 38, functional units 32, and address generation units 34 are coupled to a plurality of result buses 48 which are further coupled to load/store unit 36, first instruction window 30A, second instruction window 30B, reorder buffer/register file 28, and future file 26.

Generally speaking, processor 10 employs a pair of caches (L0 I-cache 16 and L1 I-cache 14) and a fetch/prefetch method employed within fetch/scan unit 18 to increase the fetch bandwidth achievable within processor 10. L0 I-cache 16 is a relatively small (as compared to L1 I-cache 14) cache and may therefore provide low latency access to instructions. L1 I-cache 14 is a larger cache and may therefore exhibit a higher latency than L0 I-cache 16, but may also exhibit a higher hit rate than L0 I-cache 16. Fetch/scan unit 18 is configured to generate a fetch address based upon a variety of fetch address sources and/or the instructions previously fetched by processor 10 in response to previously generated fetch addresses. Depending upon the source of the fetch address, fetch/scan unit 18 fetches the corresponding instructions from either L0 I-cache 16 or L1 I-cache 14. Many of the most frequently selected sources of fetch addresses are presented to L0 I-cache 16 under the assumption that a cache hit in L0 I-cache 16 may occur. On the other hand, certain sources of fetch addresses may generally be less likely to hit in L0 I-cache 16. For these sources of fetch addresses, fetch/scan unit 18 routes the fetch address to L1 I-cache 14 without first accessing L0 I-cache 16. Additionally, fetch/scan unit 18 employs a prefetch algorithm to attempt to prefetch instructions likely to be fetched (based upon the current fetch address) from L1 I-cache 14 to L0 I-cache 16, if L0 I-cache 16 is selected to receive the fetch address generated by fetch/scan unit 18. By aggressively prefetching from L1 I-cache 14 to L0 I-cache 16, many of the more frequently used sources of fetch addresses may be more likely to hit in L0 I-cache 16.

Advantageously, low latency and high bandwidth instruction fetch may be achievable from the combination of L0 I-cache 16, L1 I-cache 14, and fetch/scan unit 18. Performance of processor 10 may be increased as a result of the numerous instructions which may be available for simultaneous dispatch and issue within processor 10. As used herein, a fetch address refers to an address generated responsive to previously fetched instructions, wherein the instructions stored at the fetch address are predicted to be the next instructions after the previously fetched instructions within the instruction sequence being executed.
On the other hand, a prefetch address refers to an address generated responsive to previously fetched instructions, wherein the instructions stored at the prefetch address are predicted to be within the instruction sequence being executed but which are not predicted to be the next instructions after the previously fetched instructions within the instruction sequence. Instead, the instructions stored at the prefetch address are predicted to be subsequent to the next instructions after the previously fetched instructions within the instruction sequence.

Predecode unit 12 receives instruction bytes fetched by external interface unit 42 and predecodes the instruction bytes prior to their storage within L1 I-cache 14. Predecode information generated by predecode unit 12 is stored in L1 I-cache 14 as well. Generally, predecode information is provided to aid in the identification of instruction features which may be useful during the fetch and issue of instructions but which may be difficult to generate rapidly during the fetch and issue operation. The term "predecode", as used herein, refers to decoding instructions to generate predecode information which is later stored along with the instruction bytes being decoded in an instruction cache (e.g. L1 I-cache 14 and/or L0 I-cache 16).

In one embodiment, processor 10 employs two bits of predecode information per instruction byte. One of the bits, referred to as the "start bit", indicates whether or not the instruction byte is the initial byte of an instruction. When a group of instruction bytes is fetched, the corresponding set of start bits identifies the boundaries between instructions within the group of instruction bytes. Accordingly, multiple instructions may be concurrently selected from the group of instruction bytes by scanning the corresponding start bits. While start bits are used to locate instruction boundaries by identifying the initial byte of each instruction, end bits could alternatively be used to locate instruction boundaries by identifying the final byte of each instruction.

The second predecode bit used in this embodiment, referred to as the "control transfer" bit, identifies which instructions are branch instructions. The control transfer bit corresponding to the initial byte of an instruction indicates whether or not the instruction is a branch instruction. The control transfer bit corresponding to subsequent bytes of the instruction is a don't care except for relative branch instructions having a small displacement field. According to one particular embodiment, the small displacement field is an 8 bit field. Generally, a "small displacement field" refers to a displacement field having fewer bits than the target address generated by branch instructions. For relative branch instructions having small displacement fields, the control transfer bit corresponding to the displacement byte is used as described below.

In addition to generating predecode information corresponding to the instruction bytes, predecode unit 12 is configured to recode the displacement field of relative branch instructions to actually store the target address in the present embodiment. In other words, predecode unit 12 adds the displacement of the relative branch instruction to the address corresponding to the relative branch instruction as defined by the instruction set employed by processor 10.
The resulting target address is encoded into the displacement field as a replacement for the displacement, and the updated displacement field is stored into L1 I-cache 14 instead of the original displacement field. Target address generation is simplified by precomputing relative target addresses, and hence the branch prediction mechanism may operate more efficiently.

In one embodiment of processor 10 which employs the x86 instruction set, predecode unit 12 is configured to recode eight bit and 32 bit displacement fields. The 32 bit displacement fields may store the entirety of the target address. On the other hand, the eight bit displacement field is encoded. More particularly, the eight bit displacement field and corresponding control transfer predecode bit is divided into a cache line offset portion and a relative cache line portion. The cache line offset portion is the cache line offset portion of the target address. The relative cache line portion defines the cache line identified by the target address (the "target cache line") in terms of a number of cache lines above or below the cache line storing the relative branch instruction. A first cache line is above a second cache line if each byte within the first cache line is stored at an address which is numerically greater than the addresses at which the bytes within the second cache line are stored. Conversely, a first cache line is below the second cache line if each byte within the first cache line is stored at an address which is numerically less than the addresses at which the bytes within the second cache line are stored. A signed eight bit displacement specifies an address which is +/-128 bytes of the address corresponding to the branch instruction. Accordingly, the number of above and below cache lines which can be reached by a relative branch instruction having an eight bit displacement is limited. The relative cache line portion encodes this limited set of above and below cache lines. Generally, branch instructions having a small displacement field have displacements within a predefined range, whereas larger displacement fields may store values outside the predefined range.

Tables 1 and 2 illustrate an exemplary encoding of the predecode information corresponding to a byte in accordance with one embodiment of processor 10.

TABLE 1
Predecode Encoding

Start Bit   Transfer Bit   Meaning
1           0              Start byte of an instruction which is not a branch.
1           1              Start byte of a branch instruction.
0           x              Not an instruction boundary. The control transfer bit corresponding to the displacement is used on 8-bit relative branches to encode the target address as shown in Table 2 below.

Predecode unit 12 conveys the received instruction bytes and corresponding predecode information to L1 I-cache 14 for storage. L1 I-cache 14 is a high speed cache memory for storing instruction bytes and predecode information.
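The eight bit recoding described above lends itself to a short sketch. The following C fragment is an illustrative reconstruction under stated assumptions (64 byte cache lines, hence a six bit line offset); it shows the arithmetic only, not the exact bit packing of the displacement byte and control transfer bit, which Table 2 of the original patent defines and which is not reproduced here.

/* Illustrative reconstruction of the 8-bit displacement recoding,
 * assuming 64-byte cache lines. The exact bit packing of Table 2 is
 * not reproduced; only the offset/relative-line arithmetic is shown. */
#include <stdint.h>

#define LINE_SHIFT 6 /* 64-byte cache lines */

typedef struct {
    uint8_t cache_line_offset; /* offset of the target within its line  */
    int8_t  relative_line;     /* target line relative to branch's line */
} recoded_disp_t;

recoded_disp_t recode_8bit_disp(uint32_t branch_addr, unsigned insn_len,
                                int8_t disp)
{
    /* Precompute the target the way the hardware otherwise would:
     * target = address following the branch + sign-extended disp. */
    uint32_t target = branch_addr + insn_len + (uint32_t)(int32_t)disp;

    recoded_disp_t r;
    r.cache_line_offset = (uint8_t)(target & ((1u << LINE_SHIFT) - 1u));
    r.relative_line = (int8_t)((int32_t)(target >> LINE_SHIFT) -
                               (int32_t)(branch_addr >> LINE_SHIFT));
    return r;
}

Because a signed eight bit displacement reaches only +/-128 bytes, relative_line can take only a handful of values, which is why the limited set of above and below cache lines can be encoded in the bits freed by removing the raw displacement.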
L1 I-cache 14 may employ any suitable configuration, including direct mapped and set associative configurations. In one particular embodiment, L1 I-cache 14 is a 128 KB, two way set associative cache employing 64 byte cache lines. L1 I-cache 14 includes additional storage for the predecode information corresponding to the instruction bytes stored therein. The additional storage is organized similar to the instruction bytes storage. As used herein, the term "cache line" refers to the unit of allocation of storage in a particular cache. Generally, the bytes within a cache line are manipulated (i.e. allocated and deallocated) by the cache as a unit.

In one embodiment, L1 I-cache 14 is linearly addressed and physically tagged. A cache is linearly addressed if at least one of the address bits used to index the cache is a linear address bit which is subsequently translated to a physical address bit. The tags of a linearly addressed/physically tagged cache include each translated bit in addition to the bits not used to index. As specified by the x86 architecture, instructions are defined to generate logical addresses which are translated through a segmentation translation mechanism to a linear address and further translated through a page translation mechanism to a physical address. It is becoming increasingly common to employ flat addressing mode, in which the logical address and corresponding linear address are equal. Processor 10 may be configured to assume flat addressing mode. Accordingly, fetch addresses, target addresses, etc. as generated by executing instructions are linear addresses. In order to determine if a hit is detected in L1 I-cache 14, the linear address presented thereto by fetch/scan unit 18 is translated using a translation lookaside buffer (TLB) to a corresponding physical address which is compared to the physical tags from the indexed cache lines to determine a hit/miss. When flat addressing mode is not used, processor 10 may still execute code but additional clock cycles may be used to generate linear addresses from logical addresses. A simplified sketch of this hit check is set forth below.

L0 I-cache 16 is also a high speed cache memory for storing instruction bytes. Because L1 I-cache 14 is large, the access time of L1 I-cache 14 may be large. In one particular embodiment, L1 I-cache 14 uses a two clock cycle access time. In order to allow for single cycle fetch access, L0 I-cache 16 is employed. L0 I-cache 16 is comparably smaller than L1 I-cache 14, and hence may support a more rapid access time. In one particular embodiment, L0 I-cache 16 is a 512 byte fully associative cache. Similar to L1 I-cache 14, L0 I-cache 16 is configured to store cache lines of instruction bytes and corresponding predecode information (e.g. 512 bytes stores eight 64 byte cache lines and corresponding predecode data is stored in additional storage). In one embodiment, L0 I-cache 16 may be linearly addressed and linearly tagged.

Fetch/scan unit 18 is configured to generate fetch addresses for L0 I-cache 16 and fetch or prefetch addresses for L1 I-cache 14. Instructions fetched from L0 I-cache 16 are scanned by fetch/scan unit 18 to identify instructions for dispatch as well as to locate branch instructions and to form branch predictions corresponding to the located branch instructions. Instruction scan information and corresponding instruction bytes are stored into instruction queue 20 by fetch/scan unit 18.
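The following fragment sketches the hit check referenced above for the 128 KB, two way, 64 byte line geometry given in the text. It is a simplified illustration: the TLB is a stub, the tag is taken as all physical address bits above the index, and every name is hypothetical.

/* Simplified sketch of a linearly addressed, physically tagged hit
 * check (128 KB, 2-way, 64-byte lines). tlb_translate() is a stub. */
#include <stdint.h>
#include <stdbool.h>

#define WAYS       2
#define LINE_BYTES 64u
#define SETS       (128u * 1024u / (WAYS * LINE_BYTES)) /* 1024 sets */

typedef struct {
    bool     valid;
    uint32_t phys_tag; /* physical bits above index and offset */
} tag_entry_t;

static tag_entry_t tags[SETS][WAYS];

/* Stub: a real TLB maps a linear page to a physical page. */
static uint32_t tlb_translate(uint32_t linear) { return linear; }

bool icache_hit(uint32_t linear_fetch_addr)
{
    /* Index with (partly untranslated) linear address bits... */
    uint32_t set = (linear_fetch_addr / LINE_BYTES) % SETS;

    /* ...but compare translated, physical tag bits for the hit. */
    uint32_t phys = tlb_translate(linear_fetch_addr);
    uint32_t tag  = phys / (LINE_BYTES * SETS);

    for (int way = 0; way < WAYS; way++)
        if (tags[set][way].valid && tags[set][way].phys_tag == tag)
            return true;

    return false;
}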
Additionally, the identified branch instructions and branch predictions are used to generate subsequent fetch addresses for L0 I-cache 16.

Fetch/scan unit 18 employs a prefetch algorithm to attempt to prefetch cache lines from L1 I-cache 14 to L0 I-cache 16 prior to the prefetched cache lines being fetched by fetch/scan unit 18 for dispatch into processor 10. Any suitable prefetch algorithm may be used. One embodiment of the prefetch algorithm is set forth in more detail below.

Fetch/scan unit 18 employs an aggressive branch prediction mechanism in an attempt to fetch larger "runs" of instructions during a clock cycle. As used herein, a "run" of instructions is a set of one or more instructions predicted to be executed in the sequence specified within the set. For example, fetch/scan unit 18 may fetch runs of 24 instruction bytes from L0 I-cache 16. Each run is divided into several sections which fetch/scan unit 18 scans in parallel to identify branch instructions and to generate instruction scan information for instruction queue 20. According to one embodiment, fetch/scan unit 18 attempts to predict up to two branch instructions per clock cycle in order to support large instruction runs.

Instruction queue 20 is configured to store instruction bytes provided by fetch/scan unit 18 for subsequent dispatch. Instruction queue 20 may operate as a first-in, first-out (FIFO) buffer. In one embodiment, instruction queue 20 is configured to store multiple entries, each entry comprising: a run of instructions, scan data identifying up to five instructions within each section of the run, and addresses corresponding to each section of the run. Additionally, instruction queue 20 may be configured to select up to six instructions within up to four consecutive run sections for presentation to alignment unit 22. Instruction queue 20 may, for example, employ 2-3 entries.

Alignment unit 22 is configured to route instructions identified by instruction queue 20 to a set of issue positions within lookahead/collapse unit 24. In other words, alignment unit 22 selects the bytes which form each instruction from the run sections provided by instruction queue 20 responsive to the scan information provided by instruction queue 20. The instructions are provided into the issue positions in program order (i.e. the instruction which is first in program order is provided to the first issue position, the second instruction in program order is provided to the second issue position, etc.).

Lookahead/collapse unit 24 decodes the instructions provided by alignment unit 22. FPU/multimedia instructions detected by lookahead/collapse unit 24 are routed to FPU/multimedia unit 40. Other instructions are routed to first instruction window 30A, second instruction window 30B, and/or load/store unit 36. In one embodiment, a particular instruction is routed to one of first instruction window 30A or second instruction window 30B based upon the issue position to which the instruction was aligned by alignment unit 22. According to one particular embodiment, instructions from alternate issue positions are routed to alternate instruction windows 30A and 30B. For example, instructions from issue positions zero, two, and four may be routed to the first instruction window 30A and instructions from issue positions one, three, and five may be routed to the second instruction window 30B.
Instructions which include a memory operation are also routed to load/store unit 36 for access to L1 D-cache 38.

Additionally, lookahead/collapse unit 24 attempts to generate lookahead addresses or execution results for certain types of instructions. Lookahead address/result generation may be particularly beneficial for embodiments employing the x86 instruction set. Because of the nature of the x86 instruction set, many of the instructions in a typical code sequence are versions of simple moves. One reason for this feature is that x86 instructions include two operands, both of which are source operands and one of which is a destination operand. Therefore, one of the source operands of each instruction is overwritten with an execution result. Furthermore, the x86 instruction set specifies very few registers for storing register operands. Accordingly, many instructions are moves of operands to and from a stack maintained within memory. Still further, many instruction dependencies are dependencies upon the ESP/EBP registers and yet many of the updates to these registers are increments and decrements of the previously stored values.

To accelerate the execution of these instructions, lookahead/collapse unit 24 generates lookahead copies of the ESP and EBP registers for each of the instructions decoded during a clock cycle. Additionally, lookahead/collapse unit 24 accesses future file 26 for register operands selected by each instruction. For each register operand, future file 26 may be storing either an execution result or a tag identifying a reorder buffer result queue entry corresponding to the most recent instruction having that register as a destination operand.

In one embodiment, lookahead/collapse unit 24 attempts to perform an address calculation for each instruction which: (i) includes a memory operand; and (ii) register operands used to form the address of the memory operand are available from future file 26 or lookahead copies of ESP/EBP. Additionally, lookahead/collapse unit 24 attempts to perform a result calculation for each instruction which: (i) does not include a memory operand; (ii) specifies an add/subtract operation (including increment and decrement); and (iii) register operands are available from future file 26 or lookahead copies of ESP/EBP. In this manner, many simple operations may be completed prior to instructions being sent to instruction windows 30A-30B.

Lookahead/collapse unit 24 detects dependencies between a group of instructions being dispatched and collapses any execution results generated therein into instructions dependent upon those instruction results. Additionally, lookahead/collapse unit 24 updates future file 26 with the lookahead execution results. Instruction operations which are completed by lookahead/collapse unit 24 (i.e. address generations and/or instruction results are generated and load/store unit 36 or future file 26 and the result queue are updated) are not dispatched to instruction windows 30A-30B.

Lookahead/collapse unit 24 allocates a result queue entry in reorder buffer/register file 28 for each instruction dispatched. In one particular embodiment, reorder buffer/register file 28 includes a result queue organized in a line-oriented fashion in which storage locations for execution results are allocated and deallocated in lines having enough storage for execution results corresponding to a maximum number of concurrently dispatchable instructions.
If less than the maximum number of instructions are dispatched, then certain storage locations within the line are empty. Subsequently dispatched instructions use the next available line, leaving the certain storage locations empty. In one embodiment, the result queue includes 40 lines, each of which may store up to six execution results corresponding to concurrently dispatched instructions. Execution results are retired from the result queue in order into the register file included within reorder buffer/register file 28. Additionally, the reorder buffer handles branch mispredictions, transmitting the corrected fetch address generated by the execution of the branch instruction to fetch/scan unit 18. Similarly, instructions which generate other exceptions are handled within the reorder buffer. Results corresponding to instructions subsequent to the exception-generating instruction are discarded by the reorder buffer. The register file comprises a storage location for each architected register. For example, the x86 instruction set defines 8 architected registers. The register file for such an embodiment includes eight storage locations. The register file may further include storage locations used as temporary registers by a microcode unit in embodiments employing microcode units.

Future file 26 maintains the speculative state of each architected register as instructions are dispatched by lookahead/collapse unit 24. As an instruction having a register destination operand is decoded by lookahead/collapse unit 24, the tag identifying the storage location within the result queue portion of reorder buffer/register file 28 assigned to the instruction is stored into the future file 26 storage location corresponding to that register. When the corresponding execution result is provided, the execution result is stored into the corresponding storage location (assuming that a subsequent instruction which updates the register has not been dispatched).

It is noted that, in one embodiment, a group of up to six instructions is selected from instruction queue 20 and moves through the pipeline within lookahead/collapse unit 24 as a unit. If one or more instructions within the group generates a stall condition, the entire group stalls. An exception to this rule is if lookahead/collapse unit 24 generates a split line condition due to the number of ESP updates within the group. Such a group of instructions is referred to as a "line" of instructions herein.

Instruction windows 30 receive instructions from lookahead/collapse unit 24. Instruction windows 30 store the instructions until the operands corresponding to the instructions are received, and then select the instructions for execution. Once the address operands of an instruction including a memory operation have been received, the instruction is transmitted to one of the address generation units 34. Address generation units 34 generate an address from the address operands and forward the address to load/store unit 36. On the other hand, once the execution operands of an instruction have been received, the instruction is transmitted to one of the functional units 32 for execution. In one embodiment, each integer window 30A-30B includes 25 storage locations for instructions. Each integer window 30A-30B is configured to select up to two address generations and two functional unit operations for execution each clock cycle in the address generation units 34 and functional units 32 connected thereto.
In one embodiment, instructions fetched from L0 I-cache 16 remain in the order fetched until stored into one of instruction windows 30, at which point the instructions may be executed out of order.

In embodiments of processor 10 employing the x86 instruction set, an instruction may include implicit memory operations for load/store unit 36 as well as explicit functional operations for functional units 32. Instructions having no memory operand do not include any memory operations, and are handled by functional units 32. Instructions having a source memory operand and a register destination operand include an implicit load memory operation handled by load/store unit 36 and an explicit functional operation handled by functional units 32. Instructions having a memory source/destination operand include implicit load and store memory operations handled by load/store unit 36 and an explicit functional operation handled by functional units 32. Finally, instructions which do not have an explicit functional operation are handled by load/store unit 36. Each memory operation results in an address generation handled either by lookahead/collapse unit 24 or address generation units 34. Memory operations and instructions (i.e. functional operations) may be referred to herein separately, but may be sourced from a single instruction.

Address generation units 34 are configured to perform address generation operations, thereby generating addresses for memory operations in load/store unit 36. The generated addresses are forwarded to load/store unit 36 via result buses 48. Functional units 32 are configured to perform integer arithmetic/logical operations and execute branch instructions. Execution results are forwarded to future file 26, reorder buffer/register file 28, and instruction windows 30A-30B via result buses 48. Address generation units 34 and functional units 32 convey the result queue tag assigned to the instruction being executed upon result buses 48 to identify the instruction being executed. In this manner, future file 26, reorder buffer/register file 28, instruction windows 30A-30B, and load/store unit 36 may identify execution results with the corresponding instruction. FPU/multimedia unit 40 is configured to execute floating point and multimedia instructions.

Load/store unit 36 is configured to interface with L1 D-cache 38 to perform memory operations. A memory operation is a transfer of data between processor 10 and an external memory. The memory operation may be an explicit instruction, or may be an implicit portion of an instruction which also includes operations to be executed by functional units 32. Load memory operations specify a transfer of data from external memory to processor 10, and store memory operations specify a transfer of data from processor 10 to external memory. If a hit is detected for a memory operation within L1 D-cache 38, the memory operation is completed therein without access to external memory. Load/store unit 36 may receive addresses for memory operations from lookahead/collapse unit 24 (via lookahead address calculation) or from address generation units 34. In one embodiment, load/store unit 36 is configured to perform up to three memory operations per clock cycle to L1 D-cache 38. For this embodiment, load/store unit 36 may be configured to buffer up to 30 load/store memory operations which have not yet accessed D-cache 38.
The embodiment may further be configured to include a 96 entry miss buffer for buffering load memory operations which miss D-cache 38 and a 32 entry store data buffer. Load/store unit 36 is configured to perform memory dependency checking between load and store memory operations.

L1 D-cache 38 is a high speed cache memory for storing data. Any suitable configuration may be used for L1 D-cache 38, including set associative and direct mapped configurations. In one particular embodiment, L1 D-cache 38 is a 128 KB two way set associative cache employing 64 byte lines. L1 D-cache 38 may be organized as, for example, 32 banks of cache memory per way. Additionally, L1 D-cache 38 may be a linearly addressed/physically tagged cache employing a TLB similar to L1 I-cache 14.

External interface unit 42 is configured to transfer cache lines of instruction bytes and data bytes into processor 10 in response to cache misses. Instruction cache lines are routed to predecode unit 12, and data cache lines are routed to L1 D-cache 38. Additionally, external interface unit 42 is configured to transfer cache lines discarded by L1 D-cache 38 to memory if the discarded cache lines have been modified by processor 10. As shown in FIG. 1, external interface unit 42 is configured to interface to an external L2 cache via L2 interface 44 as well as to interface to a computer system via bus interface 46. In one embodiment, bus interface unit 46 comprises an EV/6 bus interface.

Turning now to FIG. 2, a block diagram of one embodiment of fetch/scan unit 18 is shown. Other embodiments are possible and contemplated. As shown in FIG. 2, fetch/scan unit 18 includes a fetch control unit 50, a plurality of select next blocks 52A-52C, an instruction select multiplexor (mux) 54, an instruction scanner 56, a branch scanner 58, a branch history table 60, a branch select mux 62, a return stack 64, an indirect address cache 66, and a forward collapse unit 68. Fetch control unit 50 is coupled to L1 I-cache 14, L0 I-cache 16, indirect address cache 66, return stack 64, branch history table 60, branch scanner 58, and instruction select mux 54. Select next block 52A is coupled to L1 I-cache 14, while select next blocks 52B-52C are coupled to L0 I-cache 16. Each select next block 52 is coupled to instruction select mux 54, which is further coupled to branch scanner 58 and instruction scanner 56. Instruction scanner 56 is coupled to instruction queue 20. Branch scanner 58 is coupled to branch history table 60, return stack 64, and branch select mux 62. Branch select mux 62 is coupled to indirect address cache 66. Branch history table 60 and branch scanner 58 are coupled to forward collapse unit 68, which is coupled to instruction queue 20.

Fetch control unit 50 receives branch prediction information (including target addresses and taken/not taken predictions) from branch scanner 58, branch history table 60, return stack 64, and indirect address cache 66. Responsive to the branch prediction information, fetch control unit 50 generates fetch addresses for L0 I-cache 16 and a fetch or a prefetch address for L1 I-cache 14. In one embodiment, fetch control unit 50 generates two fetch addresses for L0 I-cache 16. The first fetch address is selected as the target address corresponding to the first branch instruction identified by branch scanner 58 (if any). The second fetch address is the sequential address to the fetch address selected in the previous clock cycle (i.e.
the fetch address corresponding to the run selected by instruction select mux 54).

L0 I-cache 16 provides the cache lines (and predecode information) corresponding to the two fetch addresses, as well as the cache lines (and predecode information) which are sequential to each of those cache lines, to select next blocks 52B-52C. More particularly, select next block 52B receives the sequential cache line corresponding to the sequential address and the next incremental cache line to the sequential cache line. Select next block 52C receives the target cache line corresponding to the target address as well as the cache line sequential to the target cache line. Additionally, select next blocks 52B-52C receive the offset portion of the corresponding fetch address. Select next blocks 52B-52C each select a run of instruction bytes (and corresponding predecode information) from the received cache lines, beginning with the run section including the offset portion of the corresponding fetch address. Since the offset portion of each fetch address can begin anywhere within the cache line, the selected run may include portions of the fetched cache line and the sequential cache line to the fetched cache line. Hence, both the fetched cache line and the sequential cache line are received by select next blocks 52B-52C.

Similarly, select next block 52A receives a prefetched cache line (and corresponding predecode information) from L1 I-cache 14 and selects an instruction run therefrom. Since one cache line is prefetched from L1 I-cache 14, the run selected therefrom may comprise less than a full run if the offset portion of the prefetch address is near the end of the cache line. It is noted that the fetch cache lines from L0 I-cache 16 may be provided in the same clock cycle as the corresponding addresses are generated by fetch control unit 50, but the prefetch cache line may be a clock cycle delayed due to the larger size and slower access time of L1 I-cache 14. In addition to providing the prefetched cache line to select next block 52A, L1 I-cache 14 provides the prefetched cache line to L0 I-cache 16. If the prefetched cache line is already stored within L0 I-cache 16, L0 I-cache 16 may discard the prefetched cache line. However, if the prefetched cache line is not already stored in L0 I-cache 16, the prefetched cache line is stored into L0 I-cache 16. In this manner, cache lines which may be accessed presently are brought into L0 I-cache 16 for rapid access therefrom. According to one exemplary embodiment, L0 I-cache 16 comprises a fully associative cache structure of eight entries. A fully associative structure may be employed due to the relatively small number of cache lines included in L0 I-cache 16. Other embodiments may employ other organizations (e.g. set associative or direct-mapped).

Fetch control unit 50 selects the instruction run provided by one of select next blocks 52 in response to branch prediction information by controlling instruction select mux 54. As will be explained in more detail below, fetch control unit 50 receives (in the present embodiment) target addresses from branch scanner 58, return stack 64, and indirect address cache 66 early in the clock cycle as well as at least a portion of the opcode byte of the first branch instruction identified by branch scanner 58. Fetch control unit 50 decodes the portion of the opcode byte to select the target address to be fetched from L0 I-cache 16 from the various target address sources and provides the selected target address to L0 I-cache 16.
In parallel, the sequential address to the fetch address selected in the previous clock cycle (either the target address or the sequential address from the previous clock cycle, depending upon the branch prediction from the previous clock cycle) is calculated and provided to L0 I-cache 16. Branch prediction information (i.e. taken or not taken) is provided by branch history table 60 late in the clock cycle. If the branch instruction corresponding to the target address fetched from L0 I-cache 16 is predicted taken, then fetch control unit 50 selects the instruction run provided by select next block 52C. On the other hand, if the branch instruction is predicted not taken, then the instruction run selected by select next block 52B is selected. The instruction run provided by select next block 52A is selected if a predicted fetch address missed L0 I-cache 16 in a previous clock cycle and was fetched from L1 I-cache 14. Additionally, the instruction run from L1 I-cache 14 is selected if the instruction run was fetched responsive to a branch instruction having a 32 bit displacement or indirect target address generation, or responsive to an L0 I-cache miss.

The selected instruction run is provided to instruction scanner 56 and branch scanner 58. Instruction scanner 56 scans the predecode information corresponding to the selected instruction run to identify instructions within the instruction run. More particularly in one embodiment, instruction scanner 56 scans the start bits corresponding to each run section in parallel and identifies up to five instructions within each run section. Pointers to the identified instructions (offsets within the run section) are generated. The pointers, instruction bytes, and addresses (one per run section) are conveyed by instruction scanner 56 to instruction queue 20. If a particular run section includes more than five instructions, the information corresponding to run sections subsequent to the particular run section is invalidated and the particular run section and subsequent run sections are rescanned during the next clock cycle.

Branch scanner 58 scans the instruction run in parallel with instruction scanner 56. Branch scanner 58 scans the start bits and control transfer bits of the instruction run to identify the first two branch instructions within the instruction run. As described above, a branch instruction is identified by the control transfer bit corresponding to the start byte of an instruction (as identified by the start bit) being set. Upon locating the first two branch instructions, branch scanner 58 assumes that the instructions are relative branch instructions and selects the corresponding encoded target addresses from the instruction bytes following the start byte of the branch instruction. For embodiments employing the x86 instruction set, a nine bit target address (the displacement byte as well as the corresponding control transfer bit) is selected, and a 32 bit target address is selected as well. Furthermore, at least a portion of the opcode byte identified by the start and control transfer bits is selected. The target addresses and opcode bytes are routed to fetch control unit 50 for use in selecting a target address for fetching from L0 I-cache 16.
The fetch addresses of each branch instruction (determined from the fetch address of the run section including each branch instruction and the position of the branch instruction within the section) are routed to branch history table 60 for selecting a taken/not-taken prediction corresponding to each branch instruction. Furthermore, the fetch addresses corresponding to each branch instruction are routed to branch select mux 62, the output of which is routed to indirect address cache 66. The target address of each branch instruction is routed to forward collapse unit 68. According to one embodiment, branch scanner 58 is configured to scan each run section in parallel for the first two branch instructions and then to combine the scan results to select the first two branch instructions within the run. Branch scanner 58 may further be configured to determine if a subroutine call instruction is scanned during a clock cycle. Branch scanner 58 may forward the fetch address of the next instruction following the detected subroutine call instruction to return stack 64 for storage therein. In one embodiment, if there are more than two branch instructions within a run, the run is scanned again during a subsequent clock cycle to identify the subsequent branch instructions. The fetch addresses of the identified branch instructions are provided to branch history table 60 to determine a taken/not taken prediction for each instruction. Branch history table 60 comprises a plurality of taken/not-taken predictors corresponding to the previously detected behavior of branch instructions. One of the predictors is selected by maintaining a history of the most recent predictions and exclusive ORing those most recent predictions with a portion of the fetch addresses corresponding to the branch instructions. The least recent (oldest) prediction is exclusive ORed with the most significant bit within the portion of the fetch address, and so forth through the most recent prediction being exclusive ORed with the least significant bit within the portion of the fetch address. Since two predictors are selected per clock cycle, the predictor corresponding to the second branch instruction is dependent upon the prediction of the first branch instruction (for exclusive ORing with the least significant bit of the corresponding fetch address). Branch history table 60 provides the second predictor by selecting both of the predictors which might be selected (i.e. the predictor that would be selected if the first branch instruction is predicted not-taken and the predictor that would be selected if the first branch instruction is predicted taken) and then selecting one of the two predictors based on the actual prediction selected for the first branch instruction. Branch history table 60 receives information regarding the execution of branch instructions from functional units 32A-32D. The history of recent predictions corresponding to the executed branch instruction, as well as the fetch address of the executed branch instruction, are provided for selecting a predictor to update, as well as the taken/not taken result of the executed branch instruction. Branch history table 60 selects the corresponding predictor and updates the predictor based on the taken/not taken result. In one embodiment, the branch history table stores a bimodal counter. The bimodal counter is a saturating counter which saturates at a minimum and maximum value (i.e. subsequent decrements of the minimum value and increments of the maximum value cause no change in the counter).
Each time a branch instruction is taken, the corresponding counter is incremented, and each time a branch instruction is not taken, the corresponding counter is decremented. The most significant bit of the counter indicates the taken/not taken prediction (e.g. taken if set, not taken if clear). In one embodiment, branch history table 60 stores 64K predictors and maintains a history of the 16 most recent predictions. Each clock cycle, the predictions selected during the clock cycle are shifted into the history and the oldest predictions are shifted out of the history. Return stack 64 is used to store the return addresses corresponding to detected subroutine call instructions. Return stack 64 receives the fetch address of a subroutine call instruction from branch scanner 58. The address of the byte following the call instruction (calculated from the fetch address provided to return stack 64) is placed at the top of return stack 64. Return stack 64 provides the address stored at the top of the return stack to fetch control unit 50 for selection as a target address if a return instruction is detected by branch scanner 58 and fetch control unit 50. In this manner, each return instruction receives as a target address the address corresponding to the most recently detected call instruction. Generally in the x86 instruction set, a call instruction is a control transfer instruction which specifies that the sequential address to the call instruction be placed on the stack defined by the x86 architecture. A return instruction is an instruction which selects the target address from the top of the stack. Generally, call and return instructions are used to enter and exit subroutines within a code sequence (respectively). By placing addresses corresponding to call instructions in return stack 64 and using the address at the top of return stack 64 as the target address of return instructions, the target address of the return instruction may be correctly predicted. In one embodiment, return stack 64 may comprise 16 entries. Indirect address cache 66 stores target addresses corresponding to previous executions of indirect branch instructions. The fetch address corresponding to an indirect branch instruction and the target address corresponding to execution of the indirect branch instruction are provided by functional units 32A-32D to indirect address cache 66. Indirect address cache 66 stores the target addresses indexed by the corresponding fetch addresses. Indirect address cache 66 receives the fetch address selected by branch select mux 62 (responsive to detection of an indirect branch instruction) and, if the fetch address is a hit in indirect address cache 66, provides the corresponding target address to fetch control unit 50. In one embodiment, indirect address cache 66 may comprise 32 entries. According to one contemplated embodiment, if indirect address cache 66 detects a miss for a fetch address, indirect address cache 66 may be configured to select a target address to provide from one of the entries. In this manner, a "guess" at a branch target is provided in case an indirect branch instruction is decoded. Fetching from the guess may be performed rather than awaiting the address via execution of the indirect branch instruction. Alternatively, another contemplated embodiment awaits the address provided via execution of the indirect branch instruction. It is noted that, if an encoded target address is selected, the actual target address may be presented to L0 I-cache 16.
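Before turning to the handling of encoded target addresses below, the predictor selection and bimodal update just described can be sketched behaviorally as follows. The 64K table and 16-bit history match the embodiment above; folding the whole history into the index with a single exclusive OR, and the class interface itself, are illustrative simplifications (the parallel selection of both candidate second predictors is omitted for brevity).

HIST_LEN = 16            # 16 most recent predictions, per the embodiment
TABLE_SIZE = 64 * 1024   # 64K bimodal predictors

class BranchHistoryTable:
    def __init__(self):
        self.counters = [1] * TABLE_SIZE   # two bit saturating counters
        self.history = 0                   # shift register of recent predictions

    def predict(self, fetch_addr):
        # Exclusive OR the history with a portion of the fetch address; the
        # hardware pairs the oldest history bit with the most significant
        # index bit and the newest with the least significant.
        idx = (fetch_addr ^ self.history) & (TABLE_SIZE - 1)
        taken = self.counters[idx] >= 2    # most significant counter bit
        self.history = ((self.history << 1) | int(taken)) & ((1 << HIST_LEN) - 1)
        return taken

    def update(self, fetch_addr, history_at_fetch, taken):
        # Saturating update: increment on taken, decrement on not taken,
        # with no change beyond the 0 and 3 endpoints.
        idx = (fetch_addr ^ history_at_fetch) & (TABLE_SIZE - 1)
        c = self.counters[idx]
        self.counters[idx] = min(c + 1, 3) if taken else max(c - 1, 0)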
Fetch control unit 50 may be configured to precalculate each of the possible above/below target addresses and select the correct address based on the encoded target address. Alternatively, fetch control unit 50 may record which L0 I-cache storage locations are storing the above and below cache lines, and select the storage locations directly without a tag compare. Forward collapse unit 68 receives the target addresses and positions within the instruction run of each selected branch instruction as well as the taken/not taken predictions. Forward collapse unit 68 determines which instructions within the run should be cancelled based upon the received predictions. If the first branch instruction is predicted taken and is backward (i.e. the displacement is negative), all instructions subsequent to the first branch instruction are cancelled. If the first branch instruction is predicted taken and is forward but the displacement is small (e.g. within the instruction run), the instructions which are between the first branch instruction and the target address are cancelled. The second branch instruction, if still within the run according to the first branch instruction's prediction, is treated similarly. Cancel indications for the instructions within the run are sent to instruction queue 20. Turning now to FIG. 3, a block diagram of one embodiment of lookahead/collapse unit 24 is shown. Other embodiments are possible and contemplated. As shown in FIG. 3, lookahead/collapse unit 24 includes a plurality of decode units 70A-70F, an ESP/EBP lookahead unit 72, a lookahead address/result calculation unit 74, a dispatch control unit 76, and an operand collapse unit 78. Decode units 70A-70F are coupled to receive instructions from alignment unit 22. Decode units 70A-70F are coupled to provide decoded instructions to FPU/multimedia unit 40, ESP/EBP lookahead unit 72, future file 26, and lookahead address/result calculation unit 74. ESP/EBP lookahead unit 72 is coupled to lookahead address/result calculation unit 74, as is future file 26. Lookahead address/result calculation unit 74 is further coupled to load/store unit 36 and dispatch control unit 76. Dispatch control unit 76 is further coupled to operand collapse unit 78, future file 26, load/store unit 36, and reorder buffer 28. Operand collapse unit 78 is coupled to instruction windows 30. Each decode unit 70A-70F forms an issue position to which alignment unit 22 aligns an instruction. While not indicated specifically throughout FIG. 3 for simplicity in the drawing, a particular instruction remains within its issue position as the instruction moves through lookahead/collapse unit 24 and is routed to one of instruction windows 30A-30B if not completed within lookahead/collapse unit 24. Decode units 70A-70F route FPU/multimedia instructions to FPU/multimedia unit 40. However, if the FPU/multimedia instructions include memory operands, memory operations are also dispatched to load/store unit 36 in response to the instruction through lookahead address/result calculation unit 74. Additionally, if the address for the memory operations cannot be generated by lookahead address/result calculation unit 74, an address generation operation is dispatched to one of address generation units 34A-34D via instruction windows 30A-30B. Still further, entries within reorder buffer 28 are allocated to the FPU/multimedia instructions for maintenance of program order.
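Returning briefly to forward collapse unit 68, its cancellation rules can be expressed compactly. In this sketch, positions are offsets within the run, and target_pos is None when the taken target lies outside the run (a backward branch or a large forward displacement); the tuple format is an illustrative assumption.

def forward_collapse(run_len, branches):
    # `branches`: up to two (position, predicted_taken, target_pos) tuples
    # in run order. Returns the set of run positions to cancel.
    cancel = set()
    for pos, taken, target_pos in branches:
        if pos in cancel or not taken:
            continue          # branch itself cancelled, or predicted not taken
        if target_pos is None:
            # Backward branch (or target beyond the run): cancel all
            # instructions subsequent to the branch.
            cancel.update(range(pos + 1, run_len))
        else:
            # Small forward branch within the run: cancel the instructions
            # between the branch and its target.
            cancel.update(range(pos + 1, target_pos))
    return cancel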
Generally, entries within reorder buffer 28 are allocated from decode units 70A-70F for each instruction received therein. Each of decode units 70A-70F may be further configured to determine: (i) whether or not the instruction uses the ESP or EBP registers as a source operand; and (ii) whether or not the instruction modifies the ESP/EBP registers (i.e. has the ESP or EBP registers as a destination operand). Indications of these determinations are provided by decode units 70A-70F to ESP/EBP lookahead unit 72. ESP/EBP lookahead unit 72 generates lookahead information for each instruction which uses the ESP or EBP registers as a source operand. The lookahead information may include a constant to be added to the current lookahead value of the corresponding register and an indication of a dependency upon an instruction in a prior issue position. In one embodiment, ESP/EBP lookahead unit 72 is configured to provide lookahead information as long as the set of concurrently decoded instructions provided by decode units 70A-70F does not include more than: (i) two push operations (which decrement the ESP register by a constant value); (ii) two pop operations (which increment the ESP register by a constant value); (iii) one move to the ESP register; (iv) one arithmetic/logical instruction having the ESP as a destination; or (v) three instructions which update the ESP. If one of these restrictions is exceeded, ESP/EBP lookahead unit 72 is configured to stall instructions beyond those which do not exceed the restrictions until the succeeding clock cycle (a "split line" case). For those instructions preceded, in the same clock cycle but in earlier issue positions, by instructions which increment or decrement the ESP register, ESP/EBP lookahead unit 72 generates a constant indicating the combined total modification to the ESP register of the preceding instructions. For those instructions preceded by a move or arithmetic operation upon the ESP or EBP registers, ESP/EBP lookahead unit 72 generates a value identifying the issue position containing the move or arithmetic instruction. The lookahead values may be used by lookahead address/result calculation unit 74 to generate either a lookahead address corresponding to the instruction within the issue position (thereby inhibiting an address generation operation which would otherwise be performed by one of address generation units 34A-34D) or a lookahead result corresponding to the instruction (thereby providing lookahead state to future file 26 earlier in the pipeline). Performance may be increased by removing address generation operations and/or providing lookahead state prior to functional units 32A-32D and address generation units 34A-34D. Many x86 code sequences include a large number of relatively simple operations, such as moves of values from a source to a destination without an arithmetic/logical operation, or simple arithmetic operations such as add/subtract by a small constant or increment/decrement of a register operand. Accordingly, functional units 32A-32D may typically execute the more complex arithmetic/logical operations and branch instructions, and address generation units 34A-34D may typically perform the more complex address generations. Instruction throughput may thereby be increased. Decode units 70A-70F may be still further configured to identify immediate data fields from the instructions decoded therein. The immediate data is routed to lookahead address/result calculation unit 74 by decode units 70A-70F.
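A simplified model of the ESP/EBP constant generation described above follows; the string instruction descriptors and the four byte push/pop size are illustrative assumptions, and the restriction checks (the "split line" stall) are omitted for brevity.

def esp_lookahead(line):
    # `line`: decoded instructions in issue-position order; each element is
    # one of 'push', 'pop', 'move_esp', 'arith_esp', or 'other'.
    # Returns, per issue position, either ('constant', c), the combined ESP
    # modification of the preceding instructions in the line, or
    # ('depends_on', i) when a prior move/arithmetic to ESP at issue
    # position i must produce the value first.
    info, offset, producer = [], 0, None
    for i, kind in enumerate(line):
        info.append(('constant', offset) if producer is None
                    else ('depends_on', producer))
        if kind == 'push':
            offset -= 4                    # push decrements ESP by a constant
        elif kind == 'pop':
            offset += 4                    # pop increments ESP by a constant
        elif kind in ('move_esp', 'arith_esp'):
            producer, offset = i, 0        # later positions depend on position i
    return info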
Additionally, decode units 70A-70F are configured to identify register operands used by the instructions and to route register operand requests to future file 26. Future file 26 returns corresponding speculative register values or result queue tags for each register operand. Decode units 70 further provide dependency checking among the instructions within the line to ensure that an instruction which uses a result of an instruction within a different issue position receives a tag corresponding to that issue position. Lookahead address/result calculation unit 74 receives the lookahead values from ESP/EBP lookahead unit 72, the immediate data from decode units 70A-70F, and the speculative register values or result queue tags from future file 26. Lookahead address/result calculation unit 74 attempts to generate either a lookahead address corresponding to a memory operand of the instruction, or a lookahead result if the instruction does not include a memory operand. For example, simple move operations can be completed (with respect to functional units 32 and address generation units 34) if an address generation can be performed by lookahead address/result calculation unit 74. In one embodiment, lookahead address/result calculation unit 74 is configured to compute addresses using the displacement-only, register-plus-displacement, ESP/EBP-plus-displacement, and scale-index-base addressing modes, except where the index or base register is ESP/EBP. Load/store unit 36 performs the memory operation and returns the memory operation results via result buses 48. Even if no address is generated for a memory operation by lookahead address/result calculation unit 74, lookahead address/result calculation unit 74 indicates the memory operation and corresponding result queue tag to load/store unit 36 to allocate storage within load/store unit 36 for the memory operation. Simple arithmetic operations which increment or decrement a source operand, add/subtract a small immediate value to a source operand, or add/subtract two register source operands may also be completed via lookahead address/result calculation unit 74 if the source operands are available from future file 26 (i.e. a speculative register value is received instead of a result queue tag). Instructions completed by lookahead address/result calculation unit 74 are indicated as completed and are allocated entries in reorder buffer 28 but are not dispatched to instruction windows 30. Lookahead address/result calculation unit 74 may comprise, for example, an adder for each issue position along with corresponding control logic for selecting among the lookahead values, immediate data, and speculative register values. It is noted that simple arithmetic operations may still be forwarded to instruction windows 30 for generation of condition flags, according to the present embodiment. However, generating the functional result in lookahead address/result calculation unit 74 provides the lookahead state early, allowing subsequent address generations/instructions to be performed early as well. Lookahead address/result calculation unit 74 may be configured to keep separate lookahead copies of the ESP/EBP registers in addition to the future file copies.
However, if updates to the ESP/EBP are detected which cannot be calculated by lookahead address/result calculation unit 74, subsequent instructions may be stalled until a new lookahead copy of the ESP/EBP can be provided from future file 26 (after execution of the instruction which updates the ESP/EBP in the undeterminable manner). Dispatch control unit 76 determines whether or not a group of instructions is dispatched, in order to provide pipeline flow control. Dispatch control unit 76 receives instruction counts from instruction windows 30 and load/store counts from load/store unit 36 and, assuming the maximum possible number of instructions are in flight in pipeline stages between dispatch control unit 76 and instruction windows 30 and load/store unit 36, determines whether or not space will be available for storing the instructions to be dispatched within instruction windows 30 and/or load/store unit 36 when the instructions arrive therein. If dispatch control unit 76 determines that insufficient space will be available in load/store unit 36 or either instruction window 30, dispatch is stalled until the instruction counts received by dispatch control unit 76 decrease to a sufficiently low value. Upon releasing instructions for dispatch through dispatch control unit 76, future file 26 and reorder buffer 28 are updated with speculatively generated lookahead results. In one embodiment, the number of non-ESP/EBP updates supported may be limited to, for example, two in order to limit the number of ports on future file 26. Furthermore, operand collapse unit 78 collapses speculatively generated lookahead results into subsequent, concurrently decoded instructions which depend upon those results, as indicated by the previously determined intraline dependencies. In this manner, the dependent instructions receive the speculatively generated lookahead results, since these results will not subsequently be forwarded from functional units 32A-32D. Those instructions not completed by lookahead address/result calculation unit 74 are then transmitted to one of instruction windows 30A-30B based upon the issue position to which those instructions were aligned by alignment unit 22. It is noted that certain embodiments of processor 10 may employ a microcode unit (not shown) for executing complex instructions by dispatching a plurality of simpler instructions referred to as a microcode routine. Decode units 70A-70F may be configured to detect which instructions are microcode instructions and to route the microcode instructions to the microcode unit. For example, the absence of a directly decoded instruction output from a decode unit 70 which received a valid instruction may be an indication to the microcode unit to begin execution for the corresponding valid instruction. It is further noted that various storage devices are shown in FIGS. 2 and 3 (e.g. devices 79A, 79B, and similar devices in FIG. 2 and devices 79C, 79D and similar devices in FIG. 3). The storage devices represent latches, registers, flip-flops and the like which may be used to separate pipeline stages. However, the particular pipeline stages shown in FIGS. 2 and 3 are but one embodiment of suitable pipeline stages for one embodiment of processor 10. Other pipeline stages may be employed in other embodiments. It is noted that, while the x86 instruction set and architecture has been used as an example above and may be used as an example below, any instruction set and architecture may be used.
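Returning to dispatch control unit 76, its worst-case space check reduces to a simple comparison, sketched below; the capacities and the bound on in-flight instructions are illustrative assumptions.

def may_dispatch(window_counts, ls_count, group_size,
                 window_capacity=24, ls_capacity=12, max_in_flight=12):
    # Conservatively assume the maximum possible number of instructions
    # occupy the pipeline stages between dispatch control and the
    # instruction windows / load/store unit; dispatch only if the group is
    # still guaranteed storage when it arrives, otherwise stall.
    if ls_count + max_in_flight + group_size > ls_capacity:
        return False
    return all(count + max_in_flight + group_size <= window_capacity
               for count in window_counts)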
Additionally, displacements may be any desirable size (in addition to the 8 bit and 32 bit sizes used as examples herein). Furthermore, while cache line fetching may be described herein, it is noted that cache lines may be sectors, and sectors may be fetched, if desirable based upon cache line size and the number of bytes desired to be fetched. Turning next to FIG. 4, a block diagram of one embodiment of fetch control unit 50 is shown. Other embodiments are possible and contemplated. As shown in FIG. 4, fetch control unit 50 includes a decoder/L0 fetch control unit 150, an L0 fetch address mux 152, an incrementor 154, an L1 fetch control unit 156, an incrementor 160, and an L1 fetch address mux 162. Decoder/L0 fetch control unit 150 is coupled to receive the first branch opcode corresponding to the first branch instruction within the run from branch scanner 58, and is coupled to reorder buffer 28 to receive a misprediction redirection indication. Additionally, decoder/L0 fetch control unit 150 is coupled to L0 fetch address mux 152, L1 fetch control unit 156, and instruction select mux 54. L0 fetch address mux 152 is coupled to receive the first target address (assuming a small displacement) corresponding to the first branch instruction within the run as selected by branch scanner 58. The second target address corresponding to the second branch instruction is also provided to L0 fetch address mux 152 with a one clock cycle delay (again, assuming a small displacement). Additionally, L0 fetch address mux 152 is configured to receive the return address provided by return stack 64 (i.e. the address at the top of return stack 64), the corrected fetch address provided by reorder buffer 28 upon misprediction redirection, and the sequential address to the address fetched in the previous clock cycle (generated by incrementor 154). L0 fetch address mux 152 is coupled to provide the target fetch address to L0 I-cache 16 and to incrementor 160. Incrementor 160 is also coupled to receive the corrected fetch address from reorder buffer 28 upon detection of a misprediction redirection. L1 fetch control unit 156 is further coupled to L0 I-cache 16 to receive a miss indication, to reorder buffer 28 to receive an indication of a misprediction, and to decoder/L0 fetch control unit 150 to receive an indication of decoding a branch instruction using an indirect address or 32 bit displacement, or a return instruction. L1 fetch address mux 162 is coupled to indirect address cache 66 to receive a predicted indirect target address, to branch scanner 58 to receive 32-bit target addresses corresponding to relative branch instructions, to incrementor 160 to receive the next sequential addresses to the corrected fetch address and to the predicted branch fetch address for L0 I-cache 16, to return stack 64 to receive the return address which is second to the top of return stack 64, to fetch address mux 152 to receive the target fetch address, to register 158 to receive the sequential fetch address, and to L1 I-cache 14 to provide an L1 fetch address. Fetch control unit 50 provides a sequential fetch address to L0 I-cache 16 via a register 158. Decoder/L0 fetch control unit 150 is configured to decode the opcode corresponding to the first identified branch instruction from branch scanner 58 in order to select the target fetch address for L0 I-cache 16.
In order to provide the target fetch address as rapidly as possible, decoder/L0 fetch control unit 150 decodes only a portion of the opcode byte received from branch scanner 58, according to one particular embodiment of decoder/L0 fetch control unit 150. More particularly, for the x86 instruction set, decoder/L0 fetch control unit 150 may decode the four most significant bits of the opcode byte identified by the set start and control transfer bits to select one of the first target address from branch scanner 58, the return address from return stack 64, and the sequential address. Because the branch prediction corresponding to the first branch instruction within the run is not available until late in the clock cycle in which the fetch address is selected, in this particular embodiment, decoder/L0 fetch control unit 150 does not attempt to select the second branch target address as the target fetch address. If the first branch instruction is predicted not taken via branch history table 60, the second target address corresponding to the second identified branch instruction (if any) may be fetched in a subsequent clock cycle if the second branch instruction is predicted taken by branch history table 60. Also, if the first branch is predicted taken but the first target address is within the same run as the first branch, the sequential address is selected. If the first branch does not branch past the second branch within the run, the second target address is selected during the subsequent clock cycle. Similarly, if the first branch instruction uses an indirect target address or a 32-bit relative target address, L0 fetch address mux 152 may select an address and the fetched instructions may be discarded in favor of instructions at the actual branch target. In these cases, the fetch address selected by decoder/L0 fetch control unit 150 is a don't care, and the actual fetch address is provided to L1 I-cache 14 by L1 fetch control unit 156. Decoder/L0 fetch control unit 150 signals L1 fetch control unit 156 upon detecting a 32-bit relative target address, a branch instruction using an indirect address, or a return instruction. L1 fetch control unit 156 generates an L1 fetch address for L1 I-cache 14 by controlling L1 fetch address mux 162. The cache line corresponding to the L1 fetch address is conveyed to L0 I-cache 16 for storage, and may be selected for dispatch if the address is a fetch address (as described above). L1 fetch control unit 156 selects the L1 fetch address from one of several sources. If a branch misprediction is signalled by reorder buffer 28, the sequential address to the corrected fetch address (received from incrementor 160) is selected, since the other address sources are based upon instructions within the mispredicted path. If no branch misprediction is signalled and an L0 fetch address miss is detected, L1 fetch control unit 156 selects the missing fetch address for fetching (via register 164 or register 166, depending upon which address misses). It is noted that either the sequential fetch address or the target fetch address (or both) may miss L0 I-cache 16. Each miss is indicated via miss signals from L0 I-cache 16. If the target fetch address is a miss, the target address may be selected for fetching from L1 I-cache 14 (received by L1 fetch address mux 162 via register 164). If the target address is a hit and the sequential fetch address is a miss, the sequential fetch address may be selected for fetching from L1 I-cache 14.
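Returning to the partial opcode decode above, the following sketch uses the x86 nibble mapping detailed later in this description (7, E, and 0 select the scanned target address, C selects the return stack address, and F defaults to the sequential address, with L1 I-cache 14 supplying the actual indirect target); the function name and arguments are illustrative assumptions.

def select_l0_target(opcode_byte, first_target, return_addr, sequential_addr):
    # Decode only the four most significant bits of the first branch opcode.
    nibble = (opcode_byte >> 4) & 0xF
    if nibble in (0x7, 0xE, 0x0):
        return first_target      # relative branch: use the scanned target
    if nibble == 0xC:
        return return_addr       # return instruction: top of return stack 64
    return sequential_addr       # 0xF (indirect): default to sequential; the
                                 # real target is fetched from L1 I-cache 14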
Alternative strategies for selecting which miss address to fetch may be employed as well. If no miss is detected, L1 fetch control unit 156 selects either the indirect address provided by indirect address cache 66 or a 32-bit branch target address from branch scanner 58, responsive to signals from decoder/L0 fetch control unit 150 indicating a decode of such instructions. If L1 fetch control unit 156 receives a signal from decoder/L0 fetch control unit 150 indicating that a return instruction has been detected, L1 fetch control unit 156 selects the return address which is next to the top of return stack 64 (i.e. the return address which will be the top of return stack 64 upon deletion of the return address being fetched from L0 I-cache 16). If no signals are received from decoder/L0 fetch control unit 150, L1 fetch control unit 156 prefetches the cache line sequential to the target address selected by fetch address mux 152 (as received from incrementor 160). Indirect addresses and 32-bit target addresses are not fetched from L0 I-cache 16 in the present embodiment because these types of target addresses are typically selected by a programmer when the target instruction sequence is not spatially located within memory near the branch instruction. Because L0 I-cache 16 stores a small number of cache lines most recently accessed in response to the code sequence being executed, it may be statistically less likely that the target instruction sequence is stored in L0 I-cache 16. Accordingly, these fetch addresses are conveyed directly to L1 I-cache 14 for fetching. A fetch address may be conveyed to L0 I-cache 16, but the instructions are discarded. By fetching from L1 I-cache 14 without first checking L0 I-cache 16 for a hit, a clock cycle of latency may be saved. It is noted that, in cases in which a fetch address is not selected for L1 I-cache 14, a prefetch address is selected in response to the selected fetch address for L0 I-cache 16. For example, if a return address is selected for fetching from L0 I-cache 16, then the return address which is next to the top of return stack 64 is selected for prefetching from L1 I-cache 14. If a misprediction redirection is selected, the next sequential fetch address to the corrected fetch address is selected. If a branch target address is selected, the next sequential address to the branch target address is selected. Finally, if a sequential address is selected, the next incremental address to that sequential address is selected. It is further noted that, while cache lines and runs are discussed as being fetched in various portions of the present disclosure, generally, each cache line includes instruction bytes which form one or more instructions. Hence, each fetch may be viewed as fetching a cache line, a cache line of instruction bytes, a run of instructions, or instructions. Other embodiments may fetch and prefetch instructions in units other than cache lines or runs, as desired. A sequential address to a particular address may be the address of instructions subsequent to the unit of fetch including the particular address. Incrementor 154 is configured to increment the fetch address corresponding to the run selected for dispatch, based on the branch prediction information received from branch history table 60. Decoder/L0 fetch control unit 150 includes logic for selecting the run, via instruction select mux 54, based on L0 I-cache hit information as well as the branch prediction information.
This logic also causes incrementor 154 to increment the fetch address corresponding to the selected run (either the sequential fetch address provided from register 158 or the target fetch address provided from L0 fetch address mux 152). Accordingly, the sequential fetch address for the subsequent clock cycle is generated and stored in register 158. Incrementor 160 increments both the corrected fetch address and the target fetch address. It is noted that incrementors 154 and 160 increment to the next run boundary (i.e. so that a fetch address of the next run is generated). It is noted that, while a particular set of sources for L0 I-cache fetch addresses, L1 I-cache fetch addresses, and L1 I-cache prefetch addresses are described above, other sets of address sources are contemplated. The set of address sources described above may be added to, deleted from, or both, to form other contemplated sets of sources. Furthermore, other contemplated embodiments may generate only one fetch address per clock cycle for L0 I-cache 16 (instead of a target fetch address and a sequential fetch address as described above). Still other contemplated embodiments may generate other fetch addresses for L0 I-cache 16 as well. In one particular embodiment of decoder/L0 fetch control unit 150 employed within one embodiment of processor 10 employing the x86 instruction set, opcodes having the four most significant bits equal to (in hexadecimal) 7, E, or 0 result in the first target address being selected by L0 fetch address mux 152. Opcodes having the four most significant bits equal to C result in the return address from return stack 64 being selected, and opcodes having the four most significant bits equal to F cause the sequential address to be selected. In the x86 instruction set, branch instruction opcodes having the four most significant bits equal to 7 are conditional jump instructions having eight bit relative displacements. Accordingly, an opcode corresponding to a set start bit and set control transfer bit which has the four most significant bits equal to 7 correctly selects the target address provided by branch scanner 58. Branch instruction opcodes having the four most significant bits equal to E may be conditional jump instructions with eight bit relative displacements, or call or unconditional jump instructions having either eight bit relative displacements or 32 bit relative displacements. For these cases, decoder/L0 fetch control unit 150 selects the first target address provided by branch scanner 58 and, if further decode indicates that a 32-bit displacement field is included in the branch instruction, the instructions fetched in response to the selection are discarded and the correct fetch address is fetched from L1 I-cache 14 via L1 fetch control unit 156 selecting, via L1 fetch address mux 162, the 32-bit fetch address from branch scanner 58. Finally, branch instruction opcodes having the four most significant bits equal to 0 specify 32-bit relative displacements. Since decoder/L0 fetch control unit 150 cannot select the 32 bit target address for fetching from L0 I-cache 16 in the present embodiment, decoder/L0 fetch control unit 150 selects the first target address provided from branch scanner 58 and signals L1 fetch control unit 156
to select the 32-bit branch target address from branch scanner 58 for fetching from L1 I-cache 14. Branch instruction opcodes having the four most significant bits equal to C are return instructions, and hence the return address provided by return stack 64 provides the predicted fetch address. On the other hand, branch instruction opcodes having the four most significant bits equal to F are call or unconditional jump instructions which use indirect target address generation. The indirect address is not provided to L0 fetch address mux 152, and hence a default selection of the sequential address is performed. The instructions fetched in response to the sequential address are discarded and instructions fetched from L1 I-cache 14 are provided during a subsequent clock cycle. It is noted that, although the above description describes an embodiment of decoder/L0 fetch control unit 150 which partially decodes an opcode to select a target, other embodiments may employ full decodes or other partial decodes, as desired. Turning next to FIG. 5, a flowchart is shown illustrating operation of one embodiment of decoder/L0 fetch control unit 150. Other embodiments are possible and contemplated. While shown as a serial sequence of steps in FIG. 5 for ease of understanding, it is understood that the steps illustrated may be performed in any suitable order, and may be performed in parallel by combinatorial logic employed within decoder/L0 fetch control unit 150. Decoder/L0 fetch control unit 150 determines if a branch misprediction is being signalled by reorder buffer 28 (decision block 192). If a misprediction is signalled, the corrected fetch address received from reorder buffer 28 is selected (step 193). On the other hand, if a misprediction is not signalled, decoder/L0 fetch control unit 150 determines if the second target address corresponding to the second branch instruction identified during the previous clock cycle by branch scanner 58 is to be fetched (decision block 194). The second target address may be fetched if the first branch instruction was predicted not-taken and the second branch instruction was predicted taken. Additionally, the second target address may be fetched if the first branch instruction was predicted taken but had a small forward displacement which does not cancel the second branch instruction, and the second branch instruction was predicted taken. If the second target address is to be fetched, decoder/L0 fetch control unit 150 selects the second target address, which was received in the previous clock cycle and is delayed by one clock cycle in reaching L0 fetch address mux 152 (step 195). Finally, if the second target address is not to be fetched, decoder/L0 fetch control unit 150 selects one of the first target address, the return stack address, or the sequential address as described above (step 196). Turning now to FIG. 6, a flowchart is shown illustrating operation of one embodiment of L1 fetch control unit 156. Other embodiments are possible and contemplated. While shown as a serial sequence of steps in FIG. 6 for ease of understanding, it is understood that the steps illustrated may be performed in any suitable order, and may be performed in parallel by combinatorial logic employed within L1 fetch control unit 156. If a branch misprediction redirection is received by L1 fetch control unit 156 (decision block 170), the sequential cache line to the cache line corresponding to the corrected fetch address is prefetched from L1 I-cache 14 (step 172).
On the other hand, if a branch misprediction redirection is not received, L1 fetch control unit 156 determines if an L0 I-cache miss has occurred (decision block 174). If an L0 I-cache miss is detected, the address missing L0 I-cache 16 is fetched from L1 I-cache 14 (step 176). In the absence of an L0 I-cache miss, L1 fetch control unit 156 determines if either an indirect target address or a 32-bit relative target address has been detected by decoder/L0 fetch control unit 150 (decision block 178). If such a signal is received, the indirect address received from indirect address cache 66 or the 32-bit relative target address received from branch scanner 58 is fetched from L1 I-cache 14, depending upon which signal is received (step 180). If the return stack address is selected for fetching from L0 I-cache 16 (decision block 184), the next return stack address is prefetched from L1 I-cache 14 (step 186). Finally, if the return stack is not signalled, L1 fetch control unit 156 prefetches the next sequential cache line to the current target fetch address (step 182). Turning now to FIG. 7, a block diagram of one embodiment of L0 I-cache 16 is shown. Other embodiments are possible and contemplated. In the embodiment shown, L0 I-cache 16 includes a cache storage 100, a tag compare and select unit 102, a replacement line select unit 104, and a set of line select muxes 106A-106D. Cache storage 100 is coupled to receive a prefetched cache line from L1 I-cache 14, and is further coupled to tag compare and select unit 102, replacement line select unit 104, and line select muxes 106. Replacement line select unit 104 is further coupled to receive an indication that a prefetched cache line is being provided by L1 I-cache 14. Tag compare and select unit 102 is coupled to receive the target fetch address and sequential fetch address provided by fetch control unit 50, and to provide a miss indication to fetch control unit 50 corresponding to each of the target fetch address and the sequential fetch address. Furthermore, tag compare and select unit 102 provides selection controls to line select muxes 106. Muxes 106 are coupled to select next blocks 52B and 52C. More particularly, line select mux 106A provides the sequential cache line (corresponding to the sequential address provided by fetch control unit 50) to select next block 52B. Line select mux 106B provides the next incremental cache line to the sequential cache line. Line select mux 106C provides the target cache line, and line select mux 106D provides the sequential line to the target cache line, to select next block 52C. Cache storage 100 comprises a set of cache line storage locations. Each cache line storage location is configured to store an address tag identifying the cache line, the instruction bytes within the cache line, and the corresponding predecode data. Each of the cache lines is read each clock cycle and provided to each of line select muxes 106. In this manner, any cache line stored in cache storage 100 may be selected to be provided to select next blocks 52B-52C. Accordingly, if both the addressed cache line (sequential or branch target) and the cache line sequential to the addressed cache line are hits in L0 I-cache 16, a full run of instructions is selectable for dispatch even if the cache line offset portion of the address is near the end of the cache line.
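A behavioral sketch of this read-everything-and-select structure follows; the eight-entry dictionary is an illustrative stand-in for cache storage 100, the line select muxes 106, and (here, FIFO) replacement line select unit 104.

class L0Cache:
    def __init__(self, line_size=32):
        self.line_size = line_size
        self.lines = {}   # line-aligned address tag -> (instruction bytes, predecode)

    def _tag(self, addr):
        return addr - (addr % self.line_size)

    def lookup(self, fetch_addr):
        # Two outputs per fetch address, mirroring mux pairs 106A/106B and
        # 106C/106D: the addressed line and the next incremental line.
        # A None entry corresponds to an invalid (miss) mux output.
        tag = self._tag(fetch_addr)
        return self.lines.get(tag), self.lines.get(tag + self.line_size)

    def fill(self, addr, data, predecode):
        # Store a prefetched line from L1 I-cache 14 unless already present.
        tag = self._tag(addr)
        if tag not in self.lines:
            if len(self.lines) >= 8:
                self.lines.pop(next(iter(self.lines)))   # FIFO replacement
            self.lines[tag] = (data, predecode)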
In other words, reading each stored cache line and selecting therefrom may be advantageous in providing high fetch bandwidth. The instruction bytes and predecode data corresponding to each cache line are provided to line select muxes 106, and the tags for each cache line are provided to tag compare and select unit 102. Tag compare and select unit 102 compares the tags to the sequential and branch target addresses provided by fetch control unit 50 in order to generate selection controls for line select muxes 106. More particularly, tag compare and select unit 102 compares the sequential address to each address tag. A match between one of the tags and the sequential address causes tag compare and select unit 102 to select the corresponding instruction bytes and predecode data via line select mux 106A. If no match is detected, tag compare and select unit 102 activates a corresponding miss signal to fetch control unit 50. Furthermore, the output of line select mux 106A indicates invalid in the case of a miss, and the bytes are ignored by branch scanner 58 and instruction scanner 56. Additionally, tag compare and select unit 102 compares the tags to the next incremental cache line address from the sequential address. The next incremental cache line address may be provided by fetch control unit 50, or may be calculated by tag compare and select unit 102. Alternatively, replacement line select unit 104 may manage the cache lines stored in cache storage 100 such that the next incremental cache line is stored contiguous to the sequential cache line and may include an indication that the cache line is the next incremental cache line. A match between one of the tags and the next incremental address is used to select the corresponding instruction bytes and predecode data via line select mux 106B. If no match is detected, the output of line select mux 106B indicates invalid and the bytes are ignored by branch scanner 58 and instruction scanner 56. Tag compare and select unit 102 further compares the branch target address to each address tag. A match between one of the tags and the branch target address causes tag compare and select unit 102 to select the corresponding instruction bytes and predecode data via line select mux 106C. If no match is detected, tag compare and select unit 102 activates a corresponding miss signal to fetch control unit 50. Furthermore, the output of line select mux 106C indicates invalid in the case of a miss, and the bytes are ignored by branch scanner 58 and instruction scanner 56. Additionally, tag compare and select unit 102 compares the tags to the address of the cache line sequential to the branch target address. This sequential cache line address may be provided by fetch control unit 50, or may be calculated by tag compare and select unit 102. Alternatively, replacement line select unit 104 may manage the cache lines stored in cache storage 100 such that the sequential cache line is stored contiguous to the branch target cache line and may include an indication that the cache line is the sequential cache line. A match between one of the tags and the sequential address to the branch target address is used to select the corresponding instruction bytes and predecode data via line select mux 106D.
If no match is detected, the output of line select mux 106D indicates invalid and the bytes are ignored by branch scanner 58 and instruction scanner 56. Replacement line select unit 104 selects which of the cache lines within cache storage 100 is to be replaced with a prefetched cache line received from L1 I-cache 14. A variety of replacement strategies may be used. For example, replacement line select unit 104 may monitor which cache lines are fetched from L0 I-cache 16 and employ a least recently used (LRU)-like replacement algorithm (e.g. true LRU, modified LRU, etc.). Alternatively, replacement line select unit 104 may operate L0 I-cache 16 as a first-in, first-out (FIFO) storage for replacement purposes. In such an embodiment, replacement line select unit 104 may include a pointer indicating a particular cache line storage location. Upon selecting that cache line storage location for replacement, the pointer may be incremented to the next storage location. In yet another alternative, random replacement may be used. Any suitable replacement algorithm may be employed, as desired. Prior to selecting a cache line for replacement, replacement line select unit 104 may compare the prefetch address provided by L1 I-cache 14 to the tags stored in L0 I-cache 16. If the prefetched cache line is already stored in L0 I-cache 16, then the prefetched cache line may be discarded instead of replacing a different cache line. Turning now to FIG. 8, a block diagram of one embodiment of a computer system 200 including processor 10 coupled to a variety of system components through a bus bridge 202 is shown. Other embodiments are possible and contemplated. In the depicted system, a main memory 204 is coupled to bus bridge 202 through a memory bus 206, and a graphics controller 208 is coupled to bus bridge 202 through an AGP bus 210. Finally, a plurality of PCI devices 212A-212B are coupled to bus bridge 202 through a PCI bus 214. A secondary bus bridge 216 may further be provided to accommodate an electrical interface to one or more EISA or ISA devices 218 through an EISA/ISA bus 220. Processor 10 is coupled to bus bridge 202 through bus interface 46. Bus bridge 202 provides an interface between processor 10, main memory 204, graphics controller 208, and devices attached to PCI bus 214. When an operation is received from one of the devices connected to bus bridge 202, bus bridge 202 identifies the target of the operation (e.g. a particular device or, in the case of PCI bus 214, that the target is on PCI bus 214). Bus bridge 202 routes the operation to the targeted device. Bus bridge 202 generally translates an operation from the protocol used by the source device or bus to the protocol used by the target device or bus. In addition to providing an interface to an ISA/EISA bus for PCI bus 214, secondary bus bridge 216 may further incorporate additional functionality, as desired. For example, in one embodiment, secondary bus bridge 216 includes a master PCI arbiter (not shown) for arbitrating ownership of PCI bus 214. An input/output controller (not shown), either external from or integrated with secondary bus bridge 216, may also be included within computer system 200 to provide operational support for a keyboard and mouse 222 and for various serial and parallel ports, as desired. An external cache unit (not shown) may further be coupled to bus interface 46 between processor 10 and bus bridge 202 in other embodiments.
Alternatively, the external cache may be coupled to bus bridge 202 and cache control logic for the external cache may be integrated into bus bridge 202. Main memory 204 is a memory in which application programs are stored and from which processor 10 primarily executes. A suitable main memory 204 comprises DRAM (Dynamic Random Access Memory), and preferably a plurality of banks of SDRAM (Synchronous DRAM). PCI devices 212A-212B are illustrative of a variety of peripheral devices such as, for example, network interface cards, video accelerators, audio cards, hard or floppy disk drives or drive controllers, SCSI (Small Computer Systems Interface) adapters and telephony cards. Similarly, ISA device 218 is illustrative of various types of peripheral devices, such as a modem, a sound card, and a variety of data acquisition cards such as GPIB or field bus interface cards. Graphics controller 208 is provided to control the rendering of text and images on a display 226. Graphics controller 208 may embody a typical graphics accelerator generally known in the art to render three-dimensional data structures which can be effectively shifted into and from main memory 204. Graphics controller 208 may therefore be a master of AGP bus 210 in that it can request and receive access to a target interface within bus bridge 202 to thereby obtain access to main memory 204. A dedicated graphics bus accommodates rapid retrieval of data from main memory 204. For certain operations, graphics controller 208 may further be configured to generate PCI protocol transactions on AGP bus 210. The AGP interface of bus bridge 202 may thus include functionality to support both AGP protocol transactions as well as PCI protocol target and initiator transactions. Display 226 is any electronic display upon which an image or text can be presented. A suitable display 226 includes a cathode ray tube ("CRT"), a liquid crystal display ("LCD"), etc. It is noted that, while the AGP, PCI, and ISA or EISA buses have been used as examples in the above description, any bus architectures may be substituted as desired. It is further noted that computer system 200 may be a multiprocessing computer system including additional processors (e.g. processor 10a shown as an optional component of computer system 200). Processor 10a may be similar to processor 10. More particularly, processor 10a may be an identical copy of processor 10. Processor 10a may share bus interface 46 with processor 10 (as shown in FIG. 8) or may be connected to bus bridge 202 via an independent bus. In accordance with the above disclosure, a processor has been shown which employs a pair of instruction caches and a fetch algorithm which attempts to maximize the fetch bandwidth achievable from the caches. Higher fetch bandwidth than that achievable in single cache configurations may be achieved using the combination. Accordingly, a wide issue superscalar processor may more frequently receive sufficient instructions to maximize the average number of instructions dispatched/executed per clock cycle. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications. |
The invention concerns methods and devices for providing leakage power estimation. In one embodiment, one or more detected temperature values (108) and one or more voltage values (110) are used to determine the leakage power of an integrated circuit (IC) component. The invention further relates to other embodiments. |
CLAIMS What is claimed is: 1. An apparatus comprising: a first logic (202) to generate a first signal corresponding to one or more sensed temperature values; a second logic (204, 252) to generate a second signal corresponding to one or more voltage values; and a third logic (208, 254) to generate a third signal corresponding to a leakage power value based on the first signal and the second signal. 2. The apparatus of claim 1, further comprising a fourth logic (112) to adjust power consumption of one or more components of a computing system (100, 300, 500, 600) based on the third signal. 3. The apparatus of claim 1, wherein the one or more voltage values comprise a current value of a threshold voltage and a current value of a supply voltage. 4. The apparatus of claim 1, further comprising a fourth logic (206) to generate a fourth signal corresponding to a base leakage power value, wherein the third logic generates the third signal based on the first signal, the second signal, and the fourth signal. 5. The apparatus of claim 1, further comprising one or more temperature sensors (108) to sense the temperature values. 6. The apparatus of claim 1, wherein the third logic comprises a multiplier (208, 254) to multiply the first and second signals to provide the third signal. 7. The apparatus of claim 1, further comprising one or more processor cores (300), wherein at least one of the one or more processor cores comprises one or more of the first logic, the second logic, or the third logic. 8. The apparatus of claim 1, further comprising one or more processor cores (300), wherein at least one of the one or more processor cores, the first logic, the second logic, and the third logic are on a same die. 9. A method comprising: determining a temperature scaling value (404) corresponding to one or more temperature values sensed (402) from a device (102); determining a voltage scaling value (404) based on one or more voltage values corresponding to the device; and scaling a reference leakage power value (406) of the device based on the temperature scaling value and the voltage scaling value to generate a signal corresponding to a leakage power of the device. 10. The method of claim 9, wherein the sensing and scaling are performed during run-time of the device. 11. The method of claim 9, wherein determining the temperature scaling value comprises accessing a storage unit (202). 12. The method of claim 9, wherein determining the voltage scaling value comprises accessing a storage unit (204, 252). 13. A computing system comprising: a memory (202, 206, 204, 252) to store a plurality of bits representing a plurality of scaling factors; a first logic (330) having one or more components to perform one or more computing operations; and a second logic (106) to scale a base leakage power value corresponding to at least one of the one or more components based, at least in part, on sensed temperature variations and one or more of the plurality of stored scaling factors. 14. The computing system of claim 13, further comprising a third logic (112) to adjust power consumption of at least one of the one or more components based on the scaled leakage power value. 15. The computing system of claim 13, wherein the second logic comprises a multiplier (208, 254) to multiply a first signal corresponding to a temperature scaling value, a second signal corresponding to a voltage scaling value, and a third signal corresponding to the base leakage power value. 16.
The computing system of claim 13, wherein the plurality of the stored scaling factors comprises a plurality of temperature scaling values and a plurality of voltage scaling values. 17. The computing system of claim 13, further comprising one or more processor cores (300), wherein at least one of the one or more processor cores comprises one or more of the first logic, the second logic, or the third logic. 18. The computing system of claim 13, further comprising one or more processor cores (300), wherein at least one of the one or more processor cores, the first logic, the second logic, and the third logic are on a same die. 19. The computing system of claim 13, wherein the one or more computing operations comprise one or more of data processing, data storage, and data communication. 20. The computing system of claim 13, further comprising an audio device (526, 647). |
LEAKAGE POWER ESTIMATION BACKGROUND [0001] The present disclosure generally relates to the field of electronics. More particularly, an embodiment of the invention relates to leakage power estimation in an integrated circuit (IC) device. [0002] Power consumption, both dynamic and leakage, is one of the major concerns in IC design. In particular, sub-threshold leakage (or leakage power) may be growing with each successive design generation. For example, as supply voltage is lowered (e.g., to reduce dynamic power consumption), threshold voltage may also be lowered (e.g., to maintain low gate delay or high frequency). However, lowering the threshold voltage may affect leakage power nonlinearly. [0003] In some implementations, leakage power may be assumed to have a constant value during run-time. However, leakage power may vary during run-time, for example, due to changes in temperature, supply voltage, or threshold voltage. Accordingly, power management techniques may be less accurate without knowledge of leakage power. BRIEF DESCRIPTION OF THE DRAWINGS [0004] The detailed description is provided with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items. [0005] Figs. 1, 5, and 6 illustrate block diagrams of computing systems in accordance with various embodiments of the invention. [0006] Figs. 2A and 2B illustrate block diagrams of portions of leakage power estimation systems, according to various embodiments. [0007] Fig. 3 illustrates a block diagram of a processor core, according to an embodiment. [0008] Fig. 4 illustrates a flow diagram of a method, according to an embodiment. DETAILED DESCRIPTION [0009] In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, various embodiments of the invention may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments of the invention. Various aspects of embodiments of the invention may be performed using various means, such as integrated semiconductor circuits ("hardware"), computer-readable instructions organized into one or more programs ("software"), or some combination of hardware and software. For the purposes of this disclosure, reference to "logic" shall mean either hardware, software, or some combination thereof. [0010] Some of the embodiments discussed herein may provide an efficient technique to estimate leakage power (e.g., static or sub-threshold leakage power generated by one or more components of an IC device). In an embodiment, the leakage power consumption may be due to one or more variations such as variations in temperature and/or voltage (e.g., threshold and/or supply voltage). Furthermore, some of the embodiments discussed herein may be applied in various computing systems, such as the computing systems discussed with reference to Figs. 1, 5, and 6. More particularly, Fig. 1 illustrates a block diagram of a computing system 100, according to an embodiment. The system 100 may include one or more domains 102-1 through 102-M (collectively referred to herein as "domains 102" or "domain 102").
Each of the domains 102-1 through 102-M may include various components, but for clarity, sample components are only shown with reference to domains 102-1 and 102-2. Also, each domain 102 may correspond to a portion of a computing system (such as the components discussed with reference to Figs. 5 and 6, or more generally to one or more transistors of an IC device). In an embodiment, each of the domains 102 may include various circuitry (or logic) that is clocked by a clock signal that may be different than the clock signal used in other domains. In one embodiment, one or more of these clock signals may be mesosynchronous, or otherwise related (e.g., with a relationship that may or may not repeat itself over time). [0011] As illustrated in Fig. 1, each domain may communicate data with other domains through one or more buffers 104. In an embodiment, the buffers 104 may be first-in, first-out (FIFO) buffers. Each domain may include a logic to estimate leakage power of one or more components within the corresponding domain (such as logics 106-1 and 106-2 shown with reference to domains 102-1 and 102-2, respectively, and generally referred to herein as "logic 106" or "logics 106"), one or more temperature sensors (such as sensor(s) 108-1 and 108-2 shown with reference to domains 102-1 and 102-2, respectively), a logic to control frequency and/or voltage levels and/or provide current threshold voltage and/or supply voltage values (e.g., logics 110-1 and 110-2 shown with reference to domains 102-1 and 102-2, respectively), and a logic to manage power consumption of one or more components of the corresponding domain (such as logics 112-1 and 112-2 shown with reference to domains 102-1 and 102-2, respectively, and generally referred to herein as "logic 112" or "logics 112"). In an embodiment, the threshold voltage of a transistor may be adjusted by applying a current to the body (or substrate) of the transistor. [0012] In various embodiments, the power management logic 112 may adjust power consumption of one or more components of a corresponding domain. For example, the logic 112 may utilize information such as the leakage power estimation value (e.g., provided by the corresponding logic 106), dynamic power estimation, and/or some other information (e.g., committed instructions per cycle, cache misses, etc.) to adjust the supply voltage and/or threshold voltage of one or more components of the corresponding domain. Also, the logic 112 may adjust the frequency of a clock signal (e.g., a clock signal that is used within at least a portion of the corresponding domain). In an embodiment, the logic 112 may turn off one or more components such as: one or more processor cores or portions of the processor cores (e.g., different pipelines, etc.) and/or data caches (e.g., including various levels of caches such as level 1 (L1), level 2 (L2), or other levels) or portions of data caches (e.g., different banks of caches). [0013] Figs. 2A and 2B illustrate block diagrams of portions of leakage power estimation systems 200 and 250, according to various embodiments. In one embodiment, the systems 200 and 250 may be the same or similar to the logic 106 discussed with reference to Fig. 1. In an embodiment, the storage units discussed with reference to Figs. 2A and 2B may be the same or similar to memory components discussed with reference to Figs. 5 and/or 6. [0014] As shown in Figs.
2A and 2B, the systems 200 and 250 may include a temperature scaling factor storage unit 202 (e.g., to store a plurality of temperature scaling factor values). The storage unit(s) 202 may receive sensed temperature values from the sensors 108 that correspond to one or more components such as those discussed with reference to Figs. 1, 5, and 6. The system 200 may also include a voltage scaling factor storage unit 204 (e.g., to store a plurality of voltage scaling factor values) and a reference leakage storage unit 206 (e.g., to store a reference or base leakage power value). The base leakage value stored in the storage unit 206 may be determined at design time (e.g., through simulations or circuit measurements) or at test time. For example, the base leakage value may be determined at test time for designs where there is a relatively high variability (since the base value may be calculated independently for each chip and/or block to allow for adapting the estimations to the specifics of each circuit). [0015] In an embodiment, the system 200 may also include a rounding logic 210 to round temperature values received from the sensors 108 (e.g., such that sensed values may be rounded to a nearest value stored in the storage unit 202). An interpolation logic 212 may interpolate the values output by the storage unit 202 to the actual temperature measurements provided by the sensors 108. Similarly, the system may include a voltage rounding logic 214 (e.g., to round current threshold and/or supply voltage values to a nearest value stored in the storage unit 204) and a voltage interpolation logic 218 (e.g., to interpolate the values output by the storage unit 204 to the actual voltage values provided by the control logic 110). A multiplier 208 may multiply the determined temperature scaling factor (e.g., looked up from the storage unit 202 based on sensed temperature values from sensor(s) 108), the determined voltage scaling factor (e.g., looked up from the storage unit 204 based on current voltage values provided by logic 110), and the reference leakage value (from the storage unit 206). The multiplication value may then be utilized to manage power settings (e.g., by the power management logic 112) such as discussed with reference to Fig. 1. [0016] Referring to Fig. 2B, the system 250 may include a reference leakage storage unit 252 that stores base leakage values for a corresponding set of voltages. Accordingly, in one embodiment, a single storage unit (252) may store values that correspond to a combination of values stored in the reference leakage storage unit 206 of Fig. 2A and corresponding values stored in the voltage scaling factor storage 204 of Fig. 2A. For example, a plurality of leakage power values may be indexed by a temperature factor (e.g., provided by the sensor(s) 108) and a voltage factor (e.g., corresponding to the threshold voltage value and/or supply voltage value provided by logic 110). Such an embodiment may allow a single look up (e.g., based on current threshold and/or supply voltage values from the logic 110) to provide a reference leakage value that may be scaled by the temperature scaling factor looked up from the storage unit 202 (e.g., based on sensed temperature value(s) provided by sensors 108) via a multiplier 254.
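By way of illustration only, the lookup-and-scale flow of Fig. 2A (rounding logic 210/214, interpolation logic 212/218, and multiplier 208) may be sketched in software as follows. This is a minimal sketch under assumed values: the table contents, granularities, and units are invented for the example and are not taken from the disclosure, and a hardware implementation would realize the same data flow with storage units and a multiplier circuit rather than function calls.

```python
# Minimal sketch of the Fig. 2A lookup-and-scale flow (illustrative only;
# all table contents and units below are invented for the example).
import bisect

TEMP_TABLE = {40: 0.6, 60: 1.0, 80: 1.7, 100: 2.9}    # storage unit 202 (deg C -> factor)
VOLT_TABLE = {0.8: 0.5, 0.9: 0.7, 1.0: 1.0, 1.1: 1.4}  # storage unit 204 (Vdd -> factor)
BASE_LEAKAGE_W = 2.0                                    # storage unit 206 (reference leakage)

def _lookup(table, value):
    """Clamp to the stored range, then linearly interpolate between the two
    bracketing entries (cf. rounding logic 210/214, interpolation logic 212/218)."""
    keys = sorted(table)
    value = min(max(value, keys[0]), keys[-1])  # clamp to the table range
    hi = min(bisect.bisect_left(keys, value), len(keys) - 1)
    lo = max(hi - 1, 0)
    if keys[hi] == keys[lo]:
        return table[keys[hi]]
    frac = (value - keys[lo]) / (keys[hi] - keys[lo])
    return table[keys[lo]] + frac * (table[keys[hi]] - table[keys[lo]])

def estimate_leakage(temp_c, vdd):
    """Multiplier 208: product of both scaling factors and the base value."""
    return BASE_LEAKAGE_W * _lookup(TEMP_TABLE, temp_c) * _lookup(VOLT_TABLE, vdd)

print(estimate_leakage(70.0, 0.95))  # 2.0 * 1.35 * 0.85 = 2.295
```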
Alternatively, the values stored in the storage units 202, 204, 206, and/or 252 may be combined into a single storage unit (not shown) to allow a single look up to provide a leakage value that corresponds to sensed temperature value(s) provided by sensors 108 and/or current threshold and/or supply voltage values from the logic 110. Also, the system 250 may include rounding and/or interpolation logic (e.g., that may be the same or similar to the logics 210, 212, 214, and/or 218) in accordance with some embodiments. [0017] Fig. 3 illustrates a block diagram of a processor core 300, according to an embodiment. In one embodiment, the core 300 may represent various components that may be present in a processor or number of processors (such as those discussed with reference to Figs. 5 and 6). The processor core 300 may include one or more domains such as a second level cache domain 302, a frontend domain 304, and one or more backend domains 306. Components within each of the domains 302, 304, and 306 may be clocked by a different clock signal such as discussed with reference to Fig. 1. Moreover, each of the domains (e.g., 302, 304, and 306) may include more or fewer components than those shown in Fig. 3 in various embodiments. [0018] The second level (L2) cache domain 302 may include an L2 cache 308 (e.g., to store data including instructions), the sensor(s) 108, and logics 106, 110, and 112. In one embodiment, the L2 cache 308 may be shared by multiple cores in a multi-core processor such as those discussed with reference to Figs. 5 and 6. Also, the L2 cache 308 may be off the same die as the processor cores. Accordingly, in various embodiments of the invention, a processor may include the domains 304 and 306, and may or may not include the L2 cache 308. [0019] As shown in Fig. 3, the frontend domain 304 may include one or more of the sensor(s) 108, logics 106, 110, and 112, a reorder buffer (ROB) 318, a rename and steer unit 320, an instruction cache 322, a decode unit 324, a sequencer 326, and/or a branch prediction unit 328. In one embodiment, the frontend domain 304 may include other components such as an instruction fetch unit. [0020] The backend domains 306 may include one or more of a first level (L1) cache domain 328 and one or more execution domains 330-1 through 330-N. The L1 cache domain 328 may include an L1 cache 332 (e.g., to store data including instructions), the sensor(s) 108, and logics 106, 110, and 112. Furthermore, the execution domains 330-1 through 330-N may include one or more of an integer execution unit and/or a floating point execution unit. The execution domains 330-1 through 330-N may each comprise an issue queue (338-1 through 338-N, respectively), a register file (340-1 through 340-N, respectively), the sensor(s) 108, logics 106, 110, and 112, and/or an execution unit (346-1 through 346-N, respectively). [0021] In one embodiment, each of the domains 302, 304, and 306 may include one or more first-in, first-out (FIFO) buffer(s) 348 to synchronize communication between the various clock domains (e.g., between the domains 302, 304, and/or 306). [0022] Additionally, the processor core 300 (and, in an embodiment, such as the one shown in Fig. 3, the backend domains 306) may include an interconnection or bus 350 to facilitate communication between various components of the processor core 300.
For example, after an instruction is successfully executed (e.g., by the execution domains 330-1 through 330-N), the instruction commit may be communicated to the ROB 318 (e.g., via the interconnection 350) to retire that instruction. Additionally, the domains within the backend (e.g., domains 328 and 330-1 through 330-N) may communicate via the interconnection 350. For example, communication among execution units (330-1 through 330-N) may occur for type conversion instructions. Further operations of components of Figs. 1-3 will be discussed with reference to the method 400 of Fig. 4. [0023] Furthermore, even though Fig. 3 illustrates that each of the domains 302, 304, and 306 may include the sensor(s) 108 and logics 106, 110, and 112, various domains may share the same sensor(s) 108 and logics 106, 110, and 112. For example, a single set of the sensor(s) 108 and logics 106, 110, and 112 may be utilized for all domains of the processor core 300. [0024] Fig. 4 illustrates a flow diagram of a method 400 to estimate leakage power, according to an embodiment. In one embodiment, the operations of the method 400 may be performed by one or more components, such as the components discussed with reference to Figs. 1-3 and 5-6. [0025] Referring to Figs. 1-4, at an operation 402, the sensor(s) 108 may sense one or more temperature values corresponding to an IC device. The sensed temperature value(s) may be used to determine a temperature scaling factor (e.g., from the storage unit 202) at an operation 404. At operation 404, a voltage scaling factor may also be determined such as discussed with reference to Figs. 2A and 2B (e.g., from the storage units 204 and/or 252). At an operation 406, the determined scaling factors of operation 404 may then be used to scale a base leakage value (e.g., stored in the unit 206 and/or 252) such as discussed with reference to Figs. 2A and 2B. At an operation 408, a signal may be generated (e.g., by the multipliers 208 and 254) that corresponds to an estimated leakage power of the IC device. As discussed with reference to Fig. 1, the estimated leakage power (408) may be used to adjust power consumption of one or more components of a computing system (e.g., systems discussed with reference to Figs. 1, 5, and/or 6). [0026] In an embodiment, the following equation may be used to provide the estimated leakage power at operation 408: P(Vdd, Vth, T) = P0 · (Vdd/Vdd0) · e^(β·(Vdd − Vdd0)) · e^(−δ·(Vth − Vth0)) · e^(κ·(T − T0)) [0027] In the above formula, P corresponds to the estimated leakage power value, P0 corresponds to the base leakage power value (e.g., that may be stored in units 206 and/or 252), Vdd corresponds to the supply voltage (that may be provided by the logic 110), Vth corresponds to the threshold voltage (that may be provided by the logic 110), Vdd0 corresponds to the Vdd at which the base leakage was measured, Vth0 corresponds to the Vth at which the base leakage was measured, T corresponds to the current temperature value(s) sensed by the sensor(s) 108, T0 corresponds to the temperature at which the base leakage was measured, and β, δ, and κ are circuit-dependent constants set by the designer. In various embodiments, values corresponding to the term T(T) = e^(κ·(T − T0)) may be stored in the storage unit 202 and values corresponding to the term V(Vdd, Vth) = (Vdd/Vdd0) · e^(β·(Vdd − Vdd0)) · e^(−δ·(Vth − Vth0)) may be stored in the storage units 204 (or 252). Hence, a multiplier (208, 254) may be used to multiply P0 by the terms T(T) and V(Vdd, Vth) to provide the value of P.
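A short numeric sketch may make the model above concrete. The reference point and the constants β, δ, and κ below are invented for illustration (the disclosure only states that they are circuit-dependent and set by the designer); the sketch shows how the T(T) and V(Vdd, Vth) factors could be evaluated, tabulated for the storage units of Fig. 2A, and multiplied.

```python
# Illustrative evaluation of P = P0 * T(T) * V(Vdd, Vth); constants invented.
import math

P0, VDD0, VTH0, T0 = 2.0, 1.0, 0.35, 60.0  # assumed base leakage point
BETA, DELTA, KAPPA = 2.5, 18.0, 0.035      # assumed circuit-dependent constants

def t_scale(t):
    """T(T) = e^(kappa*(T - T0)): candidate contents of storage unit 202."""
    return math.exp(KAPPA * (t - T0))

def v_scale(vdd, vth):
    """V(Vdd, Vth) = (Vdd/Vdd0)*e^(beta*(Vdd-Vdd0))*e^(-delta*(Vth-Vth0)):
    candidate contents of storage unit 204 (or 252)."""
    return (vdd / VDD0) * math.exp(BETA * (vdd - VDD0)) * math.exp(-DELTA * (vth - VTH0))

def leakage(vdd, vth, t):
    """The product formed by the multiplier 208/254."""
    return P0 * t_scale(t) * v_scale(vdd, vth)

# Pre-computing the factors over a grid reproduces the tables of Fig. 2A.
temp_table = {t: round(t_scale(t), 3) for t in range(40, 101, 20)}
volt_table = {v: round(v_scale(v, VTH0), 3) for v in (0.8, 0.9, 1.0, 1.1)}
print(leakage(1.0, 0.35, 60.0))  # equals P0 (2.0) at the reference point
```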
[0028] Moreover, in one embodiment, dynamic calibration of an IC component may be performed in idle mode (e.g., where there is no dynamic power consumption). In such a situation, the temperature increase (over a controlled ambient temperature) in each portion (e.g., block) of the IC component may be dependent upon the leakage power. The thermal sensors 108 that may be placed in the blocks can report the stable temperature (e.g., after a relatively long period of time). With the temperature map, a tool (such as a computing device that is external to the IC component) may derive the power map that produces the observed temperatures, e.g., via reverse-engineering. The leakage values may then be computed based on the static temperatures of the portions (since other constants may be known, such as supply voltage, threshold voltage, and ambient temperature). Once the power map is computed it may be stored in the reference leakage storage 206. In an embodiment, a special dedicated microcode may be used to communicate between the IC component being calibrated and test equipment (e.g., to report the temperature readings and to perform the base leakage update). [0029] Fig. 5 illustrates a block diagram of a computing system 500 in accordance with an embodiment of the invention. The computing system 500 may include one or more central processing unit(s) (CPUs) 502 or processors that communicate via an interconnection network (or bus) 504. The processors 502 may be any type of processor such as a general purpose processor, a network processor (that processes data communicated over a computer network 503), or another type of processor (including a reduced instruction set computer (RISC) processor or a complex instruction set computer (CISC) processor). Moreover, the processors 502 may have a single or multiple core design. The processors 502 with a multiple core design may integrate different types of processor cores on the same integrated circuit (IC) die. Also, the processors 502 with a multiple core design may be implemented as symmetrical or asymmetrical multiprocessors. In an embodiment, one or more of the processors 502 may utilize the embodiments discussed with reference to Figs. 1-4. For example, one or more of the processors 502 may include one or more processor cores (300). Also, the operations discussed with reference to Figs. 1-4 may be performed by one or more components of the system 500. [0030] A chipset 506 may also communicate with the interconnection network 504. The chipset 506 may include a memory control hub (MCH) 508. The MCH 508 may include a memory controller 510 that communicates with a memory 512. The memory 512 may store data and sequences of instructions that are executed by the CPU 502, or any other device included in the computing system 500. In one embodiment of the invention, the memory 512 may include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or the like. Nonvolatile memory may also be utilized such as a hard disk. Additional devices may communicate via the interconnection network 504, such as multiple CPUs and/or multiple system memories. [0031] The MCH 508 may also include a graphics interface 514 that communicates with a graphics accelerator 516. In one embodiment of the invention, the graphics interface 514 may communicate with the graphics accelerator 516 via an accelerated graphics port (AGP).
In an embodiment of the invention, a display (such as a flat panel display) may communicate with the graphics interface 514 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory into display signals that are interpreted and displayed by the display. The display signals produced by the display device may pass through various control devices before being interpreted by and subsequently displayed on the display. [0032] A hub interface 518 may allow the MCH 508 to communicate with an input/output control hub (ICH) 520. The ICH 520 may provide an interface to I/O devices that communicate with components of the computing system 500. The ICH 520 may communicate with a bus 522 through a peripheral bridge (or controller) 524, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or the like. The bridge 524 may provide a data path between the CPU 502 and peripheral devices. Other types of topologies may be utilized. Also, multiple buses may communicate with the ICH 520, e.g., through multiple bridges or controllers. Moreover, other peripherals in communication with the ICH 520 may include, in various embodiments of the invention, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), USB port(s), a keyboard, a mouse, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), or the like. [0033] The bus 522 may communicate with an audio device 526, one or more disk drive(s) 528, and a network interface device 530 (which communicates with the computer network 503). Other devices may be in communication with the bus 522. Also, various components (such as the network interface device 530) may be in communication with the MCH 508 in some embodiments of the invention. In addition, the processor 502 and the MCH 508 may be combined to form a single chip. Furthermore, the graphics accelerator 516 may be included within the MCH 508 in other embodiments of the invention. [0034] Furthermore, the computing system 500 may include volatile and/or nonvolatile memory (or storage). For example, nonvolatile memory may include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), a disk drive (e.g., 528), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto-optical disk, or other types of nonvolatile machine-readable media capable of storing electronic instructions and/or data. [0035] Fig. 6 illustrates a computing system 600 that is arranged in a point-to-point (PtP) configuration, according to an embodiment of the invention. In particular, Fig. 6 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces. The operations discussed with reference to Figs. 1-5 may be performed by one or more components of the system 600. [0036] As illustrated in Fig. 6, the system 600 may include several processors, of which only two, processors 602 and 604, are shown for clarity. The processors 602 and 604 may each include a local memory controller hub (MCH) 606 and 608 to allow communication with memories 610 and 612. The memories 610 and/or 612 may store various data such as those discussed with reference to the memory 512.
[0037] The processors 602 and 604 may be any type of processor such as those discussed with reference to the processors 502 of Fig. 5. The processors 602 and 604 may exchange data via a point-to-point (PtP) interface 614 using PtP interface circuits 616 and 618, respectively. The processors 602 and 604 may each exchange data with a chipset 620 via individual PtP interfaces 622 and 624 using point-to-point interface circuits 626, 628, 630, and 632. The chipset 620 may also exchange data with a high-performance graphics circuit 634 via a high-performance graphics interface 636, using a PtP interface circuit 637. [0038] At least one embodiment of the invention may be provided within the processors 602 and 604. For example, one or more of the domains 102 discussed with reference to Fig. 1 and/or processor core(s) 300 may be located within the processors 602 and 604. Other embodiments of the invention, however, may exist in other circuits, logic units, or devices within the system 600 of Fig. 6. Furthermore, other embodiments of the invention may be distributed throughout several circuits, logic units, or devices illustrated in Fig. 6. [0039] The chipset 620 may communicate with a bus 640 using a PtP interface circuit 641. The bus 640 may have one or more devices that communicate with it, such as a bus bridge 642 and I/O devices 643. Via a bus 644, the bus bridge 642 may be in communication with other devices such as a keyboard/mouse 645, communication devices 646 (such as modems, network interface devices, etc. that may be in communication with the computer network 503), an audio I/O device 647, and/or a data storage device 648. The data storage device 648 may store code 649 that may be executed by the processors 602 and/or 604. [0040] In various embodiments of the invention, the operations discussed herein, e.g., with reference to Figs. 1-6, may be implemented by hardware (e.g., circuitry), software, firmware, microcode, or combinations thereof, which may be provided as a computer program product, e.g., including a machine-readable or computer-readable medium having stored thereon instructions (or software procedures) used to program a computer to perform a process discussed herein. Also, the term "logic" may include, by way of example, software, hardware, or combinations of software and hardware. The machine-readable medium may include a storage device such as those discussed with respect to Figs. 1-6. Additionally, such computer-readable media may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a bus, a modem, or a network connection). Accordingly, herein, a carrier wave shall be regarded as comprising a machine-readable medium. [0041] Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least an implementation. The appearances of the phrase "in one embodiment" in various places in the specification may or may not be all referring to the same embodiment. [0042] Also, in the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used.
In some embodiments of the invention, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements may not be in direct contact with each other, but may still cooperate or interact with each other. [0043] Thus, although embodiments of the invention have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter. |
The present invention relates to a stepped micro electromechanical structure (MEMS) capacitor that is actuated by a plurality of MEMS switches. The MEMS switches may be within the stepped capacitor circuit, or they may be actuated by an independent circuit. The stepped capacitor may also be varied with intermediate steps of capacitance by providing at least one variable capacitor in the stepped MEMS capacitor structure. |
1. A microelectromechanical (MEMS) capacitor comprising: a plurality of MEMS capacitors arranged in parallel in a first circuit; and at least one switch coupled in series with at least one of the plurality of MEMS capacitors in the first circuit. 2. The MEMS capacitor of claim 1, wherein the at least one switch comprises at least one MEMS switch. 3. The MEMS capacitor of claim 1, wherein said at least one switch comprises at least one parallel plate switch. 4. The MEMS capacitor of claim 1, wherein said at least one switch further comprises: a first switch in series with a first MEMS capacitor, wherein the first switch has a first voltage closure threshold; and a second switch coupled in series with a second MEMS capacitor, wherein the second switch has a second voltage closure threshold that is higher than the first voltage closure threshold. 5. The MEMS capacitor of claim 1, wherein at least one of said plurality of capacitors has a movable charging pad. 6. The MEMS capacitor of claim 1, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of linearly graded capacitances. 7. The MEMS capacitor of claim 1, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of linearly graded capacitances, and wherein at least one of the plurality of capacitors having the movable charging pad is configured to enable a plurality of intermediately graded capacitances. 8. The MEMS capacitor of claim 1, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of geometrically graded capacitances. 9. The MEMS capacitor of claim 1, wherein at least one of said plurality of capacitors has a movable charging pad, wherein said plurality of capacitors are configured to enable a plurality of geometrically graded capacitances, and wherein at least one of the plurality of capacitors having the movable charging pad is configured to enable a plurality of intermediately graded capacitances. 10. The MEMS capacitor of claim 1, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of exponentially graded capacitances. 11. The MEMS capacitor of claim 1, wherein at least one of said plurality of capacitors has a movable charging pad, wherein said plurality of capacitors are configured to enable a plurality of exponentially graded capacitances, and wherein at least one of the plurality of capacitors having the movable charging pad is configured to enable a plurality of intermediately graded capacitances. 12. A microelectromechanical (MEMS) capacitor comprising: a plurality of MEMS capacitors arranged in parallel in a first circuit; and at least one switch coupled in series with at least one of the plurality of MEMS capacitors in the first circuit, wherein each of the at least one switch is energized by a second circuit. 13. The MEMS capacitor of claim 12, wherein said at least one switch comprises at least one MEMS switch. 14. The MEMS capacitor of claim 12, wherein said at least one switch comprises at least one parallel plate switch. 15. The MEMS capacitor of claim 12, wherein said at least one switch further comprises: a first switch in series with a first MEMS capacitor, wherein the first switch has a first voltage closure threshold; and a second switch coupled in series with a second MEMS capacitor, wherein the second switch has a second voltage closure threshold that is higher than the first voltage closure threshold. 16. The MEMS capacitor of claim 12, wherein at least one of the plurality of capacitors has a movable charging pad. 17. The MEMS capacitor of claim 12, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of linearly graded capacitances. 18. The MEMS capacitor of claim 12, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of linearly graded capacitances, and wherein at least one of the plurality of capacitors having the movable charging pad is configured to enable a plurality of intermediately graded capacitances. 19. The MEMS capacitor of claim 12, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of geometrically graded capacitances. 20. The MEMS capacitor of claim 12, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of geometrically graded capacitances, and wherein at least one of the plurality of capacitors having the movable charging pad is configured to enable a plurality of intermediately graded capacitances. 21. The MEMS capacitor of claim 12, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of exponentially graded capacitances. 22. The MEMS capacitor of claim 12, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of exponentially graded capacitances, and wherein at least one of the plurality of capacitors having the movable charging pad is configured to enable a plurality of intermediately graded capacitances. 23. A microelectromechanical (MEMS) capacitor comprising: a plurality of MEMS capacitors arranged in parallel in a first circuit; and at least one switch coupled in series with at least one of the plurality of MEMS capacitors in the first circuit, wherein each of the at least one switch is energized by a respective independent circuit. 24. The MEMS capacitor of claim 23, wherein said at least one switch comprises at least one MEMS switch. 25. The MEMS capacitor of claim 23, wherein said at least one switch comprises at least one parallel plate switch. 26. The MEMS capacitor of claim 23, wherein said at least one switch further comprises: a first switch in series with a first MEMS capacitor, wherein the first switch has a first voltage closure threshold; and a second switch coupled in series with a second MEMS capacitor, wherein the second switch has a second voltage closure threshold that is higher than the first voltage closure threshold. 27. The MEMS capacitor of claim 23, wherein at least one of the plurality of capacitors has a movable charging pad. 28. The MEMS capacitor of claim 23, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of linearly graded capacitances. 29. The MEMS capacitor of claim 23, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of linearly graded capacitances, and wherein at least one of the plurality of capacitors having the movable charging pad is configured to enable a plurality of intermediately graded capacitances. 30. The MEMS capacitor of claim 23, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of geometrically graded capacitances. 31. The MEMS capacitor of claim 23, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of geometrically graded capacitances, and wherein at least one of the plurality of capacitors having the movable charging pad is configured to enable a plurality of intermediately graded capacitances. 32. The MEMS capacitor of claim 23, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of exponentially graded capacitances. 33. The MEMS capacitor of claim 23, wherein at least one of the plurality of capacitors has a movable charging pad, wherein the plurality of capacitors are configured to enable a plurality of exponentially graded capacitances, and wherein at least one of the plurality of capacitors having the movable charging pad is configured to enable a plurality of intermediately graded capacitances. |
Microelectromechanical structure switching multi-stage variable capacitor and manufacturing method thereof Field of invention The present invention generally relates to mechanically switched capacitors. More specifically, the present invention relates to capacitors that are graded by being mechanically switched on and off. Furthermore, the graded capacitor is mechanically variable. Background of the invention One of the difficulties in integrated circuit packaging is that large, often passive, devices placed on the silicon with the integrated circuit (IC) cannot be integrated with the bulk of conventional active components such as field effect transistors. Some components can be placed off-chip, but their adaptability is limited. For example, prior art on-chip variable capacitors are based on varactor technology with a tuning range of less than about 25%. In addition, the increased complexity of microelectronic devices such as computers and handheld devices has raised an ever-increasing need for a wider range of operability of passive devices. An example is a varactor that can be used as a component in a computer or in a handheld device. FIG. 1 is a schematic diagram showing a circuit 10 of basic components, in which a capacitor 12 is included. The capacitor 12 can be a variable capacitor and is also referred to as a varactor. Existing varactor technology suffers from the pull-in effect. Furthermore, prior art diaphragm capacitors have a capacitance adjustment range that is limited because the voltage cannot exceed a threshold voltage (Vc); at Vc, the film collapses and the capacitor is shorted. Furthermore, due to the suspension characteristics of prior art capacitors, the central portion of the variable membrane is closer to the fixed electrode than the edge portion. This phenomenon causes the local capacitance at the center of the variable film to be greater than the local capacitance at the fixed edge portion of the variable film. In addition, from a production standpoint, a wide capacitance range has not been provided in a single capacitor, such that one capacitor could be suitable for a variety of applications. BRIEF DESCRIPTION OF THE DRAWINGS In order to convey the manner in which the above and other advantages of the present invention are obtained, a more particular description of the invention briefly described above will be rendered by reference to the specific embodiments illustrated in the accompanying drawings. In the drawings, the same structures will have the same reference signs. To best illustrate the structure of the present invention, the figures included herein are graphical representations of integrated circuit structures. Thus, while still exhibiting the basic structure of the present invention, the actual appearance of the fabricated structure may look different, for example, in a photomicrograph. Moreover, the drawings show only the structures required to understand the invention. Additional structures known in the art are not included to keep the figures clear. Understanding that these drawings depict only exemplary embodiments of the invention and are not to be considered limiting of its scope, the invention will be described with reference to the accompanying drawings, in which: Figure 1 is a schematic view showing a circuit of a basic element; Figure 2A is a front cross-sectional view of a MEMS capacitor in accordance with an embodiment of the present invention; Figure 2B is a front cross-sectional view of a MEMS capacitor in accordance with an embodiment of the present invention; Figure 3A is a front cross-sectional view of a MEMS switch in accordance with the present invention; Figure 3B is a front cross-sectional view of a MEMS switch in accordance with the present invention; Figure 4 is a schematic view showing a circuit segment of a grading capacitor of the present invention; Figure 5 is a graph of graded capacitance as a function of voltage across the switching circuit; Figure 6 is a schematic illustration of a graded variable MEMS capacitor in accordance with one embodiment of the present invention; Figure 7 is a schematic illustration of a graded variable MEMS capacitor in accordance with one embodiment of the present invention; Figure 8 is a schematic illustration of a graded variable MEMS capacitor in accordance with one embodiment of the present invention; Figure 9 is a front cross-sectional view of a variable capacitor in accordance with the present invention; Figure 10 is an enlarged front cross-sectional view of the variable capacitor showing the relative deformation of the MEMS device; Figure 11 is a partial cross-sectional view of the variable capacitor shown in Figure 9; Figure 12 is a front cross-sectional view of another embodiment of a variable capacitor; Figure 13 is a top plan view of another embodiment of the variable capacitor shown in Figure 9; Figure 14 is a top plan view of another embodiment of the variable capacitor shown in Figure 9; Figure 15 is a top plan view of another embodiment of the variable capacitor shown in Figure 9; Figure 16 is a front cross-sectional view showing another embodiment of the present invention; Figure 17 is a front cross-sectional view showing another embodiment of the variable capacitor shown in Figure 16; Figure 18 is a front cross-sectional view showing another embodiment of a variable capacitor; Figure 19 is a front cross-sectional view showing another embodiment of the variable capacitor shown in Figure 18; Figure 20 is a process flow diagram showing the method of the present invention. Detailed description of the invention The present invention relates to micro electromechanical structure (MEMS) graded capacitors which can also be varied between graded capacitances. FIG. 2A illustrates an embodiment of the present invention in which a MEMS capacitor 22 includes a fixed charging pad 24, a movable charging pad 26, and a dielectric layer 28 between them to prevent shorting. In addition, the MEMS capacitor 22 includes a movable plate 30 that uses a direct current potential to pull the movable charging pad 26 toward the fixed charging pad 24, thereby changing the capacitance between them. Typically, the fixed charging pad 24 and the movable plate 30 are located on the substrate 32. The MEMS capacitor 22 can be referred to as a first capacitor type. Other embodiments of the variable capacitor are described below. FIG. 2B shows a second embodiment of the present invention in which the MEMS capacitor 23 includes a fixed charging pad 24, a movable charging pad 26, and a dielectric layer 28 between them to prevent short circuits. The MEMS capacitor 23 does not include the movable plate 30 as used in the first capacitor type. Thus, at a certain excitation voltage, the movable charging pad 26 will collapse and the dielectric layer 28 will approach and/or contact the fixed charging pad 24. The MEMS capacitor 23 can be referred to as a second capacitor type. Other embodiments of the variable capacitor will be described below. FIG. 3A illustrates an embodiment of the present invention in which a MEMS switch 34 includes a fixed charging pad 24 and a variable switch plate 36. In addition, the MEMS switch 34 includes a movable plate 30 that utilizes a DC potential to pull the variable switch plate 36 toward the fixed charging pad 24 to close the switch. Typically, the fixed charging pad 24 and the movable plate 30 are disposed above the substrate 32. FIG. 3B illustrates another embodiment of a MEMS switch 35 that may be used. It can be seen that the MEMS switch 35 can be a parallel plate switch having a structure similar to the MEMS capacitor 23 shown in Figure 2B. The substrate 32 supports the fixed charging pad 24. Above the fixed charging pad 24 is a variable switch plate 36 that can be pulled toward the fixed charging pad 24 to close the MEMS switch 35. FIG. 4 shows a device that can be utilized in place of the capacitor 12 in the circuit 10 of FIG. 1. The grading capacitor 14 is depicted as including a plurality of capacitors 16 arranged in parallel in the first circuit 18. Further, a plurality of switches 20 are connected in series with the capacitors 16. In a first embodiment, there may be a circuit having n MEMS capacitors and m switches, where m < n. For example, with n = 2, m can be equal to 1; the circuit would then have two capacitors and only one switch, and the switch would be connected in series with only one of the two MEMS capacitors. Preferably, the plurality of switches includes at least one MEMS switch as described herein. To achieve a graded capacitance, the surface areas of the MEMS switches can be graded such that a first switch in series with a first MEMS capacitor has a first voltage closure threshold and a second switch in series with a second MEMS capacitor has a second voltage closure threshold that is higher than the first voltage closure threshold. Such an arrangement may be continued such that the linear grading of the capacitance is proportional to the voltage applied across the first circuit 18. For example, if the multiple switches are MEMS switches 34, increasing the nominal voltage from one voltage unit to two voltage units will increase the capacitance from one capacitance unit to two capacitance units by turning on one more switch. According to this embodiment, the curve of the increase in capacitance as a function of the increase in the rated grading voltage will have a positive slope as shown in FIG. 5. Thus, a scheme for a linearly increasing capacitance response 38 can be obtained in which the relative surface areas of the MEMS switches vary linearly. Another embodiment of the invention includes MEMS switches having geometrically increasing surface areas, such as 1, 2, 4, 8, and the like. The increase in capacitance will again be a function of the nominal grading voltage; however, the slope of function 40 will be lower than the slope of the linearly increasing grading scheme.
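Before the exponential scheme is described next, the staircase behavior of Figs. 4 and 5 can be shown with a small behavioral sketch. Each branch is reduced here to a (closure threshold, capacitance) pair; all numeric values are invented for illustration, and the electrostatics of the switches themselves are not modeled.

```python
# Behavioral sketch of a stepped MEMS capacitor: raising the DC voltage
# closes switches one by one, connecting more branch capacitance in parallel.
def total_capacitance(v_applied, branches):
    """branches: list of (closure_threshold_V, capacitance_pF) pairs; a branch
    contributes only once the applied voltage reaches its closure threshold."""
    return sum(c for v_close, c in branches if v_applied >= v_close)

# Linear grading: equal unit capacitors behind evenly spaced thresholds
# (the constant-step staircase of response 38).
linear = [(float(k), 1.0) for k in range(1, 5)]
# Geometric grading of branch sizes, e.g. 1, 2, 4, 8 (cf. function 40).
geometric = [(float(k), 2.0 ** (k - 1)) for k in range(1, 5)]

for v in (0.5, 1.5, 2.5, 3.5, 4.5):
    print(v, total_capacitance(v, linear), total_capacitance(v, geometric))
```

With the linear list the printed capacitance rises one unit per voltage step, while the geometric list produces progressively larger steps; an exponential list (1, 10, 100, ...) would follow the same pattern.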
Similarly, the present invention may have surface areas that increase exponentially, such as 1, 10, 100, 1000, etc., if a base-10 exponential ratio is used. The increase in capacitance will again be a function of the grading voltage imposed across the grading capacitor 14; however, the slope of the function 42 will also be lower than the slope of the linearly increasing grading scheme. FIG. 6 illustrates another embodiment that may be utilized in place of the capacitor 12 in the circuit 10 of FIG. 1. The grading capacitor 15 is depicted as including a plurality of capacitors 17 arranged in parallel in the circuit 19. In the present embodiment, the signal Vs is applied together with the DC excitation voltage. The surface area dimensions of capacitors C1 through Cn can be different such that each of them will collapse at a different DC voltage. Thus, by grading the DC voltage, a graded total capacitance can be achieved. Thus, as described herein, graded increasing surface areas can be achieved. In particular, combinations of linearly, geometrically, and exponentially increasing surface areas can be implemented in order to achieve both a digital and a practically analog varactor effect. A further definition of the capacitance according to any of the above three schemes can be achieved by independently adjusting all capacitors or any one of the capacitors as described herein. Figure 7 is another embodiment of the present invention. The variable grading capacitor 44 is provided with a first capacitor circuit 46 in which individual capacitors 48 are combined with a plurality of MEMS switches of the first capacitor type described herein in order to achieve a preferred capacitance. In the present embodiment, at least one of the plurality of individual capacitors 48 has a movable charging pad. Each capacitor 48 is depicted as having a capacitor adjustment circuit 52; however, it is understood that the number of individual capacitors 48 having the capacitor adjustment circuit 52 can vary between one and all of the individual capacitors 48. In the present embodiment, the scheme of increasing the capacitance is controlled by a plurality of DC voltages, DC1 to DCn. Likewise, the particular surface areas of capacitors C1 through Cn can provide a linear, geometric, or exponential voltage closure threshold response according to the characteristics described herein. In one embodiment, the individual capacitors 48 are varied by one of a linear, geometric, or exponential area difference. Moreover, where the graded sizing may be viewed as integer variation of the capacitance, variation using a circuit such as the capacitor adjustment circuit 52 may be viewed as intermediate or fractional variation that further defines the capacitance above the integer increments. In a first example, the variable grading capacitor 44 includes four nominal first capacitors and an nth capacitor having a surface area five times the surface area of each of the four nominal first capacitors. By combining the nominal first capacitors with the nth capacitor, a full range of capacitances from 1 to 9 can be achieved. Further, as described herein, intermediate definition of the capacitance can be realized across the entire graded range by varying any one or each of the nominal first capacitors and the nth capacitor with the capacitor adjustment circuit 52 of each capacitor, as required. In a second example, in addition to the 5x capacitor, an additional capacitor having a surface area of 10 times that of each of the nominal first capacitors is provided to achieve a range of capacitance variations from 1 to 19. In addition to the 5 and 10 times capacitors, another capacitor having a surface area of 20 times that of each of the nominal first capacitors can achieve a range of capacitance variations from 1 to 39. Intermediate or fractional variations can be achieved by varying any or all of the capacitors by the independent adjustment circuits 52, as described herein. Other non-integer linear schemes can be established within the spirit and scope of the present invention. For situations where more control over the capacitance is needed, each MEMS switch 48 can have its own switch circuit 52, such as an adjustment circuit. Figure 8 illustrates such an embodiment, including a more general case. In the present embodiment, the surface areas of the MEMS switches 48 can be substantially equal to each other, and by applying a sufficient voltage to an independent switch circuit 52, it is possible to close any or all of them. As described herein, the variable individual capacitors 16 and 48 can be varied linearly, geometrically, or exponentially. Additionally, the capacitor adjustment circuit 52 can be used to implement intermediate steps of fractional capacitance variation. In accordance with the present invention, various types of MEMS capacitors can be used to achieve the required capacitance. Figure 9 is a front cross-sectional view of the variable capacitor of the present invention, indicated by reference numeral 66. FIG. 9 shows the substrate 68 in which the fixed charging pad 70 is disposed. The movable charging pad 72 is placed above the fixed charging pad. The movable charging pad is characterized by a planar portion 74, a floating portion 76, and an end portion 78. Attached to the planar portion 74 of the movable charging pad 72 is a hard liner 80. The hard liner 80 can occupy the same footprint as the planar portion 74 of the movable charging pad 72. The first separation distance 82 is considered to be the original separation distance between the fixed charging pad 70 and the planar portion 74 prior to the application of a force. Similarly, the second separation distance 84 is considered to be the separation distance of the adjustable capacitor between the fixed charging pad 70 and the planar portion 74 when a given force is applied. The hard liner 80 can be made of any material that can prevent the planar portion 74 of the movable charging pad 72 from bending. Preferably, the hard liner 80 is made of silicon nitride, SixNy, wherein x and y have values ranging from stoichiometric combinations to solid solution combinations. The hard liner 80 may also be composed of an oxide such as silica, titania, alumina, or ceria, as well as other oxides in stoichiometric combinations and solid solution combinations. In addition, the hard liner 80 can be constructed of any material, preferably dielectric, that enables the structure of the present invention to achieve an adjustable range of greater than 30%, more preferably greater than 50%, and most preferably greater than 100%. The second separation distance 84 is considered to be approximately constant. By "substantially constant" it is meant that the relative deformation of the planar portion 74 of the movable charging pad 72 is minimized.
Relative deformation is defined as the vertical deflection of any point along the charging surface 86 of the planar portion 74 relative to any other point thereon, divided by the length 88 of the planar portion 74. Figure 10 shows details of exaggerated relative deformation, wherein the deformation difference 90 can be quantified relative to the length 92. The relative deformation in the present invention may range from about 30% to about 0.1%, more preferably from about 10% to about 0.5%, most preferably from about 2% to about 1%. Referring back to FIG. 9, the first separation distance 82 is the dimension from the end portion 78 of the movable charging pad 72 down to the fixed charging pad 70. The floating portion 76 of the movable charging pad 72 is separated from the fixed charging pad 70 by a variable distance ranging from at most the first separation distance 82 to at least the second separation distance 84. Therefore, for the floating portion 76, less material in this portion is preferred for reducing the capacitance. Figure 11 further illustrates a top view of the variable capacitor 66 of the present invention. The hard liner 80 has been removed to further illustrate the movable charging pad 72. The movable charging pad 72 is seen to include a planar portion 74, a floating portion 76, and an end portion 78 that are bent at an angle indicated by a broken line 94. The end portion 78 and the floating portion 76 are also bent at a certain angle as indicated by dashed line 96. Figure 11 shows that the floating portion 76 can include a via 98 to form an incomplete surface suspension of the planar portion 74. By reducing the size of the area of the charging surface that occurs at the variable first separation distance 82, the incomplete surface of the floating portion 76 of the movable charging pad 72 reduces the capacitive surface area for that portion of the movable charging pad. Thus, the incomplete surface of the floating portion 76 enables better control of the quality of the variable capacitor of the present invention. Furthermore, since there is less material that must be bent in the floating portion 76 when the floating portion 76 has an incomplete surface suspension, the movable charging pad 72 is more flexible and thus more easily adjustable. It should be understood that the floating portion 76 can also be solid. In the case where the floating portion 76 has an incomplete surface, the fixed charging pad 70 has a first surface area, and the movable charging pad 72 has a second surface area that is smaller than the first surface area. In a preferred embodiment, the capacitor according to the invention has a solid-surface charging plate portion and an incomplete-surface suspension portion. Figure 12 is a front cross-sectional view showing another variable capacitor 100 of another embodiment of the present invention. Figure 12 shows a variable dielectric material 102 having a movable charging pad 104 positioned thereon and suspended above a fixed charging pad 70. Note that the movable charging pad 104 cannot come into electrical contact with the fixed charging pad 70 due to the insertion of the variable dielectric material between them. In the present embodiment, the variable dielectric material 102 is divided into a planar portion 106, a floating portion 108, and an end portion 110. The hard liner 80 is positioned over the variable dielectric material 102. The hard liner 80 has substantially the same footprint as the movable charging pad 104 and the planar portion 106. The movable charging pad 104 is inserted between the hard liner 80 and the planar portion 106. Viewed from top to bottom in FIG. 12, although the hard liner 80 is shown as completely covering the movable charging pad 104, it should be understood that the hard liner 80 may have a footprint greater than, equal to, or less than that of the movable charging pad 104. Where the hard liner 80 is larger than the movable charging pad 104, it may be enlarged by a factor ranging from about 1.01 to about 2, preferably from about 1.1 to about 1.5. At least one via 98 (not shown in FIG. 12) may be formed in the variable dielectric material 102 under the movable charging pad; the at least one via has an area, relative to the variable dielectric material 102, in the range of 1% to 50%, preferably from 10% to 40%. Figure 13 is another embodiment of the present invention. In this embodiment, the hard liner 80 is overlaid on the movable charging pad 112 (not visible). In this embodiment, the hard liner 80 masks the planar portion 114 of the movable charging pad 112. In this embodiment, between the planar portion 114 of the movable charging pad and the end portion 118, the floating portion 116 of the movable charging pad 112 forms a wave-shaped suspension spring. With the present embodiment, greater flexibility can be achieved for the movement of the planar portion 114 of the movable charging pad 112. Figure 13 shows a floating portion 116 having "W" and "M" shapes. Although these shapes are a preferred embodiment, simpler or more complex shapes can be implemented. An example of a simpler shape is shown in FIG. 14. In Fig. 14, a movable charging pad 120 has "U" shaped and inverted "U" shaped floating portions 122 coupled in a wave-like fashion between the planar portion 114 of the movable charging pad and the end portion 118. Another example of a simpler shape is shown in FIG. 15. In FIG. 15, the movable charging pad 124 includes "S" shaped and mirrored "S" shaped floating portions 126 that undulate between the planar portion 114 and the end portion 118 of the movable charging pad 124. Although the wave-shaped suspensions 116, 122, and 126, shown respectively in Figures 13, 14, and 15, are depicted as part of the movable charging pads 112, 120, and 124, respectively, it should be understood that the wave-shaped suspensions 116, 122, and 126 can also be an integral part of a variable dielectric material. Such variable dielectric material components can be used in the structure of Figure 12. In another embodiment, the wave structure constituting the suspended portion of the variable dielectric material may be a continuous wave structure extending over a planar portion of the movable charging pad to form a multiple channel opening structure. Thus, where the continuous wave structure is illustrated in terms of Fig. 12, it may begin at one end portion 110, join one floating portion 108, continue across the planar portion 106, and end at the other floating portion 108 and end portion 110, respectively. Whether formed of charging plate material or of variable dielectric material, different degrees of flexibility are achieved through the particular material used and the dimensions of the wave structure. For example, the floating portion 116 of the movable charging pad 112 has a thickness 128 and an amplitude 130 that may be related to the length 92 and/or width 132 of the movable charging pad 112.
Similarly, the floating portion 122 of the movable charging pad 120 has a thickness 128 and an amplitude 130 that may be related to the length 92 and/or width 132 of the movable charging pad 120.

Figure 16 illustrates another embodiment of the invention in which the capacitance and electrostatic excitation functions are separated. Variable capacitor 134 includes a planar portion 136 and a hard liner 80. Although a floating portion and the like are not shown, any of the embodiments described herein may be included. A fixed charging pad 138 on the substrate 140 can be raised above the exciter plate 142. The exciter plate 142 is located on the lower substrate 144. The lift of the fixed charging pad 138 may be reduced or omitted to achieve a structure in which the fixed charging pad 138 is at substantially the same height as the fixed exciter plate 142. For this alternative embodiment, the substrates 140 and 144 can lie in the same horizontal plane and be formed of the same material in one processing step.

The planar portion 136 of the movable charging pad is affixed to the hard liner 80. The planar portion 136 and the hard liner 80 are actuated together by the exciter plate 142 to achieve a preferred separation distance 146 for the required capacitance. The exciter plate 142 adjusts the planar portion 136 of the movable charging pad to the position of the ideal separation distance 146 using an electromotive force.

Figure 17 shows another embodiment of the invention similar to the embodiment of Figure 16. The variable capacitor 148 adds a plurality of movable charging plates 150 that are insulated from the movable exciter plate 152. According to this embodiment, a preferred capacitance can be achieved with an electromotive force applied between the fixed exciter plate 142 and the movable exciter plate 152. When a capacitance is established between the fixed charging pad 138 and the movable charging plates 150, such an excitation scheme has a reduced effect, if any, on the established capacitance. Therefore, achieving an ideal capacitance may be more directly related to the separation distance 146.

Figure 18 illustrates another embodiment of the invention in which the capacitance is separated from the function of electrostatic excitation. The fixed charging pad 154 on the substrate 156 can be raised above the exciter plate 158. The exciter plate 158 is located on the lower substrate 160. The lift of the fixed charging pad 154 may be reduced or omitted to achieve a structure in which the fixed charging pad 154 is at substantially the same height as the fixed exciter plate 158. For this alternative embodiment, substrates 156 and 160 can lie in the same horizontal plane and be formed from the same material in one processing step.

The planar portion 136 of the movable charging pad is affixed to the hard liner 80. The planar portion 136 and the hard liner 80 are actuated together by the exciter plate 158 to achieve a preferred separation distance 146 for the required capacitance. The exciter plate 158 adjusts the planar portion 136 of the movable charging pad to the position of the ideal separation distance 146 using an electromotive force.

FIG. 19 illustrates another embodiment of the present invention in which, similar to the embodiment of FIG. 17, a plurality of movable charging plates 162 insulated from the movable exciter plate 164 are added. According to this embodiment, a preferred capacitance can be achieved with an electromotive force applied between the fixed exciter plate 166 and the movable exciter plate 164.
When a capacitance is established between the fixed charging pad 168 and the movable charging plates 162, this excitation scheme has a reduced effect, if any, on the established capacitance. Therefore, achieving an ideal capacitance may be more directly related to the separation distance 146.

In the embodiments depicted in Figures 16, 17, 18, and 19, it will be appreciated that the movable charging plate is suspended using any of the floating portion embodiments described in this disclosure, including those that interpose a variable dielectric structure. In addition, other suspension schemes can be used in these embodiments of the invention. In the embodiments described above, the floating portions 76, 108, 116, 122, and 126 are examples of means for suspending the movable charging pad, and the fixed charging pads 70 and 138 are examples of means for moving the movable charging pad.

A variable capacitor is fabricated in accordance with the method 170 of the present invention shown in FIG. 20. A recess 172 as shown in FIG. 9 is formed in the substrate 68. The recess 172 may be formed by a separate etch, or it may be part of a corrugated structure. A fixed charging pad 70 is formed in the recess 172 by deposition, such as chemical vapor deposition (CVD) or physical vapor deposition (PVD). As shown in block 174 of FIG. 20, the recess and the fixed charging pad can be formed simultaneously. The movable charging pad 72 is formed by, for example, filling the recess 172 with a temporary material, depositing the movable charging pad 72, and wet etching away the temporary fill material in the recess 172. As shown in block 178, a hard liner 80 is formed on at least a portion of the movable charging pad 72. In the case where at least a portion of the movable charging pad 72 is patterned prior to clearing the fill material from the recess 172, patterning a plurality of vias or any of the wave-shaped suspensions disclosed herein will facilitate clearing the fill material. In accordance with the method of the present invention, a variable dielectric material 102 covering the fixed charging pad can be formed at 178.

The variable capacitor 100 shown in Fig. 12 is formed in a manner similar to the variable capacitor 66. Prior to forming the movable charging pad 104, the variable dielectric layer 102 is formed on the fill material (which is later removed) in the recess 172, as shown in process block 176. After the variable dielectric layer 102 is formed, it may be patterned before or after the fill material deposited in the recess 172 is removed. In the case where the variable dielectric layer 102 is patterned prior to removal of the fill material from the recess 172, patterning any of the wave-shaped suspension portions disclosed herein will facilitate removal of the fill material.

The variable capacitor 134 shown in Fig. 16 is formed by forming the lower substrate 144 in the recess 172 and forming the fixed exciter plate 142 on the lower substrate 144. The raised substrate 140 is formed by deposition or by etching a portion of the recess 172. In accordance with the embodiments described herein, during formation of the variable dielectric layer (not depicted), a fixed charging pad 138 is formed on the raised substrate 140 and the recess 172 is filled with a fill material to be removed.
In the case where the fixed charging pad 138 and the fixed exciter plate 142 are at the same height, they can be patterned from the same material layer. A similar method forms the variable capacitor 148, with the additional step that the movable charging pad 150 is patterned together with the movable exciter plate 152.

The variable capacitor 178 shown in Fig. 18 is formed by forming the lower substrate 160 in the recess 172 and forming the fixed exciter plate 158 on the lower substrate 160. The raised substrate 156 is formed by deposition or by etching a portion of the recess 172. In accordance with the embodiments described herein, during formation of the variable dielectric layer (not depicted), a fixed charging pad 154 is formed on the raised substrate 156 and the recess 172 is filled with a fill material to be removed. In the case where the fixed charging pad 154 and the fixed exciter plate 158 are at the same height, they can be patterned from the same material layer. A similar method forms the variable capacitor 180, with the additional step that the movable charging pad 162 is patterned together with the movable exciter plate 164.

The invention has significant advantages. One advantage is that an adjustable range not achievable by the prior art is achieved. Owing to the hard liner disclosed herein, the critical gap between the movable charging pad and the fixed charging pad is smaller than the prior art allows. Therefore, the tunable range of the variable capacitor can be greater than 100%. Applied, as a non-limiting example, to radio technology, the variable capacitor of the present invention enables a radio to operate in multiple bands, such as 900 MHz, 1.9 GHz, and 2.4 GHz. Thus, the design of a transceiver can be changed so that the same variable capacitor is used for different frequencies.

Another advantage is that establishing and controlling the preferred capacitance becomes more predictable and thus more reliable. The hard liner and the incomplete-surface suspension significantly reduce the fixed, non-tunable capacitance near the ends found in prior art variable capacitors. Moreover, separating the excitation from the capacitance as disclosed herein allows for more control.

In addition to its use in variable capacitors as described herein, the wave-shaped suspension can also be used in MEMS switches.

It will be readily apparent to those skilled in the art that various modifications may be made in the materials, arrangements of components, method steps, and the like, to the details that have been described and illustrated in order to explain the nature of the invention. |
The invention relates to adaptive background scanning in a memory subsystem. A log of error events associated with a memory device is maintained. Each error event included in the log is associated with one of a plurality of physical locations within the memory device. Physical locations within the memory device are identified for background scanning based on the log of error events. A background scan is performed on the physical location identified based on the log of error events. |
1. A system comprising:
a memory device; and
a processing device operably coupled to the memory device to perform operations comprising:
maintaining a log of error events associated with the memory device, each error event included in the log being associated with one of a plurality of physical locations within the memory device;
identifying a physical location within the memory device for background scanning based on the log of error events; and
performing a background scan on the physical location identified based on the log of error events.
2. The system of claim 1, wherein identifying the physical location within the memory device comprises:
randomly selecting an error event from the log of error events; and
determining that the physical location is associated with the error event.
3. The system of claim 1, wherein identifying the physical location within the memory device comprises:
determining a number of error events associated with the physical location in the log; and
selecting the physical location based on the number of error events associated with the physical location in the log.
4. The system of claim 1, wherein identifying the physical location within the memory device comprises:
predicting a likelihood of future error events occurring at the physical location of the memory device; and
selecting the physical location based on the likelihood of the future error events occurring at the physical location.
5. The system of claim 1, wherein maintaining the log of error events includes aggregating data describing read errors, error handling events, and block collapse events detected at the memory device.
6. The system of claim 1, wherein the log of error events is limited to a predetermined number of recent error events.
7. The system of claim 1, wherein:
the physical location is a first physical location;
the background scan is a first background scan; and
the operations further include:
after a predetermined interval, identifying a second physical location within the memory device for performing a background scan; and
performing a background scan on the second physical location.
8. A method comprising:
maintaining a log of error events associated with a memory device, each error event included in the log being associated with one of a plurality of physical locations within the memory device;
identifying a physical location within the memory device for background scanning based on the log of error events; and
performing a background scan on the identified physical location.
9. The method of claim 8, wherein identifying the physical location within the memory device comprises:
randomly selecting an error event from the log of error events; and
determining that the physical location is associated with the error event.
10. The method of claim 8, wherein identifying the physical location within the memory device comprises:
determining a number of error events associated with the physical location in the log; and
selecting the physical location based on the number of error events associated with the physical location in the log.
11. The method of claim 8, wherein identifying the physical location within the memory device comprises:
predicting a likelihood of future error events occurring at the physical location of the memory device; and
selecting the physical location based on the likelihood of the future error events occurring at the physical location.
12. The method of claim 8, wherein maintaining the log of error events includes aggregating data describing read errors, error handling events, and block collapse events detected at the memory device.
13. The method of claim 8, wherein the log of error events is limited to a predetermined number of recent error events.
14. The method of claim 8, wherein:
the physical location is a first physical location;
the background scan is a first background scan; and
the method further includes:
after a predetermined interval, identifying a second physical location within the memory device for performing a background scan; and
performing a background scan on the second physical location.
15. A computer-readable storage medium comprising instructions that, when executed by a processing device, configure the processing device to perform operations comprising:
maintaining a log of error events associated with a memory device, each error event included in the log being associated with one of a plurality of physical locations within the memory device;
identifying a physical location within the memory device for background scanning based on the log of error events; and
performing a background scan on the identified physical location.
16. The computer-readable storage medium of claim 15, wherein identifying the physical location within the memory device comprises:
randomly selecting an error event from the log of error events; and
determining that the physical location is associated with the error event.
17. The computer-readable storage medium of claim 15, wherein identifying the physical location within the memory device comprises:
determining a number of error events associated with the physical location in the log; and
selecting the physical location based on the number of error events associated with the physical location in the log.
18. The computer-readable storage medium of claim 15, wherein maintaining the log of error events includes aggregating data describing read errors, error handling events, and block collapse events detected at the memory device.
19. The computer-readable storage medium of claim 15, wherein the log of error events is limited to a predetermined number of recent error events.
20. The computer-readable storage medium of claim 15, wherein:
the physical location is a first physical location;
the background scan is a first background scan; and
the operations further include:
after a predetermined interval, identifying a second physical location within the memory device for performing a background scan; and
performing a background scan on the second physical location. |
Adaptive Background Scanning in the Memory Subsystem

TECHNICAL FIELD

Embodiments of the present disclosure relate generally to memory subsystems, and more particularly, to adaptive background scanning in memory subsystems.

BACKGROUND

A memory subsystem may include one or more memory devices that store data. The memory devices can be, for example, non-volatile memory devices and volatile memory devices. In general, a host system may utilize a memory subsystem to store data at the memory devices and to retrieve data from the memory devices.

SUMMARY OF THE INVENTION

In one aspect, the present application provides a system comprising: a memory device; and a processing device operably coupled to the memory device to perform operations comprising: maintaining a log of error events associated with the memory device, each error event included in the log being associated with one of a plurality of physical locations within the memory device; identifying a physical location within the memory device for background scanning based on the log of error events; and performing a background scan on the physical location identified based on the log of error events.

In another aspect, the present application provides a method comprising: maintaining a log of error events associated with a memory device, each error event included in the log being associated with one of a plurality of physical locations within the memory device; identifying a physical location within the memory device for background scanning based on the log of error events; and performing a background scan on the identified physical location.

In another aspect, the present application provides a computer-readable storage medium comprising instructions that, when executed by a processing device, configure the processing device to perform operations comprising: maintaining a log of error events associated with a memory device, each error event included in the log being associated with one of a plurality of physical locations within the memory device; identifying a physical location within the memory device for background scanning based on the log of error events; and performing a background scan on the identified physical location.

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure will be more fully understood from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure.

FIG. 1 illustrates an example computing system including a memory subsystem, according to some embodiments of the present disclosure.

FIG. 2 is a data flow diagram illustrating the interaction between components of a memory subsystem when performing an adaptive background scan, in accordance with some embodiments of the present disclosure.

FIGS. 3 and 4 are flow diagrams illustrating example methods for performing adaptive background scanning in a memory subsystem, in accordance with some embodiments of the present disclosure.

FIG. 5 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.

DETAILED DESCRIPTION

Aspects of the present disclosure relate to performing adaptive background scanning in a memory subsystem. The memory subsystem may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of memory devices and memory modules are described below in conjunction with FIG. 1. In general, a host system may utilize a memory subsystem that includes one or more components, such as memory devices that store data.
The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem.

The memory device may be a non-volatile memory device. One example of a non-volatile memory device is a NAND memory device. Other examples of non-volatile memory devices are described below in conjunction with FIG. 1. Data operations may be performed by the memory subsystem. The data operations may be host-initiated operations. For example, the host system may initiate data operations (e.g., write, read, erase, etc.) on the memory subsystem. The host system may send access requests (e.g., write commands, read commands) to the memory subsystem to store data on the memory devices at the memory subsystem and to read data from the memory devices of the memory subsystem.

Some memory devices (e.g., NAND memory devices) include arrays of memory cells (e.g., flash cells) used to store data. Each cell includes a transistor, and within each cell, data is stored as the threshold voltage of the transistor, based on the logic value of the cell (e.g., 0 or 1). During a read operation, a read reference voltage is applied to the transistor, and if the read reference voltage is higher than the cell's threshold voltage, the transistor conducts and the cell is recognized by the memory subsystem as storing a binary value of 0. The memory cells in these devices can be grouped into pages, which can refer to logical units of the memory device used to store data. For some types of memory devices (e.g., NAND), pages are grouped to form blocks (also referred to herein as "memory blocks").

Background scan operations may run in the background of the memory subsystem (e.g., during idle periods when the memory subsystem is not performing other operations in response to host-initiated commands). A memory device background scan may begin by reading a segment (e.g., a codeword, a block, or a portion of a block) of the memory device. The background scan can track the number of bit corrections required in order to determine the quality of that segment of memory. The background scan can also determine whether the segment is uncorrectable. Segments of memory can be analyzed to determine metric values (e.g., in terms of the amount or type of error correction required, the estimated remaining lifetime, the number of cells operating below a threshold level, or whether the segment is uncorrectable). If the metric value is above the threshold, the background scan may proceed to the next memory segment. If the metric value is below the threshold, the background scan may attempt corrective action, e.g., by performing a refresh relocation event on the memory segment or on the portion of memory associated with the memory segment. For example, if a portion of a block is read and determined to have a metric value below a threshold, a refresh relocation event may be performed for the block containing the read portion after the data has been recovered using a system-driven read recovery method.

Conventionally, background scans are performed at a fixed frequency over the lifetime of the memory device. In some conventional implementations, the memory subsystem controller operates a timer, and when the timer reaches a timer threshold (e.g., 3 minutes), a background scan is initiated; a sketch of this conventional flow appears just below. The traditional approach to background scanning is to scan as aggressively as possible and catch as many errors as possible. However, this conventional approach fails to account for the variability in quality and the weaknesses of certain devices such as NAND.
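By way of illustration only, the conventional timer-driven flow can be sketched in C as follows. This is a minimal sketch under stated assumptions: every identifier (elapsed_ms, read_segment_bit_errors, refresh_relocate, SCAN_INTERVAL_MS, BIT_ERROR_THRESHOLD) is a hypothetical firmware hook or constant invented for this example, not part of any actual controller interface, and a raw corrected-bit count stands in for the quality metric (so the comparison direction is inverted relative to a "metric below threshold" formulation).

#include <stdint.h>

#define SCAN_INTERVAL_MS    (3u * 60u * 1000u) /* hypothetical 3-minute timer threshold */
#define BIT_ERROR_THRESHOLD 32u                /* hypothetical corrected-bit budget per segment */

/* Assumed platform hooks; their names and signatures are illustrative only. */
extern uint32_t elapsed_ms(void);                      /* time since the last scan */
extern uint32_t read_segment_bit_errors(uint32_t seg); /* bits corrected by ECC on read */
extern void     refresh_relocate(uint32_t seg);        /* copy data to a fresh location */

static uint32_t next_segment; /* walks the device round-robin */

/* Called from the controller's idle loop. */
void conventional_bg_scan_tick(uint32_t total_segments)
{
    if (elapsed_ms() < SCAN_INTERVAL_MS)
        return; /* timer has not yet reached the threshold */

    /* Scan whichever segment is next, regardless of its error history. */
    if (read_segment_bit_errors(next_segment) > BIT_ERROR_THRESHOLD)
        refresh_relocate(next_segment); /* corrective action */

    next_segment = (next_segment + 1u) % total_segments;
}

The fixed round-robin advance is the limitation at issue: every segment is visited at the same rate, however error-prone it has proven to be.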
For example, a given memory device can have significant variability in the physical locations where error events occur. This means that the background scan may repeatedly target certain locations where error events are unlikely, while ignoring other areas where error events are very likely. Thus, conventional methods result in insufficient background scanning, which often reduces the relative performance and reliability of memory devices.

Aspects of the present disclosure address the shortcomings of traditional background scanning techniques by utilizing an adaptive background scanning method. The adaptive background scanning method utilizes error data from background scans and other data integrity checks to identify physical locations in memory devices that give rise to frequent error events, such as read errors, error handling events, or block collapse events. Components of the memory subsystem controller (e.g., firmware) aggregate and use the error data to increase the rate of background scans on high-risk locations within the memory device.

The adaptive background scanning method described herein improves the overall reliability of the memory subsystem by adapting to and compensating for variability in memory devices and workloads. For example, the background scan component targets high-risk locations more frequently by scanning the worst sections of the memory device more often. The adaptive background scanning method also provides an adaptive solution to NAND drift (e.g., material variability or manufacturing line shift). In addition, the adaptive background scanning method allows high-value systems to utilize lower-quality NAND devices. Further, the adaptive background scanning method may improve the overall efficiency of background scanning by focusing on the worst sections of the memory device, thereby improving device reliability without the performance cost of additional scans. In this way, the overall rate of background scanning can be reduced while still ensuring the same level of reliability.

FIG. 1 illustrates an example computing system 100 including a memory subsystem 110 in accordance with some embodiments of the present disclosure. The memory subsystem 110 may include media such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination thereof.

The memory subsystem 110 may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of storage devices include solid-state drives (SSDs), flash drives, universal serial bus (USB) flash drives, embedded multimedia controller (eMMC) drives, universal flash storage (UFS) drives, secure digital (SD) cards, and hard disk drives (HDDs). Examples of memory modules include dual in-line memory modules (DIMMs), small outline DIMMs (SO-DIMMs), and various types of non-volatile dual in-line memory modules (NVDIMMs).

The computing system 100 may be a computing device such as a desktop computer, a laptop computer, a web server, a mobile device, a vehicle (e.g., an airplane, drone, train, car, or other vehicle), an Internet of Things (IoT) enabled device, an embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or any computing device that includes memory and a processing device.

The computing system 100 may include a host system 120 coupled to one or more memory subsystems 110. In some embodiments, the host system 120 is coupled to different types of memory subsystems 110.
FIG. 1 illustrates one example of a host system 120 coupled to one memory subsystem 110. As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which may be an indirect communicative connection or a direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical connections, optical connections, magnetic connections, and the like.

The host system 120 may include a processor chipset and a software stack executed by the processor chipset. The processor chipset may include one or more cores, one or more caches, a memory controller (e.g., an NVDIMM controller), and a storage protocol controller (e.g., a Peripheral Component Interconnect Express (PCIe) controller or a Serial Advanced Technology Attachment (SATA) controller). The host system 120 uses the memory subsystem 110, for example, to write data to and read data from the memory subsystem 110.

The host system 120 may be coupled to the memory subsystem 110 via a host interface. Examples of host interfaces include, but are not limited to, a SATA interface, a PCIe interface, a USB interface, Fibre Channel, Serial Attached SCSI (SAS), Small Computer System Interface (SCSI), a Double Data Rate (DDR) memory bus, a DIMM interface (e.g., a DIMM socket interface supporting DDR), Open NAND Flash Interface (ONFI), Double Data Rate (DDR), Low Power Double Data Rate (LPDDR), or any other interface. The host interface can be used to transmit data between the host system 120 and the memory subsystem 110. When the memory subsystem 110 is coupled with the host system 120 through a PCIe interface, the host system 120 may further utilize an NVM Express (NVMe) interface to access components (e.g., memory device 130). The host interface may provide an interface for passing control, address, data, and other signals between the memory subsystem 110 and the host system 120. FIG. 1 illustrates a memory subsystem 110 as an example. In general, the host system 120 may access multiple memory subsystems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections.

The memory devices 130, 140 may include any combination of different types of non-volatile memory devices and/or volatile memory devices. Volatile memory devices (e.g., memory device 140) may be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).

Some examples of non-volatile memory devices (e.g., memory device 130) include NAND-type flash memory and write-in-place memory, such as a three-dimensional (3D) cross-point memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND-type flash memory includes, for example, two-dimensional NAND (2D NAND) and 3D NAND.

Each of the memory devices 130 may include one or more arrays of memory cells. One type of memory cell, such as a single level cell (SLC), can store one bit per cell.
Other types of memory cells, such as multi-level cells (MLCs), triple-level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs), can store multiple bits per cell. In some embodiments, each of the memory devices 130 may include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, or any combination of such. In some embodiments, a particular memory device may include an SLC portion of memory cells, as well as an MLC portion, a TLC portion, a QLC portion, or a PLC portion. The memory cells of the memory devices 130 may be grouped into pages, which may refer to logical units of the memory device used to store data. With some types of memory (e.g., NAND), pages may be grouped to form blocks.

Although non-volatile memory components such as NAND-type flash memory (e.g., 2D NAND, 3D NAND) and 3D cross-point arrays of non-volatile memory cells are described, the memory device 130 may be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric transistor random access memory (FeTRAM), ferroelectric random access memory (FeRAM), magnetic random access memory (MRAM), spin transfer torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide-based RRAM (OxRAM), NOR flash memory, and electrically erasable programmable read-only memory (EEPROM).

The memory subsystem controller 115 (or controller 115, for simplicity) may communicate with the memory devices 130 to perform operations such as reading data, writing data, or erasing data at the memory devices 130, and other such operations. The memory subsystem controller 115 may include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware may include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory subsystem controller 115 may be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.

The memory subsystem controller 115 may include a processor 117 (processing device) configured to execute instructions stored in a local memory 119. In the illustrated example, the local memory 119 of the memory subsystem controller 115 includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the host system 120.

In some embodiments, the local memory 119 may include memory registers storing memory pointers, fetched data, and the like. The local memory 119 may also include ROM for storing microcode. While the example memory subsystem 110 in FIG. 1 has been illustrated as including the memory subsystem controller 115, in another embodiment of the present disclosure, a memory subsystem 110 does not include a memory subsystem controller 115 and may instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem).

In general, the memory subsystem controller 115 may receive commands or operations from the host system 120 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory device 130 and/or the memory device 140.
The memory subsystem controller 115 may be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and ECC operations, encryption operations, caching operations, and address translations between a logical address (e.g., a logical block address (LBA), namespace) and a physical address (e.g., physical block address) associated with the memory devices 130. The memory subsystem controller 115 may further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry may convert the commands received from the host system 120 into command instructions to access the memory device 130 and/or the memory device 140, as well as convert responses associated with the memory device 130 and/or the memory device 140 into information for the host system 120.

In some embodiments, the memory device 130 includes a local media controller 135 that operates in conjunction with the memory subsystem controller 115 to perform operations on one or more memory cells of the memory device 130.

The memory subsystem 110 also includes an adaptive background scan (ABS) component 113 that is responsible for managing and performing background scans on the memory devices 130 and 140. During a background scan, the ABS component 113 reads data from a portion (e.g., a page, a block, or a portion of a block) of one of the memory devices 130 or 140 to determine a metric (e.g., the amount or type of error correction required, the estimated remaining lifetime, the number of cells operating below a threshold level, or whether a sector is uncorrectable by the ECC engine), and if the metric is below a threshold, corrective action is performed by the memory subsystem controller 115, such as by performing a refresh relocation event on the portion of the memory device from which the data was read. To improve the efficiency of background scanning, the ABS component 113 maintains one or more logs of error events that occur at the memory devices 130 and 140 and uses the one or more logs to identify physical locations within the devices 130 and 140 for background scanning. As an example, the ABS component 113 may maintain a first log identifying error events by NAND chip and a second log identifying error events by word line.

In some embodiments, the memory subsystem controller 115 includes at least a portion of the ABS component 113. For example, the memory subsystem controller 115 may include a processor 117 (processing device) configured to execute instructions stored in local memory 119 for performing the operations described herein. In some embodiments, the ABS component 113 is part of the host system 120, an application, or an operating system. In some embodiments, the local media controller 135 includes at least a portion of the ABS component 113.

FIG. 2 is a data flow diagram illustrating the interaction between components of the memory subsystem when performing an adaptive background scan, in accordance with some embodiments of the present disclosure. In the example illustrated in FIG. 2, the memory device 130 is a NAND memory device that includes multiple memory blocks.

As shown, a NAND block 200 includes an array (2D or 3D) of pages (rows) and strings (columns). Each NAND cell includes a transistor, and within each cell, data is stored as the threshold voltage of the transistor. For example, SLC NAND can store one bit per cell. Other types of memory cells, such as MLC, TLC, QLC, and PLC, can store multiple bits per cell. The strings are connected within the NAND block 200 to allow storage and retrieval of data from selected cells.
NAND cells in the same column are connected in series to form a bit line (BL). All cells in a bit line are connected to a common ground on one end and to a common sense amplifier on the other end, which is used to read the threshold voltage of one of the cells when decoding data. NAND cells are connected horizontally at their control gates to a word line (WL) to form a page. In MLC, TLC, QLC, and PLC NAND, a page is a set of connected cells that share the same word line and is the smallest unit of programming.

The ABS component 113 builds and maintains an error event log 201 based on error data generated by the memory subsystem controller 115 and the memory device 130. The error event log contains error events detected at the memory device 130. For example, error events can be detected while performing a background scan. These error events include, for example, read errors, error handling events, and block collapse events, as described in the error data generated by the memory subsystem controller 115. Each error event included in the error event log 201 is associated with a physical location in the memory device 130 (e.g., a page, a block, or a portion thereof). More specifically, each entry in the error event log 201 indicates a type of error event (e.g., read error, error handling event, or block collapse event) and an identifier corresponding to the physical location in the memory device 130 where the event occurred.

In some embodiments, the error event log 201 is limited to a predetermined number of recent error events. Thus, once the number of error events in the error event log 201 reaches the predetermined number, the ABS component 113 removes the oldest error event from the log before adding a newly detected error event. In some embodiments, multiple instances of a single error event may be added to the error event log 201. By adding multiple instances of a single error event, the ABS component 113 can increase the probability that the physical location corresponding to the error event will be selected for a background scan.

At 202, the ABS component 113 continuously monitors and aggregates error data into the error event log 201, and at a predefined frequency, the ABS component 113 uses the error event log 201 to identify a physical location in the memory device 130 (at 204) and performs a background scan on the identified physical location (at 206). For example, the ABS component 113 can randomly select an error event from the error event log 201 and perform a background scan on the corresponding physical location. The ABS component 113 utilizes a timer (e.g., operated by the memory subsystem controller 115), and when the timer reaches a timer threshold, the ABS component 113 identifies a physical location and performs a background scan on that physical location. In this way, the ABS component 113 performs background scans at a predefined frequency. The ABS component 113 can adjust the frequency of the background scans based on the rate of error events (e.g., errors per power-on time, errors per written byte, or errors per program-erase cycle). Accordingly, the ABS component 113 may include a counter to track the total number of error events added to the error event log 201.
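As an illustration only, one plausible way to organize a log such as the error event log 201 is as a bounded ring buffer, sketched below in C under stated assumptions: the names abs_event, abs_log, abs_log_add, and ABS_LOG_CAPACITY are invented for this example and do not appear in the disclosure.

#include <stdint.h>

enum abs_event_type { ABS_READ_ERROR, ABS_ERROR_HANDLING, ABS_BLOCK_COLLAPSE };

/* One log entry: an event type plus the physical location where it occurred. */
struct abs_event {
    enum abs_event_type type;
    uint32_t            location; /* page/block identifier within the device */
};

#define ABS_LOG_CAPACITY 256u /* hypothetical "predetermined number" of recent events */

struct abs_log {
    struct abs_event entries[ABS_LOG_CAPACITY];
    uint32_t         head;        /* index of the oldest entry */
    uint32_t         count;       /* valid entries, at most ABS_LOG_CAPACITY */
    uint32_t         total_added; /* running counter, usable for error-rate tracking */
};

/* Append 'instances' copies of an event. Once the log is full, each append
 * overwrites the oldest entry, so only the most recent events are retained.
 * Adding multiple instances of one event raises the odds that its location
 * is picked when an entry is later chosen for a background scan. */
void abs_log_add(struct abs_log *log, enum abs_event_type type,
                 uint32_t location, uint32_t instances)
{
    for (uint32_t i = 0; i < instances; i++) {
        uint32_t slot = (log->head + log->count) % ABS_LOG_CAPACITY;
        if (log->count == ABS_LOG_CAPACITY)
            log->head = (log->head + 1u) % ABS_LOG_CAPACITY; /* evict oldest */
        else
            log->count++;
        log->entries[slot].type     = type;
        log->entries[slot].location = location;
        log->total_added++;
    }
}

The total_added field plays the role of the counter just mentioned and could feed a rate-based frequency adjustment (e.g., errors per power-on time).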
When performing a background scan on a physical location, the ABS component 113 analyzes the data read from the physical location to determine a metric value (e.g., the amount or number of bit errors, the amount or type of error correction required, the estimated remaining lifetime, the number of cells operating below a threshold level, or whether a sector is uncorrectable by the ECC engine), and if the metric is below a threshold, corrective action is performed by the memory subsystem controller 115, such as by performing a refresh relocation event on the physical location. For example, if a page in the memory device 130 is read and determined to have a metric value below the threshold, a refresh relocation event may be performed for that page. During a refresh relocation event, data from the scanned page is copied to a new physical location within the memory device (e.g., an open page in an open block 200 of the memory device 130). If an error event is detected during a background scan (e.g., if the error metric exceeds a threshold), the ABS component 113 may add the newly detected error event to the error event log 201 to support ongoing ABS management within the memory subsystem 110.

FIGS. 3 and 4 are flowcharts illustrating an example method 300 for adaptive background scanning in a memory subsystem (e.g., memory subsystem 110) in accordance with some embodiments of the present disclosure. The method 300 may be performed by processing logic that may include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 300 is performed by the ABS component 113 of FIG. 1. Although the processes are shown in a particular sequence or order, unless otherwise specified, the order of the processes may be modified. Accordingly, the illustrated embodiments should be understood only as examples, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. Additionally, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible.

At operation 305, the processing device maintains a log (e.g., error event log 201) of error events that occur at one or more memory devices (e.g., memory devices 130 and/or 140). Each error event in the log is associated with a physical location on the memory device. More specifically, each entry in the log of error events includes an indicator of the error event type and an identifier corresponding to the physical location in the memory device where the error event occurred. Multiple instances of a given error event may be included in the log to increase the probability of selecting that location for a background scan.

To maintain the log of error events, the processing device aggregates error data generated at the one or more memory devices and/or a memory subsystem controller coupled to the one or more memory devices. The error data aggregated by the processing device describes read errors, error handling events, and block collapse events detected at the memory device. Thus, the log of error events identifies read errors, error handling events, and block collapse events detected at the one or more memory devices.
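For concreteness, a single targeted scan step of the kind described above (read, evaluate a metric, relocate on failure, log the result) can be sketched as follows, reusing the hypothetical abs_log from the earlier sketch. The hooks read_location_bit_errors and refresh_relocate_location are assumptions, and a raw bit-error count again stands in for the quality metric, so the comparison is inverted relative to a "metric below threshold" phrasing.

/* Assumed hardware hooks; illustrative names only. */
extern uint32_t read_location_bit_errors(uint32_t location);  /* bits corrected by ECC */
extern void     refresh_relocate_location(uint32_t location); /* move data to an open block */

void abs_scan_location(struct abs_log *log, uint32_t location,
                       uint32_t bit_error_threshold)
{
    uint32_t errors = read_location_bit_errors(location);

    if (errors > bit_error_threshold) {
        /* Corrective action: migrate the data before the location degrades
         * further, mirroring the refresh relocation event described above. */
        refresh_relocate_location(location);

        /* Feed the result back into the log so future selections keep
         * targeting locations that keep misbehaving. */
        abs_log_add(log, ABS_READ_ERROR, location, 1u);
    }
}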
Consistent with some embodiments, the processing device may maintain multiple logs of error events. For example, the processing device may maintain a first log identifying error events by NAND chip or device and a second log identifying error events by word line.

In some embodiments, the log of error events is limited to a predetermined number of recent error events. Thus, once the log of error events reaches the predetermined number, the processing device removes the oldest error event from the log before adding a newly detected error event.

At operation 310, the processing device identifies a physical location within the memory device for background scanning based on the log of error events. As shown in FIG. 4, operation 310 may include operations 405 and 410, consistent with some embodiments. At operation 405, the processing device randomly selects an error event from the log of error events, and at operation 410, the processing device identifies the physical location corresponding to the randomly selected error event.

Returning to FIG. 3, in some embodiments, the processing device identifies the physical location for background scanning based on the number or frequency of error events occurring at the physical location. For example, to identify the physical location, the processing device may determine the number of error events in the log that are associated with each physical location included in the log. The processing device may select the physical location with the highest number of error events for background scanning.

In some embodiments, the processing device identifies the physical location for background scanning based on a predicted likelihood that future error events will occur at the physical location. For example, the processing device may analyze the log of error events to predict the likelihood of future error events occurring at each physical location included in the log, and select the physical location with the highest likelihood.

At operation 315, the processing device performs a background scan on the identified physical location. When performing the background scan, the processing device analyzes the data read from the physical location to determine an error metric (e.g., the amount or type of error correction required, the estimated remaining lifetime, the number of cells operating below a threshold level, whether the physical location is uncorrectable by the ECC engine, etc.), and if the error metric is below a threshold, corrective action is performed by the memory subsystem controller 115, such as by performing a refresh relocation event on the physical location. For example, if a page in the memory device 130 is read and determined to have a metric value below the threshold, a refresh relocation event may be performed for that page. During a refresh relocation event, data from the scanned page is copied to a new physical location within the memory device (e.g., an open page in an open block 200 of the memory device 130). If an error event is detected during a background scan (e.g., if the error metric exceeds a threshold), the processing device may add the newly detected error event to the log to support ongoing ABS management within the memory subsystem.

Consistent with some embodiments, operation 305 is performed continuously, and operations 310 and 315 may be repeated at a predetermined frequency. That is, the processing device continues to update the log of error events as new error events are detected, while new physical locations are identified and scanned at the predefined frequency. For example, after scanning a first physical location identified based on the log of error events, the processing device waits a predefined interval (based on the predetermined frequency) before identifying and scanning a second physical location in the memory device based on the log; sketches of the selection strategies appear below.
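Returning to the selection strategies of operation 310, the two simplest variants described above (random selection, per operations 405 and 410, and count-based selection) might look like the following C sketch, again reusing the hypothetical abs_log structure. rand() merely stands in for whatever entropy source the firmware actually has, and both function names are invented for this example.

#include <stdint.h>
#include <stdlib.h>

/* Operations 405/410: pick a random logged event and return its physical
 * location. Locations logged more often (or with duplicated instances)
 * occupy more slots and are therefore proportionally more likely picks.
 * Precondition: log->count > 0. */
uint32_t abs_pick_random(const struct abs_log *log)
{
    uint32_t offset = (uint32_t)rand() % log->count;
    uint32_t idx    = (log->head + offset) % ABS_LOG_CAPACITY;
    return log->entries[idx].location;
}

/* Count-based variant: return the location with the most logged events,
 * a crude stand-in for "highest predicted likelihood of future errors".
 * The O(n^2) walk is tolerable for a small, bounded log.
 * Precondition: log->count > 0. */
uint32_t abs_pick_most_frequent(const struct abs_log *log)
{
    uint32_t best_location = 0u;
    uint32_t best_count    = 0u;

    for (uint32_t i = 0; i < log->count; i++) {
        uint32_t loc = log->entries[(log->head + i) % ABS_LOG_CAPACITY].location;
        uint32_t n   = 0u;
        for (uint32_t j = 0; j < log->count; j++)
            if (log->entries[(log->head + j) % ABS_LOG_CAPACITY].location == loc)
                n++;
        if (n > best_count) {
            best_count    = n;
            best_location = loc;
        }
    }
    return best_location;
}

Either selector can be called when the scan timer fires, with the chosen location handed to a scan routine such as the abs_scan_location sketch above.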
Consistent with some embodiments, the processing device may vary the frequency with which physical locations are selected (operation 310) and scanned (operation 315). For example, the processing device may vary the frequency based on the rate of error events (e.g., errors per power-on time, errors per written byte, or errors per program-erase cycle).

Consistent with some embodiments, the method 300 may be performed at a predefined frequency and repeated in conjunction with conventional background scans of randomly selected portions of the memory device. For example, at each interval, at least operations 310 and 315 are performed in conjunction with (e.g., before or after) a conventional background scan.

EXAMPLES

Example 1 is a memory subsystem comprising: a memory device; and a processing device operably coupled to the memory device to perform operations comprising: maintaining a log of error events associated with the memory device, each error event included in the log being associated with one of a plurality of physical locations within the memory device; identifying a physical location within the memory device based on the log of error events for use in performing a background scan; and performing a background scan on the identified physical location of the memory device.

Example 2 includes the memory subsystem of Example 1, wherein identifying the physical location within the memory device comprises: randomly selecting an error event from the log of error events; and determining that the physical location is associated with the error event.

Example 3 includes the memory subsystem of any of Examples 1 and 2, wherein identifying the physical location within the memory device comprises: determining a number of error events in the log associated with the physical location; and selecting the physical location based on the number of error events associated with the physical location in the log.

Example 4 includes the memory subsystem of any of Examples 1-3, wherein identifying the physical location within the memory device comprises: predicting a likelihood of future error events occurring at the physical location of the memory device; and selecting the physical location based on the likelihood of the future error events occurring at the physical location.

Example 5 includes the memory subsystem of any of Examples 1-4, wherein maintaining the log of error events includes aggregating data describing read errors, error handling events, and block collapse events detected at the memory device.

Example 6 includes the memory subsystem of any of Examples 1-5, wherein the log of error events is limited to a predetermined number of recent error events.

Example 7 includes the memory subsystem of any of Examples 1-6, wherein: the physical location is a first physical location; the background scan is a first background scan; and the operations further comprise: after a predetermined interval, identifying a second physical location within the memory device for performing a background scan; and performing a background scan on the second physical location.
Example 8 is a method comprising: maintaining a log of error events associated with a memory device, each error event included in the log being associated with one of a plurality of physical locations within the memory device; identifying a physical location within the memory device for performing a background scan based on the log of error events; and performing a background scan on the identified physical location.

Example 9 includes the method of Example 8, wherein identifying the physical location within the memory device comprises: randomly selecting an error event from the log of error events; and determining that the physical location is associated with the error event.

Example 10 includes the method of any of Examples 8 and 9, wherein identifying the physical location within the memory device comprises: determining a number of error events in the log associated with the physical location; and selecting the physical location based on the number of error events associated with the physical location in the log.

Example 11 includes the method of any of Examples 8-10, wherein identifying the physical location within the memory device comprises: predicting a likelihood of future error events occurring at the physical location of the memory device; and selecting the physical location based on the likelihood of the future error events occurring at the physical location.

Example 12 includes the method of any of Examples 8-11, wherein maintaining the log of error events includes aggregating data describing read errors, error handling events, and block collapse events detected at the memory device.

Example 13 includes the method of any of Examples 8-12, wherein the log of error events is limited to a predetermined number of recent error events.

Example 14 includes the method of any of Examples 8-13, wherein: the physical location is a first physical location; the background scan is a first background scan; and the method further comprises: after a predetermined interval, identifying a second physical location within the memory device for performing a background scan; and performing a background scan on the second physical location.

Example 15 is a computer-readable storage medium comprising instructions that, when executed by a processing device, configure the processing device to perform operations comprising: maintaining a log of error events associated with a memory device, each error event included in the log being associated with one of a plurality of physical locations within the memory device; identifying a physical location within the memory device based on the log of error events for background scanning; and performing a background scan on the identified physical location.

Example 16 includes the computer-readable storage medium of Example 15, wherein identifying the physical location within the memory device comprises: randomly selecting an error event from the log of error events; and determining that the physical location is associated with the error event.

Example 17 includes the computer-readable storage medium of any one or more of Examples 15 and 16, wherein identifying the physical location within the memory device comprises: determining a number of error events in the log associated with the physical location; and selecting the physical location based on the number of error events associated with the physical location in the log.

Example 18 includes the computer-readable storage medium of any one or more of Examples 15-17, wherein maintaining the log of error events includes aggregating data describing read errors, error handling events, and block collapse events detected at the memory device.

Example 19 includes the computer-readable storage medium of any one or more of Examples 15-18, wherein the log of error events is limited to a predetermined number of recent error events.
Example 20 includes the computer-readable storage medium of any one or more of Examples 15-19, wherein: the physical location is a first physical location; the background scan is a first background scan; and the operations further comprise: after a predetermined interval, identifying a second physical location within the memory device for performing a background scan; and performing a background scan on the second physical location.

FIG. 5 illustrates an example machine in the form of a computer system 500 within which a set of instructions can be executed for causing the machine to perform any one or more of the methodologies discussed herein. In some embodiments, the computer system 500 may correspond to a host system (e.g., the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., the memory subsystem 110 of FIG. 1), or it may be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the ABS component 113 of FIG. 1). In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 500 includes a processing device 502, a main memory 504 (e.g., ROM, flash memory, DRAM such as SDRAM or RDRAM, etc.), a static memory 506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 518, which communicate with each other via a bus 530.

The processing device 502 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The processing device 502 can also be one or more special-purpose processing devices such as an ASIC, an FPGA, a digital signal processor (DSP), a network processor, or the like. The processing device 502 is configured to execute instructions 526 for performing the operations and steps discussed herein.
Computer system 500 may further include a network interface device 508 to communicate over network 520. The data storage system 518 may include a machine-readable storage medium 524 (also referred to as a computer-readable medium) on which is stored one or more sets of instructions 526 or software embodying any one or more of the methodologies or functions described herein. Instructions 526 may also reside, completely or at least partially, within main memory 504 and/or within processing device 502 during execution thereof by computer system 500, the main memory 504 and the processing device 502 also constituting machine-readable storage media. Machine-readable storage medium 524, data storage system 518, and/or main memory 504 may correspond to memory subsystem 110 of FIG. 1. In one embodiment, instructions 526 include instructions to implement functionality corresponding to a background scan component (e.g., ABS component 113 of FIG. 1). Although machine-readable storage medium 524 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure may refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
Such computer programs may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk (including floppy disks, optical disks, CD-ROMs, and magneto-optical disks), ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the methods. The structure for a variety of these systems will appear as set forth in the description below. Additionally, the present disclosure is not described with reference to any particular programming language. It should be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions that may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer) readable storage medium such as a ROM, a RAM, magnetic disk storage media, optical storage media, flash memory components, and so forth. In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof. It will be apparent that various modifications may be made to the present disclosure without departing from the broader scope of embodiments of the disclosure as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. |
A network device includes a group of priority queues associated with a port and a weighted round robin mechanism. A priority queue detects an overflow condition and transfers a flag to the weighted round robin mechanism in response to detecting the overflow condition. The weighted round robin mechanism adjusts the weight associated with one or more of the priority queues in response to receiving the flag and transfers data from the queues based on the adjusted weights. |
What is claimed is:
1. A method for transferring data to a port in a network device having a plurality of priority queues, each priority queue being associated with a weight, comprising: detecting an overflow condition in one of the plurality of priority queues; adjusting the weight associated with the one of the plurality of priority queues to a value higher than the other of the plurality of priority queues; transferring data from the plurality of priority queues based on the adjusted weight; and returning the weights of the one of the plurality of priority queues and the priority queues containing high priority data to original values when the overflow condition no longer exists.
2. The method of claim 1 wherein the adjusting comprises: changing the weight associated with priority queues containing high priority data.
3. The method of claim 2 wherein the changing the weight comprises: increasing the weight associated with the priority queues containing high priority data.
4. The method of claim 1 wherein the network device includes a multiport switch and the plurality of priority queues include one of input queues or output queues.
5. The method of claim 1 further comprising: transferring a flag from the one of the plurality of priority queues to a weighted round robin device in response to detecting the overflow condition.
6. The method of claim 5 wherein the flag identifies the one of the plurality of priority queues.
7. The method of claim 1 wherein the weights represent an amount of data to be transferred from each priority queue during a cycle.
8. A network device comprising: a plurality of queues, each queue being associated with a weight and being configured to detect an overflow condition and transfer a flag in response to the detecting; and a weighted round robin device configured to receive a flag from at least one of the plurality of queues, adjust the weight of the one of the plurality of queues to a value higher than the weights of the other of the plurality of queues, transfer data from the plurality of queues based on the adjusted weight, and return the adjusted weight to an original value when the overflow condition no longer exists.
9. The network device of claim 8 wherein the weighted round robin device is further configured to: change the weight of at least one other of the plurality of queues to indicate an increased priority.
10. The network device of claim 8 wherein, when changing the weight of at least one other of the plurality of queues, the weighted round robin device is configured to: increase the weight associated with the at least one other of the plurality of queues.
11. The network device of claim 8 wherein the weighted round robin device is further configured to: increase the weight associated with each of the plurality of queues containing high priority data.
12. The network device of claim 8 wherein the network device includes a multiport switch and the plurality of queues includes a plurality of output queues.
13. The network device of claim 8 wherein each respective queue is configured to detect the overflow condition when the number of entries in the first output queue of the respective queue exceeds the threshold.
14. A system for transferring data in a network device, comprising: a plurality of queues configured to store a number of entries, detect whether the number of entries exceeds a threshold, and transfer a flag when the number of entries exceeds the threshold; and a logic device configured to receive a first flag from one of the plurality of queues and cause a higher number of entries to be transferred from the one of the plurality of queues than from the other of the plurality of queues to a port based on the first flag.
15. The system of claim 14 wherein the logic device is further configured to: associate a weight with each of the plurality of queues, and adjust the weight of at least one of the plurality of queues in response to receiving the flag.
16. The system of claim 15 wherein, when adjusting the weight, the logic device is configured to: increase the weight associated with queues containing high priority data, and the system further comprises: a transmit module configured to receive the data from the logic device based on the increased weights associated with the queues.
17. The system of claim 14 wherein the logic device is further configured to: cause a lower number of entries to be transferred from the one of the plurality of queues when the number of entries stored in the one of the plurality of queues no longer exceeds the threshold.
18. A network device comprising: a plurality of queues, each queue being associated with a weight and being configured to detect an overflow condition and transfer a flag in response to the detecting, wherein each of the plurality of queues comprises: a write side including a first output queue and control logic configured to detect when a number of entries in the first output queue exceeds a threshold, and a read side including a second output queue; and a weighted round robin device configured to receive a flag from at least one of the plurality of queues, adjust the weight of the one of the plurality of queues to a value higher than the weights of the other of the plurality of queues, and transfer data from the plurality of queues based on the adjusted weight. |
TECHNICAL FIELD
The present invention relates generally to communication systems and, more particularly, to a system and method for alleviating congestion in a network device.
BACKGROUND ART
Network devices, such as multiport switches, commonly include a group of output ports through which data can be transferred. During high traffic periods, it is common for one or more of these ports to become congested. In a switch that implements output queuing, congestion at an output port is typically indicated by the port's output queue overflowing. Currently, there is no way to automatically adjust the flow of data out of the network device's output queues to alleviate or avoid an overflow condition.
DISCLOSURE OF THE INVENTION
There exists a need for a mechanism that automatically adjusts the flow of data in a network device to alleviate congestion. This and other needs are met by the present invention, where local hardware, under software control when needed, automatically adjusts the flow of data to a port in a network device when an output queue overflow condition is detected. Additional advantages and other features of the invention will be set forth in part in the description that follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from the practice of the invention. The advantages and features of the invention may be realized and obtained as particularly pointed out in the appended claims. According to the present invention, the foregoing and other advantages are achieved in part by a method for transferring data to a port in a network device having a group of priority queues. Each of the priority queues in the network device is associated with a weight. The method includes detecting an overflow condition in one of the priority queues, adjusting the weight associated with at least one of the priority queues in response to detecting the overflow condition, and transferring data from the priority queues based on the adjusted weights. Other advantages and features of the present invention will become readily apparent to those skilled in this art from the following detailed description. The embodiments shown and described provide illustration of the best mode contemplated for carrying out the invention. The invention is capable of modifications in various obvious respects, all without departing from the invention. Accordingly, the drawings are to be regarded as illustrative in nature, and not as restrictive.
BRIEF DESCRIPTION OF THE DRAWINGS
Reference is made to the attached drawings, where elements having the same reference number designation represent like elements throughout. FIG. 1 is a block diagram of an exemplary system in which a system and method consistent with the present invention may be implemented; FIG. 2 is a detailed diagram of the multiport switch of FIG. 1 according to an implementation consistent with the present invention; FIG. 3 is an exemplary diagram of a transmitter module and associated output queue for a port N of the multiport switch of FIG. 2; FIG. 4 is a detailed diagram of the output queue of FIG. 3; and FIG. 5 is a flowchart of exemplary processing for transmitting data according to an implementation consistent with the present invention.
BEST MODE FOR CARRYING OUT THE INVENTION
The present invention will be described with the example of a switch in a packet switched network, such as an Ethernet (IEEE 802.3) network.
It will become apparent, however, that the present invention is also applicable to other packet switched systems, as described in detail below, as well as to other types of systems in general.
Switch Architecture Overview
FIG. 1 is a block diagram of an exemplary system in which systems and methods consistent with the present invention may be implemented. The exemplary system may include a packet switched network 100, such as an Ethernet (IEEE 802.3) network. The packet switched network 100 may include network stations 110, transformers 120, transceivers 130 and 140, a network node 150, a host 160, external memories 170, and multiport switches 180. The network stations 110 may include conventional communication devices, such as computers, with different configurations. For example, the devices may send and receive data at network data rates of 10 megabits per second (Mb/s) or 100 Mb/s. Each 10/100 Mb/s network station 110 may send and receive data to and from a multiport switch 180 according to either a half-duplex or full duplex Ethernet protocol. The Ethernet protocol ISO/IEC 8802-3 (ANSI/IEEE Std. 802.3, 1993 Ed.) defines a half-duplex media access mechanism that permits all stations 110 to access the network channel with equality. Traffic in a half-duplex environment may not be distinguished over the transmission medium. Rather, each half-duplex station 110 may include an Ethernet interface card that uses carrier-sense multiple access with collision detection (CSMA/CD) to listen for traffic on the transmission medium. The absence of network traffic is detected by sensing deassertion of a receive carrier on the transmission medium. Any station 110 having data to send may attempt to access the channel by waiting a predetermined amount of time, known as the interpacket gap interval (IPG), after deassertion of the receive carrier on the transmission medium. If multiple stations 110 are connected to the same link, each of the stations 110 may attempt to transmit data in response to the sensed deassertion of the receive carrier and after the IPG interval, possibly resulting in a collision. Hence, the transmitting station 110 may monitor the transmission medium to determine if there has been a collision due to another station 110 sending data on the same link at the same time. If a collision is detected, both stations 110 cease transmitting, wait a random amount of time, and then retry the transmission. The 10/100 Mb/s network stations 110 that operate in full duplex mode may send and receive data packets according to the Ethernet standard IEEE 802.3u. The full duplex environment provides a two-way, point-to-point communication link enabling simultaneous transmission and reception of data packets between each link partner (i.e., the 10/100 Mb/s network station 110 and the corresponding multiport switch 180).
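As an aside, the half-duplex access procedure described above can be sketched in Python. This is a behavioral sketch only: the medium object and its carrier_sensed(), transmit(), and collision_detected() methods are hypothetical stand-ins, and the truncated binary exponential backoff shown is the conventional 10 Mb/s Ethernet policy rather than a detail taken from this description.

    import random
    import time

    SLOT_TIME = 51.2e-6   # 10 Mb/s Ethernet slot time (512 bit times)
    IPG = 9.6e-6          # interpacket gap at 10 Mb/s (96 bit times)

    def csma_cd_send(medium, frame, max_attempts=16):
        """Half-duplex CSMA/CD transmission as described above."""
        for attempt in range(max_attempts):
            while medium.carrier_sensed():        # listen until the receive
                pass                              # carrier deasserts
            time.sleep(IPG)                       # wait the interpacket gap
            medium.transmit(frame)
            if not medium.collision_detected():
                return True                       # transmission succeeded
            # collision: cease transmitting, wait a random number of slot
            # times (truncated binary exponential backoff), then retry
            slots = random.randrange(2 ** min(attempt + 1, 10))
            time.sleep(slots * SLOT_TIME)
        return False                              # give up after max_attempts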
The transformers 120 may include magnetic transformers that provide AC coupling between the network stations 110 and the transceivers 130. The transceivers 130 may include 10/100 Mb/s physical layer transceivers that communicate with the multiport switches 180 via respective serial media independent interfaces (SMIIs) or reduced media independent interfaces (RMIIs). Each of the transceivers 130 may be configured to send and receive data packets between the multiport switch 180 and up to four network stations 110 via the SMII/RMII. The SMII/RMII may operate at a data rate sufficient to enable simultaneous transmission and reception of data packets by each of the network stations 110 and the corresponding transceiver 130. The transceiver 140 may include one or more 1000 Mb/s (i.e., 1 Gb/s) physical layer transceivers that provide communication with nodes, such as the network node 150, via, for example, a high speed network transmission medium. The network node 150 may include one or more 1 Gb/s network nodes that send and receive data packets at a network speed of 1 Gb/s. The network node 150 may include, for example, a server or a gateway to a high-speed backbone network. The host 160 may include a computer device that provides external management functions to control the overall operation of the multiport switches 180. The external memories 170 may include synchronous static random access memories (SSRAMs) that provide external storage for the multiport switches 180. Each of the external memories 170 may include a Joint Electron Device Engineering Council (JEDEC) pipelined burst or Zero Bus Turnaround (ZBT) SSRAM having a 64-bit wide data path and a 17-bit wide address path. The external memories 170 may be addressable as upper and lower banks of 128 K in 64-bit words. The size of the external memories 170 is preferably at least 1 Mbyte, with data transfers possible on every clock cycle through pipelining. The multiport switches 180 selectively forward data packets received from the network stations 110 or the network node 150 to the appropriate destination according to the appropriate transmission protocol, such as the Ethernet protocol. The multiport switches 180 may be cascaded together (via lines 190) to expand the capabilities of the multiport switches 180.
FIG. 2 is a detailed diagram of the multiport switch 180 according to an implementation consistent with the present invention. The multiport switch 180 may include a receiver 205, a transmitter 210, a data bus 215, a scheduler 220, flow control logic 225, buffer management logic 230, a port vector queue (PVQ) 235, output control queues 240, an internal rules checker (IRC) 245, registers 250, management information base (MIB) counters 255, a host interface 260, an external memory interface 265, an EEPROM interface 270, an LED interface 275, and a Joint Test Action Group (JTAG) interface 280. The receiver 205 may include media access control (MAC) modules and receive buffers, such as first-in, first-out (FIFO) buffers. The receive modules may include input ports that support SMIIs, RMIIs, gigabit media independent interfaces (GMIIs), ten bit interfaces (TBIs), and proprietary interfaces for expansion with other multiport switches 180 (FIG. 1). The expansion ports (EPs) may be used to transfer data between other multiport switches 180 according to a prescribed protocol. The expansion ports may permit the multiport switches 180 to be cascaded together to form a backbone network. Each of the receive modules may include queuing logic that receives data packets from the network stations 110 and/or network node 150 and stores the packets in the corresponding receive FIFOs. The queuing logic may then send portions of the packets to the IRC 245 for processing and to the external memory 170 for storage via the external memory interface 265. The transmitter 210 may include MAC modules and transmit buffers, such as FIFO buffers.
The transmit modules may include output ports that support SMIIs, GMIIs, TBIs, and proprietary interfaces for expansion with other multiport switches 180. Each of the transmit modules may include dequeuing logic that obtains packets from the external memory 170 and stores the packets in the corresponding transmit FIFOs. The transmit modules may read the data packets from the corresponding transmit FIFOs and transmit the packets to the network stations 110 and/or network node 150. In an alternative implementation consistent with the present invention, the functions of the receiver 205 and transmitter 210 may be performed by a transceiver that manages both the receiving and transmitting of data packets. The data bus 215 may include one or more conductors that connect the receiver 205, the transmitter 210, the IRC 245, and the external memory interface 265. The scheduler 220 may include logic that controls access to the external memory 170 by the queuing and dequeuing logic of the receiver 205 and transmitter 210, respectively. The multiport switch 180 is configured to operate as a non-blocking switch, where network data is received and transmitted from the switch ports at the respective wire rates of 10, 100, or 1000 Mb/s. Hence, the scheduler 220 may control the access by different ports to optimize use of the bandwidth of the external memory 170. The flow control logic 225 may include logic that operates in conjunction with the buffer management logic 230, the PVQ 235, and the output control queues 240 to control the transmission of packets by the transmitter 210. The flow control logic 225 may control the transmitter 210 so that the transmitter 210 outputs packets in an efficient manner based on the volume of data traffic. The buffer management logic 230 may include logic that oversees the use of memory within the multiport switch 180. For example, the buffer management logic 230 may manage the use of frame pointers and the reuse of frame pointers once the data packet has been transmitted to its designated output port(s). Frame pointers identify the location of data frames stored in the external memory 170 that require transmission. The PVQ 235 may include logic that obtains a frame pointer to the appropriate output queue(s) in output control queues 240 that correspond to the output ports to receive the data frame transmission. For multicopy frames, the PVQ 235 may supply multiple copies of the same frame pointer to more than one output queue. The output control queues 240 may include a FIFO-type output queue corresponding to each of the transmit modules in the transmitter 210. Each of the output queues may include multiple priority queues for frames having different levels of priority. For example, a high priority queue may be used for frames that require lower access latency (e.g., frames for multimedia applications or management frames). The frame pointers stored in the FIFO-type output queues may be processed by the dequeuing logic for the respective transmit modules. The dequeuing logic uses the frame pointers to access the external memory 170 to read data frames at the memory locations specified by the frame pointers. The IRC 245 may include an internal decision making engine that makes frame forwarding decisions for data packets that are received by the receiver 205.
The IRC 245 may monitor (i.e., "snoop") the data bus 215 to determine the frame pointer value and a part of the data frame, for example, the header information of a received packet, including the source, destination, and virtual local area network (VLAN) address information. The IRC 245 may use the header information to determine which output port will output the data frame stored at the location specified by the frame pointer. The IRC 245 may, thus, determine that a given data frame should be output by either a single port (i.e., unicast), multiple ports (i.e., multicast), all ports (i.e., broadcast), or no port (i.e., discarded). For example, each data frame may include a header that identifies the source and destination addresses. The IRC 245 may use the destination address to identify the appropriate output port to output the data frame. The frame header may also include VLAN address information that identifies the frame as information destined to one or more members of a group of network stations 110. The IRC 245 may alternatively determine that a data frame should be transferred to another multiport switch 180 via the expansion port. Therefore, the IRC 245 determines whether a frame temporarily stored in the external memory 170 should be output to a single output port, multiple output ports, no output port, or another multiport switch 180. The IRC 245 may make its forwarding decision based on information stored in an IRC address table. The IRC 245 may output its forwarding decision to the PVQ 235 in the form of a forwarding descriptor. The forwarding descriptor may include, for example, a priority class identifying whether the data frame is high priority or low priority, a port vector identifying each output port that should transmit the frame, the input port number, or VLAN information. The PVQ 235 may decode the forwarding descriptor to obtain the frame pointer. The PVQ 235 may then supply the frame pointer to the appropriate output queues within the output control queues 240. The IRC 245 may also perform layer 3 filtering. For example, the IRC 245 may examine each received data packet for up to 128 programmable patterns and process the packet based on the result. The result may dictate that the IRC 245 drop the packet, forward the packet to the host 160, or assign a user priority or a Differentiated Services Code Point (DSCP) to the packet. User priorities and the DSCP may be independently mapped into output priority classes.
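To illustrate the forwarding decision just described, the following Python sketch decodes a forwarding descriptor's port vector into a unicast, multicast, broadcast, or discard decision. The field names and layout are assumptions made for illustration and are not the actual descriptor encoding.

    from dataclasses import dataclass

    @dataclass
    class ForwardingDescriptor:
        # Illustrative fields; the actual descriptor layout is not given here.
        high_priority: bool   # priority class assigned by the IRC
        port_vector: int      # one bit per output port that should transmit
        frame_pointer: int    # location of the frame in external memory

    def decode(descriptor, num_ports):
        """Classify a forwarding decision from the port vector."""
        ports = [p for p in range(num_ports)
                 if descriptor.port_vector & (1 << p)]
        if not ports:
            return "discard", ports
        if len(ports) == 1:
            return "unicast", ports
        if len(ports) == num_ports:
            return "broadcast", ports
        return "multicast", ports

    # Example: a high priority frame destined for ports 2 and 5 of a
    # hypothetical 12-port switch
    desc = ForwardingDescriptor(high_priority=True, port_vector=0b100100,
                                frame_pointer=0x4000)
    print(decode(desc, 12))   # -> ('multicast', [2, 5])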
The registers 250 may include configuration and status registers used by the host interface 260. The MIB counters 255 may provide statistical network information in the form of MIB objects for use by the host 160. The host interface 260 may include a standard interface that permits an external management entity, such as the host 160, to control the overall operation of the multiport switch 180. The host interface 260 may decode host accesses within a prescribed register space and read and write configuration and status information to and from the registers 250. The registers 250, MIB counters 255, host interface 260, receiver 205, data bus 215, output control queues 240, and IRC 245 may be connected via a host bus 262. The external memory interface 265 may include a standard interface that permits access to the external memory 170. The external memory interface 265 may permit external storage of packet data in the external memory 170 in a direct memory access (DMA) transaction during an assigned time slot determined by the scheduler 220. In an implementation consistent with the present invention, the external memory interface 265 operates at a clock frequency of at least 66 MHz and, preferably, at a frequency of 100 MHz or above. The EEPROM interface 270 may include a standard interface to another external memory, such as an EEPROM. The LED interface 275 may include a standard interface to external LED logic. The LED interface 275 may send the status of conditions of the input and output ports to the external LED logic. The LED logic may drive LED display elements that are human-readable. The JTAG interface 280 may include a standard interface to external testing equipment to permit, for example, a boundary scan test to be performed on the multiport switch 180. The foregoing description of the switch architecture provides an overview of the switch operations in a packet switched network. A more detailed description of the features of the present invention as embodied, for example, in the multiport switch 180 is provided below.
The present invention is directed to improving transmission of data in a network device, such as the multiport switch 180 described above. The multiport switch 180 detects an overflow in an output queue and adjusts the weights of the output queues to support higher priority traffic over lower priority traffic.
FIG. 3 is an exemplary diagram of a transmitter module and associated output control queue for a port N of the multiport switch 180 of FIG. 2. The transmitter modules 210 and associated output queues 240 of the other ports of the multiport switch 180 may be similarly configured. In FIG. 3, a group of output priority queues 310-316 connects to a transmitter module (TX) 340 via a weighted round robin (WRR) mechanism 330. The output priority queues 310-316 receive data, such as forwarding descriptors, from the PVQ 235 and provide storage prior to transmission. In an implementation consistent with the present invention, the multiport switch 180 associates multiple output priority queues 310-316 with each transmitter module 340. The output priority queues 310-316 may be associated with different priorities. For example, a first group of the output priority queues 310-316 may store information of a low priority while a second group of output priority queues 310-316 may store information of a high priority. High priority information may include information associated with data that requires lower access latency, such as data destined for a management device or data for a multimedia application. Low priority information may include information associated with any other data. In an alternative implementation consistent with the present invention, each output priority queue 310-316 may be associated with a different priority. For example, output priority queue 310 may store information having a priority of "1" (i.e., a lowest priority indication), output priority queue 312 may store information having a priority of "2" (i.e., a higher priority indication), output priority queue 314 may store information having a priority of "3" (i.e., a priority indication higher than that associated with output priority queue 312), etc. It will be appreciated that other output queue/transmitter module combinations may alternatively be used. The WRR mechanism 330 may include one or more devices capable of storing a weight indication for each of the output priority queues 310-316 and allowing one or more entries from an output priority queue 310-316 to be read by the transmitter module 340 based on the stored weights.
As will be described in more detail below, the WRR mechanism 330 may receive a threshold flag 320 from an output priority queue 310-316 and adjust the weights of the output priority queues 310-316 so that higher priority traffic to the transmitter module 340 takes preference over lower priority traffic. The transmitter module 340 may include a MAC module capable of transmitting packets to other network devices, such as network stations 110. The transmitter module 340 may include one or more transmit buffers (not shown), such as FIFO buffers. The transmitter module 340 may also include dequeuing logic (not shown) that reads forwarding descriptors from the output priority queues 310-316 and uses the forwarding descriptors to obtain packets from the external memory 170. The dequeuing logic may also cause the packets to be stored in the transmit FIFOs of the transmitter module 340. The transmitter module 340 may then read the data packets from the corresponding transmit FIFOs and transmit the packets to the network stations 110 or other network devices.
FIG. 4 is an exemplary detailed diagram of the output priority queue 316 of FIG. 3. Output priority queues 310-314 may be similarly configured. As illustrated, the output priority queue 316 includes a write side 401, a read side 402, and an overflow engine 430. The write side 401 of the output priority queue 316 may include a write side queue 410 and control logic 425. The write side 401 receives forwarding descriptors from the PVQ 235 and stores them in the write side queue 410. The control logic 425 may include one or more devices for detecting when the number of entries in the write side 401 exceeds a threshold. The read side 402 of the output priority queue 316 may include a read side queue 450. The transmitter module's 340 dequeuing logic reads forwarding descriptors from the read side 402 of the output priority queue 316 and uses this information to retrieve packets from the external memory 170. The overflow engine 430 controls writing and reading of data to an overflow area of the external memory 170. Each of the output priority queues 310-316 may be sized according to the bandwidth of the port it services. There are times, however, when an output priority queue 310-316 cannot hold all the entries destined for the transmitter module 340. When entries are written into an empty output priority queue 310-316, the overflow engine 430 passes the entries directly from the write side 401 to the read side 402 of the queue 310-316. When the read side 402 is full, additional entries written to the output priority queue's 310-316 write side 401 may be placed into the port's output priority queue 310-316 overflow area in external memory 170. Once the port's output priority queue 310-316 read side 402 and overflow area are full, additional entries placed into the output priority queue 310-316 may begin to fill the write side 401 of the queue 310-316. If an attempt is made to write to an output priority queue 310-316 when the write side 401 is full or above a predetermined threshold, the output priority queue 310-316 is considered to be in an overflow state. In an implementation consistent with the present invention, when an overflow condition exists (i.e., when an output priority queue's 310-316 write side 401 is full or above a predetermined threshold), the output priority queue 310-316, more specifically the control logic 425, transmits a threshold flag 320 to the WRR mechanism 330.
As described in more detail below, the threshold flag 320 signals the WRR mechanism 330 to adjust the weights associated with the output priority queues 310-316 to give preference to high priority traffic.
Exemplary Processing
FIG. 5 is a flowchart of exemplary processing for transmitting data according to an implementation consistent with the present invention. Processing may begin with a network device, such as multiport switch 180, monitoring the output priority queues 310-316 to determine whether an overflow condition exists [step 510]. As described above, an overflow condition may exist when the number of entries in the write side 401 of an output queue for a particular priority 310-316 exceeds a predetermined threshold. The threshold may be set automatically or manually by a network administrator. If the multiport switch 180 detects the occurrence of an overflow condition in an output priority queue 310-316 [step 520], the particular output priority queue 310-316 transfers a threshold flag 320 to the WRR mechanism 330 [step 530]. The WRR mechanism 330 may then adjust the weights associated with the output priority queues 310-316 so that high priority entries are given preference over low priority entries [step 540]. Prior to the adjustment, the WRR mechanism 330 may weight each of the output priority queues 310-316 equally or according to predetermined rules. When the output queues are weighted equally, the WRR mechanism 330 may act according to a conventional round robin scheme. Here, the WRR mechanism 330 may select an entry from each output priority queue 310-316 in turn to be read by the transmitter module 340, skipping queues that do not have entries. Upon receipt of the threshold flag 320, however, the WRR mechanism 330 may adjust the weights of those output queues 310-316 containing high priority entries so that high priority entries take preference over low priority entries. Moreover, according to an implementation consistent with the present invention, the WRR mechanism 330 may attribute the greatest weight to the output queue 310-316 in the overflow state until the overflow condition no longer exists. In such a situation, the threshold flag 320 may include information for identifying the output queue 310-316 from which the flag 320 was transmitted. As an example, assume that the output priority queues 310-316 are not in an overflow state and that the WRR mechanism 330 assigns an equal weight to each of the output priority queues 310-316. The WRR mechanism 330 may associate, for example, a weight of "1" with each output priority queue 310-316. During normal operation, the WRR mechanism 330 may, in a cyclical fashion, allow for one entry to be read from each output priority queue 310-316, skipping those output priority queues that do not have entries. Assume now that output priority queue 316 enters an overflow state. Upon entering the overflow state, the output priority queue 316 transfers a threshold flag 320 to the WRR mechanism 330. The WRR mechanism 330 may adjust the weight of the output priority queue 316, as well as those of the other output priority queues 310-314 having high priority entries. Assuming that output priority queue 314 contains high priority entries while output priority queues 310 and 312 contain only low priority entries, the WRR mechanism 330 may, for example, increase the weight of output priority queue 314 to "3" while increasing the weight of output priority queue 316 (i.e., the output queue in the overflow state) to "6".
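This example can be summarized in a short Python sketch of the WRR mechanism 330. It is a behavioral model only: the queue and transmitter objects and their has_high_priority_entries(), empty(), pop(), and read() methods are hypothetical, and the weights of 1, 3, and 6 simply follow the example above.

    class WeightedRoundRobin:
        """Behavioral sketch of the WRR mechanism 330."""
        def __init__(self, queues):
            self.queues = list(queues)
            self.weights = {id(q): 1 for q in queues}   # equal weights normally

        def on_threshold_flag(self, overflowing):
            # give the greatest weight to the queue in the overflow state and
            # raise the weight of any other queue holding high priority entries
            for q in self.queues:
                if q is overflowing:
                    self.weights[id(q)] = 6
                elif q.has_high_priority_entries():
                    self.weights[id(q)] = 3

        def on_overflow_cleared(self):
            # weights return to their original values when the condition clears
            for q in self.queues:
                self.weights[id(q)] = 1

        def cycle(self, transmitter):
            # one cycle: read up to 'weight' entries from each queue in turn,
            # skipping queues that have no entries
            for q in self.queues:
                for _ in range(self.weights[id(q)]):
                    if q.empty():
                        break
                    transmitter.read(q.pop())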
With these adjusted weights, the WRR mechanism 330 may, in a cyclical fashion, allow, for example, one entry to be read from each of output priority queues 310 and 312, three entries to be read from output priority queue 314, and six entries to be read from output priority queue 316. By adjusting the flow of traffic to the transmitter module 340 in this manner, the amount of time that an output priority queue 316 remains in an overflow state can be reduced. Moreover, by setting the overflow threshold to a level below the point at which a queue 310-316 can receive no additional entries, the output priority queues 310-316 can be prevented from entering an overflow state. A system and method have thus been described for adjusting the transfer of data from output queues in a network device based on changes in traffic patterns. Advantages of the present invention include the ability to reduce the amount of time that an output queue is in an overflow state by automatically adjusting the weight assigned to the queue. In addition, the number of packets that might be dropped for the higher priority queue is reduced, thus improving end-to-end performance of the network, since typical higher layer protocols transmit all the packets in a certain window size, and even if one packet from the window is dropped, the entire window needs to be retransmitted. This scheme avoids such single packet drops that cause network performance to degrade. Only the preferred embodiments of the invention and a few examples of its versatility are shown and described in the present disclosure. It is to be understood that the invention is capable of use in various other combinations and environments and is capable of modifications within the scope of the inventive concept as expressed herein. For example, while the above description focused on adjusting the amount of data transferred from output queues, the present invention is not so limited. Implementations consistent with the present invention are equally applicable to other types of queues. |
PROBLEM TO BE SOLVED: To provide a boundary scan chain for stacked memory.
SOLUTION: An embodiment of a memory device comprises: a system element; and a memory stack which includes one or more memory die layers, each memory die layer including a plurality of input-output (I/O) cells and a boundary scan chain for the I/O cells. The boundary scan chain of each memory die layer includes: a scan chain portion for each of the I/O cells, the scan chain portion for each of the I/O cells including a first scan logic multiplexer and a scan logic latch, an input of the scan logic latch being coupled with an output of the first scan logic multiplexer; and a decoder to provide command signals to the scan chain. |
1. A storage device comprising: a system element; and a memory stack including one or more memory die layers, each memory die layer including a plurality of input/output (I/O) cells and a boundary scan chain for the I/O cells, wherein the boundary scan chain of each memory die layer includes: a scan chain portion for each of the I/O cells, the scan chain portion for each I/O cell including a first scan logic multiplexer and a scan logic latch, an input of the scan logic latch being coupled with an output of the first scan logic multiplexer; and a decoder to provide command signals to the scan chain.
2. The storage device of claim 1, wherein the first scan logic multiplexer has a first input from the I/O cell and a second input from a preceding scan chain portion or a serial data input of the scan chain.
3. The storage device of claim 1, wherein the scan logic latch has an output to a next scan chain portion or a serial data output of the scan chain.
4. The storage device of claim 1, wherein the command signals provided by the decoder include an enable signal for each of the first scan logic multiplexers and a clock signal for each of the scan logic latches.
5. The storage device of claim 4, wherein the scan chain portion of each I/O cell that is a data I/O cell further includes a second scan logic multiplexer, the second scan logic multiplexer having a first input from a memory output latch and a second input coupled with the output of the scan logic latch.
6. The storage device of claim 5, wherein the command signals provided by the decoder further include an enable signal for each of the second scan logic multiplexers of the scan chain portions of the data I/O cells.
7. The storage device of claim 1, wherein the scan chain portion of each I/O cell that is a command address bus cell further includes an output driver to drive the scan signal out to the command address bus cell.
8. The storage device of claim 1, wherein the memory stack includes a plurality of through silicon vias (TSVs) that carry signals through the storage device, the TSVs having connections for scan testing using the boundary scan chain of each memory die layer.
9. The storage device of claim 1, wherein the scan chain provides serial and parallel testing of each memory die layer of the memory stack.
10. The storage device of claim 9, wherein the serial and parallel testing includes serial and parallel inputs to the I/O cells and serial and parallel outputs from the I/O cells.
11. The storage device of claim 1, wherein the routing of the boundary scan chain of each memory die layer includes one or more unused address pins.
12. The storage device of claim 11, wherein the one or more unused pins are reserved for more dense memory dies.
13. A method comprising: inputting a set of scan data to a first memory element of a plurality of memory elements of a memory stack, each memory element having a scan boundary chain; transferring the scan data to a second memory element of the plurality of memory elements; obtaining an output of scan data from the second memory element; and determining whether the scan data input to the first memory element matches the scan data output from the second memory element, wherein the scan test succeeds if the input scan data and the output scan data match.
14. The method of claim 13, wherein the scan data is input through a serial data input of the first memory element and output from a serial data output of the second memory element.
15. The method of claim 14, wherein transferring the scan data to the second memory element includes placing the first memory element in a serial output mode and the second memory element in a serial input mode.
16. The method of claim 14, wherein transferring the scan data to the second memory element includes placing the first memory element in a parallel output mode and the second memory element in a parallel input mode.
17. The method of claim 13, wherein the scan boundary chain includes a scan chain portion for each of a plurality of I/O cells of the memory element, the scan chain portion of each I/O cell including a scan logic multiplexer and a scan logic latch, an input of the scan logic latch being coupled with an output of the scan logic multiplexer.
18. A system comprising: a processor to process data for the system; a transmitter to transmit data via an omnidirectional antenna, a receiver to receive data, or both the transmitter and the receiver; and a memory to store data for the system, the memory including a stacked memory, the stacked memory including a memory stack of one or more memory elements, each memory element including a boundary scan chain for a plurality of I/O cells of the memory element, wherein the boundary scan chain of each memory element includes: a scan chain portion for each of the I/O cells, the scan chain portion for each I/O cell including a first scan logic multiplexer and a scan logic latch, an input of the scan logic latch being coupled with an output of the first scan logic multiplexer; and a decoder to provide command signals to the scan chain.
19. The system of claim 18, wherein the first scan logic multiplexer has a first input from the I/O cell and a second input from a preceding scan chain portion or a serial data input of the scan chain.
20. The system of claim 18, wherein the scan logic latch has an output to a next scan chain portion or a serial data output of the scan chain.
21. The system of claim 18, wherein the scan chain portion of each I/O cell that is a data I/O cell further includes a second scan logic multiplexer, the second scan logic multiplexer having a first input from a memory output latch and a second input coupled with the output of the scan logic latch.
22. The system of claim 18, wherein the scan chain portion of each I/O cell that is a command address bus cell further includes an output driver to drive a scan signal out to the command address bus cell.
23. The system of claim 18, wherein the scan chain provides serial and parallel testing of each memory die layer of the memory stack.
24. The system of claim 18, wherein the serial and parallel testing includes serial and parallel inputs to the I/O cells and serial and parallel outputs from the I/O cells.
25. A non-transitory computer-readable storage medium storing data representing sequences of instructions that, when executed by a processor, cause the processor to perform operations comprising: inputting a set of scan data to a first memory element of a plurality of memory elements of a memory stack, each memory element having a scan boundary chain; transferring the scan data to a second memory element of the plurality of memory elements; obtaining an output of scan data from the second memory element; and determining whether the scan data input to the first memory element matches the scan data output from the second memory element, wherein the scan test succeeds if the input scan data and the output scan data match.
26. The computer-readable storage medium of claim 25, wherein the scan data is input via a serial data input of the first memory element and output from a serial data output of the second memory element.
27. The computer-readable storage medium of claim 25, wherein transferring the scan data to the second memory element includes placing the first memory element in a serial output mode and the second memory element in a serial input mode.
28. The computer-readable storage medium of claim 25, wherein transferring the scan data to the second memory element includes placing the first memory element in a parallel output mode and the second memory element in a parallel input mode.
29. A semiconductor device comprising: a controller die; and a memory die coupled with the controller die, the memory die including a plurality of input/output (I/O) cells, each I/O cell including normal logic and scan logic, the scan logic including: a first scan logic multiplexer having a first input from the I/O cell and a second input from one of another I/O cell or a serial data input; and a scan logic latch, an input of the scan logic latch being coupled with an output of the first scan logic multiplexer, the scan logic latch having an output to one of a third I/O cell or a serial data output; the memory die further including a decoder disposed to provide command signals to the scan chain.
30. The semiconductor device of claim 29, wherein the controller die comprises an application processor.
31. The semiconductor device of claim 29, further comprising a touch screen coupled with the controller die. |
BOUNDARY SCAN CHAIN FOR STACKED MEMORY
Embodiments of the present invention generally relate to the field of electronic devices and, more particularly, to boundary scan chains for stacked memory.
In order to provide more dense memory for computing, a concept has been developed for storage having a plurality of closely coupled memory elements, which may be referred to as 3D stacked memory or stacked memory. The 3D stacked memory includes combined layers or packages of dynamic random access memory (DRAM), which may be referred to as a memory stack. Stacked memory may be used to provide a large amount of computer memory in a single device or package, which may also have system components such as a memory controller or central processing unit (CPU). Testing of stacked memory is particularly important because of the cost of manufacturing each storage device in comparison with conventional single-layer storage devices. However, testing of such storage devices may require significant costs. For example, testing of I/O connections may require that a stacked memory device contain specific hardware, but that hardware makes extensive use of the limited space of a complex storage device, reduces the memory space, and increases the manufacturing cost.
An object of the present invention is to provide a boundary scan chain for stacked memory. In order to solve the above problem, one aspect of the present invention is a storage device having a system element and a memory stack with one or more memory die layers, each memory die layer comprising a plurality of input/output (I/O) cells and a boundary scan chain for the I/O cells, wherein the boundary scan chain of each memory die layer includes a scan chain portion for each of the I/O cells, the scan chain portion for each I/O cell having a first scan logic multiplexer and a scan logic latch, an input of the scan logic latch being coupled with an output of the first scan logic multiplexer, and a decoder for providing command signals to the scan chain. According to the above-described aspect, a boundary scan chain for stacked memory can be provided.
FIG. 1 illustrates one embodiment of 3D stacked memory. FIG. 2 illustrates one embodiment of a boundary scan chain for storage devices. FIG. 3 illustrates one embodiment of scan chain routing in one embodiment of a boundary scan chain. FIG. 4 is a diagram of command encoding in one embodiment of an apparatus or system having a boundary scan chain. FIG. 5 shows a timing diagram of one embodiment of an apparatus or system having a boundary scan chain. FIG. 6A is a flowchart illustrating a boundary scan process of a stacked memory device having a serial-in, serial-out test process. FIG. 6B is a flowchart illustrating a boundary scan process of a stacked memory device having a serial-in, parallel-out test process. FIG. 7 is a block diagram illustrating one embodiment of a device or system having stacked memory devices. FIG. 8 illustrates one embodiment of a computing system having boundary scan chains for testing stacked memories.
Embodiments of the present invention generally relate to boundary scan chains for stacked memory. As used herein, "3D stacked memory" (where 3D indicates three-dimensional) or "stacked memory" means computer memory including one or more combined memory layers, memory packages, or other memory elements. The memory may be stacked vertically or horizontally (adjacently), or may otherwise have memory elements coupled together.
In particular, stacked memory DRAM devices or systems include storage devices having multiple DRAM die layers. A stacked memory device may also have a system element in the device, referred to as a system layer or element, which may have elements such as a central processing unit (CPU), a memory controller, and other related system elements. The system layer may comprise a logic chip or a system on chip (SoC). A stacked memory device may have through silicon vias (TSVs) that provide interconnections between the die layers. In some embodiments, the logic chip may be an application processor or a graphics processing unit (GPU). By "boundary scan chain" is meant a set of interconnected test elements within an electronic device to enable testing of interconnections. In some embodiments, a device, system, or process provides electrical access to the I/Os on stacked DRAMs with TSVs. In some embodiments, boundary scan chains are provided for testing of the elements of a stacked memory. In some embodiments, the boundary scan chain allows serial and parallel I/O with the IO cells and allows verification of proper connectivity between the dies in a TSV-connected stack.
The use of TSVs in electronic devices is an emerging technology. Among the challenges in the design and manufacture of such devices is physical access to the I/O cells. Conventional devices address the need for scan chain access with implementations that are complex or that allow only serial output. There are industry standards for interconnect testing (such as IEEE 1149.1 and IEEE 1500), but these standards are generally complex and are primarily designed for board-to-chip interconnects. In conventional devices and processes, a typical scan chain requires a command decoder, multiple registers, and two latches (flip-flop elements) per I/O cell. However, this requires a large amount of hardware for the I/O interconnections of a stacked memory device.
In some embodiments, an apparatus, system, or method implements a "bare" or lightweight boundary scan chain in stacked memory. In some embodiments, the scan chain further utilizes reduced command decoding logic. The scan chain embodiments are suitable for implementation in a DRAM architecture, where logic gates require a large silicon area. In some embodiments, scan chains are not limited to serial outputs but also support parallel outputs. In some embodiments, parallel outputs enable die-to-die interconnect testing within the memory stack and die-to-die interconnect testing from the memory stack to the SoC or memory controller. In some embodiments, the boundary scan chain may be utilized in multiple situations for stacked memory testing and operation, such as testing at start-up at a manufacturer. In some embodiments, the boundary scan chain enables test and debug processing of the TSV connections at the memory supplier before attaching the SoC or other logic elements. In some embodiments, the boundary scan chain elements may also be utilized after attachment of the SoC to verify proper connections and to isolate and diagnose poor connections.
FIG. 1 illustrates one embodiment of 3D stacked memory. In the figure, a 3D stacked memory device 100 comprises a system element 110 (referred to as a logic chip or controller die) coupled to one or more DRAM memory die layers 120, also referred to as a memory stack. In some embodiments, the system element may be a system on chip (SoC) or other similar element. The elements of this figure and the following figures are provided for illustration and are not drawn to scale.
Although FIG. 1 shows an embodiment in which the system element 110 is coupled below the memory stack of one or more memory die layers 120, embodiments are not limited to this configuration. For example, in some embodiments, the system element 110 may be disposed adjacent to the memory stack 120 and coupled with the memory stack 120 in a side-by-side configuration. Each die layer may have one or more slices or portions, and may have one or more different channels. Each die layer may have a temperature compensated self-refresh (TCSR) circuit to address thermal issues; the TCSR and a mode register (MR) may be part of the device management logic, and the mode register may include a thermal offset bit for adjustment of the refresh rate by the TCSR. The die layers and the system element may be thermally coupled.

In this figure, the DRAM memory die layers comprise four memory die layers: a first memory die layer 130, a second memory die layer 140, a third memory die layer 150 and a fourth memory die layer 160. However, embodiments are not limited to a particular number of memory die layers in the memory stack 120, and may have more or fewer memory die layers. The system element 110 includes a memory controller 112 for the memory stack 120. In some embodiments, each memory die layer (except the top or outermost memory die layer, such as the fourth memory die layer 160 in the figure) includes through silicon vias (TSVs) that provide paths through the silicon substrate of the memory die layer.

In some embodiments, each memory die layer has an interface for connection with the system element 110 or the other die layers. Here, the first memory die layer 130 has a first interface 125 for coupling between the first memory die layer 130 and the system element 110; the second memory die layer 140 has a second interface 135 for coupling between the second memory die layer 140 and the first memory die layer 130; the third memory die layer 150 has a third interface 145 for coupling between the third memory die layer 150 and the second memory die layer 140; and the fourth memory die layer 160 has a fourth interface 155 for coupling between the fourth memory die layer 160 and the third memory die layer 150.

In some embodiments, the stacked memory device 100 includes a boundary scan chain 175 for each memory die layer to enable testing of the I/O cells of the memory device 100. In some embodiments, the boundary scan chain 175 may have the elements shown in FIG. 2, where the scan chain requires only a single latch and one or two multiplexers for each I/O cell.

FIG. 2 illustrates one embodiment of a boundary scan chain for a storage device. In some embodiments, a scan chain 200 for a memory die provides testing of multiple I/O cells 205. In this figure, the circuit elements shown are either memory logic elements (non-hatched elements) for normal storage operation or scan logic elements (hatched elements) for testing of the I/O cells. In some embodiments, each I/O cell has a scan chain portion including an additional latch element (referred to as a scan logic latch) and either one multiplexer for CA (command address bus) pins or two multiplexers for DQ (data) pins (referred to as first and second scan logic multiplexers).

In some embodiments, the output of the first scan logic multiplexer of a scan chain portion is coupled to the input of the scan logic latch of that scan chain portion.
In some embodiments, a first input of the first scan logic multiplexer of a scan chain portion is coupled with the signal from the I/O cell, which is driven by the memory logic input driver and the input memory logic latch, while the other input is the output of the scan logic latch of the previous scan chain portion, or the serial data input (SDI) for the first scan chain portion. The output of the last scan chain portion is coupled to the serial data output (SDO). The output of the scan logic latch of each CA portion is further coupled to a scan logic output driver to drive the output signal to the CA I/O cell. The output of the scan logic latch of each DQ portion is further coupled to the first input of the second scan logic multiplexer of the DQ portion. In some embodiments, the second scan logic multiplexer of each DQ scan chain portion has a second input coupled to the memory logic output latch of the DQ cell and an output coupled to the memory logic output driver that drives the output signal to the DQ cell. In some embodiments, the scan logic elements are further coupled to a scan logic decoder element. In some embodiments, the scan logic decoder may provide an enable signal to each I/O output driver, an enable signal to each first scan logic multiplexer, an enable signal to each second scan logic multiplexer, and a clock signal to each scan logic latch.

In some embodiments, the first and second scan logic multiplexers may select serial or parallel data in, and may select normal data or scan data out.

For example, the first scan chain portion 210 of I/O cell CAn includes a first scan circuit 212 having a first scan logic multiplexer 214 and a scan logic latch 216. The first scan chain portion 210 further includes a scan logic output driver 217 that drives a signal to the respective CA I/O cell, such as CAn in this example, for scan test. Each CA I/O cell is also coupled to a memory logic input driver 221, and each DQ cell is coupled to a memory logic output driver 237 and a memory logic input driver 241. The output of scan logic latch 216 is coupled to the input of scan logic output driver 217 and to the next scan chain portion 230 for DQn, which has a scan circuit 232 and a second scan logic multiplexer 235. As shown, the scan logic decoder 250 may be coupled to the enable pins of each scan logic output driver (e.g., 217), each memory logic output driver (e.g., 237), each first scan logic multiplexer (e.g., 214) and each second scan logic multiplexer (e.g., 235), and to the clock pin of each scan logic latch (e.g., 216 and 232). The inputs to the decoder are SSEN (scan enable), CS_n (chip select), SCK (scan clock), SSH_n (scan shift) and SOE_n (scan output enable).

In some embodiments, the boundary scan chain has only a limited impact on operation. In some embodiments, the only direct impact on normal signal processing is the multiplexer delay in the DQ read path (via a second scan logic multiplexer such as 235). In some embodiments, the CA pins are normally input only, but a small driver (a scan logic output driver such as 217) is provided for parallel data out during scanning.

In some embodiments, the boundary scan chain is implemented in a Wide IO DRAM with four independent chains per die and up to four dies in the stack. In such an implementation, the SSEN signal is common to all channels and dies. Each channel has one copy of SCK, SSH_n and SOE_n (scan output enable). Each channel also has a CS signal per die (up to 4 CSs per channel, or up to 16 CSs per stack). CS is a signal uniquely tied to the channel and die.
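The scan cell just described reduces to one multiplexer feeding one latch per I/O cell, with the decoder selecting among serial shifting, parallel capture and parallel drive. The following is a minimal, illustrative Python sketch of that behavior; the names ScanCell, ScanChain, shift, capture and drive are hypothetical and not from this disclosure, and the sketch abstracts away the CA/DQ driver differences.

```python
from typing import List

class ScanCell:
    """One scan chain portion: a first scan logic multiplexer feeding a scan logic latch."""
    def __init__(self) -> None:
        self.latch = 0  # scan logic latch, clocked by SCK
        self.pad = 0    # value currently on the I/O cell / TSV

    def clock(self, serial_in: int, parallel_mode: bool) -> int:
        """The multiplexer selects pad data (parallel) or serial data, then the latch
        captures it on the SCK edge. Returns the previous latch value, which feeds
        the next cell's serial input (or SDO for the last cell)."""
        out = self.latch
        self.latch = self.pad if parallel_mode else serial_in
        return out

class ScanChain:
    """A chain of scan cells from SDI to SDO, as routed in FIG. 3."""
    def __init__(self, n_cells: int) -> None:
        self.cells = [ScanCell() for _ in range(n_cells)]

    def shift(self, sdi: int) -> int:
        """One SCK with SSH_n asserted: shift one bit from SDI toward SDO."""
        bit = sdi
        for cell in self.cells:
            bit = cell.clock(bit, parallel_mode=False)
        return bit  # the bit emerging at SDO

    def capture(self) -> None:
        """Parallel data in: latch the state of all pads simultaneously."""
        for cell in self.cells:
            cell.clock(0, parallel_mode=True)

    def drive(self) -> None:
        """Parallel data out (SOE_n asserted): drive latched data onto the pads
        through the scan logic output drivers."""
        for cell in self.cells:
            cell.pad = cell.latch

# Example: load a pattern serially, then drive it out in parallel.
chain = ScanChain(4)
for b in [1, 0, 1, 1]:
    chain.shift(b)
chain.drive()
print([c.pad for c in chain.cells])  # [1, 1, 0, 1]: the first bit in is furthest along
```

Here shift corresponds to the serial scan in/out commands of FIG. 4, capture to parallel data in, and drive to parallel data out; this is the reduction, relative to a conventional cell with two latches per I/O, that makes the chain cheap enough for DRAM I/O cells.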
In some embodiments, independent CS control is utilized during parallel read/write processing. In some embodiments, these signals are provided to the scan logic decoder for control of the scan logic and memory logic processing.

FIG. 3 illustrates one example of scan chain routing in one embodiment of a memory die boundary scan chain. In some embodiments, the boundary scan chain is routed from the serial data in pin (SDI 300) to the serial data out pin (SDO 350). In this example, the chain is routed such that the first cell to exit the chain in serial processing is A0 and the last is DQ112. In this implementation, the TSV connections for power, NC (no connect), DA (direct access), TEST, CS_n, SSEN, SSH_n, SDI, SCK, SDO, SOE_n, RST_n and VPIN have been excluded from the scan chain. In some embodiments, one or more unused address pins (reserved for denser memories, such as future higher-density DRAMs) are included in the scan chain routing.

FIG. 4 is a diagram of command encoding in one embodiment of an apparatus or system having a boundary scan chain. In some embodiments, the encoding shown in FIG. 4 is provided to a decoder or similar element, such as the scan logic decoder 250 shown in FIG. 2. In some embodiments, serial scan in 405 or scan in/out 410 is used to initialize the scan chain to a known value, scan out 415 is used to read the state of each node in the chain, parallel input 420 may be used to capture the state of all pins simultaneously, and parallel drive may be used to drive out whatever information is loaded into the scan chain. Scan non-enable command coding 425 (SSEN = '0') is also shown. Scanning is a slow capability usually used for DC connectivity testing. However, in some embodiments, parallel processing on the stacked memory drives data on one die and captures data on another die with a fairly accurate delay, thereby enabling AC and speed-related testing.

FIG. 5 is a timing diagram of one embodiment of an apparatus or system having a boundary scan chain. In this figure, the signaling of SSEN 505, SSH_n 510, SOE_n 515, SCK 520, CS_0 525, CS_1 530 and DQ or CA 535 is shown for a parallel data out period and a parallel data in period. Enabling SSEN 505 initiates a sense enable period tSES of, for example, 20 ns (nanoseconds) before the parallel-out period. Setting SSH_n (510) and SOE_n (515) to '1' and transitioning chip select CS_0 to '0' begins a parallel-out period. After that, a parallel-in period starts when SCK = '1' and ends when SSH_n returns to '0'.

FIG. 6A is a flowchart illustrating a boundary scan process of a stacked memory device using a serial-in serial-out test process. In some embodiments, for a storage device including a memory stack having multiple memory devices (which may refer to any memory die layer or other memory element), the serial-in serial-out scan chain process 600 includes loading the desired data into the scan chain of a first device (Device A) using the serial data input function (605) and selecting one of the other devices (Device B) of the memory stack (610). In some embodiments, Device A is placed in serial output mode and Device B is placed in serial input mode (615). In some embodiments, the scan chain is clocked in serial data in/out mode to transfer the scan data from Device A to Device B (620). In some embodiments, data from Device B's serial data output pin is observed (625). The test pattern from Device B's serial data output pin should be the same as the pattern clocked in by Device A.
In some embodiments, if the test pattern from Device B matches the test pattern provided to Device A (630), the scan test succeeds (635); otherwise there is an error condition and the scan test is unsuccessful (640).

FIG. 6B is a flowchart illustrating a boundary scan process of a stacked memory device using a serial-in parallel-out test process. In some embodiments, for a storage device including a memory stack having a plurality of memory devices, the serial-in parallel-out scan chain process 650 includes loading the desired data into the scan chain of a first device (Device A) using the serial data input function (655) and selecting one of the other devices (Device B) of the memory stack (660). In some embodiments, Device A is placed in parallel output mode and Device B is placed in parallel input mode. Data is copied from Device A to Device B at the rising edge (or, in another implementation, the falling edge) of the scan clock (665).

In some embodiments, the scan chain of Device B is then clocked in serial data in/out mode (670) to provide a serial output of the data received from Device A in parallel mode. In some embodiments, data from Device B's serial data output pin is observed (675). In some embodiments, if the test pattern from Device B matches the test pattern provided to Device A (680), the test succeeds (685); otherwise an error condition results (690).
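Both flows reduce to shifting a pattern through one die's chain into another die's chain and comparing what comes out. The following illustrative Python sketch simulates the two tests; the function names and the stuck-at-0 fault model for a broken TSV are assumptions for illustration, not from this disclosure.

```python
from typing import List

def shift_chain(chain: List[int], bits: List[int]) -> List[int]:
    """Clock `bits` serially into `chain`; return the bits shifted out at SDO."""
    out = []
    for b in bits:
        out.append(chain[-1])        # SDO is driven by the last cell
        chain[:] = [b] + chain[:-1]  # every cell takes its neighbor's value
    return out

def serial_in_serial_out_test(pattern: List[int], link_ok: bool = True) -> bool:
    """FIG. 6A: load Device A (605/610), clock the data across the SDO-to-SDI link
    to Device B (615/620), then observe Device B's SDO and compare (625/630)."""
    n = len(pattern)
    device_a, device_b = [0] * n, [0] * n
    shift_chain(device_a, pattern)                 # 605: serial data in to Device A
    crossing = shift_chain(device_a, [0] * n)      # 620: Device A shifts out ...
    if not link_ok:                                # broken serial link modeled as stuck-at-0
        crossing = [0] * n
    shift_chain(device_b, crossing)                # ... while Device B shifts in
    observed = shift_chain(device_b, [0] * n)      # 625: observe Device B's SDO
    return observed == pattern                     # 630 -> 635 (pass) / 640 (fail)

def serial_in_parallel_out_test(pattern: List[int], tsv_ok: List[bool]) -> bool:
    """FIG. 6B: load Device A serially (655/660), copy A to B in parallel across the
    per-cell TSVs on one scan clock edge (665), shift B out and compare (670-690)."""
    n = len(pattern)
    device_a, device_b = [0] * n, [0] * n
    shift_chain(device_a, pattern)                                     # 655
    device_b[:] = [b if ok else 0 for b, ok in zip(device_a, tsv_ok)]  # 665: parallel copy
    observed = shift_chain(device_b, [0] * n)                          # 670/675
    return observed == pattern                                         # 680

pattern = [1, 0, 1, 1, 0, 0, 1, 0]
print(serial_in_serial_out_test(pattern))                        # True: scan test succeeds
print(serial_in_parallel_out_test(pattern, [True] * 8))          # True
print(serial_in_parallel_out_test(pattern, [True] * 7 + [False]))  # False: faulty TSV detected
```

Because the parallel copy crosses every cell's TSV on a single clock edge, a per-position fault shows up directly in the serially observed pattern, which is what allows the parallel-out flow to localize bad die-to-die connections.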
Stacked memory may be utilized in many different computing environments, depending on the number of memory layers in the storage device. FIG. 7 is a block diagram illustrating one embodiment of a device or system having stacked memory devices. The computing device 700 may be a laptop or notebook computer, a netbook, a tablet computer (including devices with a touch screen and no separate keyboard as well as devices with both a touch screen and a keyboard), a device capable of quick start-up ("instant-on" operation) that is generally connected to a network ("always connected" operation), a computing device including a cell phone or smartphone, a wireless-enabled electronic reader, or another wireless mobile device. It will be appreciated that some components are shown generally, and that not all components of device 700 are necessarily shown. The components may be connected by one or more buses or other connections 705.

Device 700 has a processor 710 that performs the main processing operations of device 700. Processor 710 may include one or more physical devices, such as a microprocessor, an application processor, a microcontroller, a programmable logic device or other processing means. The processing operations performed by processor 710 include the execution of an operating platform or operating system on which applications, device functions, or both are performed. The processing operations include I/O (input/output) processing with a user or with other devices, power management processing, and processing related to connecting device 700 to other devices. The processing operations may also include processing associated with audio I/O, display I/O, or both.

In one embodiment, device 700 includes an audio subsystem 720, which represents the hardware (such as audio hardware and audio circuitry) and software (such as drivers and codecs) components associated with providing audio functions to the computing device. Audio functions can include speaker or headphone audio output, as well as microphone input. Devices for such functions may be integrated into device 700 or connected to device 700.

In one embodiment, a user interacts with device 700 by providing audio commands that are received and processed by processor 710.

Display subsystem 730 represents the hardware (such as a display) and software (such as driver) components that provide a display having visual, tactile or both elements for the user to interact with the computing device. Display subsystem 730 includes a display interface 732, which has a screen or hardware device utilized to provide a display to the user. In one embodiment, display interface 732 includes logic, separate from processor 710, that performs at least some processing associated with the display. In one embodiment, display subsystem 730 includes a touch screen device that provides both output and input to the user.

I/O controller 740 represents the hardware devices and software components associated with interaction with the user. I/O controller 740 is operable to manage hardware that is part of audio subsystem 720, display subsystem 730, or both. Additionally, I/O controller 740 provides connection points for additional devices that attach to device 700, through which a user may interact with the system. For example, devices that can be attached to device 700 may include microphones, speakers or stereo systems, video systems or other display devices, keyboards or keypads, or other I/O devices for use with specific applications, such as card readers.

As mentioned above, I/O controller 740 may interact with audio subsystem 720, display subsystem 730, or both. For example, input via a microphone or other audio device may provide input or commands for one or more applications or functions of device 700. In another example, if the display subsystem includes a touch screen, the display device also operates as an input device that is at least partially managed by I/O controller 740. There may also be additional buttons or switches on device 700 to provide I/O functions managed by I/O controller 740.

In one embodiment, I/O controller 740 manages devices such as accelerometers, cameras, light sensors or other environmental sensors, or other hardware that may be included in device 700. The input can be part of direct user interaction, and can provide environmental input to the system that affects its operation (such as filtering for noise, adjusting the display for luminance detection, applying a camera flash, or other features).

In one embodiment, device 700 includes power management 750, which manages functions related to battery power usage, battery charging and power-saving operation.

In some embodiments, memory subsystem 760 includes memory devices for storing information in device 700. Processor 710 reads and writes data to the elements of memory subsystem 760. The memory may include non-volatile memory (whose state does not change if power to the memory is interrupted), volatile memory (whose state is indeterminate if power to the memory is interrupted), or both. Memory 760 may store application data, user data, music, photos, documents or other data, as well as system data (long-term or temporary) associated with the execution of the applications and functions of device 700.

In some embodiments, memory subsystem 760 includes a stacked memory device 762 having one or more memory die layers and a system element. In some embodiments, each memory die layer or other memory element of stacked memory device 762 has a boundary scan chain 764,
as shown in FIG. 2, for testing of the memory I/O cells.

Connectivity 770 includes hardware devices (such as connectors and communication hardware for wireless communication, wired communication, or both) and software components (such as drivers and protocol stacks) that enable device 700 to communicate with external devices. External devices may be peripheral devices such as headsets or printers, as well as other devices such as other computing devices, wireless access points or base stations.

Connectivity 770 can include several different types of connectivity. For generality, device 700 is shown with cellular connectivity 772 and wireless connectivity 774. Cellular connectivity 772 generally represents cellular network connectivity provided by a wireless carrier, such as via 4G/LTE (Long Term Evolution), GSM (Global System for Mobile communication) or variants or derivatives, CDMA (Code Division Multiple Access) or variants or derivatives, TDM (Time Division Multiplexing) or variants or derivatives, or other cellular service standards. Wireless connectivity 774 represents non-cellular wireless connectivity, and can include personal area networks (such as Bluetooth (registered trademark)), local area networks (such as WiFi), wide area networks (such as WiMax), and other wireless communication. The connectivity may include one or more omnidirectional or directional antennas 776.

Peripheral connections 780 include software components (drivers, protocol stacks, etc.) as well as hardware interfaces and connectors for making peripheral connections. It will be appreciated that device 700 can both be a peripheral device ("to" 782) to other computing devices and have peripheral devices ("from" 784) connected to it. Device 700 typically has a "docking" connector for connecting to other computing devices, for purposes such as managing content on device 700 (such as downloading, uploading, changing or synchronizing content). In addition, a docking connector allows device 700 to connect to peripherals that allow, for example, control of content output to audiovisual or other systems.

In addition to a dedicated docking connector or other dedicated connection hardware, device 700 can make peripheral connections 780 via common or standards-based connectors. Common types include Universal Serial Bus (USB) connectors (which can include any of a number of different hardware interfaces), DisplayPort including MiniDisplayPort (MDP), High-Definition Multimedia Interface (HDMI), Firewire, or other types.

FIG. 8 illustrates one embodiment of a computing system that includes boundary scan chains for testing stacked memories. The computing system may include a computer, server, game console or other computing device. In this figure, certain standard and well-known components not pertinent to the present disclosure are not shown. In some embodiments, computing system 800 comprises an interconnect or crossbar 805 or other communication means for transmitting data. Computing system 800 may include processing means, such as one or more processors 810 coupled to the interconnect 805, for processing information. Processors 810 may comprise one or more physical processors and one or more logical processors. The interconnect 805 is shown as a single interconnect for simplicity, but may represent multiple different interconnects or buses, and the connections of components to such interconnects may vary.
The interconnect 805 shown in FIG. 8 is an abstraction representing one or more separate physical buses, point-to-point connections, or both, connected by appropriate bridges, adapters or controllers.

In some embodiments, computing system 800 further comprises a random access memory (RAM) or other dynamic storage device or element as a main memory 812 for storing information and instructions to be executed by processors 810. RAM includes DRAM (Dynamic RAM), which requires refreshing of memory contents, and SRAM (Static RAM), which does not require refreshing of contents but at increased cost. In some embodiments, main memory may include active storage of applications, including a browser application used in network browsing activities by a user of the computing system. DRAM may include SDRAM (Synchronous DRAM), which uses a clock signal to control its signals, and EDO (Extended Data-Out) DRAM. In some embodiments, the memory of the system may include certain registers or other special-purpose memory.

In some embodiments, main memory 812 includes stacked memory 814, in which each memory die layer or other memory element of the stacked memory device has a boundary scan chain 815 for testing of the memory I/O cells, such as illustrated in FIG. 2.

Computing system 800 may also comprise a read only memory (ROM) 816 or other static storage device for storing static information and instructions for processors 810. Computing system 800 may have one or more non-volatile memory elements 818 for the storage of certain elements.

In some embodiments, computing system 800 includes one or more input devices 830, which may include one or more of a keyboard, mouse, touch pad, voice command recognition, gesture recognition, or any other device for providing input to the computing system.

Computing system 800 may also be coupled via the interconnect 805 to an output display 840. In some embodiments, display 840 may include an LCD (Liquid Crystal Display) or any other display technology for displaying information or content to a user. In some environments, display 840 may include a touch screen that is also utilized as at least a part of an input device. In some environments, display 840 may be or may include an audio device, such as a speaker for providing audio information.

One or more transmitters or receivers 845 may also be coupled to the interconnect 805. In some embodiments, computing system 800 may include one or more ports 850 for sending and receiving data. Computing system 800 may further include one or more omnidirectional or directional antennas for the reception of data via wireless signals.

Computing system 800 may also comprise a power device or system 860, which may comprise a power supply, a battery, a solar cell, a fuel cell, or other system or device for providing or generating power. The power provided by the power device or system 860 may be distributed as required to the elements of computing system 800.

In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form. There may be intermediate structures between the illustrated components.
The components described or illustrated herein may have additional inputs or outputs that are not described or illustrated.

Various embodiments may include various processes. These processes may be performed by hardware components, or may be embodied in computer-program or machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor, or logic circuits programmed with the instructions, to perform the processes. Alternatively, the processes may be performed by a combination of hardware and software.

Portions of various embodiments may be provided as a computer program product, which may include a computer-readable medium having stored thereon computer program instructions, which may be used to program a computer (or other electronic device) for execution by one or more processors to perform a process according to certain embodiments. The computer-readable medium may include, but is not limited to, floppy diskettes, optical disks, compact disk read-only memories (CD-ROMs), magneto-optical disks, ROMs, RAMs, erasable programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), magnetic or optical cards, flash memory, or any other type of computer-readable medium suitable for storing electronic instructions. In addition, embodiments may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer.

Many of the methods are described in their most basic form, but processes can be added to or deleted from any of the methods, and information can be added to or subtracted from any of the described messages, without departing from the basic scope of the present invention. It will be apparent to those skilled in the art that many further modifications and adaptations can be made. The particular embodiments are not provided to limit the invention but to illustrate it. The scope of the embodiments of the present invention is not to be determined by the specific examples provided above, but only by the claims below.

If it is said that an element "A" is coupled to element "B", element A may be coupled directly to element B or be coupled indirectly through, for example, element C. When the specification or claims state that a component, feature, structure, process or characteristic A "causes" a component, feature, structure, process or characteristic B, it means that "A" is at least a partial cause of "B", but that there may also be at least one other component, feature, structure, process or characteristic that assists in causing "B". If the specification indicates that a component, feature, structure, process or characteristic "may", "might" or "could" be included, that particular component, feature, structure, process or characteristic is not required to be included. If the specification or claims refer to "a" or "an" element, this does not mean there is only one of the described elements.

An embodiment is an implementation or example of the present invention. Reference in the specification to "an embodiment", "one embodiment", "some embodiments" or "other embodiments" means that a particular feature, structure or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of "an embodiment", "one embodiment" or "some embodiments" are not necessarily all referring to the same embodiments.
It should be appreciated that in the foregoing description of embodiments of the present invention, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims are hereby expressly incorporated into this description, with each claim standing on its own as a separate embodiment of the present invention.

Reference numerals: 100 stacked memory device; 200 scan chain |
Some novel features pertain to a substrate that includes a first dielectric layer, a first interconnect, a first cavity, and a first electroless metal layer. The first dielectric layer includes a first surface and a second surface. The first interconnect is on the first surface of the first dielectric layer. The first cavity traverses the first surface of the first dielectric layer. The first electroless metal layer is formed at least partially in the first cavity. The first electroless metal layer defines a second interconnect embedded in the first dielectric layer. In some implementations, the substrate further includes a core layer. The core layer includes a first surface and a second surface. The first surface of the core layer is coupled to the second surface of the first dielectric layer. In some implementations, the substrate further includes a second dielectric layer. |
CLAIMS

WHAT IS CLAIMED IS:

1. A substrate comprising: a first dielectric layer comprising a first surface and a second surface; a first interconnect on the first surface of the first dielectric layer; a first cavity traversing the first surface of the first dielectric layer; and a first electroless metal layer formed at least partially in the first cavity, wherein the first electroless metal layer defines a second interconnect embedded in the first dielectric layer.

2. The substrate of claim 1, further comprising: a second cavity traversing the first surface of the first dielectric layer; and a second electroless metal layer formed at least partially in the second cavity, wherein the second electroless metal layer defines a third interconnect embedded in the first dielectric layer.

3. The substrate of claim 1, further comprising: a first pad on the first surface of the first dielectric layer; a first via traversing the first dielectric layer, the first via coupled to the first pad; and a second pad embedded in the first dielectric layer, the second pad embedded through the second surface of the first dielectric layer, wherein the second pad is coupled to the first via.

4. The substrate of claim 1, further comprising a core layer comprising a first surface and a second surface, wherein the first surface of the core layer is coupled to the second surface of the first dielectric layer.

5. The substrate of claim 4, wherein the core layer comprises a first via.

6. The substrate of claim 4, further comprising a second dielectric layer comprising a first surface and a second surface, wherein the first surface of the second dielectric layer is coupled to the second surface of the core layer.

7. The substrate of claim 1, further comprising: a third interconnect embedded in the first surface of the first dielectric layer, the third interconnect comprising an electroless metal layer; and a first pad on the first surface of the first dielectric layer, the first pad coupled to the third interconnect.

8. The substrate of claim 1, further comprising a resist layer on the first dielectric layer.

9. The substrate of claim 1, wherein the substrate is one of at least a package substrate and/or an interposer.

10. The substrate of claim 1, wherein the substrate is incorporated into one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer.

11. An apparatus comprising: a first dielectric layer comprising a first surface and a second surface; a first interconnect means on the first surface of the first dielectric layer; a first cavity traversing the first surface of the first dielectric layer; and a first electroless interconnect means formed at least partially in the first cavity.

12. The apparatus of claim 11, further comprising: a second cavity traversing the first surface of the first dielectric layer; and a second electroless interconnect means formed at least partially in the second cavity.
13. The apparatus of claim 11, further comprising: a first pad on the first surface of the first dielectric layer; a first vertical interconnect means traversing the first dielectric layer, the first vertical interconnect means coupled to the first pad; and a second pad embedded in the first dielectric layer, the second pad embedded through the second surface of the first dielectric layer, wherein the second pad is coupled to the first vertical interconnect means.

14. The apparatus of claim 11, further comprising a core layer comprising a first surface and a second surface, wherein the first surface of the core layer is coupled to the second surface of the first dielectric layer.

15. The apparatus of claim 14, wherein the core layer comprises a first vertical interconnect means.

16. The apparatus of claim 14, further comprising a second dielectric layer comprising a first surface and a second surface, wherein the first surface of the second dielectric layer is coupled to the second surface of the core layer.

17. The apparatus of claim 11, further comprising: a third electroless interconnect means embedded in the first surface of the first dielectric layer; and a first pad on the first surface of the first dielectric layer, the first pad coupled to the third electroless interconnect means.

18. The apparatus of claim 11, further comprising a resist layer on the first dielectric layer.

19. The apparatus of claim 11, wherein the apparatus is one of at least a substrate and/or an interposer.

20. The apparatus of claim 11, wherein the apparatus is incorporated into one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer.

21. A method for fabricating a substrate, comprising: forming a first dielectric layer comprising a first surface and a second surface; forming a first interconnect on the first surface of the first dielectric layer; forming a first cavity that traverses the first surface of the first dielectric layer; and forming a first electroless metal at least partially in the first cavity, wherein the first electroless metal defines a second interconnect embedded in the first dielectric layer.

22. The method of claim 21, further comprising: forming a second cavity that traverses the first surface of the first dielectric layer; and forming a second electroless metal at least partially in the second cavity, wherein the second electroless metal defines a third interconnect embedded in the first dielectric layer.

23. The method of claim 21, further comprising: forming a first pad on the first surface of the first dielectric layer; forming a first via that traverses the first dielectric layer, the first via coupled to the first pad; and forming a second pad embedded in the first dielectric layer, the second pad embedded through the second surface of the first dielectric layer, wherein the second pad is coupled to the first via.

24. The method of claim 21, further comprising forming a core layer comprising a first surface and a second surface, wherein the first surface of the core layer is formed on the second surface of the first dielectric layer.

25. The method of claim 24, wherein the core layer comprises a first via.
26. The method of claim 24, further comprising forming a second dielectric layer comprising a first surface and a second surface, wherein the first surface of the second dielectric layer is formed on the second surface of the core layer.

27. The method of claim 21, further comprising: forming a third interconnect embedded in the first surface of the first dielectric layer, the third interconnect comprising an electroless metal layer; and forming a first pad on the first surface of the first dielectric layer, the first pad coupled to the third interconnect.

28. The method of claim 21, further comprising forming a resist layer on the first dielectric layer.

29. The method of claim 21, wherein the substrate is one of at least a package substrate and/or an interposer.

30. The method of claim 21, wherein the substrate is incorporated into one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer. |
PACKAGE SUBSTRATE COMPRISING SURFACE INTERCONNECT AND CAVITY COMPRISING ELECTROLESS FILL

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims priority to and the benefit of U.S. Non-Provisional Patent Application No. 14/251,486, filed in the United States Patent and Trademark Office on April 11, 2014, the entire content of which is incorporated herein by reference.

BACKGROUND

Field

[0002] Various features relate to a package substrate comprising a surface interconnect and a trench comprising electroless fill.

Background

[0003] FIG. 1 illustrates a conventional integrated package 100 that includes a substrate 102, a set of interconnects 104, a first die 106, a second die 108, a first set of die to package interconnects 116, a second set of die to package interconnects 118, and a third set of solder balls 120. The third set of solder balls 120 is for a substrate to motherboard interconnect. The first set of die to package interconnects 116 and/or the second set of die to package interconnects 118 may be solder balls. The set of interconnects 104 includes traces, which are located inside the substrate 102. The first die 106 is coupled to the substrate 102 through the first set of interconnects 116. The second die 108 is coupled to the substrate 102 through the second set of interconnects 118. The third set of solder balls 120 is coupled to the substrate 102. The first die 106 and the second die 108 are coupled to the third set of solder balls 120 through the set of interconnects 104 in the substrate 102. Typically, the third set of solder balls 120 is coupled to a printed circuit board (PCB) (not shown).

[0004] Conventional integrated packages, such as the one described in FIG. 1, have certain limitations and downsides. For example, conventional integrated packages are limited in routing density and can be costly to fabricate. There is a need to provide integrated devices that are cheaper to produce, as well as having better (e.g., higher) routing density characteristics. Therefore, there is a need for a cost-effective integrated package that has a low profile and takes up as little real estate as possible. Ideally, such an integrated package will also provide higher density connections with the dies.

SUMMARY

[0005] Various features, apparatus and methods described herein provide a package substrate.

[0006] A first example provides a substrate that includes a first dielectric layer, a first interconnect, a first cavity, and a first electroless metal layer. The first dielectric layer includes a first surface and a second surface. The first interconnect is on the first surface of the first dielectric layer. The first cavity traverses the first surface of the first dielectric layer. The first electroless metal layer is formed in the first cavity.
The first electroless metal layer defines a second interconnect embedded in the first dielectric layer.

[0007] According to an aspect, the substrate includes a second cavity traversing the first surface of the first dielectric layer, and a second electroless metal layer formed in the second cavity, where the second electroless metal layer defines a third interconnect embedded in the first dielectric layer.

[0008] According to one aspect, the substrate includes a first pad on the first surface of the first dielectric layer, a first via traversing the first dielectric layer, the first via coupled to the first pad, and a second pad embedded in the first dielectric layer, where the second pad is embedded through the second surface of the first dielectric layer and is coupled to the first via.

[0009] According to an aspect, the substrate includes a core layer comprising a first surface and a second surface, where the first surface of the core layer is coupled to the second surface of the first dielectric layer. In some implementations, the core layer includes a first via. In some implementations, the substrate includes a second dielectric layer comprising a first surface and a second surface, where the first surface of the second dielectric layer is coupled to the second surface of the core layer.

[0010] According to one aspect, the substrate includes a third interconnect embedded in the first surface of the first dielectric layer, where the third interconnect comprises an electroless metal layer, and a first pad on the first surface of the first dielectric layer, where the first pad is coupled to the third interconnect.

[0011] According to an aspect, the substrate includes a resist layer on the first dielectric layer.

[0012] According to one aspect, the substrate is one of at least a package substrate and/or an interposer.

[0013] According to an aspect, the substrate is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer.

[0014] A second example provides an apparatus that includes a first dielectric layer comprising a first surface and a second surface, a first interconnect means on the first surface of the first dielectric layer, a first cavity traversing the first surface of the first dielectric layer, and a first electroless interconnect means formed at least partially in the first cavity.

[0015] According to an aspect, the apparatus includes a second cavity traversing the first surface of the first dielectric layer, and a second electroless interconnect means formed at least partially in the second cavity.

[0016] According to one aspect, the apparatus includes a first pad on the first surface of the first dielectric layer, a first vertical interconnect means traversing the first dielectric layer, the first vertical interconnect means coupled to the first pad, and a second pad embedded in the first dielectric layer, where the second pad is embedded through the second surface of the first dielectric layer and is coupled to the first vertical interconnect means.

[0017] According to an aspect, the apparatus includes a core layer comprising a first surface and a second surface, where the first surface of the core layer is coupled to the second surface of the first dielectric layer.
In some implementations, the core layer includes a first vertical interconnect means. In some implementations, the apparatus includes a second dielectric layer comprising a first surface and a second surface, where the first surface of the second dielectric layer is coupled to the second surface of the core layer.

[0018] According to one aspect, the apparatus includes a third electroless interconnect means embedded in the first surface of the first dielectric layer, and a first pad on the first surface of the first dielectric layer, the first pad coupled to the third electroless interconnect means.

[0019] According to an aspect, the apparatus includes a resist layer on the first dielectric layer.

[0020] According to one aspect, the apparatus is one of at least a substrate and/or an interposer.

[0021] According to an aspect, the apparatus is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer.

[0022] A third example provides a method for fabricating a substrate. The method forms a first dielectric layer comprising a first surface and a second surface. The method forms a first interconnect on the first surface of the first dielectric layer. The method forms a first cavity that traverses the first surface of the first dielectric layer. The method forms a first electroless metal at least partially in the first cavity, where the first electroless metal defines a second interconnect embedded in the first dielectric layer.

[0023] According to an aspect, the method forms a second cavity that traverses the first surface of the first dielectric layer. The method forms a second electroless metal at least partially in the second cavity, where the second electroless metal defines a third interconnect embedded in the first dielectric layer.

[0024] According to one aspect, the method forms a first pad on the first surface of the first dielectric layer. The method forms a first via that traverses the first dielectric layer, the first via coupled to the first pad. The method forms a second pad embedded in the first dielectric layer. The second pad is embedded through the second surface of the first dielectric layer, where the second pad is coupled to the first via.

[0025] According to an aspect, the method forms a core layer comprising a first surface and a second surface, wherein the first surface of the core layer is formed on the second surface of the first dielectric layer. In some implementations, the core layer comprises a first via. In some implementations, the method forms a second dielectric layer comprising a first surface and a second surface, wherein the first surface of the second dielectric layer is formed on the second surface of the core layer.

[0026] According to one aspect, the method forms a third interconnect embedded in the first surface of the first dielectric layer, the third interconnect comprising an electroless metal layer. The method forms a first pad on the first surface of the first dielectric layer, where the first pad is coupled to the third interconnect.
[0027] According to an aspect, the method forms a resist layer on the first dielectric layer.

[0028] According to one aspect, the substrate is one of at least a package substrate and/or an interposer.

[0029] According to an aspect, the substrate is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer.

DRAWINGS

[0030] Various features, nature and advantages may become apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout.

[0031] FIG. 1 illustrates a profile view of a conventional integrated device.

[0032] FIG. 2 illustrates an example of a core of a package substrate.

[0033] FIG. 3 illustrates an example of a coreless substrate that includes an embedded trench with selective electroless copper fill in the trench and semi-additive process formed traces on the surface of the dielectric layer.

[0034] FIG. 4 illustrates an example of a cored substrate that includes an embedded trench with selective electroless copper fill in the trench and semi-additive process formed traces on the surface of the dielectric layer.

[0035] FIG. 5 illustrates an example of a coreless substrate that includes an embedded trench with selective electroless copper fill in the trench and semi-additive process formed traces on the surface of the dielectric layer.

[0036] FIG. 6 illustrates an example of a cored substrate that includes an embedded trench with selective electroless copper fill in the trench and semi-additive process formed traces on the surface of the dielectric layer.

[0037] FIG. 7 illustrates an example of a plan view of a substrate that includes an embedded trench with selective electroless copper fill in the trench and semi-additive process formed traces on the surface of the dielectric layer.

[0038] FIG. 8 (comprising FIG. 8A, FIG. 8B, and FIG. 8C) illustrates an exemplary sequence for providing / fabricating a substrate that includes an embedded trench with selective electroless copper fill in the trench and semi-additive process formed traces on the surface of the dielectric layer.

[0039] FIG. 9 illustrates a flow diagram of a method for providing / fabricating a substrate that includes an embedded trench with selective electroless copper fill in the trench and semi-additive process formed traces on the surface of the dielectric layer.

[0040] FIG. 10 (comprising FIG. 10A and FIG. 10B) illustrates an exemplary sequence for providing / fabricating a substrate that includes an electroless metal layer.

[0041] FIG. 11 illustrates a flow diagram of a method for providing / fabricating a substrate that includes an electroless metal layer.

[0042] FIG. 12 illustrates a flow diagram of a method for providing / fabricating an interconnect using a semi-additive patterning (SAP) process.

[0043] FIG. 13 illustrates a sequence for providing / fabricating an interconnect using a semi-additive patterning (SAP) process.

[0044] FIG. 14 illustrates another example of a coreless substrate that includes an embedded trench with selective electroless copper fill in the trench and semi-additive process formed traces on the surface of the dielectric layer.
[0045] FIG. 15 illustrates another example of a cored substrate that includes an embedded trench with selective electroless copper fill in the trench and semi-additive process formed traces on the surface of the dielectric layer.

[0046] FIG. 16 illustrates various electronic devices that may integrate a semiconductor device, a die, a package substrate, an integrated circuit and/or a PCB described herein.

DETAILED DESCRIPTION

[0047] In the following description, specific details are given to provide a thorough understanding of the various aspects of the disclosure. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For example, circuits may be shown in block diagrams in order to avoid obscuring the aspects in unnecessary detail. In other instances, well-known circuits, structures and techniques may not be shown in detail in order not to obscure the aspects of the disclosure.

Overview

[0048] Some novel features pertain to a substrate that includes a first dielectric layer, a first interconnect, a first cavity, and a first electroless metal layer. The first dielectric layer includes a first surface and a second surface. The first interconnect is on the first surface of the first dielectric layer. The first cavity traverses the first surface of the first dielectric layer. The first electroless metal layer is selectively formed on the surface of the first dielectric layer, including in at least the first cavity of the first dielectric layer. In some implementations, a second metal layer is selectively formed on portions of the first electroless metal layer. In some implementations, the second metal layer is selectively formed using a semi-additive patterning (SAP) process. In some implementations, the first electroless metal layer formed in the first cavity defines an embedded high-density interconnect. In some implementations, the first electroless metal layer and/or the second metal layer defines an interconnect (e.g., trace, pad) on the surface of the first dielectric layer. In some implementations, the package substrate includes a core layer coupled to the first dielectric layer. In some implementations, the core layer includes a set of interconnects.

Exemplary Package Substrate That Includes an Electroless Metal Layer

[0049] FIG. 3 conceptually illustrates an example of a package substrate that includes surface interconnects and a cavity that includes an electroless fill. Specifically, FIG. 3 illustrates a package substrate 300 that includes a first dielectric layer 302, a first pad 304, a via 306, a second pad 308, a first interconnect 310, a second interconnect 312, a first cavity 320, and a third interconnect 322.

[0050] The first dielectric layer 302 has a first surface (e.g., top surface) and a second surface (e.g., bottom surface). The first surface is opposite to the second surface. Different implementations may use different materials for the first dielectric layer 302. In some implementations, the first dielectric layer 302 may be a filled epoxy.

[0051] The first pad 304 is located on the first surface of the first dielectric layer 302. The via 306 traverses the first dielectric layer 302. The first pad 304 is coupled to a first portion (e.g., top portion, top surface) of the via 306. The second pad 308 is embedded in the second surface of the first dielectric layer 302. The second pad 308 is coupled to a second portion (e.g., bottom portion, bottom surface) of the via 306.
Different implementations may use different materials for the first pad 304, the via 306, and/or the second pad 308. In some implementations, the first pad 304, the via 306, and the second pad 308 include a metal layer (e.g., copper layer).

[0052] The first interconnect 310 is on the first surface of the first dielectric layer 302. In some implementations, the first interconnect 310 is a trace on the first surface of the first dielectric layer 302. The second interconnect 312 is embedded in the second surface of the first dielectric layer 302. In some implementations, the second interconnect 312 is a trace embedded in the second surface of the first dielectric layer 302. Different implementations may use different materials for the first and second interconnects 310 and 312. In some implementations, the first and second interconnects 310 and 312 include a metal layer (e.g., copper layer).

[0053] FIG. 3 also illustrates that the cavity 320 traverses the first surface of the first dielectric layer 302. Different implementations may use different processes for fabricating the cavity 320 in the first dielectric layer 302. In some implementations, the cavity 320 partially traverses the first dielectric layer 302 through the first surface of the first dielectric layer 302. In some implementations, the cavity 320 is at least partially filled with the third interconnect 322. In some implementations, the third interconnect 322 is a trace that is made of an electroless fill. In some implementations, the electroless fill is an electroless metal layer (e.g., electroless copper layer).

[0054] In some implementations, the third interconnect 322 is a high-density and/or fine-pitch interconnect that electrically couples two dies on the package substrate. An example of interconnects that may electrically couple two dies is further described in FIG. 7. In some implementations, the spacing between two adjacent interconnects 322 (e.g., traces, electroless fill interconnects in the trench) is about 5 microns (μm) or less. In some implementations, the spacing between two adjacent interconnects (e.g., traces) is about 3 microns (μm) or less.

[0055] In some implementations, the third interconnect 322 is made of a different material than the first interconnect 310 and/or the second interconnect 312. For example, the third interconnect 322 includes an electroless metal layer, while the first interconnect 310 and/or the second interconnect 312 includes a metal layer.

[0056] FIG. 3 illustrates a package substrate without a core layer. However, in some implementations, a package substrate may include a core layer.

[0057] FIG. 4 conceptually illustrates an example of a package substrate that includes a core layer, surface interconnects and a cavity that includes an electroless fill. Specifically, FIG. 4 illustrates a package substrate 400 that includes a core layer 402, a first dielectric layer 404, a second dielectric layer 406, a first pad 410, a first via 412, a second pad 414, a second via 416, a third pad 418, a third via 420, and a fourth pad 422. The package substrate 400 also includes a first interconnect 424, a cavity 430, and a second interconnect 432.

[0058] The core layer 402 has a first surface (e.g., top surface) and a second surface (e.g., bottom surface). The first surface is opposite to the second surface. Different implementations may use different materials for the core layer 402. In some implementations, the core layer 402 may include at least one dielectric layer.
[0057] FIG. 4 conceptually illustrates an example of a package substrate that includes a core layer, surface interconnects and a cavity that includes an electroless fill. Specifically, FIG. 4 illustrates a package substrate 400 that includes a core layer 402, a first dielectric layer 404, a second dielectric layer 406, a first pad 410, a first via 412, a second pad 414, a second via 416, a third pad 418, a third via 420, and a fourth pad 422. The package substrate 400 also includes a first interconnect 424, a cavity 430, and a second interconnect 432. [0058] The core layer 402 has a first surface (e.g., top surface) and a second surface (e.g., bottom surface). The first surface is opposite to the second surface. Different implementations may use different materials for the core layer 402. In some implementations, the core layer 402 may be made of at least one dielectric layer. The first dielectric layer 404 is coupled to the first surface of the core layer 402. The second dielectric layer 406 is coupled to the second surface of the core layer 402. In some implementations, the first dielectric layer 404 and the second dielectric layer 406 are prepreg dielectric layers.[0059] The first pad 410 is located on a first surface (e.g., top surface) of the first dielectric layer 404. The first via 412 traverses the first dielectric layer 404. The first pad 410 is coupled to a first portion (e.g., top portion, top surface) of the first via 412. The second pad 414 is embedded in a second surface (e.g., bottom surface) of the first dielectric layer 404. The second pad 414 is coupled to a second portion (e.g., bottom portion, bottom surface) of the first via 412.[0060] The second via 416 traverses the core layer 402. The second pad 414 is coupled to a first portion (e.g., top portion, top surface) of the second via 416. The second pad 414 is on the first surface of the core layer 402. The third pad 418 is coupled to a second portion (e.g., bottom portion, bottom surface) of the second via 416.[0061] The third pad 418 is on the second surface (e.g., bottom surface) of the core layer 402. The third pad 418 is embedded in a first surface of the second dielectric layer 406. The third via 420 traverses the second dielectric layer 406. The third pad 418 is coupled to a first portion (e.g., top portion, top surface) of the third via 420. The fourth pad 422 is on a second surface (e.g., bottom surface) of the second dielectric layer 406. The fourth pad 422 is coupled to a second portion (e.g., bottom portion, bottom surface) of the third via 420.[0062] Different implementations may use different materials for the first pad 410, the first via 412, the second pad 414, the second via 416, the third pad 418, the third via 420, and the fourth pad 422. In some implementations, the first pad 410, the first via 412, the second pad 414, the second via 416, the third pad 418, the third via 420, and the fourth pad 422 include a metal layer (e.g., copper layer).[0063] The first interconnect 424 is on the first surface of the first dielectric layer 404. In some implementations, the first interconnect 424 is a trace on the first surface of the first dielectric layer 404. Different implementations may use different materials for the first interconnect 424. In some implementations, the first interconnect 424 includes a metal layer (e.g., copper layer).[0064] FIG. 4 also illustrates that the cavity 430 traverses the first surface of the first dielectric layer 404. Different implementations may use different processes for fabricating the cavity 430 in the first dielectric layer 404. In some implementations, the cavity 430 partially traverses the first dielectric layer 404 through the first surface of the first dielectric layer 404. In some implementations, the cavity 430 is at least partially filled with the second interconnect 432. In some implementations, the second interconnect 432 is a trace that is made of an electroless fill. In some implementations, the electroless fill is an electroless metal layer (e.g., electroless copper layer).[0065] In some implementations, the second interconnect 432 is a high density and/or fine pitch interconnect that electrically couples two dies on the package substrate. An example of interconnects that may electrically couple two dies is further described in FIG. 7.
In some implementations, the spacing between two adjacent interconnects 432 (e.g., traces, electroless fill interconnect in trench) is about 5 microns (μm) or less. In some implementations, the spacing between two adjacent interconnects (e.g., traces) is about 3 microns (μm) or less.[0066] In some implementations, the second interconnect 432 is made of a different material than the first interconnect 424. For example, the second interconnect 432 includes an electroless metal layer, and the first interconnect 424 includes a metal layer.[0067] FIGS. 3-4 illustrate exemplary high level package substrates of some implementations. FIGS. 5-6 illustrate exemplary package substrates with more details. In some implementations, the package substrates of FIGS. 5-6 are similar to the package substrates of FIGS. 3-4, except that FIGS. 5-6 have more detail.[0068] FIG. 5 conceptually illustrates an example of a package substrate that includes surface interconnects and a cavity that includes an electroless fill. Specifically, FIG. 5 illustrates a package substrate 500 that includes a first dielectric layer 502, a first pad 504, a via 506, a second pad 508, a first interconnect 510, a second interconnect 512, a first cavity 520, and a third interconnect 522. The first dielectric layer 502 has a first surface (e.g., top surface) and a second surface (e.g., bottom surface). The first surface is opposite to the second surface. Different implementations may use different materials for the first dielectric layer 502. In some implementations, the first dielectric layer 502 may be a substrate. [0069] The first pad 504 is located on the first surface of the first dielectric layer 502. In some implementations, the first pad 504 includes a first metal layer 503 and a second metal layer 505. In some implementations, the first metal layer 503 is a seed layer. In some implementations, the first metal layer 503 is an electroless fill layer (e.g., electroless metal layer). The via 506 traverses the first dielectric layer 502. In some implementations, the via 506 includes a first metal layer 507 and a second metal layer 509. In some implementations, the first metal layer 507 is a seed layer. In some implementations, the first metal layer 507 is an electroless fill layer (e.g., electroless metal layer). In some implementations, the first metal layer 507 may also be formed on the side walls of the via 506.[0070] The first pad 504 is coupled to a first portion (e.g., top portion, top surface) of the via 506. The second pad 508 is embedded in the second surface of the first dielectric layer 502. The second pad 508 is coupled to a second portion (e.g., bottom portion, bottom surface) of the via 506. Different implementations may use different materials for the first pad 504, the via 506, and the second pad 508. In some implementations, the first pad 504, the via 506, and the second pad 508 include a metal layer (e.g., copper layer).[0071] The first interconnect 510 is on the first surface of the first dielectric layer 502. In some implementations, the first interconnect 510 is a trace on the first surface of the first dielectric layer 502. In some implementations, the first interconnect 510 includes a first metal layer 511 and a second metal layer 513. In some implementations, the first metal layer 511 is a seed layer.
In some implementations, the first metal layer 511 is an electroless fill layer (e.g., electroless metal layer).[0072] The second interconnect 512 is embedded in the second surface of the first dielectric layer 502. In some implementations, the second interconnect 512 is a trace embedded in the second surface of the first dielectric layer 502. Different implementations may use different materials for the first and second interconnects 510 and 512. In some implementations, the first and second interconnects 510 and 512 include a metal layer (e.g., copper layer).[0073] FIG. 5 also illustrates that the cavity 520 traverses the first surface of the first dielectric layer 502. Different implementations may use different processes for fabricating the cavity 520 in the first dielectric layer 502. In some implementations, the cavity 520 partially traverses the first dielectric layer 502 through the first surface of the first dielectric layer 502. In some implementations, the cavity 520 is at least partially filled with the third interconnect 522. In some implementations, the third interconnect 522 is a trace that is made of an electroless fill. In some implementations, the electroless fill is an electroless metal layer (e.g., electroless copper layer).[0074] In some implementations, the third interconnect 522 is a high density and/or fine pitch interconnect that electrically couples two dies on the package substrate. An example of interconnects that may electrically couple two dies is further described in FIG. 7. In some implementations, the spacing between two adjacent interconnects 522 (e.g., traces) is about 5 microns (μm) or less. In some implementations, the spacing between two adjacent interconnects (e.g., traces) is about 3 microns (μm) or less.[0075] In some implementations, the third interconnect 522 is made of a different material than the first interconnect 510 and/or the second interconnect 512. For example, the third interconnect 522 includes an electroless metal layer, and the first interconnect 510 and/or the second interconnect 512 includes a metal layer.[0076] FIG. 5 illustrates a package substrate without a core layer (e.g., coreless package substrate). However, in some implementations, a package substrate may include a core layer (e.g., cored package substrate).[0077] FIG. 6 conceptually illustrates an example of a package substrate that includes a core layer, surface interconnects and a cavity that includes an electroless fill. Specifically, FIG. 6 illustrates a package substrate 600 that includes a core layer 602, a first dielectric layer 604, a second dielectric layer 606, a first pad 610, a first via 612, a second pad 614, a second via 616, a third pad 618, a third via 620, and a fourth pad 622. The package substrate 600 also includes a first interconnect 624, a cavity 630, and a second interconnect 632.[0078] The core layer 602 has a first surface (e.g., top surface) and a second surface (e.g., bottom surface). The first surface is opposite to the second surface. Different implementations may use different materials for the core layer 602. In some implementations, the core layer 602 may be made of at least one dielectric layer. The first dielectric layer 604 is coupled to the first surface of the core layer 602. The second dielectric layer 606 is coupled to the second surface of the core layer 602.
In some implementations, the first dielectric layer 604 and the second dielectric layer 606 are prepreg dielectric layers.[0079] The first pad 610 is located on a first surface (e.g., top surface) of the first dielectric layer 604. In some implementations, the first pad 610 includes a first metal layer 611 and a second metal layer 613. In some implementations, the first metal layer 611 is a seed layer. In some implementations, the first metal layer 611 is an electroless fill layer (e.g., electroless metal layer). The first via 612 traverses the first dielectric layer 604. The first pad 610 is coupled to a first portion (e.g., top portion, top surface) of the first via 612. In some implementations, the first via 612 includes a first metal layer 615 and a second metal layer 617. In some implementations, the first metal layer 615 may also be formed on the side walls of the via 612. In some implementations, the first metal layer 615 is a seed layer. In some implementations, the first metal layer 615 is an electroless fill layer (e.g., electroless metal layer). The second pad 614 is embedded in a second surface (e.g., bottom surface) of the first dielectric layer 604. The second pad 614 is coupled to a second portion (e.g., bottom portion, bottom surface) of the first via 612.[0080] The second via 616 traverses the core layer 602. The second pad 614 is coupled to a first portion (e.g., top portion, top surface) of the second via 616. The second pad 614 is on the first surface of the core layer 602. The third pad 618 is coupled to a second portion (e.g., bottom portion, bottom surface) of the second via 616.[0081] The third pad 618 is on the second surface (e.g., bottom surface) of the core layer 602. The third pad 618 is embedded in a first surface of the second dielectric layer 606. The third via 620 traverses the second dielectric layer 606. The third pad 618 is coupled to a first portion (e.g., top portion, top surface) of the third via 620. The fourth pad 622 is on a second surface (e.g., bottom surface) of the second dielectric layer 606. The fourth pad 622 is coupled to a second portion (e.g., bottom portion, bottom surface) of the third via 620. In some implementations, the third via 620 includes a first metal layer 621 and a second metal layer 623. In some implementations, the first metal layer 621 may also be formed on the side walls of the via 620. In some implementations, the first metal layer 621 is a seed layer. In some implementations, the first metal layer 621 is an electroless fill layer (e.g., electroless metal layer).[0082] Different implementations may use different materials for the first pad 610, the first via 612, the second pad 614, the second via 616, the third pad 618, the third via 620, and the fourth pad 622. In some implementations, the first pad 610, the first via 612, the second pad 614, the second via 616, the third pad 618, the third via 620, and the fourth pad 622 include a metal layer (e.g., copper layer).[0083] The first interconnect 624 is on the first surface of the first dielectric layer 604. In some implementations, the first interconnect 624 is a trace on the first surface of the first dielectric layer 604. Different implementations may use different materials for the first interconnect 624. In some implementations, the first interconnect 624 includes a metal layer (e.g., copper layer). In some implementations, the first interconnect 624 includes a first metal layer 625 and a second metal layer 627.
In some implementations, the first metal layer 625 is a seed layer. In some implementations, the first metal layer 625 is an electroless fill layer (e.g., electroless metal layer).[0084] FIG. 6 also illustrates that the cavity 630 traverses the first surface of the first dielectric layer 604. Different implementations may use different processes for fabricating the cavity 630 in the first dielectric layer 604. In some implementations, the cavity 630 partially traverses the first dielectric layer 604 through the first surface of the first dielectric layer 604. In some implementations, the cavity 630 is at least partially filled with the second interconnect 632. In some implementations, the second interconnect 632 is a trace that is made of an electroless fill. In some implementations, the electroless fill is an electroless metal layer (e.g., electroless copper layer).[0085] In some implementations, the second interconnect 632 is a high density and/or fine pitch interconnect that electrically couples two dies on the package substrate. An example of interconnects that may electrically couple two dies is further described in FIG. 7. In some implementations, the spacing between two adjacent interconnects 632 (e.g., traces) is about 5 microns (μm) or less. In some implementations, the spacing between two adjacent interconnects (e.g., traces) is about 3 microns (μm) or less.[0086] In some implementations, the second interconnect 632 is made of a different material than the first interconnect 624. For example, the second interconnect 632 includes an electroless metal layer, and the first interconnect 624 includes a metal layer.[0087] FIGS. 3-6 illustrate packages without a solder resist layer. However, in some implementations, one or more solder resist layers may be selectively formed on the first surface (e.g., top surface) and/or the second surface (e.g., bottom surface) of the package. Several examples of packages with one or more solder resist layers are described in FIGS. 14-15.Exemplary Package Substrate That Includes An Electroless Metal Layer[0088] FIG. 7 illustrates an example of a plan view of a package substrate coupled to two dies. Specifically, FIG. 7 illustrates a package substrate 702, a first die 704, a second die 706, a set of interconnects 710, a first set of pads 714, a second set of pads 716, a third pad 724, and a fourth pad 726. In some implementations, the package substrate 702 is representative of at least one of the package substrates 300, 400, 500, and/or 600 of FIGS. 3, 4, 5, and/or 6. However, the package substrate 702 may represent other package substrates in the present disclosure.[0089] The set of interconnects 710 are embedded traces on the surface of the package substrate 702. In some implementations, the set of interconnects 710 are traces made of an electroless fill. In some implementations, the electroless fill is an electroless metal layer (e.g., electroless copper layer). In some implementations, the set of interconnects 710 may include at least one of the interconnects 322, 432, 522, and/or 632 from FIGS. 3, 4, 5, and/or 6. In some implementations, the set of interconnects 710 are located in a set of cavities in the package substrate 702. In some implementations, at least part of the set of interconnects 710 is covered with a solder resist layer. In some implementations, at least part of the package substrate 702 is covered with a solder resist layer. Examples of package substrates covered with one or more solder resist layers are further described in FIGS. 14-15.
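As a rough software analogy for the die-to-die connectivity of FIG. 7, the Python sketch below models the set of interconnects 710 as nets joining the first set of pads 714 (die 704 side) to the second set of pads 716 (die 706 side). The names, pad count, and dictionary layout are hypothetical and serve only to mirror the figure.

    # Hypothetical netlist model of FIG. 7: each embedded electroless trace
    # in the set of interconnects 710 couples one pad of the first die to
    # one pad of the second die through the package substrate 702.
    die_to_die_nets = [
        {"trace": f"interconnect_710_{i}",
         "die_704_pad": f"pad_714_{i}",
         "die_706_pad": f"pad_716_{i}"}
        for i in range(4)  # pad count chosen arbitrarily for illustration
    ]

    def coupled(nets, pad_a, pad_b):
        # True if some embedded trace couples the two named pads.
        return any(n["die_704_pad"] == pad_a and n["die_706_pad"] == pad_b
                   for n in nets)

    print(coupled(die_to_die_nets, "pad_714_0", "pad_716_0"))  # True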
[0090] In some implementations, the set of interconnects 710 are high density and/or fine pitch interconnects that electrically couple the first die 704 and the second die 706. In some implementations, the spacing between two adjacent interconnects (e.g., traces) from the set of interconnects 710 is about 5 microns (μm) or less. In some implementations, the spacing between two adjacent interconnects (e.g., traces) from the set of interconnects 710 is about 3 microns (μm) or less.[0091] The set of interconnects 710 is coupled to the first set of pads 714. The first set of pads 714 may be coupled to the first die 704. The set of interconnects 710 is coupled to the second set of pads 716. The second set of pads 716 may be coupled to the second die 706. The third pad 724 may be a via pad. The third pad 724 may be coupled to the first die 704. The fourth pad 726 may be a via pad. The fourth pad 726 may be coupled to the second die 706.Exemplary Sequence for Providing a Package Substrate That Includes An Electroless Metal Layer[0092] In some implementations, providing a package substrate that includes a cavity that includes an electroless fill includes several processes. FIG. 8 (which includes FIGS. 8A-8C) illustrates an exemplary sequence for providing a package substrate. In some implementations, the sequence of FIGS. 8A-8C may be used to provide / manufacture the package substrate of FIGS. 3 and/or 5, and/or other package substrates described in the present disclosure. [0093] It should be noted that the sequence of FIGS. 8A-8C may combine one or more stages in order to simplify and/or clarify the sequence for providing a package substrate.[0094] As shown in stage 1 of FIG. 8A, a core layer 800 is provided. In some implementations, the core layer 800 is a temporary core layer. In some implementations, providing the core layer 800 may include receiving a core layer from a supplier or fabricating a core layer. Different implementations may use different materials for the core layer. In some implementations, the core layer 800 is a dielectric layer. The core layer 800 includes a first metal layer 802 and a second metal layer 804. The first metal layer 802 is coupled to a first surface (e.g., top surface) of the core layer 800. The second metal layer 804 is coupled to a second surface (e.g., bottom surface) of the core layer 800. In some implementations, providing the core layer includes providing the first metal layer 802 and/or the second metal layer 804. In some implementations, providing the first metal layer 802 and/or the second metal layer 804 includes receiving the first metal layer 802 and/or the second metal layer 804 with the core layer 800 from a supplier or fabricating the first metal layer 802 and/or the second metal layer 804 on the core layer 800.[0095] At stage 2, a dry film resist (DFR) 806 is provided on the first metal layer 802. In some implementations, providing the DFR 806 includes forming (e.g., laminating) the DFR 806 on the first metal layer 802, and selectively removing the DFR 806 to define a pattern on the first metal layer 802. In some implementations, these patterns include one or more cavities (e.g., cavity 807) in the DFR 806. In some implementations, selectively removing the DFR 806 includes exposing the DFR 806, and developing the DFR 806 to form the pattern that includes one or more cavities.[0096] At stage 3, a third metal layer 808 is provided in the cavities (e.g., cavity 807) of the DFR 806.
Different implementations may provide the third metal layer 808 differently. In some implementations, the third metal layer 808 is formed in one or more cavities and on the first metal layer 802. In some implementations, the third metal layer 808 is provided using a metal plating process.[0097] At stage 4, the DFR 806 is removed. In some implementations, removing the DFR 806 includes stripping the DFR 806, leaving the third metal layer 808. Different implementations may use different processes for removing the DFR 806.[0098] At stage 5, as shown in FIG. 8B, a first dielectric layer 810 is provided on the first metal layer 802 (e.g., the first surface of the core layer 800). In some implementations, providing the first dielectric layer 810 includes forming (e.g., laminating) the first dielectric layer 810 on the first metal layer 802 of the core layer 800. In some implementations, the first dielectric layer 810 is formed about the third metal layer 808.[0099] At stage 6, several cavities (e.g., first cavity 811, second set of cavities 813) are formed in the first dielectric layer 810. As shown at stage 6, the first cavity 811 is formed about a portion of the third metal layer 808 and traverses the first dielectric layer 810. In some implementations, the first cavity 811 is a cavity configured to define a via in the first dielectric layer 810. The second set of cavities 813 partially traverses the first dielectric layer 810. In some implementations, the second set of cavities 813 is a set of cavities configured to define a set of interconnects (e.g., traces) embedded in the first dielectric layer 810. Different implementations may use different processes for forming the cavities in the first dielectric layer 810. In some implementations, a laser process is used to form the cavities in the first dielectric layer 810. In some implementations, the laser process allows for the second set of cavities to have a spacing of about 5 microns (μm) or less. In some implementations, the laser process allows for the second set of cavities to have a spacing of about 3 microns (μm) or less.[00100] At stage 7, a fourth metal layer 814 is provided. As shown at stage 7, the fourth metal layer 814 is provided such that a metal layer is formed on a first surface of the first dielectric layer 810. In addition, the fourth metal layer 814 is provided such that at least some of the cavities (e.g., first cavity 811, second set of cavities 813) are at least partially filled with the fourth metal layer 814. In some implementations, the fourth metal layer 814 may be formed on the side walls of the cavities. Stage 7 illustrates that the fourth metal layer 814 is not formed on side portions (e.g., side walls) of the cavity 811. However, in some implementations, the fourth metal layer 814 is formed on the entire side portion (e.g., side wall) of the cavity 811. In some implementations, the fourth metal layer 814 is an electroless metal layer (e.g., electroless fill, electroless copper layer). In some implementations, the fourth metal layer 814 is a seed layer. In some implementations, providing the fourth metal layer 814 includes using an electroless plating process. In some implementations, defining the fourth metal layer 814 may define one or more traces in the first dielectric layer 810.[00101] At stage 8, as shown in FIG. 8C, a dry film resist (DFR) 816 is provided on the fourth metal layer 814.
In some implementations, providing the DFR 816 includes forming (e.g., laminating) the DFR 816 on the fourth metal layer 814, and selectively removing the DFR 816 to define a pattern on the fourth metal layer 814. In some implementations, these patterns include one or more cavities (e.g., cavity 817) in the DFR 816. In some implementations, selectively removing the DFR 816 includes exposing the DFR 816, and developing the DFR 816 to form the pattern that includes one or more cavities.[00102] At stage 9, a fifth metal layer 818 is provided in the cavities (e.g., cavity 817) of the DFR 816. Different implementations may provide the fifth metal layer 818 differently. In some implementations, the fifth metal layer 818 is formed in one or more cavities and on the fourth metal layer 814. In some implementations, the fifth metal layer 818 is provided using a metal plating process. In some implementations, providing the fifth metal layer 818 may define one or more vias and/or one or more traces in the first dielectric layer 810.[00103] At stage 10, the DFR 816 is removed. In some implementations, removing the DFR 816 includes stripping the DFR 816, leaving the fifth metal layer 818. Different implementations may use different processes for removing the DFR 816.[00104] At stage 11, the core layer 800 and the second metal layer 804 are removed, leaving a package substrate 830. In some implementations, at least some of the first metal layer 802 may also be removed. Thus, in some implementations, the package substrate 830 may or may not include the first metal layer 802. In some implementations, the package substrate 830 is similar to the package substrates 300 and/or 500 of FIGS. 3 and 5. In some implementations, one or more solder resist layers may be selectively added (e.g., formed) to a first surface (e.g., top surface) and/or a second surface (e.g., bottom surface) of the package substrate 830.Exemplary Method for Providing a Package Substrate[00105] In some implementations, providing a package substrate that includes an electroless embedded interconnect includes several processes. FIG. 9 illustrates an exemplary flow diagram of a method for providing a package substrate. In some implementations, the method of FIG. 9 may be used to provide / fabricate the package substrate of FIGS. 3 and/or 5, and/or other package substrates described in the present disclosure.[00106] It should be noted that the method of FIG. 9 may combine one or more processes in order to simplify and/or clarify the method for providing a package substrate. In some implementations, the method of FIG. 9 may be used to provide the sequence illustrated in FIGS. 8A-8C.[00107] The method provides (at 905) a core layer. In some implementations, providing the core layer may include receiving a core layer from a supplier or fabricating (e.g., forming) a core layer. Different implementations may use different materials for the core layer. Stage 1 of FIG. 8A illustrates an example of providing a core layer.[00108] The method provides (at 910) at least one dielectric layer on the core layer. In some implementations, providing at least one dielectric layer includes forming at least one dielectric layer.[00109] The method provides (at 915) at least one cavity in the dielectric layer. The cavity may traverse part of the dielectric layer or it may traverse the entire dielectric layer. In some implementations, the cavity is a via cavity.
In some implementations, the cavity is a trench for an interconnect.[00110] The method provides (at 920) at least one embedded electroless interconnect in the dielectric layer. In some implementations, providing (e.g., forming) at least one embedded electroless interconnect includes at least partially filling the cavity with a metal layer to define the interconnect. In some implementations, the metal layer is electroless metal fill. Stages 5-7 of FIG. 8B illustrate an example of providing at least one electroless interconnect in a dielectric layer.[00111] The method provides (at 925) at least one interconnect on the dielectric layer. In some implementations, providing (e.g., forming) at least one interconnect includes providing an interconnect (e.g., trace, pad) on the surface of the dielectric layer and/or a via in the dielectric layer. Stages 8-10 of FIG. 8C illustrate an example of providing at least one interconnect. In some implementations, providing at least one interconnect on the dielectric layer includes a semi-additive patterning (SAP) process. An example of a SAP process is described in detail in FIGS. 12-13.[00112] The method removes (at 930) the core layer. Stages 10-11 of FIG. 8C illustrate an example of removing a core layer.[00113] The method provides (at 935) a solder resist layer (e.g., solder mask layer) on the dielectric layer. The method further provides (at 940) a surface finish on the solder resist layer and/or the dielectric layer.
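Because the ordering of blocks 905-940 matters (in particular, the temporary core layer is removed only after the build-up is complete), the method of FIG. 9 lends itself to a pipeline representation. The Python sketch below is purely schematic; the dictionary fields are invented stand-ins for physical state, not an executable fabrication recipe.

    # Schematic ordering of the method of FIG. 9 (blocks 905-940).
    def fabricate_coreless_substrate():
        s = {"layers": ["core"], "core_present": True}      # 905: provide core layer
        s["layers"].append("dielectric")                    # 910: dielectric on core
        s["cavities"] = ["via_cavity", "trench"]            # 915: cavities (e.g., laser-formed)
        s["embedded_interconnect"] = "electroless_fill"     # 920: embedded electroless interconnect
        s["surface_interconnect"] = "sap_trace_via_pad"     # 925: SAP-formed interconnect
        s["layers"].remove("core")                          # 930: remove temporary core layer
        s["core_present"] = False
        s["solder_resist"] = True                           # 935: solder resist layer
        s["surface_finish"] = True                          # 940: surface finish
        return s

    print(fabricate_coreless_substrate()["core_present"])   # False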
Exemplary Sequence for Providing a Package Substrate That Includes An Electroless Metal Layer[00114] In some implementations, providing a package substrate that includes a cavity that includes an electroless fill includes several processes. FIG. 10 (which includes FIGS. 10A-10B) illustrates an exemplary sequence for providing a package substrate. In some implementations, the sequence of FIGS. 10A-10B may be used to provide / manufacture the package substrate of FIGS. 4 and/or 6, and/or other package substrates described in the present disclosure.[00115] It should be noted that the sequence of FIGS. 10A-10B may combine one or more stages in order to simplify and/or clarify the sequence for providing a package substrate.[00116] As shown in stage 1 of FIG. 10A, a core layer 1002 is provided. Different implementations may use different materials for the core layer 1002. In some implementations, the core layer 1002 is a dielectric layer. The core layer 1002 includes a first via 1004, a first pad 1006, and a second pad 1008. The first via 1004 traverses the core layer 1002. The first pad 1006 is on a first surface (e.g., top surface) of the core layer 1002. The first pad 1006 is coupled to a first portion of the first via 1004. The second pad 1008 is on a second surface (e.g., bottom surface) of the core layer 1002. The second pad 1008 is coupled to a second portion of the first via 1004.[00117] In some implementations, providing the core layer 1002 may include receiving a core layer from a supplier or fabricating a core layer. In some implementations, the first via 1004, the first pad 1006, and/or the second pad 1008 are provided (e.g., formed) after receiving the core layer 1002.[00118] At stage 2, a first dielectric layer 1010 (e.g., first prepreg layer) is formed on the first surface (e.g., top surface) of the core layer 1002, and a second dielectric layer 1012 (e.g., second prepreg layer) is formed on the second surface (e.g., bottom surface) of the core layer 1002.[00119] At stage 3, several cavities are formed in the first dielectric layer 1010 and the second dielectric layer 1012. For example, a first cavity 1011 is formed about a portion of the first pad 1006 and traverses the first dielectric layer 1010. In some implementations, the first cavity 1011 is a cavity configured to define a via in the first dielectric layer 1010. A second set of cavities 1013 partially traverses the first dielectric layer 1010. In some implementations, the second set of cavities 1013 is a set of cavities configured to define a set of interconnects (e.g., traces) embedded in the first dielectric layer 1010. A third cavity 1015 is formed about the second pad 1008 and traverses the second dielectric layer 1012. In some implementations, the third cavity 1015 is a cavity configured to define a via in the second dielectric layer 1012.[00120] Different implementations may use different processes for forming the cavities in the first dielectric layer 1010 and the second dielectric layer 1012. In some implementations, a laser process is used to form the cavities in the first and second dielectric layers 1010 and 1012. In some implementations, the laser process allows for the second set of cavities to have a spacing of about 5 microns (μm) or less. In some implementations, the laser process allows for the second set of cavities to have a spacing of about 3 microns (μm) or less.[00121] At stage 4, a first metal layer 1014 is provided. As shown at stage 4, the first metal layer 1014 is provided such that a metal layer is formed on a first surface of the first dielectric layer 1010. In addition, the first metal layer 1014 is provided such that at least some of the cavities (e.g., first cavity 1011, second set of cavities 1013) are at least partially filled with the first metal layer 1014. In some implementations, the first metal layer 1014 is an electroless metal layer (e.g., electroless fill, electroless copper layer).[00122] In some implementations, the first metal layer 1014 may be formed on the side walls of the cavities. Stage 4 illustrates that the first metal layer 1014 is not formed on side portions (e.g., side walls) of the cavity 1011. However, in some implementations, the first metal layer 1014 is formed on the entire side portion (e.g., side wall) of the cavity 1011. In some implementations, the first metal layer 1014 is a seed layer. In some implementations, providing the first metal layer 1014 includes using an electroless plating process. In some implementations, defining the first metal layer 1014 may define one or more traces in the first dielectric layer 1010.[00123] In addition, at stage 4, a second metal layer 1016 is provided. As shown at stage 4, the second metal layer 1016 is provided such that a metal layer is formed on a first surface of the second dielectric layer 1012. Moreover, the second metal layer 1016 is provided such that at least some of the cavities (e.g., third cavity 1015) are at least partially filled with the second metal layer 1016.
In some implementations, the second metal layer 1016 may be formed on the side walls of the cavities. In some implementations, the second metal layer 1016 is an electroless metal layer (e.g., electroless fill, electroless copper layer).[00124] In some implementations, the second metal layer 1016 may be formed on the side walls of the cavities. Stage 4 illustrates that the second metal layer 1016 is not formed on side portions (e.g., side walls) of the cavity 1015. However, in some implementations, the second metal layer 1016 is formed on the entire side portion (e.g., side wall) of the cavity 1015. In some implementations, the second metal layer 1016 is a seed layer. In some implementations, providing the second metal layer 1016 includes using an electroless plating process.[00125] At stage 5, as shown in FIG. 10B, a first dry film resist (DFR) 1020 is provided on the first metal layer 1014. In some implementations, providing the first DFR 1020 includes forming (e.g., laminating) the first DFR 1020 on the first metal layer 1014, and selectively removing the first DFR 1020 to define a pattern on the first metal layer 1014. In some implementations, these patterns include one or more cavities (e.g., cavity 1021) in the first DFR 1020. In some implementations, selectively removing the first DFR 1020 includes exposing the first DFR 1020, and developing the first DFR 1020 to form the pattern that includes one or more cavities.[00126] Moreover, at stage 5, a second dry film resist (DFR) 1022 is provided on the second metal layer 1016. In some implementations, providing the second DFR 1022 includes forming (e.g., laminating) the second DFR 1022 on the second metal layer 1016, and selectively removing the second DFR 1022 to define a pattern on the second metal layer 1016. In some implementations, these patterns include one or more cavities (e.g., cavity 1023) in the second DFR 1022. In some implementations, selectively removing the second DFR 1022 includes exposing the second DFR 1022, and developing the second DFR 1022 to form the pattern that includes one or more cavities.[00127] At stage 6, a third metal layer 1024 is provided in the cavities (e.g., cavity 1021) of the first DFR 1020. Different implementations may provide the third metal layer 1024 differently. In some implementations, the third metal layer 1024 is formed in one or more cavities and on the first metal layer 1014. In some implementations, the third metal layer 1024 is provided using a metal plating process. In some implementations, providing the third metal layer 1024 may define one or more vias and/or one or more traces in the first dielectric layer 1010.[00128] In addition, at stage 6, a fourth metal layer 1026 is provided in the cavities (e.g., cavity 1023) of the second DFR 1022. Different implementations may provide the fourth metal layer 1026 differently. In some implementations, the fourth metal layer 1026 is formed in one or more cavities and on the second metal layer 1016. In some implementations, the fourth metal layer 1026 is provided using a metal plating process. In some implementations, providing the fourth metal layer 1026 may define one or more vias and/or one or more traces in the second dielectric layer 1012.[00129] At stage 7, the first DFR 1020 and the second DFR 1022 are removed. In some implementations, removing the first and second DFRs 1020 and 1022 includes stripping the first and second DFRs 1020 and 1022, leaving the third and fourth metal layers 1024 and 1026.
Different implementations may use different processes for removing the first and second DFRs 1020 and 1022. Once the first and second DFRs 1020 and 1022 are removed, a package substrate 1030 may be provided. In some implementations, the package substrate 1030 is similar to the package substrates 400 and/or 600 of FIGS. 4 and 6. In some implementations, one or more solder resist layers may be selectively added (e.g., formed) to a first surface (e.g., top surface) and/or a second surface (e.g., bottom surface) of the package substrate 1030.Exemplary Method for Providing a Package Substrate[00130] In some implementations, providing a package substrate that includes an electroless embedded interconnect includes several processes. FIG. 11 illustrates an exemplary flow diagram of a method for providing a package substrate. In some implementations, the method of FIG. 11 may be used to provide / fabricate the package substrate of FIGS. 4 and/or 6, and/or other package substrates described in the present disclosure.[00131] It should be noted that the method of FIG. 11 may combine one or more processes in order to simplify and/or clarify the method for providing a package substrate. In some implementations, the method of FIG. 11 may be used to provide the sequence illustrated in FIGS. 10A-10B.[00132] The method provides (at 1105) a core layer. In some implementations, providing the core layer may include receiving a core layer from a supplier or fabricating (e.g., forming) a core layer. Different implementations may use different materials for the core layer. In some implementations, the core layer may include at least one via and at least one pad. Stage 1 of FIG. 10A illustrates an example of providing a core layer that includes a via and a pad.[00133] The method provides (at 1110) at least one dielectric layer on the core layer. In some implementations, providing at least one dielectric layer includes forming at least one dielectric layer.[00134] The method provides (at 1115) at least one cavity in the dielectric layer. The cavity may traverse part of the dielectric layer or it may traverse the entire dielectric layer. In some implementations, the cavity is a via cavity. In some implementations, the cavity is a trench for an interconnect.[00135] The method provides (at 1120) at least one embedded electroless interconnect in the dielectric layer. In some implementations, providing (e.g., forming) at least one embedded electroless interconnect includes at least partially filling the cavity with a metal layer to define the interconnect. In some implementations, the metal layer is electroless metal fill. Stages 2-4 of FIG. 10A illustrate an example of providing at least one electroless interconnect in a dielectric layer.[00136] The method provides (at 1125) at least one interconnect on the dielectric layer. In some implementations, providing (e.g., forming) at least one interconnect includes providing an interconnect (e.g., trace, pad) on the surface of the dielectric layer and/or a via in the dielectric layer. Stages 4-6 of FIGS. 10A-10B illustrate an example of providing at least one interconnect. In some implementations, providing at least one interconnect on the dielectric layer includes a semi-additive patterning (SAP) process. An example of a SAP process is described in detail in FIGS. 12-13.[00137] The method provides (at 1130) a solder mask layer on the dielectric layer. The method further provides (at 1135) a surface finish on the solder mask layer and/or the dielectric layer.
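In contrast to the coreless flow of FIG. 9, the method of FIG. 11 retains the core layer and builds up both of its surfaces symmetrically, as in FIGS. 10A-10B. A minimal Python sketch of the resulting layer stack follows; the list entries are descriptive labels keyed to the reference numerals of FIG. 10 and are assumptions made purely for illustration.

    # Hypothetical top-to-bottom stack of the cored package substrate 1030
    # produced by the sequence of FIGS. 10A-10B.
    cored_substrate_1030 = {
        "stack": [
            "third_metal_layer_1024",    # electrolytically plated vias/traces
            "first_metal_layer_1014",    # electroless seed / fill (top)
            "first_dielectric_1010",     # first prepreg layer
            "core_1002",                 # retained core with via 1004
            "second_dielectric_1012",    # second prepreg layer
            "second_metal_layer_1016",   # electroless seed / fill (bottom)
            "fourth_metal_layer_1026",   # electrolytically plated vias/pads
        ],
        "core_removed": False,  # contrast with the coreless flow of FIG. 9
    }

    assert "core_1002" in cored_substrate_1030["stack"]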
Exemplary Method and Sequence For Providing A Substrate Using A Semi-Additive Patterning (SAP) Process[00138] In the present disclosure, numerous methods and sequences are described for providing and/or fabricating a substrate. In some implementations, a semi-additive patterning (SAP) process is used to provide and/or fabricate one or more interconnects (e.g., traces, vias, pads) in/on a substrate.[00139] FIG. 12 illustrates a detailed exemplary flow diagram for a semi-additive processing (SAP) patterning process for fabricating a substrate that includes interconnects. FIG. 12 will be described with reference to FIG. 13 which illustrates an exemplary sequence of a layer (e.g., core layer, prepreg layer) of a substrate during the SAP process of some implementations.[00140] As shown in FIG. 12, the process 1200 may start by providing (at 1205) a dielectric layer that includes a copper layer and a primer layer (e.g., a primer coated copper foil). In some implementations, the copper foil is coated with primer and then pressed on the uncured core to form the structure. The primer coated copper foil may be a copper foil. The dielectric layer may be a core layer or a prepreg layer of a substrate. As shown in stage 1 of FIG. 13, the primer 1304 is located between the copper foil 1306 and the dielectric 1302. The copper foil 1306 may be a copper composite foil in some implementations.[00141] Next, the process drills (at 1210) the dielectric layer (e.g., core layer, prepreg layer) to create one or more openings / pattern features (e.g., via pattern features). This may be done to form one or more vias/via features that connect the front and back side of the dielectric. In some implementations, the drilling may be performed by a laser drilling operation. Moreover, in some implementations, the drilling may traverse one or more of the metal layers (e.g., primer coated copper foil). In some implementations, the process may also clean the openings / pattern features (e.g., via patterns) created by the drilling operation, by, for example, de-smearing (at 1212) drilled vias / openings on the layer (e.g., core layer).[00142] The process then etches off (at 1215) the copper foil, leaving the primer on the dielectric layer (which is shown in stage 2 of FIG. 13). Next, the process electroless plates (at 1220) a copper seed layer (e.g., copper material) on the primer in some implementations. The thickness of the copper seed layer in some implementations is about 0.1-1 microns (μm). Stage 3 of FIG. 13 illustrates a copper seed layer 1308 on the primer 1304.[00143] Next, the process applies (at 1225) a dry film resist (DFR) and a pattern is created (at 1230) on the DFR. Stage 4 of FIG. 13 illustrates a DFR 1310 being applied on top of the copper seed layer 1308, while stage 5 of FIG. 13 illustrates the patterning of the DFR 1310. As shown in stage 5, the patterning creates openings 1312 in the DFR 1310.[00144] After patterning (at 1230) the DFR, the process then electrolytically plates (at 1235) a copper material (e.g., copper composite material) through the pattern of the DFR. In some implementations, electrolytically plating comprises dipping the dielectric and the metal layer in a bath solution. Referring to FIG. 13, stage 6 illustrates copper materials 1320 (e.g., copper composite material) being plated in the openings 1312 of the DFR 1310.[00145] Referring back to FIG.
12, the process removes (at 1240) the DFR, selectively etches (at 1245) the copper seed layer to isolate the features (e.g., create vias, traces, pads) and ends. Referring to FIG. 13, Stage 7 illustrates the removal of the DFR 1310, while Stage 8 illustrates the defined features (e.g., composite conductive trace) after the etching process.[00146] The above process of FIG. 12 may be repeated for each core layer or prepreg layer (dielectric layer) of the substrate.[00147] In some implementations, the SAP process may allow for finer / smaller feature (e.g., trace, via, pad) formation since the SAP process does not require as much etching to isolate features. In some implementations, the above process may be used to produce Interstitial Via Holes (IVH) in substrates and/or Blind Via Holes (BVH) in substrates.
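As a compact summary of process 1200, the Python sketch below lists the SAP operations in order and replays them. It is bookkeeping only; the strings paraphrase the blocks of FIG. 12, and nothing here models the underlying plating chemistry.

    # Ordered blocks of the SAP process 1200 of FIG. 12.
    SAP_STEPS = [
        (1205, "laminate primer-coated copper foil on the dielectric"),
        (1210, "laser-drill via openings / pattern features"),
        (1212, "de-smear the drilled openings"),
        (1215, "etch off the copper foil, leaving the primer"),
        (1220, "electroless-plate a ~0.1-1 um copper seed on the primer"),
        (1225, "apply dry film resist (DFR)"),
        (1230, "expose and develop the DFR to create the pattern"),
        (1235, "electrolytically plate copper through the DFR pattern"),
        (1240, "remove (strip) the DFR"),
        (1245, "selectively etch the seed layer to isolate features"),
    ]

    def run_sap(log=print):
        for block, action in SAP_STEPS:
            log(f"block {block}: {action}")  # stand-in for the physical step

    run_sap()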
Exemplary Package Substrate That Includes an Electroless Metal Layer[00148] In some implementations, a package substrate may include at least one solder resist layer (e.g., solder resist mask). FIG. 14 conceptually illustrates an example of a package substrate that includes surface interconnects and a cavity that includes an electroless fill and a solder resist layer. Specifically, FIG. 14 illustrates a package substrate 1400 that includes a first dielectric layer 1402, a first pad 1404, a via 1406, a second pad 1408, a first interconnect 1410, a second interconnect 1412, a first cavity 1420, a third interconnect 1422, a first solder resist layer 1440, and a second solder resist layer 1442. The first dielectric layer 1402 has a first surface (e.g., top surface) and a second surface (e.g., bottom surface). The first surface is opposite to the second surface. Different implementations may use different materials for the first dielectric layer 1402. In some implementations, the first dielectric layer 1402 may be a substrate.[00149] The first pad 1404 is located on the first surface of the first dielectric layer 1402. In some implementations, the first pad 1404 includes a first metal layer 1403 and a second metal layer 1405. In some implementations, the first metal layer 1403 is a seed layer. In some implementations, the first metal layer 1403 is an electroless fill layer (e.g., electroless metal layer). The via 1406 traverses the first dielectric layer 1402. In some implementations, the via 1406 includes a first metal layer 1407 and a second metal layer 1409. In some implementations, the first metal layer 1407 is a seed layer. In some implementations, the first metal layer 1407 is an electroless fill layer (e.g., electroless metal layer).[00150] The first pad 1404 is coupled to a first portion (e.g., top portion, top surface) of the via 1406. The second pad 1408 is embedded in the second surface of the first dielectric layer 1402. The second pad 1408 is coupled to a second portion (e.g., bottom portion, bottom surface) of the via 1406. Different implementations may use different materials for the first pad 1404, the via 1406, and the second pad 1408. In some implementations, the first pad 1404, the via 1406, and the second pad 1408 include a metal layer (e.g., copper layer).[00151] The first interconnect 1410 is on the first surface of the first dielectric layer 1402. In some implementations, the first interconnect 1410 is a trace on the first surface of the first dielectric layer 1402. In some implementations, the first interconnect 1410 includes a first metal layer 1411 and a second metal layer 1413. In some implementations, the first metal layer 1411 is a seed layer. In some implementations, the first metal layer 1411 is an electroless fill layer (e.g., electroless metal layer).[00152] The second interconnect 1412 is embedded in the second surface of the first dielectric layer 1402. In some implementations, the second interconnect 1412 is a trace embedded in the second surface of the first dielectric layer 1402. Different implementations may use different materials for the first and second interconnects 1410 and 1412. In some implementations, the first and second interconnects 1410 and 1412 include a metal layer (e.g., copper layer).[00153] FIG. 14 also illustrates that the cavity 1420 traverses the first surface of the first dielectric layer 1402. Different implementations may use different processes for fabricating the cavity 1420 in the first dielectric layer 1402. In some implementations, the cavity 1420 partially traverses the first dielectric layer 1402 through the first surface of the first dielectric layer 1402. In some implementations, the cavity 1420 is at least partially filled with the third interconnect 1422. In some implementations, the third interconnect 1422 is a trace that is made of an electroless fill. In some implementations, the electroless fill is an electroless metal layer (e.g., electroless copper layer).[00154] In some implementations, the third interconnect 1422 is a high density and/or fine pitch interconnect that electrically couples two dies on the package substrate. An example of interconnects that may electrically couple two dies is further described in FIG. 7. In some implementations, the spacing between two adjacent interconnects (e.g., traces) is about 5 microns (μm) or less. In some implementations, the spacing between two adjacent interconnects (e.g., traces) is about 3 microns (μm) or less.[00155] In some implementations, the third interconnect 1422 is made of a different material than the first interconnect 1410 and/or the second interconnect 1412. For example, the third interconnect 1422 includes an electroless metal layer, and the first interconnect 1410 and/or the second interconnect 1412 includes a metal layer.[00156] As shown in FIG. 14, the first solder resist layer 1440 is located on the first surface (e.g., top surface) of the first dielectric layer 1402. In some implementations, the first solder resist layer 1440 may also be in the cavity 1420. The second solder resist layer 1442 is located on the second surface (e.g., bottom surface) of the first dielectric layer 1402. FIG. 14 illustrates a package substrate without a core layer. However, in some implementations, a package substrate may include a core layer.[00157] FIG. 15 conceptually illustrates an example of a package substrate that includes a core layer, surface interconnects and a cavity that includes an electroless fill and a solder resist layer. Specifically, FIG. 15 illustrates a package substrate 1500 that includes a core layer 1502, a first dielectric layer 1504, a second dielectric layer 1506, a first pad 1510, a first via 1512, a second pad 1514, a second via 1516, a third pad 1518, a third via 1520, and a fourth pad 1522. The package substrate 1500 also includes a first interconnect 1524, a cavity 1530, a second interconnect 1532, a first solder resist layer 1540, and a second solder resist layer 1542.[00158] The core layer 1502 has a first surface (e.g., top surface) and a second surface (e.g., bottom surface). The first surface is opposite to the second surface. Different implementations may use different materials for the core layer 1502.
In some implementations, the core layer 1502 may be made of at least one dielectric layer. The first dielectric layer 1504 is coupled to the first surface of the core layer 1502. The second dielectric layer 1506 is coupled to the second surface of the core layer 1502. In some implementations, the first dielectric layer 1504 and the second dielectric layer 1506 are prepreg dielectric layers.[00159] The first pad 1510 is located on a first surface (e.g., top surface) of the first dielectric layer 1504. In some implementations, the first pad 1510 includes a first metal layer 1511 and a second metal layer 1513. In some implementations, the first metal layer 1511 is a seed layer. In some implementations, the first metal layer 1511 is an electroless fill layer (e.g., electroless metal layer). The first via 1512 traverses the first dielectric layer 1504. The first pad 1510 is coupled to a first portion (e.g., top portion, top surface) of the first via 1512. In some implementations, the first via 1512 includes a first metal layer 1515 and a second metal layer 1517. In some implementations, the first metal layer 1515 is a seed layer. In some implementations, the first metal layer 1515 is an electroless fill layer (e.g., electroless metal layer). The second pad 1514 is embedded in a second surface (e.g., bottom surface) of the first dielectric layer 1504. The second pad 1514 is coupled to a second portion (e.g., bottom portion, bottom surface) of the first via 1512.[00160] The second via 1516 traverses the core layer 1502. The second pad 1514 is coupled to a first portion (e.g., top portion, top surface) of the second via 1516. The second pad 1514 is on the first surface of the core layer 1502. The third pad 1518 is coupled to a second portion (e.g., bottom portion, bottom surface) of the second via 1516.[00161] The third pad 1518 is on the second surface (e.g., bottom surface) of the core layer 1502. The third pad 1518 is embedded in a first surface of the second dielectric layer 1506. The third via 1520 traverses the second dielectric layer 1506. The third pad 1518 is coupled to a first portion (e.g., top portion, top surface) of the third via 1520. The fourth pad 1522 is on a second surface (e.g., bottom surface) of the second dielectric layer 1506. The fourth pad 1522 is coupled to a second portion (e.g., bottom portion, bottom surface) of the third via 1520. In some implementations, the third via 1520 includes a first metal layer 1521 and a second metal layer 1523. In some implementations, the first metal layer 1521 is a seed layer. In some implementations, the first metal layer 1521 is an electroless fill layer (e.g., electroless metal layer).[00162] Different implementations may use different materials for the first pad 1510, the first via 1512, the second pad 1514, the second via 1516, the third pad 1518, the third via 1520, and the fourth pad 1522. In some implementations, the first pad 1510, the first via 1512, the second pad 1514, the second via 1516, the third pad 1518, the third via 1520, and the fourth pad 1522 include a metal layer (e.g., copper layer).[00163] The first interconnect 1524 is on the first surface of the first dielectric layer 1504. In some implementations, the first interconnect 1524 is a trace on the first surface of the first dielectric layer 1504. Different implementations may use different materials for the first interconnect 1524. In some implementations, the first interconnect 1524 includes a metal layer (e.g., copper layer).
In some implementations, the first interconnect 1524 includes a first metal layer 1525 and a second metal layer 1527. In some implementations, the first metal layer 1525 is a seed layer. In some implementations, the first metal layer 1525 is an electroless fill layer (e.g., electroless metal layer).[00164] FIG. 15 also illustrates that the cavity 1530 traverses the first surface of the first dielectric layer 1504. Different implementations may use different processes for fabricating the cavity 1530 in the first dielectric layer 1504. In some implementations, the cavity 1530 partially traverses the first dielectric layer 1504 through the first surface of the first dielectric layer 1504. In some implementations, the cavity 1530 is at least partially filled with the second interconnect 1532. In some implementations, the second interconnect 1532 is a trace that is made of an electroless fill. In some implementations, the electroless fill is an electroless metal layer (e.g., electroless copper layer).[00165] As shown in FIG. 15, the first solder resist layer 1540 is located on the first surface (e.g., top surface) of the first dielectric layer 1504. In some implementations, the first solder resist layer 1540 may also be in the cavity 1530. The second solder resist layer 1542 is located on the second surface (e.g., bottom surface) of the second dielectric layer 1506.[00166] In some implementations, the second interconnect 1532 is a high density and/or fine pitch interconnect that electrically couples two dies on the package substrate. An example of interconnects that may electrically couple two dies is further described in FIG. 7. In some implementations, the spacing between two adjacent interconnects 1532 (e.g., traces) is about 5 microns (μm) or less. In some implementations, the spacing between two adjacent interconnects (e.g., traces) is about 3 microns (μm) or less.[00167] In some implementations, the second interconnect 1532 is made of a different material than the first interconnect 1524. For example, the second interconnect 1532 includes an electroless metal layer, and the first interconnect 1524 includes a metal layer.Exemplary Electronic Devices[00168] FIG. 16 illustrates various electronic devices that may be integrated with any of the aforementioned integrated device, semiconductor device, substrate, package substrate, integrated circuit, die, interposer or package. For example, a mobile telephone 1602, a laptop computer 1604, and a fixed location terminal 1606 may include an integrated circuit (IC) 1600 as described herein. The IC 1600 may be, for example, any of the integrated circuits, integrated devices, dies, substrates or packages described herein. The devices 1602, 1604, 1606 illustrated in FIG. 16 are merely exemplary. Other electronic devices may also feature the IC 1600 including, but not limited to, mobile devices, hand-held personal communication systems (PCS) units, portable data units such as personal digital assistants, GPS enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, communications devices, smartphones, tablet computers or any other device that stores or retrieves data or computer instructions, or any combination thereof.[00169] One or more of the components, steps, features, and/or functions illustrated in FIGS.
3, 4, 5, 6, 7, 8A-8C, 9, 10A-10B, 11, 12, 13, 14, 15 and/or 16 may be rearranged and/or combined into a single component, step, feature or function or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from the disclosure. It should also be noted that FIGS. 3, 4, 5, 6, 7, 8A-8C, 9, 10A-10B, 11, 12, 13, 14, 15 and/or 16 and their corresponding description in the present disclosure are not limited to dies and/or ICs. In some implementations, FIGS. 3, 4, 5, 6, 7, 8A-8C, 9, 10A-10B, 11, 12, 13, 14, 15 and/or 16 and their corresponding description may be used to manufacture, create, provide, and/or produce integrated devices. In some implementations, an integrated device may include a die package, substrate, package substrate, an integrated circuit (IC), a wafer, a semiconductor device, and/or an interposer.[00170] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term "aspects" does not require that all aspects of the disclosure include the discussed feature, advantage or mode of operation. The term "coupled" is used herein to refer to the direct or indirect coupling between two objects. For example, if object A physically touches object B, and object B touches object C, then objects A and C may still be considered coupled to one another, even if they do not directly physically touch each other.[00171] Also, it is noted that the embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed.[00172] The various features of the disclosure described herein can be implemented in different systems without departing from the disclosure. It should be noted that the foregoing aspects of the disclosure are merely examples and are not to be construed as limiting the disclosure. The description of the aspects of the present disclosure is intended to be illustrative, and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses and many alternatives, modifications, and variations will be apparent to those skilled in the art. |
Techniques for automated data center maintenance are described. In an example embodiment, an automated maintenance device may comprise processing circuitry and non-transitory computer-readable storage media comprising instructions for execution by the processing circuitry to cause the automated maintenance device to receive an automation command from an automation coordinator for a data center, identify an automated maintenance procedure based on the received automation command, and perform the identified automated maintenance procedure. Other embodiments are described and claimed. |
CLAIMS What is claimed is: 1. An automated maintenance device, comprising: processing circuitry; and non-transitory computer-readable storage media comprising instructions for execution by the processing circuitry to cause the automated maintenance device to: receive an automation command from an automation coordinator for a data center; identify an automated maintenance procedure based on the received automation command; and perform the identified automated maintenance procedure in the data center. 2. The automated maintenance device of claim 1, the automated maintenance procedure to comprise replacing a compute sled in the data center. 3. The automated maintenance device of claim 2, the automated maintenance procedure to comprise: removing the compute sled from a sled space within a rack; removing a memory card from a connector slot of the compute sled, the memory card to store a compute state of the compute sled; inserting the memory card into a connector slot of a replacement compute sled; inserting the replacement compute sled into the sled space; and initiating a restoration of the stored compute state on the replacement compute sled. 4. The automated maintenance device of claim 1, the automated maintenance procedure to comprise replacing one or more cache memory modules of a processor on a sled. 5. The automated maintenance device of claim 4, the automated maintenance procedure to comprise: removing the processor from a socket to facilitate access to one or more cache memory modules underlying the processor; removing the one or more cache memory modules; inserting one or more replacement cache memory modules; and reinserting the processor into the socket. 6. The automated maintenance device of claim 5, the automated maintenance procedure to comprise: removing a heat sink from atop the processor prior to removing the processor from the socket; and reinstalling the heat sink after reinserting the processor into the socket. 7. The automated maintenance device of claim 1, comprising a radio frequency (RF) interface to receive a wireless signal comprising the automation command. 8. An apparatus for coordination of automated data center maintenance, comprising: processing circuitry; and non-transitory computer-readable storage media comprising instructions for execution by the processing circuitry to: identify a maintenance task to be performed in a data center; determine to initiate automated performance of the maintenance task; select an automated maintenance device to which to assign the maintenance task; and send an automation command to cause the automated maintenance device to perform an automated maintenance procedure associated with the maintenance task. 9. The apparatus of claim 8, the non-transitory computer-readable storage media comprising instructions for execution by the processing circuitry to identify the maintenance task based on telemetry data associated with one or more physical resources of the data center. 10. The apparatus of claim 8, the non-transitory computer-readable storage media comprising instructions for execution by the processing circuitry to identify the maintenance task based on environmental data received from one or more automated maintenance devices of the data center. 11. The apparatus of claim 8, the non-transitory computer-readable storage media comprising instructions for execution by the processing circuitry to add the maintenance task to a pending task queue following identification of the maintenance task. 12. 
The apparatus of claim 11, the non-transitory computer-readable storage media comprising instructions for execution by the processing circuitry to determine to initiate automated performance of the maintenance task based on a determination that the maintenance task constitutes a highest priority task among one or more maintenance tasks comprised in the pending task queue. 13. The apparatus of claim 12, the non-transitory computer-readable storage media comprising instructions for execution by the processing circuitry to select the automated maintenance device from among one or more automated maintenance devices in a candidate device pool. 14. A method for automated data center maintenance, comprising: receiving, at an automated maintenance device, an automation command from an automation coordinator for a data center; identifying, by processing circuitry of the automated maintenance device, an automated maintenance procedure based on the received automation command; and performing the identified automated maintenance procedure in the data center. 15. The method of claim 14, the automated maintenance procedure to comprise replacing a compute sled in the data center. 16. The method of claim 15, the automated maintenance procedure to comprise: removing the compute sled from a sled space within a rack; removing a memory card from a connector slot of the compute sled, the memory card to store a compute state of the compute sled; inserting the memory card into a connector slot of a replacement compute sled; inserting the replacement compute sled into the sled space; and initiating a restoration of the stored compute state on the replacement compute sled. 17. The method of claim 14, the automated maintenance procedure to comprise replacing one or more cache memory modules of a processor on a sled. 18. The method of claim 17, the automated maintenance procedure to comprise: removing the processor from a socket to facilitate access to one or more cache memory modules underlying the processor; removing the one or more cache memory modules; inserting one or more replacement cache memory modules; and reinserting the processor into the socket. 19. The method of claim 18, the automated maintenance procedure to comprise: removing a heat sink from atop the processor prior to removing the processor from the socket; and reinstalling the heat sink after reinserting the processor into the socket. 20. At least one non-transitory computer-readable storage medium comprising a set of instructions that, when executed by an automation coordinator for a data center, cause the automation coordinator to: identify a maintenance task to be performed in a data center; determine to initiate automated performance of the maintenance task; select an automated maintenance device to which to assign the maintenance task; and send an automation command to cause the automated maintenance device to perform an automated maintenance procedure associated with the maintenance task. 21. The at least one non-transitory computer-readable storage medium of claim 20, comprising instructions that, when executed by the automation coordinator, cause the automation coordinator to identify the maintenance task based on telemetry data associated with one or more physical resources of the data center. 22. 
The at least one non-transitory computer-readable storage medium of claim 20, comprising instructions that, when executed by the automation coordinator, cause the automation coordinator to identify the maintenance task based on environmental data received from one or more automated maintenance devices of the data center. 23. The at least one non-transitory computer-readable storage medium of claim 20, comprising instructions that, when executed by the automation coordinator, cause the automation coordinator to add the maintenance task to a pending task queue following identification of the maintenance task. 24. The at least one non-transitory computer-readable storage medium of claim 23, comprising instructions that, when executed by the automation coordinator, cause the automation coordinator to determine to initiate automated performance of the maintenance task based on a determination that the maintenance task constitutes a highest priority task among one or more maintenance tasks comprised in the pending task queue. 25. The at least one non-transitory computer-readable storage medium of claim 24, comprising instructions that, when executed by the automation coordinator, cause the automation coordinator to select the automated maintenance device from among one or more automated maintenance devices in a candidate device pool. |
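For illustration only, the device-side behavior recited in claim 1 can be read as a simple command-dispatch loop: receive a command, identify the named procedure, and perform it. The following minimal Python sketch uses hypothetical names (AutomationCommand, PROCEDURES, handle_command) that do not appear in the claims; it is a reading aid, not the claimed implementation.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class AutomationCommand:
    procedure_id: str           # hypothetical identifier, e.g., "replace_compute_sled"
    parameters: Dict[str, str]  # hypothetical fields, e.g., {"sled_id": "...", "rack_id": "..."}

def replace_compute_sled(params: Dict[str, str]) -> None:
    # Placeholder paraphrasing claim 3: remove the sled, move the state-bearing
    # memory card to a replacement sled, insert it, and restore the stored state.
    print(f"replacing sled {params.get('sled_id')} in rack {params.get('rack_id')}")

# Registry mapping command identifiers to maintenance procedures; this stands in
# for claim 1's "identify an automated maintenance procedure based on the
# received automation command".
PROCEDURES: Dict[str, Callable[[Dict[str, str]], None]] = {
    "replace_compute_sled": replace_compute_sled,
}

def handle_command(command: AutomationCommand) -> None:
    """Receive a command, identify the named procedure, and perform it."""
    PROCEDURES[command.procedure_id](command.parameters)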
AUTOMATED DATA CENTER MAINTENANCE RELATED CASE This application claims priority to United States Patent Application Serial Number 15/654,615 filed July 19, 2017, United States Provisional Patent Application Number 62/365,969, filed July 22, 2016, United States Provisional Patent Application Number 62/376,859, filed August 18, 2016, and United States Provisional Patent Application Number 62/427,268, filed November 29, 2016, each of which is hereby incorporated by reference in its entirety. BACKGROUND In the course of ordinary operation of a data center, various types of maintenance are typically necessary in order to maintain desired levels of performance, stability, and reliability. Examples of such maintenance include testing, repair, replacement, and/or reconfiguration of components, installing new components, upgrading existing components, repositioning components and equipment, and other tasks of such a nature. A large modern data center may contain great numbers of components and equipment of various types, and as a result, may have the potential to impose a fairly substantial maintenance burden. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 illustrates an embodiment of a first data center. FIG. 2 illustrates an embodiment of a logical configuration of a rack. FIG. 3 illustrates an embodiment of a second data center. FIG. 4 illustrates an embodiment of a third data center. FIG. 5 illustrates an embodiment of a connectivity scheme. FIG. 6 illustrates an embodiment of a first rack architecture. FIG. 7 illustrates an embodiment of a first sled. FIG. 8 illustrates an embodiment of a second rack architecture. FIG. 9 illustrates an embodiment of a rack. FIG. 10 illustrates an embodiment of a second sled. FIG. 11 illustrates an embodiment of a fourth data center. FIG. 12 illustrates an embodiment of a first logic flow. FIG. 13 illustrates an embodiment of a fifth data center. FIG. 14 illustrates an embodiment of an automated maintenance device. FIG. 15 illustrates an embodiment of a first operating environment. FIG. 16 illustrates an embodiment of a second operating environment. FIG. 17 illustrates an embodiment of a third operating environment. FIG. 18 illustrates an embodiment of a fourth operating environment. FIG. 19 illustrates an embodiment of a fifth operating environment. FIG. 20 illustrates an embodiment of a sixth operating environment. FIG. 21 illustrates an embodiment of a first logic flow. FIG. 22 illustrates an embodiment of a second logic flow. FIG. 23 illustrates an embodiment of a third logic flow. FIG. 24A illustrates an embodiment of a first storage medium. FIG. 24B illustrates an embodiment of a second storage medium. FIG. 25 illustrates an embodiment of a computing architecture. FIG. 26 illustrates an embodiment of a communications architecture. FIG. 27 illustrates an embodiment of a communication device. FIG. 28 illustrates an embodiment of a first wireless network. FIG. 29 illustrates an embodiment of a second wireless network. DETAILED DESCRIPTION Various embodiments may be generally directed to techniques for automated data center maintenance. 
In one embodiment, for example, an automated maintenance device may comprise processing circuitry and non-transitory computer-readable storage media comprising instructions for execution by the processing circuitry to cause the automated maintenance device to receive an automation command from an automation coordinator for a data center, identify an automated maintenance procedure based on the received automation command, and perform the identified automated maintenance procedure. Other embodiments are described and claimed. Various embodiments may comprise one or more elements. An element may comprise any structure arranged to perform certain operations. Each element may be implemented as hardware, software, or any combination thereof, as desired for a given set of design parameters or performance constraints. Although an embodiment may be described with a limited number of elements in a certain topology by way of example, the embodiment may include more or fewer elements in alternate topologies as desired for a given implementation. It is worthy to note that any reference to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrases "in one embodiment," "in some embodiments," and "in various embodiments" in various places in the specification are not necessarily all referring to the same embodiment. FIG. 1 illustrates a conceptual overview of a data center 100 that may generally be representative of a data center or other type of computing network in/for which one or more techniques described herein may be implemented according to various embodiments. As shown in FIG. 1, data center 100 may generally contain a plurality of racks, each of which may house computing equipment comprising a respective set of physical resources. In the particular non-limiting example depicted in FIG. 1, data center 100 contains four racks 102A to 102D, which house computing equipment comprising respective sets of physical resources (PCRs) 105A to 105D. According to this example, a collective set of physical resources 106 of data center 100 includes the various sets of physical resources 105A to 105D that are distributed among racks 102A to 102D. Physical resources 106 may include resources of multiple types, such as - for example - processors, co-processors, accelerators, field-programmable gate arrays (FPGAs), memory, and storage. The embodiments are not limited to these examples. The illustrative data center 100 differs from typical data centers in many ways. For example, in the illustrative embodiment, the circuit boards ("sleds") on which components such as CPUs, memory, and other components are placed are designed for increased thermal performance. In particular, in the illustrative embodiment, the sleds are shallower than typical boards. In other words, the sleds are shorter from the front to the back, where cooling fans are located. This decreases the length of the path that air must travel across the components on the board. Further, the components on the sled are spaced further apart than in typical circuit boards, and the components are arranged to reduce or eliminate shadowing (i.e., one component in the air flow path of another component). In the illustrative embodiment, processing components such as the processors are located on a top side of a sled while near memory, such as DIMMs, is located on a bottom side of the sled. 
As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in each rack 102A, 102B, 102C, 102D, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity. Furthermore, in the illustrative embodiment, the data center 100 utilizes a single network architecture ("fabric") that supports multiple other network architectures including Ethernet and Omni-Path. The sleds, in the illustrative embodiment, are coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center 100 may, in use, pool resources, such as memory, accelerators (e.g., graphics accelerators, FPGAs, ASICs, etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as-needed basis, enabling the compute resources to access the pooled resources as if they were local. The illustrative data center 100 additionally receives usage information for the various resources, predicts resource usage for different types of workloads based on past resource usage, and dynamically reallocates the resources based on this information. The racks 102A, 102B, 102C, 102D of the data center 100 may include physical design features that facilitate the automation of a variety of types of maintenance tasks. For example, data center 100 may be implemented using racks that are designed to be robotically-accessed, and to accept and house robotically-manipulable resource sleds. Furthermore, in the illustrative embodiment, the racks 102A, 102B, 102C, 102D include integrated power sources that receive a greater voltage than is typical for power sources. The increased voltage enables the power sources to provide additional power to the components on each sled, enabling the components to operate at higher than typical frequencies. FIG. 2 illustrates an exemplary logical configuration of a rack 202 of the data center 100. As shown in FIG. 2, rack 202 may generally house a plurality of sleds, each of which may comprise a respective set of physical resources. In the particular non-limiting example depicted in FIG. 2, rack 202 houses sleds 204-1 to 204-4 comprising respective sets of physical resources 205-1 to 205-4, each of which constitutes a portion of the collective set of physical resources 206 comprised in rack 202. With respect to FIG. 1, if rack 202 is representative of - for example - rack 102A, then physical resources 206 may correspond to the physical resources 105A comprised in rack 102A. In the context of this example, physical resources 105A may thus be made up of the respective sets of physical resources, including physical storage resources 205-1, physical accelerator resources 205-2, physical memory resources 205-3, and physical compute resources 205-4 comprised in the sleds 204-1 to 204-4 of rack 202. 
The embodiments are not limited to this example. Each sled may contain a pool of each of the various types of physical resources (e.g., compute, memory, accelerator, storage). By having robotically accessible and robotically manipulable sleds comprising disaggregated resources, each type of resource can be upgraded independently of each other and at their own optimized refresh rate. FIG. 3 illustrates an example of a data center 300 that may generally be representative of one in/for which one or more techniques described herein may be implemented according to various embodiments. In the particular non-limiting example depicted in FIG. 3, data center 300 comprises racks 302-1 to 302-32. In various embodiments, the racks of data center 300 may be arranged in such fashion as to define and/or accommodate various access pathways. For example, as shown in FIG. 3, the racks of data center 300 may be arranged in such fashion as to define and/or accommodate access pathways 311A, 311B, 311C, and 311D. In some embodiments, the presence of such access pathways may generally enable automated maintenance equipment, such as robotic maintenance equipment, to physically access the computing equipment housed in the various racks of data center 300 and perform automated maintenance tasks (e.g., replace a failed sled, upgrade a sled). In various embodiments, the dimensions of access pathways 311A, 311B, 311C, and 311D, the dimensions of racks 302-1 to 302-32, and/or one or more other aspects of the physical layout of data center 300 may be selected to facilitate such automated operations. The embodiments are not limited in this context. FIG. 4 illustrates an example of a data center 400 that may generally be representative of one in/for which one or more techniques described herein may be implemented according to various embodiments. As shown in FIG. 4, data center 400 may feature an optical fabric 412. Optical fabric 412 may generally comprise a combination of optical signaling media (such as optical cabling) and optical switching infrastructure via which any particular sled in data center 400 can send signals to (and receive signals from) each of the other sleds in data center 400. The signaling connectivity that optical fabric 412 provides to any given sled may include connectivity both to other sleds in a same rack and to sleds in other racks. In the particular non-limiting example depicted in FIG. 4, data center 400 includes four racks 402A to 402D. Racks 402A to 402D house respective pairs of sleds 404A-1 and 404A-2, 404B-1 and 404B-2, 404C-1 and 404C-2, and 404D-1 and 404D-2. Thus, in this example, data center 400 comprises a total of eight sleds. Via optical fabric 412, each such sled may possess signaling connectivity with each of the seven other sleds in data center 400. For example, via optical fabric 412, sled 404A-1 in rack 402A may possess signaling connectivity with sled 404A-2 in rack 402A, as well as the six other sleds 404B-1, 404B-2, 404C-1, 404C-2, 404D-1, and 404D-2 that are distributed among the other racks 402B, 402C, and 402D of data center 400. The embodiments are not limited to this example. FIG. 5 illustrates an overview of a connectivity scheme 500 that may generally be representative of link-layer connectivity that may be established in some embodiments among the various sleds of a data center, such as any of example data centers 100, 300, and 400 of FIGs. 1, 3, and 4. 
Connectivity scheme 500 may be implemented using an optical fabric that features a dual-mode optical switching infrastructure 514. Dual-mode optical switching infrastructure 514 may generally comprise a switching infrastructure that is capable of receiving communications according to multiple link-layer protocols via a same unified set of optical signaling media, and properly switching such communications. In various embodiments, dual-mode optical switching infrastructure 514 may be implemented using one or more dual-mode optical switches 515. In various embodiments, dual-mode optical switches 515 may generally comprise high-radix switches. In some embodiments, dual-mode optical switches 515 may comprise multi-ply switches, such as four-ply switches. In various embodiments, dual-mode optical switches 515 may feature integrated silicon photonics that enable them to switch communications with significantly reduced latency in comparison to conventional switching devices. In some embodiments, dual-mode optical switches 515 may constitute leaf switches 530 in a leaf-spine architecture additionally including one or more dual-mode optical spine switches 520. In various embodiments, dual-mode optical switches may be capable of receiving both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance computing (HPC) link-layer protocol (e.g., Intel's Omni-Path Architecture, InfiniBand) via optical signaling media of an optical fabric. As reflected in FIG. 5, with respect to any particular pair of sleds 504A and 504B possessing optical signaling connectivity to the optical fabric, connectivity scheme 500 may thus provide support for link-layer connectivity via both Ethernet links and HPC links. Thus, both Ethernet and HPC communications can be supported by a single high-bandwidth, low-latency switch fabric. The embodiments are not limited to this example. FIG. 6 illustrates a general overview of a rack architecture 600 that may be representative of an architecture of any particular one of the racks depicted in FIGs. 1 to 4 according to some embodiments. As reflected in FIG. 6, rack architecture 600 may generally feature a plurality of sled spaces into which sleds may be inserted, each of which may be robotically-accessible via a rack access region 601. In the particular non-limiting example depicted in FIG. 6, rack architecture 600 features five sled spaces 603-1 to 603-5. Sled spaces 603-1 to 603-5 feature respective multi-purpose connector modules (MPCMs) 616-1 to 616-5. Included among the types of sleds to be accommodated by rack architecture 600 may be one or more types of sleds that feature expansion capabilities. FIG. 7 illustrates an example of a sled 704 that may be representative of a sled of such a type. As shown in FIG. 7, sled 704 may comprise a set of physical resources 705, as well as an MPCM 716 designed to couple with a counterpart MPCM when sled 704 is inserted into a sled space such as any of sled spaces 603-1 to 603-5 of FIG. 6. Sled 704 may also feature an expansion connector 717. Expansion connector 717 may generally comprise a socket, slot, or other type of connection element that is capable of accepting one or more types of expansion modules, such as an expansion sled 718. By coupling with a counterpart connector on expansion sled 718, expansion connector 717 may provide physical resources 705 with access to supplemental computing resources 705B residing on expansion sled 718. 
The embodiments are not limited in this context. FIG. 8 illustrates an example of a rack architecture 800 that may be representative of a rack architecture that may be implemented in order to provide support for sleds featuring expansion capabilities, such as sled 704 of FIG. 7. In the particular non-limiting example depicted in FIG. 8, rack architecture 800 includes seven sled spaces 803-1 to 803-7, which feature respective MPCMs 816-1 to 816-7. Sled spaces 803-1 to 803-7 include respective primary regions 803-1A to 803-7A and respective expansion regions 803-1B to 803-7B. With respect to each such sled space, when the corresponding MPCM is coupled with a counterpart MPCM of an inserted sled, the primary region may generally constitute a region of the sled space that physically accommodates the inserted sled. The expansion region may generally constitute a region of the sled space that can physically accommodate an expansion module, such as expansion sled 718 of FIG. 7, in the event that the inserted sled is configured with such a module. FIG. 9 illustrates an example of a rack 902 that may be representative of a rack implemented according to rack architecture 800 of FIG. 8 according to some embodiments. In the particular non-limiting example depicted in FIG. 9, rack 902 features seven sled spaces 903-1 to 903-7, which include respective primary regions 903-1A to 903-7A and respective expansion regions 903-1B to 903-7B. In various embodiments, temperature control in rack 902 may be implemented using an air cooling system. For example, as reflected in FIG. 9, rack 902 may feature a plurality of fans 919 that are generally arranged to provide air cooling within the various sled spaces 903-1 to 903-7. In some embodiments, the height of the sled space is greater than the conventional "1U" server height. In such embodiments, fans 919 may generally comprise relatively slow, large diameter cooling fans as compared to fans used in conventional rack configurations. Running larger diameter cooling fans at lower speeds may increase fan lifetime relative to smaller diameter cooling fans running at higher speeds while still providing the same amount of cooling. The sleds are physically shallower than conventional rack dimensions. Further, components are arranged on each sled to reduce thermal shadowing (i.e., not arranged serially in the direction of air flow). As a result, the wider, shallower sleds allow for an increase in device performance because the devices can be operated at a higher thermal envelope (e.g., 250W) due to improved cooling (i.e., no thermal shadowing, more space between devices, more room for larger heat sinks, etc.). MPCMs 916-1 to 916-7 may be configured to provide inserted sleds with access to power sourced by respective power modules 920-1 to 920-7, each of which may draw power from an external power source 921. In various embodiments, external power source 921 may deliver alternating current (AC) power to rack 902, and power modules 920-1 to 920-7 may be configured to convert such AC power to direct current (DC) power to be sourced to inserted sleds. In some embodiments, for example, power modules 920-1 to 920-7 may be configured to convert 277-volt AC power into 12-volt DC power for provision to inserted sleds via respective MPCMs 916-1 to 916-7. 
The embodiments are not limited to this example. MPCMs 916-1 to 916-7 may also be arranged to provide inserted sleds with optical signaling connectivity to a dual-mode optical switching infrastructure 914, which may be the same as - or similar to - dual-mode optical switching infrastructure 514 of FIG. 5. In various embodiments, optical connectors contained in MPCMs 916-1 to 916-7 may be designed to couple with counterpart optical connectors contained in MPCMs of inserted sleds to provide such sleds with optical signaling connectivity to dual-mode optical switching infrastructure 914 via respective lengths of optical cabling 922-1 to 922-7. In some embodiments, each such length of optical cabling may extend from its corresponding MPCM to an optical interconnect loom 923 that is external to the sled spaces of rack 902. In various embodiments, optical interconnect loom 923 may be arranged to pass through a support post or other type of load-bearing element of rack 902. The embodiments are not limited in this context. Because inserted sleds connect to an optical switching infrastructure via MPCMs, the resources typically spent in manually configuring the rack cabling to accommodate a newly inserted sled can be saved. FIG. 10 illustrates an example of a sled 1004 that may be representative of a sled designed for use in conjunction with rack 902 of FIG. 9 according to some embodiments. Sled 1004 may feature an MPCM 1016 that comprises an optical connector 1016A and a power connector 1016B, and that is designed to couple with a counterpart MPCM of a sled space in conjunction with insertion of MPCM 1016 into that sled space. Coupling MPCM 1016 with such a counterpart MPCM may cause power connector 1016B to couple with a power connector comprised in the counterpart MPCM. This may generally enable physical resources 1005 of sled 1004 to source power from an external source, via power connector 1016B and power transmission media 1024 that conductively couples power connector 1016B to physical resources 1005. Sled 1004 may also include dual-mode optical network interface circuitry 1026. Dual-mode optical network interface circuitry 1026 may generally comprise circuitry that is capable of communicating over optical signaling media according to each of multiple link-layer protocols supported by dual-mode optical switching infrastructure 914 of FIG. 9. In some embodiments, dual-mode optical network interface circuitry 1026 may be capable both of Ethernet protocol communications and of communications according to a second, high-performance protocol. In various embodiments, dual-mode optical network interface circuitry 1026 may include one or more optical transceiver modules 1027, each of which may be capable of transmitting and receiving optical signals over each of one or more optical channels. The embodiments are not limited in this context. Coupling MPCM 1016 with a counterpart MPCM of a sled space in a given rack may cause optical connector 1016A to couple with an optical connector comprised in the counterpart MPCM. This may generally establish optical connectivity between optical cabling of the sled and dual-mode optical network interface circuitry 1026, via each of a set of optical channels 1025. Dual-mode optical network interface circuitry 1026 may communicate with the physical resources 1005 of sled 1004 via electrical signaling media 1028. 
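To make the dual-mode behavior described above concrete, the following is a minimal Python sketch of per-frame link-layer dispatch over a shared medium. All names (Frame, HANDLERS, receive) are hypothetical illustrations and are not the design of circuitry 1026; the sketch only shows the general idea of serving two link-layer protocols over one set of signaling media.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Frame:
    protocol: str   # hypothetical link-layer tag, e.g., "ethernet" or "hpc"
    payload: bytes

def handle_ethernet(payload: bytes) -> None:
    # Ethernet path, e.g., frames carrying IP packets.
    print(f"Ethernet frame received ({len(payload)} bytes)")

def handle_hpc(payload: bytes) -> None:
    # High-performance computing link-layer path.
    print(f"HPC frame received ({len(payload)} bytes)")

# Both protocols arrive over the same media; only the handler differs.
HANDLERS: Dict[str, Callable[[bytes], None]] = {
    "ethernet": handle_ethernet,
    "hpc": handle_hpc,
}

def receive(frame: Frame) -> None:
    """Route a received frame to the handler for its link-layer protocol."""
    HANDLERS[frame.protocol](frame.payload)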
In addition to the dimensions of the sleds and arrangement of components on the sleds to provide improved cooling and enable operation at a relatively higher thermal envelope (e.g., 250W), as described above with reference to FIG. 9, in some embodiments, a sled may include one or more additional features to facilitate air cooling, such as a heat pipe and/or heat sinks arranged to dissipate heat generated by physical resources 1005. It is worthy of note that although the example sled 1004 depicted in FIG. 10 does not feature an expansion connector, any given sled that features the design elements of sled 1004 may also feature an expansion connector according to some embodiments. The embodiments are not limited in this context. FIG. 11 illustrates an example of a data center 1100 that may generally be representative of one in/for which one or more techniques described herein may be implemented according to various embodiments. As reflected in FIG. 11, a physical infrastructure management framework 1150A may be implemented to facilitate management of a physical infrastructure 1100A of data center 1100. In various embodiments, one function of physical infrastructure management framework 1150A may be to manage automated maintenance functions within data center 1100, such as the use of robotic maintenance equipment to service computing equipment within physical infrastructure 1100A. In some embodiments, physical infrastructure 1100A may feature an advanced telemetry system that performs telemetry reporting that is sufficiently robust to support remote automated management of physical infrastructure 1100A. In various embodiments, telemetry information provided by such an advanced telemetry system may support features such as failure prediction/prevention capabilities and capacity planning capabilities. In some embodiments, physical infrastructure management framework 1150A may also be configured to manage authentication of physical infrastructure components using hardware attestation techniques. For example, robots may verify the authenticity of components before installation by analyzing information collected from a radio frequency identification (RFID) tag associated with each component to be installed. The embodiments are not limited in this context. As shown in FIG. 11, the physical infrastructure 1100A of data center 1100 may comprise an optical fabric 1112, which may include a dual-mode optical switching infrastructure 1114. Optical fabric 1112 and dual-mode optical switching infrastructure 1114 may be the same as - or similar to - optical fabric 412 of FIG. 4 and dual-mode optical switching infrastructure 514 of FIG. 5, respectively, and may provide high-bandwidth, low-latency, multi-protocol connectivity among sleds of data center 1100. As discussed above, with reference to FIG. 1, in various embodiments, the availability of such connectivity may make it feasible to disaggregate and dynamically pool resources such as accelerators, memory, and storage. 
In some embodiments, for example, one or more pooled accelerator sleds 1130 may be included among the physical infrastructure 1100A of data center 1100, each of which may comprise a pool of accelerator resources - such as co-processors and/or FPGAs, for example - that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114. In another example, in various embodiments, one or more pooled storage sleds 1132 may be included among the physical infrastructure 1100A of data center 1100, each of which may comprise a pool of storage resources that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114. In some embodiments, such pooled storage sleds 1132 may comprise pools of solid-state storage devices such as solid-state drives (SSDs). In various embodiments, one or more high-performance processing sleds 1134 may be included among the physical infrastructure 1100A of data center 1100. In some embodiments, high-performance processing sleds 1134 may comprise pools of high-performance processors, as well as cooling features that enhance air cooling to yield a higher thermal envelope of up to 250W or more. In various embodiments, any given high-performance processing sled 1134 may feature an expansion connector 1117 that can accept a far memory expansion sled, such that the far memory that is locally available to that high-performance processing sled 1134 is disaggregated from the processors and near memory comprised on that sled. In some embodiments, such a high-performance processing sled 1134 may be configured with far memory using an expansion sled that comprises low-latency SSD storage. The optical infrastructure allows for compute resources on one sled to utilize remote accelerator/FPGA, memory, and/or SSD resources that are disaggregated on a sled located on the same rack or any other rack in the data center. The remote resources can be located one switch jump away or two switch jumps away in the spine-leaf network architecture described above with reference to FIG. 5. The embodiments are not limited in this context. In various embodiments, one or more layers of abstraction may be applied to the physical resources of physical infrastructure 1100A in order to define a virtual infrastructure, such as a software-defined infrastructure 1100B. In some embodiments, virtual computing resources 1136 of software-defined infrastructure 1100B may be allocated to support the provision of cloud services 1140. In various embodiments, particular sets of virtual computing resources 1136 may be grouped for provision to cloud services 1140 in the form of SDI services 1138. Examples of cloud services 1140 may include - without limitation - software as a service (SaaS) services 1142, platform as a service (PaaS) services 1144, and infrastructure as a service (IaaS) services 1146. In some embodiments, management of software-defined infrastructure 1100B may be conducted using a virtual infrastructure management framework 1150B. In various embodiments, virtual infrastructure management framework 1150B may be designed to implement workload fingerprinting techniques and/or machine-learning techniques in conjunction with managing allocation of virtual computing resources 1136 and/or SDI services 1138 to cloud services 1140. In some embodiments, virtual infrastructure management framework 1150B may use/consult telemetry data in conjunction with performing such resource allocation. 
In various embodiments, an application/service management framework 1150C may be implemented in order to provide QoS management capabilities for cloud services 1140. The embodiments are not limited in this context. FIG. 12 illustrates an example of a logic flow 1200 that may be representative of a maintenance algorithm for a data center, such as one or more of data center 100 of FIG. 1, data center 300 of FIG. 3, data center 400 of FIG. 4, and data center 1100 of FIG. 11. As shown in FIG. 12, data center operation information may be collected at 1202. In various embodiments, the collected data center operation information may include information describing various characteristics of ongoing operation of the data center, such as resource utilization levels, workload sizes, throughput rates, temperature measurements, and so forth. In some embodiments, the collected data center operation information may additionally or alternatively include information describing other characteristics of the data center, such as the types of resources comprised in the data center, the locations/distributions of such resources within the data center, the capabilities and/or features of those resources, and so forth. The embodiments are not limited to these examples. Based on data center operation information such as may be collected at 1202, a maintenance task to be completed may be identified at 1204. In one example, based on data center operation information indicating that processing resources on a given sled are non-responsive to communications from resources on other sleds, it may be determined at 1204 that the sled is to be pulled for testing. In another example, based on data center operation information indicating that a particular DIMM has reached the end of its estimated service life, it may be determined that the DIMM is to be replaced. At 1206, a set of physical actions associated with the maintenance task may be determined, and those physical actions may be performed at 1208 in order to complete the maintenance task. For instance, in the aforementioned example in which it is determined at 1204 that a DIMM is to be replaced, the physical actions identified at 1206 and performed at 1208 may include traveling to a particular rack in order to access a sled comprising the DIMM, removing the DIMM from a socket on the sled, and inserting a replacement DIMM into the socket. The embodiments are not limited to this example. FIG. 13 illustrates an overhead view of an example data center 1300. According to various embodiments, data center 1300 may be representative of a data center in which various operations associated with data center maintenance - such as operations associated with one or more of blocks 1202, 1204, 1206, and 1208 in logic flow 1200 of FIG. 12 - are automated using the capabilities of robotic maintenance equipment. According to some embodiments, data center 1300 may be representative of one or more of data center 100 of FIG. 1, data center 300 of FIG. 3, data center 400 of FIG. 4, and data center 1100 of FIG. 11. The embodiments are not limited in this context. In various embodiments, according to an automated maintenance scheme implemented in data center 1300, robots 1360 may be used to service, repair, replace, clean, test, configure, upgrade, move, position, and/or otherwise manipulate equipment housed in racks 1302. Racks 1302 may be arranged in such fashion as to define and/or accommodate access pathways via which robots 1360 can physically access such equipment. 
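For illustration, logic flow 1200 (collect at 1202, identify at 1204, plan at 1206, perform at 1208) can be read as a simple control loop. The following minimal Python sketch uses hypothetical names and a placeholder threshold; it paraphrases the DIMM-replacement example above and is not the patent's implementation.

from typing import Dict, List

def collect_operation_info() -> Dict[str, float]:
    # Block 1202: gather utilization levels, temperatures, error rates, etc.
    # Placeholder value standing in for real telemetry.
    return {"dimm_3_bit_error_rate": 0.02}

def identify_task(info: Dict[str, float]) -> str:
    # Block 1204: derive a maintenance task from the collected information.
    # Hypothetical threshold chosen only for the sketch.
    if info.get("dimm_3_bit_error_rate", 0.0) > 0.01:
        return "replace_dimm_3"
    return "none"

def plan_actions(task: str) -> List[str]:
    # Block 1206: map the task to a sequence of physical actions.
    if task == "replace_dimm_3":
        return ["travel_to_rack", "remove_dimm", "insert_replacement_dimm"]
    return []

def run_once() -> None:
    # Block 1208: perform the planned physical actions.
    for action in plan_actions(identify_task(collect_operation_info())):
        print(f"performing: {action}")

run_once()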
Robots 1360 may traverse such access pathways in conjunction with moving around in data center 1300 to perform various tasks. Physical features of equipment housed in racks 1302 may be designed to facilitate robotic manipulation/handling. It is to be appreciated that in various embodiments, the equipment housed in racks 1302 may include some equipment that is not robotically accessible/serviceable. Further, in some embodiments, there may be some equipment within data center 1300 that is robotically accessible/serviceable but is not housed in racks 1302. The embodiments are not limited in this context. FIG. 14 illustrates a block diagram of an automated maintenance device 1400 that may be representative of any given robot 1360 in data center 1300 of FIG. 13 according to various embodiments. As shown in FIG. 14, automated maintenance device 1400 may comprise a variety of elements. In the non-limiting example depicted in FIG. 14, automated maintenance device 1400 comprises locomotion elements 1462, manipulation elements 1463, sensory elements 1464, communication elements 1465, interfaces 1466, memory/storage elements 1467, and operations management and control (OMC) elements 1468. Locomotion elements 1462 may generally comprise physical elements enabling automated maintenance device 1400 to move around within a data center. In various embodiments, locomotion elements 1462 may comprise wheels. In some embodiments, locomotion elements 1462 may comprise caterpillar tracks. In various embodiments, automated maintenance device 1400 may provide the motive power/force required for motion. For example, in some embodiments, automated maintenance device 1400 may feature a battery that provides power to drive wheels or tracks used by automated maintenance device 1400 for moving around in a data center. In various other embodiments, the motive power/force may be provided by an external source. The embodiments are not limited in this context. Manipulation elements 1463 may generally comprise physical elements that are usable to manipulate various types of equipment in a data center. In some embodiments, manipulation elements 1463 may include one or more robotic arms. In various embodiments, manipulation elements 1463 may include one or more multi-link manipulators. In some embodiments, manipulation elements 1463 may include one or more end effectors usable for gripping various types of equipment, components, and/or other objects within the data center. In various embodiments, manipulation elements 1463 may include one or more end effectors comprising impactive grippers, such as jaw or claw grippers. In some embodiments, manipulation elements 1463 may include one or more end effectors comprising ingressive grippers, which may feature pins, needles, hackles, or other elements that are to physically penetrate the surface of an object being gripped. In various embodiments, manipulation elements 1463 may include one or more end effectors comprising astrictive grippers, which may grip objects using air suction, magnetic adhesion, or electroadhesion. The embodiments are not limited to these examples. Sensory elements 1464 may generally comprise physical elements that are usable to sense various aspects of ambient conditions within a data center. Examples of sensory elements 1464 may include cameras, alignment guides/sensors, distance sensors, proximity sensors, barcode readers, RFID/NFC readers, temperature sensors, airflow sensors, air quality sensors, humidity sensors, and pressure sensors. 
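As a reading aid, the element inventory of device 1400 just described can be modeled as a simple data structure. The following minimal Python sketch uses hypothetical field names and example values drawn from the passage above; it is illustrative only, not the structure of device 1400.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MaintenanceDeviceModel:
    # Element groups paraphrasing FIG. 14 (elements 1462-1468); values are examples only.
    locomotion: List[str] = field(default_factory=lambda: ["wheels"])
    manipulation: List[str] = field(default_factory=lambda: ["robotic_arm", "claw_gripper"])
    sensory: List[str] = field(default_factory=lambda: ["camera", "rfid_reader", "temperature_sensor"])
    communication: List[str] = field(default_factory=lambda: ["rf_baseband"])
    interfaces: List[str] = field(default_factory=lambda: ["communication", "testing", "power", "user"])

device = MaintenanceDeviceModel()
print(device.sensory)  # e.g., ['camera', 'rfid_reader', 'temperature_sensor']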
The embodiments are not limited to these examples. Communication elements 1465 may generally comprise a set of electronic components and/or circuitry operable to perform functions associated with communications between automated maintenance device 1400 and one or more external devices. In a given embodiment, such communications may include wireless communications, wired communications, or both. In various embodiments, communication elements 1465 may include elements operative to generate/construct packets, frames, messages, and/or other information to be wirelessly communicated to external device(s), and/or to process/deconstruct packets, frames, messages, and/or other information wirelessly received from external device(s). In various embodiments, for example, communication elements 1465 may include baseband circuitry supporting wireless communications according to one or more wireless communication protocols/standards. In some embodiments, communication elements 1465 may include elements operative to generate, process, construct, and/or deconstruct packets, frames, messages, and/or other information communicated over wired media. In various embodiments, for example, communication elements 1465 may include network interface circuitry supporting wired communications according to one or more wired communication protocols/standards. The embodiments are not limited in this context. In various embodiments, interfaces 1466 may include one or more communication interfaces 1466A. As reflected in FIG. 14, examples of interfaces 1466 that automated maintenance device 1400 may feature in various embodiments may include - without limitation - communication interfaces 1466A, testing interfaces 1466B, power interfaces 1466C, and user interfaces 1466D. Communication interfaces 1466A may generally comprise interfaces usable to transmit and/or receive signals via one or more communication media, which may include wired media, wireless media, or both. In various embodiments, communication interfaces 1466A may include one or more wireless communication interfaces, such as radio frequency (RF) interfaces and/or optical wireless communication (OWC) interfaces. In some embodiments, communication interfaces may additionally or alternatively include one or more wired communication interfaces, such as interface(s) for communicating over media such as coaxial cable, twisted pair, and optical fiber. The embodiments are not limited to these examples. In various embodiments, interfaces 1466 may include one or more testing interfaces 1466B. Testing interfaces 1466B may generally comprise interfaces via which automated maintenance device 1400 is able to test physical components/resources of one or more types, which may include - without limitation - one or more of physical storage resources 205-1, physical accelerator resources 205-2, physical memory resources 205-3, and physical compute resources 205-4 of FIG. 2. In an example embodiment, interfaces 1466 may include a testing interface 1466B that enables automated maintenance device 1400 to test the functionality of a DIMM inserted into a testing slot. The embodiments are not limited to these examples. In various embodiments, interfaces 1466 may include one or more power interfaces 1466C. Power interfaces 1466C may generally comprise interfaces via which automated maintenance device 1400 can draw and/or source power. 
In various embodiments, power interfaces 1466C may include one or more interfaces via which automated maintenance device 1400 can draw power from external source(s). In some embodiments, automated maintenance device 1400 may feature one or more power interfaces 1466C configured to provide charge to one or more batteries (not shown), and automated maintenance device 1400 may draw its operating power from those one or more batteries. In various embodiments, automated maintenance device 1400 may feature one or more power interfaces 1466C via which it can directly draw operating power. In various embodiments, automated maintenance device 1400 may feature one or more power interfaces 1466C via which it can source power to external devices. For example, in various embodiments, automated maintenance device 1400 may feature a power interface 1466C via which it can source power to charge a battery of a second automated maintenance device. The embodiments are not limited to this example. In some embodiments, interfaces 1466 may include one or more user interfaces 1466D. User interfaces 1466D may generally comprise interfaces via which information can be provided to human technicians and/or user input can be accepted from human technicians. Examples of user interfaces 1466D may include displays, touchscreens, speakers, microphones, keypads, mice, trackballs, trackpads, joysticks, fingerprint readers, retinal scanners, buttons, switches, and the like. The embodiments are not limited to these examples. Memory/storage elements 1467 may generally comprise a set of electronic components and/or circuitry capable of retaining data, such as any of various types of data that may be generated, transmitted, received, and/or used by automated maintenance device 1400 during normal operation. In some embodiments, memory/storage elements 1467 may include one or both of volatile memory and non-volatile memory. For example, in various embodiments, memory/storage elements 1467 may include one or more of read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, hard disks, an array of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices, solid state drives (SSDs), or any other type of media suitable for storing information. The embodiments are not limited to these examples. OMC elements 1468 may generally comprise a set of components and/or circuitry capable of performing computing operations required to implement logic for managing and controlling the operations of automated maintenance device 1400. In various embodiments, OMC elements 1468 may include processing circuitry, such as one or more processors/processing units. In some embodiments, an automation engine 1469 may execute on such processing circuitry. Automation engine 1469 may generally be operative to conduct overall management, control, coordination, and/or oversight of the operations of automated maintenance device 1400. 
In various embodiments, this may include management, coordination, control, and/or oversight of the operations/usage of various other elements within automated maintenance device 1400, such as any or all of locomotion elements 1462, manipulation elements 1463, sensory elements 1464, communication elements 1465, interfaces 1466, and memory/storage elements 1467. The embodiments are not limited in this context. FIG. 15 illustrates an example of an operating environment 1500 that may be representative of the implementation of an automated maintenance scheme in data center 1300 according to various embodiments. According to such an automated maintenance scheme, an automation coordinator 1555 may centrally manage/coordinate various aspects of automated maintenance operations in data center 1300. In some embodiments, automation coordinator 1555 may centrally manage/coordinate various aspects of automated maintenance operations in data center 1300 based in part on telemetry data 1571 provided by a telemetry framework 1570. According to various embodiments, telemetry framework 1570 may be representative of an advanced telemetry system that performs telemetry reporting for physical infrastructure 1100A in data center 1100 of FIG. 11, and automation coordinator 1555 may be representative of automated maintenance coordination functionality of physical infrastructure management framework 1150A. The embodiments are not limited in this context. In some embodiments, management/coordination functionality of automation coordinator 1555 may be provided by a coordination engine 1572. In various embodiments, coordination engine 1572 may execute on processing circuitry of automation coordinator 1555. In various embodiments, coordination engine 1572 may generate automation commands 1573 for transmission to robots 1360 in order to instruct robots 1360 to perform automated maintenance tasks and/or actions associated with such tasks. In some embodiments, robots 1360 may provide automation coordinator 1555 with various types of feedback 1574 in order to - for example - acknowledge automation commands 1573, report the results of attempted maintenance tasks, provide information regarding the statuses of components, resources, and/or equipment, provide information regarding the statuses of robots 1360 themselves, and/or report measurements of one or more aspects of ambient conditions in the data center. The embodiments are not limited to these examples. In some embodiments, coordination engine 1572 may consider various types of information in conjunction with automated maintenance coordination/management. As reflected in FIG. 15, examples of such types of information may include physical infrastructure information 1575, data center operations information 1576, maintenance task information 1577, and maintenance equipment information 1579. Physical infrastructure information 1575 may generally comprise information identifying equipment, devices, components, interconnects, physical resources, and/or other infrastructure elements that comprise portions of the physical infrastructure of data center 1300, and describing characteristics of such elements. Data center operations information 1576 may generally comprise information describing various aspects of ongoing operations within data center 1300. In some embodiments, for example, data center operations information 1576 may include information describing one or more workloads currently being processed in data center 1300. 
In various embodiments, data center operations information 1576 may include metrics characterizing one or more aspects of current operations in data center 1300. For example, in some embodiments, data center operations information 1576 may include performance metrics characterizing the relative level of performance currently being achieved in data center 1300, efficiency metrics characterizing the relative level of efficiency with which the physical resources of data center 1300 are being used to handle the current workloads, and utilization metrics generally indicative of current usage levels of various types of resources in data center 1300. In various embodiments, data center operations information 1576 may include telemetry data 1571, such as automation coordinator 1555 may receive via telemetry framework 1570 or from robots 1360. The embodiments are not limited in this context.

Maintenance task information 1577 may generally comprise information identifying and describing ongoing and pending maintenance tasks of data center 1300. Maintenance task information 1577 may also include information identifying and describing previously completed maintenance tasks. In various embodiments, maintenance task information 1577 may include a pending task queue 1578. Pending task queue 1578 may generally comprise information identifying a set of maintenance tasks that need to be performed in data center 1300.

Maintenance equipment information 1579 may generally comprise information identifying and describing automated maintenance equipment - such as robots 1360 - of data center 1300. In some embodiments, maintenance equipment information 1579 may include a candidate device pool 1580. Candidate device pool 1580 may generally comprise information identifying a set of robots 1360 that are currently available for use in data center 1300. The embodiments are not limited in this context.

In various embodiments, based on telemetry data 1571, automation coordinator 1555 may identify automated maintenance tasks to be performed in data center 1300 by robots 1360. For example, based on telemetry data 1571 indicating a high bit error rate at a DIMM, automation coordinator 1555 may determine that a robot 1360 should be assigned to replace that DIMM. In some embodiments, automation coordinator 1555 may use telemetry data 1571 to prioritize among automated maintenance tasks, such as tasks comprised in pending task queue 1578. For example, automation coordinator 1555 may use telemetry data 1571 to assess the respective expected performance impacts of multiple automated maintenance tasks in pending task queue 1578, and may assign out an automated maintenance task with the highest expected performance impact first. In some embodiments, in identifying and/or prioritizing among automated maintenance tasks, automation coordinator 1555 may consider any or all of physical infrastructure information 1575, data center operations information 1576, maintenance task information 1577, and maintenance equipment information 1579 in addition to - or in lieu of - telemetry data 1571.
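By way of illustration only, the following Python sketch shows one way such impact-based prioritization of a pending task queue might be realized. The class and field names (PendingTaskQueue, expected_impact, and so forth) are hypothetical conveniences for this sketch and are not drawn from the figures.

```python
import heapq

class PendingTaskQueue:
    """Min-heap keyed on negated impact so the highest-impact task pops first."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal-impact entries never compare dicts

    def add(self, task_id, expected_impact, action):
        # expected_impact would be estimated from telemetry data in practice.
        heapq.heappush(self._heap, (-expected_impact, self._counter,
                                    {"task_id": task_id, "action": action}))
        self._counter += 1

    def next_task(self):
        _, _, task = heapq.heappop(self._heap)
        return task

queue = PendingTaskQueue()
queue.add("T1", expected_impact=0.2, action="clean DIMM contacts")
queue.add("T2", expected_impact=0.9, action="replace memory sled")
print(queue.next_task()["task_id"])  # -> T2, the highest expected impact first
```

Here a heap is keyed on the negated impact estimate so that the highest-impact task is dequeued first; any comparable prioritization policy could be substituted.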
In a first example, automation coordinator 1555 may assign a low priority to an automated maintenance task involving replacement of a malfunctioning compute sled based on physical infrastructure information 1575 indicating that another sled in a different rack can be used as a substitute without need for replacing the malfunctioning compute sled. In a second example, automation coordinator 1555 may assign a high priority to an automated maintenance task involving replacing a malfunctioning memory sled based on data center operations information 1576 indicating that a scarcity of memory constitutes a performance bottleneck with respect to workloads being processed in data center 1300. In a third example, automation coordinator 1555 may determine not to add a new maintenance task to pending task queue 1578 based on a determination that a maintenance task already present in pending task queue 1578 may render the new maintenance task unnecessary and/or moot. In a fourth example, in determining an extent to which to prioritize an automated maintenance task that requires the use of particular robots 1360 featuring specialized capabilities, automation coordinator 1555 may consider maintenance equipment information 1579 indicating whether any robots 1360 featuring such specialized capabilities are currently available. The embodiments are not limited to these examples.

In various embodiments, based on telemetry data 1571, automation coordinator 1555 may control the positioning and/or movement of robots 1360 within data center 1300. For example, having used telemetry data 1571 to identify a region of data center 1300 within which a greater number of hardware failures have been and/or are expected to be observed, automation coordinator 1555 may position robots 1360 more densely within that identified region than within other regions of data center 1300. The embodiments are not limited in this context.

In some embodiments, in response to automated maintenance decisions - such as may be reached based on any or all of telemetry data 1571, physical infrastructure information 1575, data center operations information 1576, maintenance task information 1577, and maintenance equipment information 1579 - automation coordinator 1555 may send automation commands 1573 to robots 1360 in order to instruct robots 1360 to perform operations associated with automated maintenance tasks. For example, upon determining that a particular compute sled should be replaced, automation coordinator 1555 may send an automation command 1573 in order to instruct a robot 1360 to perform a sled replacement procedure to replace the sled. In various embodiments, automation coordinator 1555 may inform robots 1360 of various parameters characterizing assigned automated maintenance tasks by including such parameters in automation commands 1573. For instance, in the context of the preceding example, the automation command 1573 may contain fields specifying a sled ID uniquely identifying the sled to be replaced and a rack ID and/or sled space ID identifying the location of that sled within the data center, as well as analogous parameters associated with the replacement sled. The embodiments are not limited to this example.

It is worthy of note that in various embodiments, with respect to some aspects of automated maintenance operations, decision-making may be handled in a distributed - rather than centralized - fashion. In such embodiments, robots 1360 may make some automated maintenance decisions autonomously. In some such embodiments, as illustrated in FIG. 15, robots 1360 may perform such autonomous decision-making based on telemetry data 1571 received from telemetry framework 1570. In an example embodiment, a robot 1360 may determine based on analysis of telemetry data 1571 that a particular CPU is malfunctioning, and autonomously decide to replace that malfunctioning CPU. In various embodiments, some or all of the robots 1360 in data center 1300 may have access to any or all of physical infrastructure information 1575, data center operations information 1576, maintenance task information 1577, and maintenance equipment information 1579, and may consider such information as well in conjunction with autonomous decision-making. In various embodiments, distributed coordination functions may be implemented to enable some types of maintenance tasks to be completed via collaborative maintenance procedures involving cooperation between multiple robots. The embodiments are not limited in this context.
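As a minimal sketch of the autonomous decision-making just described, the following Python fragment shows a robot-side check that flags CPUs whose telemetry-reported error rates exceed a threshold. The record fields and the threshold value are illustrative assumptions, not part of the described embodiments.

```python
# Hypothetical telemetry records, one reading per CPU, as a robot might
# receive them from the telemetry framework. Field names are assumptions.
CPU_ERROR_THRESHOLD = 5  # corrected errors per hour (illustrative value)

telemetry = [
    {"cpu_id": "rack7/sled3/cpu0", "errors_per_hour": 1},
    {"cpu_id": "rack7/sled3/cpu1", "errors_per_hour": 12},
]

def identify_malfunctioning_cpus(readings, threshold=CPU_ERROR_THRESHOLD):
    """Return the CPUs whose error rate suggests replacement is warranted."""
    return [r["cpu_id"] for r in readings if r["errors_per_hour"] > threshold]

for cpu in identify_malfunctioning_cpus(telemetry):
    # An autonomously operating robot would schedule the replacement itself
    # rather than waiting for a central automation coordinator.
    print(f"autonomous decision: replace {cpu}")
```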
FIG. 16 illustrates an example of an operating environment 1600 that may be representative of various embodiments. In operating environment 1600, in conjunction with automated maintenance operations in data center 1300, robots 1360 may provide automation coordinator 1555 with feedback 1574 that includes one or more of position data 1681, assistance data 1682, and environmental data 1683. The embodiments are not limited to these examples. It is worthy of note that in some embodiments, although not depicted in FIG. 16, robots 1360 may gather various types of telemetry data 1571 in conjunction with automated maintenance operations and include such gathered telemetry data 1571 in the feedback 1574 provided to automation coordinator 1555. The embodiments are not limited in this context.

Position data 1681 may generally comprise data for use by automation coordinator 1555 to determine/track the positions and/or movements of robots 1360 within data center 1300. In some embodiments, position data 1681 may comprise data associated with an indoor positioning system. In some such embodiments, the indoor positioning system may be a radio-based system, such as a Wi-Fi-based or Bluetooth-based indoor positioning system. In some other embodiments, a non-radio based positioning system, such as a magnetic, optical, or inertial indoor positioning system may be used. In various embodiments, the indoor positioning system may be a hybrid system, such as one that combines two or more of radio-based, magnetic, optical, and inertial indoor positioning techniques. The embodiments are not limited in this context.

Assistance data 1682 may generally comprise data for use by automation coordinator 1555 to provide human maintenance personnel with information aiding them in the identification and/or performance of manual maintenance tasks. In various embodiments, a given robot 1360 may generate assistance data 1682 in response to identifying a maintenance issue that it cannot correct/resolve in an automated fashion. For instance, after identifying a component that needs to be replaced and determining that it cannot perform the replacement itself, a robot 1360 may take a picture of the component and provide assistance data 1682 comprising that picture to automation coordinator 1555. Automation coordinator 1555 may then cause the picture to be presented on a display for reference by human maintenance personnel in order to aid visual identification of the component to be replaced. The embodiments are not limited to this example.
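A minimal sketch of how assistance data of this kind might be packaged and routed appears below; the message fields and handler are hypothetical and purely illustrative.

```python
import base64

def build_assistance_data(component_id, image_bytes, note):
    """Package a photo of a component the robot cannot service itself.

    The field names are illustrative; an actual feedback format would be
    implementation-specific.
    """
    return {
        "type": "assistance",
        "component_id": component_id,
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
        "note": note,
    }

def handle_feedback(msg):
    # A coordinator might route assistance data to a technician-facing display.
    if msg["type"] == "assistance":
        print(f"display to technician: {msg['component_id']} - {msg['note']}")

handle_feedback(build_assistance_data(
    "dimm-42", b"\x89PNG...", "needs manual replacement; robot lacks clearance"))
```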
In some embodiments, the performance and/or reliability of various types of hardware in data center 1300 may potentially be affected by one or more aspects of the ambient conditions within data center 1300, such as ambient temperature, pressure, humidity, and air quality. For example, a rate at which corrosion occurs on metallic contacts of components such as DIMMs may depend on the ambient temperature and humidity. In various embodiments, it may thus be desirable to monitor various types of environmental parameters at various locations during ongoing operations of data center 1300.

In some embodiments, robots 1360 may be configured to support environmental condition monitoring by measuring one or more aspects of ambient conditions within the data center during ongoing operations and providing those collected measurements to automation coordinator 1555 in the form of environmental data 1683. In various embodiments, robots 1360 may collect environmental data 1683 using sensors or sensor arrays comprising sensory elements such as sensory elements 1464 of FIG. 14. Examples of conditions/parameters that robots 1360 may measure and report to automation coordinator 1555 in the form of environmental data 1683 may include - without limitation - temperature, pressure, humidity, and air quality. In some embodiments, in conjunction with providing environmental condition measurements in the form of environmental data 1683, robots 1360 may also provide corresponding position data 1681 that indicates the locations at which the associated measurements were performed. The embodiments are not limited in this context.

In various embodiments, access to dynamic, continuous, and location-specific measurements of such parameters may enable a data center operator to predict failures, dynamically configure systems for best performance, and dynamically move resources for data center optimization. In some embodiments, based on environmental data 1683 provided by robots 1360, a data center operator may be able to predict accelerated failure of parts versus standard factory specification and replace parts earlier (or move to lower priority tasks). In various embodiments, environmental data 1683 provided by robots 1360 may enable a data center operator to initiate service tickets ahead of predicted failure timelines. For example, a cleaning of DIMM contacts may be initiated in order to avoid corrosion build-up to the level where failures start occurring. In some embodiments, environmental data 1683 provided by robots 1360 may enable a data center operator to continuously and dynamically configure servers based on, for example, altitude, pressure and other parameters that may be important to such things as fan speeds and cooling configurations, which in turn may affect performance of a server in a given environment and temperature. In various embodiments, environmental data 1683 provided by robots 1360 may enable a data center operator to detect and move data center resources automatically from zones/locations of the data center that may be affected by equipment failures or environment variations detected by the robots' sensors. For example, based on environmental data 1683 indicating an excessive temperature or air quality deterioration in a particular data center region, servers and/or other resources may be relocated from the affected region to a different region. The embodiments are not limited to these examples.
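The following Python sketch illustrates, under assumed sensor and positioning interfaces, how a robot might pair each environmental measurement with position data; the function names and fields are invented for this sketch.

```python
import random
import time

def read_sensors():
    """Stand-in for reading a robot's environmental sensor array."""
    return {
        "temperature_c": round(random.uniform(18, 35), 1),
        "humidity_pct": round(random.uniform(20, 60), 1),
        "pressure_hpa": round(random.uniform(990, 1020), 1),
    }

def current_position():
    """Stand-in for an indoor positioning fix (e.g., radio or optical)."""
    return {"x_m": 12.4, "y_m": 3.7, "zone": "rack-row-B"}

def environmental_report():
    # Pair each measurement with where and when it was taken, so an operator
    # can build a location-specific picture of ambient conditions over time.
    return {
        "timestamp": time.time(),
        "position": current_position(),
        "readings": read_sensors(),
    }

print(environmental_report())
```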
FIG. 17 illustrates an example of an operating environment 1700 that may be representative of the implementation of an automated data center maintenance scheme according to some embodiments. In operating environment 1700, a robot 1760 may perform one or more automated maintenance tasks at a rack 1702. According to some embodiments, robot 1760 may be representative of a robot 1360 that performs operations associated with automated data center maintenance in data center 1300 of FIGs. 13, 15, and 16. In various embodiments, robot 1760 may be implemented using automated maintenance device 1400 of FIG. 14. In various embodiments, as reflected by the dashed line in FIG. 17, robot 1760 may move to a location of rack 1702 from another location in order to perform one or more automated maintenance tasks at rack 1702. In some embodiments, robot 1760 may perform one or more such tasks based on automation commands 1773 received from automation coordinator 1555. In various embodiments, robot 1760 may additionally or alternatively perform one or more such tasks autonomously, without intervention on the part of automation coordinator 1555. The embodiments are not limited in this context.

In some embodiments, robot 1760 may perform one or more automated maintenance tasks involving the installation and/or removal of sleds at racks of a data center such as data center 1300. In various embodiments, for example, robot 1760 may be operative to install a sled 1704 at rack 1702. In some embodiments, robot 1760 may install sled 1704 by inserting it into an available sled space of rack 1702. In various embodiments, in conjunction with inserting sled 1704, robot 1760 may grip particular physical elements designed to accommodate robotic manipulation/handling. In some embodiments, robot 1760 may use image recognition and/or other location techniques to locate the elements to be gripped, and may insert sled 1704 while gripping those elements. In various embodiments, rather than installing sled 1704, robot 1760 may instead remove sled 1704 from rack 1702 and install a replacement sled 1704B. In some embodiments, robot 1760 may install replacement sled 1704B in a same sled space as was occupied by sled 1704, once it has removed sled 1704. In various other embodiments, robot 1760 may install replacement sled 1704B in a different sled space, such that it does not need to remove sled 1704 before installing replacement sled 1704B. The embodiments are not limited in this context.

In some embodiments, robot 1760 may perform one or more automated maintenance tasks involving upkeep, repair, and/or replacement of particular components on sleds of a data center such as data center 1300. In various embodiments, robot 1760 may be used to power up a component 1706 in accordance with a scheme for powering up components in the data center on a periodic basis in order to improve the reliability of such components. In some embodiments, for example, storage and/or memory components may tend to malfunction when left idle for excessive periods of time, and thus robots may be used to power up such components according to a defined cycle. In such an embodiment, robot 1760 may be operative to power up an appropriate component 1706 by plugging that component 1706 into a powered interface/slot. The embodiments are not limited to this example.

In various embodiments, robot 1760 may be operative to manipulate a given component 1706 in accordance with a scheme for automated upkeep of pooled memory resources of a data center. According to such a scheme, robots may be used to assess/troubleshoot apparently malfunctioning memory resources such as DIMMs. In some embodiments, according to such a scheme, robot 1760 may identify a component 1706 comprising a memory resource such as a DIMM, remove that component 1706 from a slot on sled 1704, and clean the component 1706. Robot 1760 may then test the component 1706 to determine whether the issue has been resolved, and may determine to pull sled 1704 for "back-room" servicing if it finds that the problem persists. In various embodiments, robot 1760 may test the component 1706 after reinserting it into its slot on sled 1704. In some other embodiments, robot 1760 may be configured with a testing slot into which it can insert the component 1706 for the purpose of testing. The embodiments are not limited in this context.
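A schematic rendering of that remove/clean/retest workflow appears below. The robot action names (remove_module, test_module, and so on) are assumptions standing in for whatever manipulation primitives a given automated maintenance device exposes.

```python
def service_dimm(robot, sled, slot):
    """Assess an apparently malfunctioning DIMM: remove, clean, reinsert, retest."""
    dimm = robot.remove_module(sled, slot)
    robot.clean_module(dimm)
    robot.insert_module(sled, slot, dimm)
    if robot.test_module(sled, slot):
        return "resolved"
    # Problem persists after cleaning: pull the sled for back-room servicing.
    robot.flag_sled(sled, reason="DIMM failure persists after cleaning")
    return "escalated"

class StubRobot:
    """Minimal stand-in so the sketch can be exercised without hardware."""
    def remove_module(self, sled, slot): return f"dimm@{sled}/{slot}"
    def clean_module(self, dimm): pass
    def insert_module(self, sled, slot, dimm): pass
    def test_module(self, sled, slot): return False  # pretend the fault persists
    def flag_sled(self, sled, reason): print(f"pull {sled}: {reason}")

print(service_dimm(StubRobot(), "sled-1704", "slot-3"))  # -> escalated
```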
FIG. 18 illustrates an example of an operating environment 1800 that may be representative of the implementation of an automated data center maintenance scheme according to some embodiments. In operating environment 1800, a robot 1860 may perform automated CPU cache servicing for a sled 1804 at a rack 1802. According to some embodiments, robot 1860 may be representative of a robot 1360 that performs operations associated with automated data center maintenance in data center 1300 of FIGs. 13, 15, and 16. In various embodiments, robot 1860 may be implemented using automated maintenance device 1400 of FIG. 14. In some embodiments, as reflected by the dashed line in FIG. 18, robot 1860 may move to a location of rack 1802 from another location in order to perform the automated CPU cache servicing for sled 1804. In various embodiments, robot 1860 may perform such automated CPU cache servicing based on automation commands 1873 received from automation coordinator 1555. In some other embodiments, robot 1860 may perform the automated CPU cache servicing autonomously, without intervention on the part of automation coordinator 1555. The embodiments are not limited in this context.

As shown in FIG. 18, sled 1804 may comprise components 1806 that include a CPU 1806A, cache memory 1806B for the CPU 1806A, and a heatsink 1806C for the CPU 1806A. In various embodiments, cache memory 1806B may underlie CPU 1806A, and CPU 1806A may underlie heatsink 1806C. In some embodiments, cache memory 1806B may comprise one or more cache memory modules. In various embodiments, the automated CPU cache servicing that robot 1860 performs in operating environment 1800 may involve replacing cache memory 1806B. For example, in some embodiments, cache memory 1806B may comprise one or more cache memory modules that robot 1860 removes from sled 1804 and replaces with one or more replacement cache modules. In various embodiments, the determination to perform automated CPU cache servicing and thus replace cache memory 1806B may be based on a determination that cache memory 1806B is not functioning properly or is outdated. For example, in some embodiments, automation coordinator 1555 may determine - based on telemetry data 1571 of FIG. 15 - that cache memory 1806B is not functioning, and may use robot 1860 to replace cache memory 1806B in response to that determination. The embodiments are not limited to this example.

In various embodiments, according to a procedure for automated CPU cache servicing, robot 1860 may remove CPU 1806A and heat sink 1806C from sled 1804 in order to gain physical access to cache memory 1806B. In some embodiments, robot 1860 may remove sled 1804 from rack 1802 prior to removing CPU 1806A and heat sink 1806C from sled 1804.
In various other embodiments, robot 1860 may remove CPU 1806A and heat sink 1806C from sled 1804 while sled 1804 remains seated within a sled space of rack 1802. In some embodiments, robot 1860 may first remove heat sink 1806C, and then remove CPU 1806A. In various other embodiments, robot 1860 may remove both heat sink 1806C and CPU 1806A simultaneously and/or as a collective unit (i.e., without removing heat sink 1806C from CPU 1806A). In some embodiments, after replacing cache memory 1806B, robot 1860 may reinstall CPU 1806A and heat sink 1806C upon sled 1804, which it may then reinsert into a sled space of rack 1802 in embodiments in which it was previously removed. The embodiments are not limited in this context.
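The following Python sketch lays out one possible ordering of those CPU cache servicing steps. It assumes an illustrative robot interface and is a sketch of the sequence, not a definitive implementation.

```python
def replace_cpu_cache(robot, sled, remove_sled_first=True):
    """One possible ordering of the automated CPU cache servicing steps.

    The robot action names are assumptions used for illustration only.
    """
    if remove_sled_first:
        robot.remove_sled(sled)
    # The heat sink and CPU may also be removed as a collective unit;
    # this sketch removes them separately, heat sink first.
    robot.remove_heatsink(sled)
    robot.remove_cpu(sled)
    robot.replace_cache_modules(sled)  # swap in replacement cache modules
    robot.install_cpu(sled)
    robot.install_heatsink(sled)
    if remove_sled_first:
        robot.insert_sled(sled)  # reseat the sled in its sled space
```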
FIG. 19 illustrates an example of an operating environment 1900 that may be representative of the implementation of an automated data center maintenance scheme according to some embodiments. In operating environment 1900, a robot 1960 may perform automated storage and/or transfer of a compute state of a compute sled 1904 at a rack 1902. According to some embodiments, robot 1960 may be representative of a robot 1360 that performs operations associated with automated data center maintenance in data center 1300 of FIGs. 13, 15, and 16. In various embodiments, robot 1960 may be implemented using automated maintenance device 1400 of FIG. 14. In some embodiments, as reflected by the dashed line in FIG. 19, robot 1960 may move to a location of rack 1902 from another location in order to perform the automated storage and/or transfer of the compute state of compute sled 1904. In various embodiments, robot 1960 may perform such automated compute state storage and/or transfer based on automation commands 1973 received from automation coordinator 1555. In some other embodiments, robot 1960 may perform the automated compute state storage and/or transfer autonomously, without intervention on the part of automation coordinator 1555. The embodiments are not limited in this context.

As shown in FIG. 19, compute sled 1904 may comprise components 1906 that include one or more CPUs 1906A and a connector 1906B. In various embodiments, compute sled 1904 may comprise two CPUs 1906A. In some other embodiments, compute sled 1904 may comprise more than two CPUs 1906A, or only a single CPU 1906A. Connector 1906B may generally comprise a slot, socket, or other connective component designed to accept a memory daughter card for use to store a compute state of compute sled 1904. In various embodiments, compute sled 1904 may comprise two CPUs 1906A and connector 1906B may be located between those two CPUs 1906A. The embodiments are not limited in this context.

In some embodiments, according to a procedure for automated compute state storage and/or transfer, robot 1960 may insert a memory card 1918 into connector 1906B. In various embodiments, robot 1960 may remove compute sled 1904 from rack 1902 prior to inserting memory card 1918 into connector 1906B. In some other embodiments, robot 1960 may insert memory card 1918 into connector 1906B while compute sled 1904 remains seated within a sled space of rack 1902. In still other embodiments, memory card 1918 may be present and coupled with connector 1906B prior to initiation of the automated compute state storage and/or transfer procedure. In various embodiments, memory card 1918 may comprise a set of physical memory resources 1906C. In some embodiments, once memory card 1918 is inserted into/coupled with connector 1906B, a compute state 1984 of compute sled 1904 may be stored on memory card 1918 using one or more of the physical memory resources 1906C comprised thereon. In various embodiments, compute state 1984 may include respective states of each CPU 1906A comprised on compute sled 1904. In some embodiments, compute state 1984 may also include states of one or more memory resources comprised on compute sled 1904. The embodiments are not limited in this context.

In various embodiments, robot 1960 may perform an automated compute state storage/transfer procedure in order to preserve the compute state of compute sled 1904 during upkeep/repair of compute sled 1904. In some such embodiments, once compute state 1984 is stored on memory card 1918, robot 1960 may remove memory card 1918 from connector 1906B, perform upkeep/repair of compute sled 1904, reinsert memory card 1918 into connector 1906B, and then restore compute sled 1904 to the compute state 1984 stored on memory card 1918. For instance, in an example embodiment, robot 1960 may remove a CPU 1906A from a socket on compute sled 1904 and insert a replacement CPU into that socket, and then cause compute sled 1904 to be restored to the compute state 1984 stored on memory card 1918. In various other embodiments, robot 1960 may perform an automated compute state storage/transfer procedure in order to replace compute sled 1904 with another compute sled. In some such embodiments, once compute state 1984 is stored on memory card 1918, robot 1960 may remove memory card 1918 from connector 1906B, insert memory card 1918 into a connector on a replacement compute sled, insert the replacement compute sled into a sled space of rack 1902 or another rack, and cause the replacement compute sled to realize the compute state 1984 stored on memory card 1918. The embodiments are not limited in this context.
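One possible rendering of the compute state transfer sequence, assuming illustrative robot and platform primitives, is sketched below.

```python
def transfer_compute_state(robot, old_sled, new_sled, card):
    """Preserve a compute sled's state across replacement via a memory card.

    Assumes hypothetical robot primitives and that the platform supports
    saving/restoring CPU and memory state to/from the daughter card.
    """
    robot.insert_card(old_sled, card)
    robot.save_state(old_sled, card)    # compute state -> memory card
    robot.remove_card(old_sled, card)
    robot.insert_card(new_sled, card)
    robot.insert_sled(new_sled)         # seat the replacement sled in a rack
    robot.restore_state(new_sled, card) # replacement sled realizes saved state
```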
FIG. 20 illustrates an example of an operating environment 2000. According to various embodiments, operating environment 2000 may be representative of the implementation of an automated data center maintenance scheme according to which some aspects of automated maintenance operations involve collaboration/cooperation between robots. In operating environment 2000, in conjunction with performing a collaborative maintenance task, robots 2060A and 2060B may coordinate with each other by exchanging interdevice coordination information 2086A and 2086B via one or more communication links 2085. Communication links 2085 may comprise wireless communication links, wired communication links, or a combination of both. According to some embodiments, robots 2060A and 2060B may be representative of robots 1360 that perform operations associated with automated data center maintenance in data center 1300 of FIGs. 13, 15, and 16. In various embodiments, one or both of robots 2060A and 2060B may be implemented using automated maintenance device 1400 of FIG. 14. It is worthy of note that the absence of automation coordinator 1555 in FIG. 20 is not intended to indicate that no aspects of automated maintenance would/could be centrally coordinated in operating environment 2000. It is both possible and contemplated that in various embodiments, distributed coordination may be implemented for some aspects of automated maintenance in a data center in which other aspects of automated maintenance are centrally coordinated by an entity such as automation coordinator 1555. For example, in operating environment 2000, a central automation coordinator may determine the need for performance of the collaborative maintenance task, select robots 2060A and 2060B as the robots that are to perform the collaborative maintenance task, and send automation commands to cause robots 2060A and 2060B to initiate the collaborative maintenance task. Robots 2060A and 2060B may then coordinate directly with each other in conjunction with performing the physical actions necessary to complete the collaborative maintenance task. The embodiments are not limited to this example.

FIG. 21 illustrates an example of a logic flow 2100 that may be representative of the implementation of one or more of the disclosed techniques according to some embodiments. For example, logic flow 2100 may be representative of operations that automation coordinator 1555 may perform in any of operating environments 1500, 1600, 1700, 1800, 1900, and 2000 of FIGs. 15-20 according to various embodiments. As shown in FIG. 21, at 2102, a maintenance task that is to be performed in a data center may be identified. For example, in operating environment 1500 of FIG. 15, automation coordinator 1555 may identify a maintenance task that is to be performed in data center 1300.

At 2104, a determination may be made to initiate automated performance of the maintenance task. For example, having added an identified maintenance task to pending task queue 1578 in operating environment 1500 of FIG. 15, automation coordinator 1555 may determine at a subsequent point in time that that maintenance task constitutes the highest priority task in the pending task queue 1578 and thus that its performance should be initiated. In another example, rather than adding the identified maintenance task to pending task queue 1578, automation coordinator 1555 may determine to initiate performance of the maintenance task immediately after it is identified.

At 2106, an automated maintenance device to which to assign the maintenance task may be selected. For example, among one or more robots 1360 comprised in candidate device pool 1580 in operating environment 1500 of FIG. 15, automation coordinator 1555 may select a robot 1360 to which to assign an identified maintenance task. It is worthy of note that in some embodiments, the identified maintenance task may be handled by multiple robots according to a collaborative maintenance procedure. In such cases, more than one automated maintenance device may be selected at 2106 as an assignee of the maintenance task. For example, in operating environment 1500 of FIG. 15, automation coordinator 1555 may select multiple robots 1360 among those comprised in candidate device pool 1580 that are to work together according to a collaborative maintenance procedure to complete a maintenance task.

At 2108, one or more automation commands may be sent to cause an automated maintenance device selected at 2106 to perform an automated maintenance procedure associated with the maintenance task. For example, in operating environment 1500 of FIG. 15, automation coordinator 1555 may send one or more automation commands 1573 to cause a robot 1360 to perform an automated maintenance procedure associated with a maintenance task to which that robot 1360 has been allocated. In some embodiments in which multiple automated maintenance devices are selected at 2106 as assignees of the same maintenance task, automation commands may be sent to multiple automated maintenance devices at 2108. For example, in operating environment 1500 of FIG. 15, automation coordinator 1555 may send respective automation command(s) 1573 to multiple robots 1360 to cause those robots to perform a collaborative maintenance procedure associated with the maintenance task to be completed. The embodiments are not limited to these examples.
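A compact Python sketch of logic flow 2100 follows; the injected callables and the command fields are illustrative stand-ins for implementation-specific policy, not a defined interface.

```python
def coordinate(identify_task, should_initiate, candidate_pool, send_command):
    """Schematic rendering of logic flow 2100: identify, decide, select, send.

    The callables are injected so the sketch stays policy-neutral; all names
    are assumptions made for illustration.
    """
    task = identify_task()              # 2102: identify a maintenance task
    if not should_initiate(task):       # 2104: decide whether to initiate it
        return None
    # 2106: select one or more devices; collaborative tasks take several.
    devices = candidate_pool.select(task)
    for device in devices:              # 2108: send automation command(s)
        send_command(device, {"task_id": task["id"], "action": task["action"]})
    return devices
```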
FIG. 22 illustrates an example of a logic flow 2200 that may be representative of the implementation of one or more of the disclosed techniques according to some embodiments. For example, logic flow 2200 may be representative of operations that may be performed in various embodiments by a robot such as a robot 1360 in one or both of operating environments 1500 and 1600 of FIGs. 15 and 16 and/or any of robots 1760, 1860, 1960, 2060A, and 2060B in operating environments 1700, 1800, 1900, and 2000 of FIGs. 17-20. As shown in FIG. 22, one or more automation commands may be received from an automation coordinator of a data center at 2202. For example, in operating environment 1500 of FIG. 15, a robot 1360 may receive one or more automation commands 1573 from automation coordinator 1555.

At 2204, an automated maintenance procedure may be identified based on the one or more automation commands received at 2202. For example, based on one or more automation commands 1573 received from automation coordinator 1555 in operating environment 1500 of FIG. 15, a robot 1360 may identify an automated maintenance procedure that it is to perform. The automated maintenance procedure identified at 2204 may then be performed at 2206. In various embodiments, the identification of the automated maintenance procedure at 2204 may be based on a maintenance task code that is comprised in at least one of the received automation commands, and is defined to correspond to a particular automated maintenance procedure. For example, based on a maintenance task code comprised in an automation command 1573 received from automation coordinator 1555, a robot 1360 in operating environment 1500 of FIG. 15 may identify an automated DIMM testing procedure as an automated maintenance procedure to be performed. In various embodiments, the one or more automation commands received at 2202 may collectively contain one or more maintenance task parameters specifying particular details of the automated maintenance task, and such details may also be identified at 2204. For instance, in the context of the preceding example, the robot 1360 may identify - based on maintenance task parameters comprised in one or more automation commands 1573 received from automation coordinator 1555 - details such as a physical resource ID of a DIMM to be tested, an identity and location of a sled on which that DIMM resides, and an identity of a particular DIMM slot on that sled that currently houses the DIMM. The embodiments are not limited to these examples.
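The task-code-driven dispatch described above might be sketched as follows; the task code, parameter names, and handler are hypothetical examples rather than a defined command format.

```python
# Hypothetical mapping from maintenance task codes to procedures; the codes,
# parameter names, and handlers are illustrative assumptions.
def test_dimm(params):
    print(f"testing DIMM {params['resource_id']} in {params['dimm_slot']} "
          f"on sled {params['sled_id']}")

PROCEDURES = {
    "DIMM_TEST": test_dimm,
}

def handle_automation_command(command):
    """Identify the procedure from the task code (2204) and perform it (2206)."""
    procedure = PROCEDURES[command["task_code"]]
    procedure(command["params"])

handle_automation_command({
    "task_code": "DIMM_TEST",
    "params": {"resource_id": "dimm-17", "sled_id": "sled-1704",
               "dimm_slot": "slot-2"},
})
```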
FIG. 23 illustrates an example of a logic flow 2300 that may be representative of the implementation of one or more of the disclosed techniques according to some embodiments. For example, logic flow 2300 may be representative of operations that may be performed by robot 2060A or robot 2060B in operating environment 2000 of FIG. 20. As shown in FIG. 23, a collaborative maintenance procedure that is to be performed in a data center may be identified at an automated maintenance device at 2302. For example, in operating environment 2000 of FIG. 20, robot 2060A may determine that a collaborative CPU replacement procedure is to be performed. In some embodiments, the identification of the collaborative maintenance procedure at 2302 may be based on one or more automation commands received by the automated maintenance device from a centralized automation coordinator such as automation coordinator 1555. In various other embodiments, the identification of the collaborative maintenance procedure at 2302 may be performed autonomously. For example, in operating environment 1500 of FIG. 15, a robot 1360 may determine based on analysis of telemetry data 1571 that a particular CPU is malfunctioning, and may then identify a collaborative maintenance procedure to be performed in order to replace that malfunctioning CPU. The embodiments are not limited to this example.

A second automated maintenance device with which to collaborate during performance of the collaborative maintenance procedure may be identified at 2304, and interdevice coordination information may be sent to the second automated maintenance device at 2306 in order to initiate the collaborative maintenance procedure. For example, in operating environment 2000 of FIG. 20, robot 2060A may determine that it is to collaborate with robot 2060B in conjunction with a collaborative CPU replacement procedure, and may send interdevice coordination information 2086A to robot 2060B in order to initiate that collaborative CPU replacement procedure. In some embodiments, the identification of the second automated maintenance device may be based on information received from a centralized automation coordinator such as automation coordinator 1555. For example, in some embodiments, a centralized automation coordinator may be responsible for selecting the particular robots that are to work together to perform the collaborative maintenance procedure, and the identity of the second automated maintenance device may be indicated by a parameter comprised in an automation command received from the centralized automation coordinator. In other embodiments, the identification performed at 2304 may correspond to an autonomous selection of the second automated maintenance device. For example, in operating environment 1500 of FIG. 15, a first robot 1360 may select a second robot 1360 that is comprised among those in candidate device pool 1580 as the second automated maintenance device that is to participate in the collaborative maintenance procedure. The embodiments are not limited to these examples.
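A toy model of this distributed initiation of a collaborative procedure is sketched below; real robots would exchange interdevice coordination information over wireless or wired communication links, which this sketch replaces with a direct method call, and all names are illustrative.

```python
class Robot:
    """Toy model of logic flow 2300's distributed coordination steps."""
    def __init__(self, name):
        self.name = name

    def initiate_collaboration(self, peer, procedure):
        # 2302/2304: procedure identified and collaborating peer selected.
        # 2306: send interdevice coordination information to initiate it.
        peer.receive_coordination({"from": self.name,
                                   "procedure": procedure,
                                   "role": "assist"})

    def receive_coordination(self, info):
        print(f"{self.name}: joining {info['procedure']} "
              f"initiated by {info['from']} as {info['role']}")

robot_a, robot_b = Robot("2060A"), Robot("2060B")
robot_a.initiate_collaboration(robot_b, "collaborative CPU replacement")
```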
FIG. 24A illustrates an embodiment of a storage medium 2400. Storage medium 2400 may comprise any computer-readable storage medium or machine-readable storage medium, such as an optical, magnetic or semiconductor storage medium. In some embodiments, storage medium 2400 may comprise a non-transitory storage medium. In various embodiments, storage medium 2400 may comprise an article of manufacture. In some embodiments, storage medium 2400 may store computer-executable instructions, such as computer-executable instructions to implement logic flow 2100 of FIG. 21. Examples of a computer-readable storage medium or machine-readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of computer-executable instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. The embodiments are not limited to these examples.

FIG. 24B illustrates an embodiment of a storage medium 2450. Storage medium 2450 may comprise any computer-readable storage medium or machine-readable storage medium, such as an optical, magnetic or semiconductor storage medium. In some embodiments, storage medium 2450 may comprise a non-transitory storage medium. In various embodiments, storage medium 2450 may comprise an article of manufacture. According to some embodiments, storage medium 2450 may be representative of a memory/storage element 1467 comprised in automated maintenance device 1400 of FIG. 14. In some embodiments, storage medium 2450 may store computer-executable instructions, such as computer-executable instructions to implement one or both of logic flow 2200 of FIG. 22 and logic flow 2300 of FIG. 23. Examples of a computer-readable storage medium or machine-readable storage medium and of computer-executable instructions may include any of the respective examples identified above in reference to storage medium 2400 of FIG. 24A. The embodiments are not limited to these examples.

FIG. 25 illustrates an embodiment of an exemplary computing architecture 2500 that may be suitable for implementing various embodiments as previously described. In various embodiments, the computing architecture 2500 may comprise or be implemented as part of an electronic device. In some embodiments, the computing architecture 2500 may be representative, for example, of a computing device suitable for use in conjunction with implementation of one or more of robots 1360, 1760, 1860, 1960, 2060A, and 2060B, automated maintenance device 1400, automation coordinator 1555, and logic flows 2100, 2200, and 2300. The embodiments are not limited in this context.

As used in this application, the terms "system" and "component" and "module" are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by the exemplary computing architecture 2500. For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers. Further, components may be communicatively coupled to each other by various types of communications media to coordinate operations. The coordination may involve the uni-directional or bi-directional exchange of information. For instance, the components may communicate information in the form of signals communicated over the communications media. The information can be implemented as signals allocated to various signal lines. In such allocations, each message may be a signal. Further embodiments, however, may alternatively employ data messages. Such data messages may be sent across various connections.
Exemplary connections include parallel interfaces, serial interfaces, and bus interfaces.

The computing architecture 2500 includes various common computing elements, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components, power supplies, and so forth. The embodiments, however, are not limited to implementation by the computing architecture 2500.

As shown in FIG. 25, according to computing architecture 2500, a computer 2502 comprises a processing unit 2504, a system memory 2506 and a system bus 2508. In some embodiments, computer 2502 may comprise a server. In some embodiments, computer 2502 may comprise a client. The processing unit 2504 can be any of various commercially available processors, including without limitation an AMD® Athlon®, Duron® and Opteron® processors; ARM® application, embedded and secure processors; IBM® and Motorola® DragonBall® and PowerPC® processors; IBM and Sony® Cell processors; Intel® Celeron®, Core (2) Duo®, Itanium®, Pentium®, Xeon®, and XScale® processors; and similar processors. Dual microprocessors, multi-core processors, and other multiprocessor architectures may also be employed as the processing unit 2504.

The system bus 2508 provides an interface for system components including, but not limited to, the system memory 2506 to the processing unit 2504. The system bus 2508 can be any of several types of bus structure that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures. Interface adapters may connect to the system bus 2508 via a slot architecture. Example slot architectures may include without limitation Accelerated Graphics Port (AGP), Card Bus, (Extended) Industry Standard Architecture ((E)ISA), Micro Channel Architecture (MCA), NuBus, Peripheral Component Interconnect (Extended) (PCI(X)), PCI Express, Personal Computer Memory Card International Association (PCMCIA), and the like.

The system memory 2506 may include various types of computer-readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, an array of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices (e.g., USB memory, solid state drives (SSD)), and any other type of storage media suitable for storing information. In the illustrated embodiment shown in FIG. 25, the system memory 2506 can include non-volatile memory 2510 and/or volatile memory 2512. A basic input/output system (BIOS) can be stored in the non-volatile memory 2510.
The computer 2502 may include various types of computer-readable storage media in the form of one or more lower speed memory units, including an internal (or external) hard disk drive (HDD) 2514, a magnetic floppy disk drive (FDD) 2516 to read from or write to a removable magnetic disk 2518, and an optical disk drive 2520 to read from or write to a removable optical disk 2522 (e.g., a CD-ROM or DVD). The HDD 2514, FDD 2516 and optical disk drive 2520 can be connected to the system bus 2508 by a HDD interface 2524, an FDD interface 2526 and an optical drive interface 2528, respectively. The HDD interface 2524 for external drive implementations can include at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies.

The drives and associated computer-readable media provide volatile and/or nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For example, a number of program modules can be stored in the drives and memory units 2510, 2512, including an operating system 2530, one or more application programs 2532, other program modules 2534, and program data 2536.

A user can enter commands and information into the computer 2502 through one or more wire/wireless input devices, for example, a keyboard 2538 and a pointing device, such as a mouse 2540. Other input devices may include microphones, infra-red (IR) remote controls, radio-frequency (RF) remote controls, game pads, stylus pens, card readers, dongles, fingerprint readers, gloves, graphics tablets, joysticks, keyboards, retina readers, touch screens (e.g., capacitive, resistive, etc.), trackballs, trackpads, sensors, styluses, and the like. These and other input devices are often connected to the processing unit 2504 through an input device interface 2542 that is coupled to the system bus 2508, but can be connected by other interfaces such as a parallel port, IEEE 1394 serial port, a game port, a USB port, an IR interface, and so forth.

A monitor 2544 or other type of display device may also be connected to the system bus 2508 via an interface, such as a video adaptor 2546. The monitor 2544 may be internal or external to the computer 2502. In addition to the monitor 2544, a computer typically includes other peripheral output devices, such as speakers, printers, and so forth.

The computer 2502 may operate in a networked environment using logical connections via wire and/or wireless communications to one or more remote computers, such as a remote computer 2548. The remote computer 2548 can be a workstation, a server computer, a router, a personal computer, portable computer, microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 2502, although, for purposes of brevity, only a memory/storage device 2550 is illustrated. The logical connections depicted include wire/wireless connectivity to a local area network (LAN) 2552 and/or larger networks, for example, a wide area network (WAN) 2554. Such LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, for example, the Internet.

When used in a LAN networking environment, the computer 2502 may be connected to the LAN 2552 through a wire and/or wireless communication network interface or adaptor 2556.
The adaptor 2556 can facilitate wire and/or wireless communications to the LAN 2552, which may also include a wireless access point disposed thereon for communicating with the wireless functionality of the adaptor 2556.

When used in a WAN networking environment, the computer 2502 can include a modem 2558, or may be connected to a communications server on the WAN 2554, or has other means for establishing communications over the WAN 2554, such as by way of the Internet. The modem 2558, which can be internal or external and a wire and/or wireless device, connects to the system bus 2508 via the input device interface 2542. In a networked environment, program modules depicted relative to the computer 2502, or portions thereof, can be stored in the remote memory/storage device 2550. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers can be used.

The computer 2502 may be operable to communicate with wire and wireless devices or entities using the IEEE 802 family of standards, such as wireless devices operatively disposed in wireless communication (e.g., IEEE 802.16 over-the-air modulation techniques). This includes at least Wi-Fi (or Wireless Fidelity), WiMax, and Bluetooth™ wireless technologies, among others. Thus, the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices. Wi-Fi networks use radio technologies called IEEE 802.11x (a, b, g, n, etc.) to provide secure, reliable, fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wire networks (which use IEEE 802.3-related media and functions).

FIG. 26 illustrates a block diagram of an exemplary communications architecture 2600 suitable for implementing various embodiments as previously described. The communications architecture 2600 includes various common communications elements, such as a transmitter, receiver, transceiver, radio, network interface, baseband processor, antenna, amplifiers, filters, power supplies, and so forth. The embodiments, however, are not limited to implementation by the communications architecture 2600. As shown in FIG. 26, the communications architecture 2600 includes one or more clients 2602 and servers 2604. The clients 2602 and the servers 2604 are operatively connected to one or more respective client data stores 2608 and server data stores 2610 that can be employed to store information local to the respective clients 2602 and servers 2604, such as cookies and/or associated contextual information. Any one of clients 2602 and/or servers 2604 may implement one or more of robots 1360, 1760, 1860, 1960, 2060A, and 2060B, automated maintenance device 1400, automation coordinator 1555, logic flows 2100, 2200, and 2300, and computing architecture 2500.

The clients 2602 and the servers 2604 may communicate information between each other using a communication framework 2606. The communications framework 2606 may implement any well-known communications techniques and protocols.
The communications framework 2606 may be implemented as a packet-switched network (e.g., public networks such as the Internet, private networks such as an enterprise intranet, and so forth), a circuit-switched network (e.g., the public switched telephone network), or a combination of a packet-switched network and a circuit-switched network (with suitable gateways and translators).

The communications framework 2606 may implement various network interfaces arranged to accept, communicate, and connect to a communications network. A network interface may be regarded as a specialized form of an input output interface. Network interfaces may employ connection protocols including without limitation direct connect, Ethernet (e.g., thick, thin, twisted pair 10/100/1000 Base T, and the like), token ring, wireless network interfaces, cellular network interfaces, IEEE 802.11a-x network interfaces, IEEE 802.16 network interfaces, IEEE 802.20 network interfaces, and the like. Further, multiple network interfaces may be used to engage with various communications network types. For example, multiple network interfaces may be employed to allow for the communication over broadcast, multicast, and unicast networks. Should processing requirements dictate a greater amount of speed and capacity, distributed network controller architectures may similarly be employed to pool, load balance, and otherwise increase the communicative bandwidth required by clients 2602 and the servers 2604. A communications network may be any one and the combination of wired and/or wireless networks including without limitation a direct interconnection, a secured custom connection, a private network (e.g., an enterprise intranet), a public network (e.g., the Internet), a Personal Area Network (PAN), a Local Area Network (LAN), a Metropolitan Area Network (MAN), an Operating Missions as Nodes on the Internet (OMNI), a Wide Area Network (WAN), a wireless network, a cellular network, and other communications networks.

As used herein, the term "circuitry" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group), and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable hardware components that provide the described functionality. In some embodiments, the circuitry may be implemented in, or functions associated with the circuitry may be implemented by, one or more software or firmware modules. In some embodiments, circuitry may include logic, at least partially operable in hardware. Embodiments described herein may be implemented into a system using any suitably configured hardware and/or software.

FIG. 27 illustrates an embodiment of a communication device 2700 that may implement one or more of robots 1360, 1760, 1860, 1960, 2060A, and 2060B, automated maintenance device 1400, automation coordinator 1555, logic flows 2100, 2200, and 2300, storage media 2400 and 2450, computing architecture 2500, clients 2602, and servers 2604. In various embodiments, device 2700 may comprise a logic circuit 2728. The logic circuit 2728 may include physical circuits to perform operations described for one or more of robots 1360, 1760, 1860, 1960, 2060A, and 2060B, automated maintenance device 1400, automation coordinator 1555, logic flows 2100, 2200, and 2300, computing architecture 2500, clients 2602, and servers 2604, for example. As shown in FIG.
27, device 2700 may include a radio interface 2710, baseband circuitry 2720, and computing platform 2730, although the embodiments are not limited to this configuration.

The device 2700 may implement some or all of the structure and/or operations for one or more of robots 1360, 1760, 1860, 1960, 2060A, and 2060B, automated maintenance device 1400, automation coordinator 1555, logic flows 2100, 2200, and 2300, storage media 2400 and 2450, computing architecture 2500, clients 2602, servers 2604, and logic circuit 2728 in a single computing entity, such as entirely within a single device. Alternatively, the device 2700 may distribute portions of the structure and/or operations for one or more of robots 1360, 1760, 1860, 1960, 2060A, and 2060B, automated maintenance device 1400, automation coordinator 1555, logic flows 2100, 2200, and 2300, storage media 2400 and 2450, computing architecture 2500, clients 2602, servers 2604, and logic circuit 2728 across multiple computing entities using a distributed system architecture, such as a client-server architecture, a 3-tier architecture, an N-tier architecture, a tightly-coupled or clustered architecture, a peer-to-peer architecture, a master-slave architecture, a shared database architecture, and other types of distributed systems. The embodiments are not limited in this context.

In one embodiment, radio interface 2710 may include a component or combination of components adapted for transmitting and/or receiving single-carrier or multi-carrier modulated signals (e.g., including complementary code keying (CCK), orthogonal frequency division multiplexing (OFDM), and/or single-carrier frequency division multiple access (SC-FDMA) symbols) although the embodiments are not limited to any specific over-the-air interface or modulation scheme. Radio interface 2710 may include, for example, a receiver 2712, a frequency synthesizer 2714, and/or a transmitter 2716. Radio interface 2710 may include bias controls, a crystal oscillator and/or one or more antennas 2718-f. In another embodiment, radio interface 2710 may use external voltage-controlled oscillators (VCOs), surface acoustic wave filters, intermediate frequency (IF) filters and/or RF filters, as desired. Due to the variety of potential RF interface designs, an expansive description thereof is omitted.

Baseband circuitry 2720 may communicate with radio interface 2710 to process receive and/or transmit signals and may include, for example, a mixer for down-converting received RF signals, an analog-to-digital converter 2722 for converting analog signals to digital form, a digital-to-analog converter 2724 for converting digital signals to analog form, and a mixer for up-converting signals for transmission. Further, baseband circuitry 2720 may include a baseband or physical layer (PHY) processing circuit 2726 for PHY link layer processing of respective receive/transmit signals. Baseband circuitry 2720 may include, for example, a medium access control (MAC) processing circuit 2727 for MAC/data link layer processing. Baseband circuitry 2720 may include a memory controller 2732 for communicating with MAC processing circuit 2727 and/or a computing platform 2730, for example, via one or more interfaces 2734.

In some embodiments, PHY processing circuit 2726 may include a frame construction and/or detection module, in combination with additional circuitry such as a buffer memory, to construct and/or deconstruct communication frames.
Alternatively or in addition, MAC processing circuit 2727 may share processing for certain of these functions or perform these processes independent of PHY processing circuit 2726. In some embodiments, MAC and PHY processing may be integrated into a single circuit.

The computing platform 2730 may provide computing functionality for the device 2700. As shown, the computing platform 2730 may include a processing component 2740. In addition to, or as an alternative to, the baseband circuitry 2720, the device 2700 may execute processing operations or logic for one or more of robots 1360, 1760, 1860, 1960, 2060A, and 2060B, automated maintenance device 1400, automation coordinator 1555, logic flows 2100, 2200, and 2300, storage media 2400 and 2450, computing architecture 2500, clients 2602, servers 2604, and logic circuit 2728 using the processing component 2740. The processing component 2740 (and/or PHY 2726 and/or MAC 2727) may comprise various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, processor circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software elements may include software components, programs, applications, computer programs, application programs, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints, as desired for a given implementation.

The computing platform 2730 may further include other platform components 2750. Other platform components 2750 include common computing elements, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components (e.g., digital displays), power supplies, and so forth.
Examples of memory units may include without limitation various types of computer readable and machine readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, an array of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices (e.g., USB memory, solid state drives (SSD)), and any other type of storage media suitable for storing information.

Device 2700 may be, for example, an ultra-mobile device, a mobile device, a fixed device, a machine-to-machine (M2M) device, a personal digital assistant (PDA), a mobile computing device, a smart phone, a telephone, a digital telephone, a cellular telephone, user equipment, an eBook reader, a handset, a one-way pager, a two-way pager, a messaging device, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a netbook computer, a handheld computer, a tablet computer, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, a multiprocessor system, a processor-based system, consumer electronics, programmable consumer electronics, a game device, a display, a television, a digital television, a set top box, a wireless access point, a base station, a node B, a subscriber station, a mobile subscriber center, a radio network controller, a router, a hub, a gateway, a bridge, a switch, a machine, or a combination thereof. Accordingly, functions and/or specific configurations of device 2700 described herein may be included or omitted in various embodiments of device 2700, as suitably desired.

Embodiments of device 2700 may be implemented using single input single output (SISO) architectures. However, certain implementations may include multiple antennas (e.g., antennas 2718) for transmission and/or reception using adaptive antenna techniques for beamforming or spatial division multiple access (SDMA) and/or using MIMO communication techniques.

The components and features of device 2700 may be implemented using any combination of discrete circuitry, application specific integrated circuits (ASICs), logic gates, and/or single chip architectures. Further, the features of device 2700 may be implemented using microcontrollers, programmable logic arrays, and/or microprocessors, or any combination of the foregoing where appropriate. It is noted that hardware, firmware, and/or software elements may be collectively or individually referred to herein as "logic" or "circuit."

It should be appreciated that the exemplary device 2700 shown in the block diagram of FIG. 27 may represent one functionally descriptive example of many potential implementations. Accordingly, division, omission, or inclusion of block functions depicted in the accompanying figures does not imply that the hardware components, circuits, software, and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments.
FIG. 28 illustrates an embodiment of a broadband wireless access system 2800. As shown in FIG. 28, broadband wireless access system 2800 may be an internet protocol (IP) type network comprising an internet 2810 type network or the like that is capable of supporting mobile wireless access and/or fixed wireless access to internet 2810. In one or more embodiments, broadband wireless access system 2800 may comprise any type of orthogonal frequency division multiple access (OFDMA)-based or single-carrier frequency division multiple access (SC-FDMA)-based wireless network, such as a system compliant with one or more of the 3GPP LTE Specifications and/or IEEE 802.16 Standards, and the scope of the claimed subject matter is not limited in these respects.

In the exemplary broadband wireless access system 2800, radio access networks (RANs) 2812 and 2818 are capable of coupling with evolved node Bs (eNBs) 2814 and 2820, respectively, to provide wireless communication between one or more fixed devices 2816 and internet 2810 and/or between one or more mobile devices 2822 and internet 2810. One example of a fixed device 2816 and a mobile device 2822 is device 2700 of FIG. 27, with the fixed device 2816 comprising a stationary version of device 2700 and the mobile device 2822 comprising a mobile version of device 2700. RANs 2812 and 2818 may implement profiles that are capable of defining the mapping of network functions to one or more physical entities on broadband wireless access system 2800. eNBs 2814 and 2820 may comprise radio equipment to provide RF communication with fixed device 2816 and/or mobile device 2822, such as described with reference to device 2700, and may comprise, for example, the PHY and MAC layer equipment in compliance with a 3GPP LTE Specification or an IEEE 802.16 Standard. eNBs 2814 and 2820 may further comprise an IP backplane to couple to internet 2810 via RANs 2812 and 2818, respectively, although the scope of the claimed subject matter is not limited in these respects.

Broadband wireless access system 2800 may further comprise a visited core network (CN) 2824 and/or a home CN 2826, each of which may be capable of providing one or more network functions including but not limited to proxy and/or relay type functions, for example, authentication, authorization and accounting (AAA) functions, dynamic host configuration protocol (DHCP) functions, or domain name service controls or the like, domain gateways such as public switched telephone network (PSTN) gateways or voice over internet protocol (VoIP) gateways, and/or internet protocol (IP) type server functions, or the like. However, these are merely examples of the types of functions that are capable of being provided by visited CN 2824 and/or home CN 2826, and the scope of the claimed subject matter is not limited in these respects. Visited CN 2824 may be referred to as a visited CN in the case where visited CN 2824 is not part of the regular service provider of fixed device 2816 or mobile device 2822, for example, where fixed device 2816 or mobile device 2822 is roaming away from its respective home CN 2826, or where broadband wireless access system 2800 is part of the regular service provider of fixed device 2816 or mobile device 2822 but where broadband wireless access system 2800 may be in another location or state that is not the main or home location of fixed device 2816 or mobile device 2822.
The embodiments are not limited in this context.

Fixed device 2816 may be located anywhere within range of one or both of eNBs 2814 and 2820, such as in or near a home or business, to provide home or business customer broadband access to internet 2810 via eNBs 2814 and 2820 and RANs 2812 and 2818, respectively, and home CN 2826. It is worthy of note that although fixed device 2816 is generally disposed in a stationary location, it may be moved to different locations as needed. Mobile device 2822 may be utilized at one or more locations if mobile device 2822 is within range of one or both of eNBs 2814 and 2820, for example. In accordance with one or more embodiments, operation support system (OSS) 2828 may be part of broadband wireless access system 2800 to provide management functions for broadband wireless access system 2800 and to provide interfaces between functional entities of broadband wireless access system 2800. Broadband wireless access system 2800 of FIG. 28 is merely one type of wireless network showing a certain number of the components of broadband wireless access system 2800, and the scope of the claimed subject matter is not limited in these respects.

FIG. 29 illustrates an embodiment of a wireless network 2900. As shown in FIG. 29, wireless network 2900 comprises an access point 2902 and wireless stations 2904, 2906, and 2908. Any one of access point 2902 and wireless stations 2904, 2906, and 2908 may potentially implement one or more of robots 1360, 1760, 1860, 1960, 2060A, and 2060B, automated maintenance device 1400, automation coordinator 1555, logic flows 2100, 2200, and 2300, storage media 2400 and 2450, computing architecture 2500, clients 2602, servers 2604, and communication device 2700.

In various embodiments, wireless network 2900 may comprise a wireless local area network (WLAN), such as a WLAN implementing one or more Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (sometimes collectively referred to as "Wi-Fi"). In some other embodiments, wireless network 2900 may comprise another type of wireless network, and/or may implement other wireless communications standards. In various embodiments, for example, wireless network 2900 may comprise a WWAN or WPAN rather than a WLAN. The embodiments are not limited to this example.

In some embodiments, wireless network 2900 may implement one or more broadband wireless communications standards, such as 3G or 4G standards, including their revisions, progeny, and variants. Examples of 3G or 4G wireless standards may include without limitation any of the IEEE 802.16m and 802.16p standards, 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) and LTE-Advanced (LTE-A) standards, and International Mobile Telecommunications Advanced (IMT-ADV) standards, including their revisions, progeny, and variants.
Other suitable examples may include, without limitation, Global System for Mobile Communications (GSM)/Enhanced Data Rates for GSM Evolution (EDGE) technologies, Universal Mobile Telecommunications System (UMTS)/High Speed Packet Access (HSPA) technologies, Worldwide Interoperability for Microwave Access (WiMAX) or WiMAX II technologies, Code Division Multiple Access (CDMA) 2000 system technologies (e.g., CDMA2000 1xRTT, CDMA2000 EV-DO, CDMA EV-DV, and so forth), High Performance Radio Metropolitan Area Network (HIPERMAN) technologies as defined by the European Telecommunications Standards Institute (ETSI) Broadband Radio Access Networks (BRAN), Wireless Broadband (WiBro) technologies, GSM with General Packet Radio Service (GPRS) system (GSM/GPRS) technologies, High Speed Downlink Packet Access (HSDPA) technologies, High Speed Orthogonal Frequency-Division Multiplexing (OFDM) Packet Access (HSOPA) technologies, High-Speed Uplink Packet Access (HSUPA) system technologies, 3GPP Rel. 8-12 of LTE/System Architecture Evolution (SAE), and so forth. The embodiments are not limited in this context.

In various embodiments, wireless stations 2904, 2906, and 2908 may communicate with access point 2902 in order to obtain connectivity to one or more external data networks. In some embodiments, for example, wireless stations 2904, 2906, and 2908 may connect to the Internet 2912 via access point 2902 and access network 2910. In various embodiments, access network 2910 may comprise a private network that provides subscription-based Internet connectivity, such as an Internet Service Provider (ISP) network. The embodiments are not limited to this example.

In various embodiments, two or more of wireless stations 2904, 2906, and 2908 may communicate with each other directly by exchanging peer-to-peer communications. In the example of FIG. 29, wireless stations 2904 and 2906 communicate with each other directly by exchanging peer-to-peer communications 2914. In some embodiments, such peer-to-peer communications may be performed according to one or more Wi-Fi Alliance (WFA) standards. For example, in various embodiments, such peer-to-peer communications may be performed according to the WFA Wi-Fi Direct standard, 2010 Release. In various embodiments, such peer-to-peer communications may additionally or alternatively be performed using one or more interfaces, protocols, and/or standards developed by the WFA Wi-Fi Direct Services (WFDS) Task Group. The embodiments are not limited to these examples.

Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.
Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. Some embodiments may be implemented, for example, using a machine-readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software. The machine-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium, and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disk Read Only Memory (CD-ROM), Compact Disk Recordable (CD-R), Compact Disk Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disk (DVD), a tape, a cassette, or the like.
The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.

The following examples pertain to further embodiments:

Example 1 is a method for automated data center maintenance, comprising processing, by processing circuitry of an automated maintenance device, an automation command received from an automation coordinator for a data center, identifying an automated maintenance procedure based on the received automation command, and performing the identified automated maintenance procedure.
Example 2 is the method of Example 1, the identified automated maintenance procedure to comprise a sled replacement procedure.
Example 3 is the method of Example 2, the sled replacement procedure to comprise replacing a compute sled.
Example 4 is the method of Example 3, the sled replacement procedure to comprise removing the compute sled from a sled space, removing a memory card from a connector slot of the compute sled, inserting the memory card into a connector slot of a replacement compute sled, and inserting the replacement compute sled into the sled space.
Example 5 is the method of Example 4, the memory card to store a compute state of the compute sled.
Example 6 is the method of Example 5, the sled replacement procedure to comprise initiating a restoration of the stored compute state on the replacement compute sled.
Example 7 is the method of Example 2, the sled replacement procedure to comprise replacing an accelerator sled.
Example 8 is the method of Example 2, the sled replacement procedure to comprise replacing a memory sled.
Example 9 is the method of Example 2, the sled replacement procedure to comprise replacing a storage sled.
Example 10 is the method of Example 1, the identified automated maintenance procedure to comprise a component replacement procedure.
Example 11 is the method of Example 10, the component replacement procedure to comprise removing a component from a socket of a sled, and inserting a replacement component into the socket.
Example 12 is the method of Example 11, the component to comprise a processor.
Example 13 is the method of Example 11, the component to comprise a field-programmable gate array (FPGA).
Example 14 is the method of Example 11, the component to comprise a memory module.
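By way of non-limiting illustration only, the command handling of Example 1, together with the task-code and parameter limitations elaborated in Examples 36 to 38 below, might be sketched as follows. This is a minimal sketch; the class, function, and task-code names are hypothetical and form no part of the described embodiments.

```python
# Illustrative sketch only: a hypothetical dispatch loop for an automated
# maintenance device handling automation commands (cf. Examples 1, 36, 37).
from dataclasses import dataclass, field

@dataclass
class AutomationCommand:
    task_code: str                                   # e.g., "SLED_REPLACE" (hypothetical)
    parameters: dict = field(default_factory=dict)   # e.g., rack/sled-space/slot IDs

def replace_sled(params):
    print(f"replacing sled at rack={params.get('rack_id')}, "
          f"space={params.get('sled_space_id')}")

def service_component(params):
    print(f"servicing component {params.get('component_id')}")

# Maintenance task code -> automated maintenance procedure (cf. Example 36).
PROCEDURES = {
    "SLED_REPLACE": replace_sled,
    "COMPONENT_SERVICE": service_component,
}

def handle_command(cmd: AutomationCommand):
    """Identify the procedure from the task code and perform it with the
    supplied maintenance task parameters (cf. Examples 1 and 37)."""
    procedure = PROCEDURES.get(cmd.task_code)
    if procedure is None:
        raise ValueError(f"unknown maintenance task code: {cmd.task_code}")
    procedure(cmd.parameters)

handle_command(AutomationCommand("SLED_REPLACE",
                                 {"rack_id": "R7", "sled_space_id": "S3"}))
```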
Example 15 is the method of Example 11, the component to comprise a non-volatile storage device.
Example 16 is the method of Example 15, the non-volatile storage device to comprise a solid-state drive (SSD).
Example 17 is the method of Example 16, the SSD to comprise a three-dimensional (3D) NAND SSD.
Example 18 is the method of Example 10, the component replacement procedure to comprise a cache memory replacement procedure.
Example 19 is the method of Example 18, the cache memory replacement procedure to comprise replacing one or more cache memory modules of a processor on a sled.
Example 20 is the method of Example 19, the cache memory replacement procedure to comprise removing a heat sink from atop the processor, removing the processor from a socket to facilitate access to one or more cache memory modules underlying the processor, removing the one or more cache memory modules, inserting one or more replacement cache memory modules, reinserting the processor into the socket, and reinstalling the heat sink.
Example 21 is the method of Example 1, the identified automated maintenance procedure to comprise a component servicing procedure.
Example 22 is the method of Example 21, the component servicing procedure to comprise servicing a component on a sled.
Example 23 is the method of Example 22, the component servicing procedure to comprise removing the sled from a sled space of a rack.
Example 24 is the method of any of Examples 22 to 23, the component servicing procedure to comprise removing the component from the sled.
Example 25 is the method of any of Examples 22 to 24, the component servicing procedure to comprise testing the component.
Example 26 is the method of any of Examples 22 to 25, the component servicing procedure to comprise cleaning the component.
Example 27 is the method of any of Examples 22 to 26, the component servicing procedure to comprise power-cycling the component.
Example 28 is the method of any of Examples 22 to 27, the component servicing procedure to comprise capturing one or more images of the component.
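By way of non-limiting illustration only, the cache memory replacement sequence of Example 20 above might be expressed as the following ordered series of actuation steps. The Robot class and its method names are hypothetical placeholders for the device's actuation layer.

```python
# Illustrative sketch only: the cache memory replacement sequence of Example 20
# as an ordered series of robot actions. All names are hypothetical.
class Robot:
    def remove_heat_sink(self, processor): print("heat sink removed")
    def remove_processor(self, socket): print("processor removed from socket")
    def remove_cache_modules(self, modules): print(f"removed {modules}")
    def insert_cache_modules(self, modules): print(f"inserted {modules}")
    def reinsert_processor(self, socket): print("processor reinserted")
    def reinstall_heat_sink(self, processor): print("heat sink reinstalled")

def replace_cache_memory(robot, processor, socket, old_modules, new_modules):
    # Order matters: the processor must come out of its socket to expose the
    # cache memory modules underlying it (cf. Example 20).
    robot.remove_heat_sink(processor)
    robot.remove_processor(socket)
    robot.remove_cache_modules(old_modules)
    robot.insert_cache_modules(new_modules)
    robot.reinsert_processor(socket)
    robot.reinstall_heat_sink(processor)

replace_cache_memory(Robot(), "cpu0", "socket0", ["cache0"], ["cache0-new"])
```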
Example 29 is the method of Example 28, comprising sending the one or more captured images to the automation coordinator.
Example 30 is the method of any of Examples 22 to 29, the component to comprise a processor.
Example 31 is the method of any of Examples 22 to 29, the component to comprise a field-programmable gate array (FPGA).
Example 32 is the method of any of Examples 22 to 29, the component to comprise a memory module.
Example 33 is the method of any of Examples 22 to 29, the component to comprise a non-volatile storage device.
Example 34 is the method of Example 33, the non-volatile storage device to comprise a solid-state drive (SSD).
Example 35 is the method of Example 34, the SSD to comprise a three-dimensional (3D) NAND SSD.
Example 36 is the method of any of Examples 1 to 35, comprising identifying the automated maintenance procedure based on a maintenance task code comprised in the received automation command.
Example 37 is the method of any of Examples 1 to 36, comprising performing the identified automated maintenance procedure based on one or more maintenance task parameters.
Example 38 is the method of Example 37, the one or more maintenance task parameters to be comprised in the received automation command.
Example 39 is the method of Example 37, at least one of the one or more maintenance task parameters to be comprised in a second automation command received from the automation coordinator.
Example 40 is the method of any of Examples 37 to 39, the one or more maintenance task parameters to include one or more location parameters.
Example 41 is the method of Example 40, the one or more location parameters to include a rack identifier (ID) associated with a rack within the data center.
Example 42 is the method of any of Examples 40 to 41, the one or more location parameters to include a sled space identifier (ID) associated with a sled space within the data center.
Example 43 is the method of any of Examples 40 to 42, the one or more location parameters to include a slot identifier (ID) associated with a connector socket on a sled within the data center.
Example 44 is the method of any of Examples 37 to 43, the one or more maintenance task parameters to include a sled identifier (ID) associated with a sled within the data center.
Example 45 is the method of any of Examples 37 to 44, the one or more maintenance task parameters to include a component identifier (ID) associated with a component on a sled within the data center.
Example 46 is the method of any of Examples 1 to 45, the automation command to be comprised in signals received via a communication interface of the automated maintenance device.
Example 47 is the method of Example 46, the communication interface to comprise a radio frequency (RF) interface, the signals to comprise RF signals.
Example 48 is the method of any of Examples 1 to 47, comprising sending a message to the automation coordinator to acknowledge the received automation command.
Example 49 is the method of any of Examples 1 to 48, comprising sending a message to the automation coordinator to report a result of the automated maintenance procedure.
Example 50 is the method of any of Examples 1 to 49, comprising sending position data to the automation coordinator, the position data to indicate a position of the automated maintenance device within the data center.
Example 51 is the method of any of Examples 1 to 50, comprising sending assistance data to the automation coordinator, the assistance data to comprise an image of a component that is to be manually replaced or serviced.
Example 52 is the method of any of Examples 1 to 51, comprising sending environmental data to the automation coordinator, the environmental data to comprise measurements of one or more aspects of ambient conditions within the data center.
Example 53 is the method of Example 52, comprising one or more sensors to generate the measurements comprised in the environmental data.
Example 54 is the method of any of Examples 52 to 53, the environmental data to comprise one or more temperature measurements.
Example 55 is the method of any of Examples 52 to 54, the environmental data to comprise one or more humidity measurements.
Example 56 is the method of any of Examples 52 to 55, the environmental data to comprise one or more air quality measurements.
Example 57 is the method of any of Examples 52 to 56, the environmental data to comprise one or more pressure measurements.
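By way of non-limiting illustration only, the environmental reporting of Examples 52 to 57 above might be sketched as follows; the message layout and field names are assumptions made purely for illustration.

```python
# Illustrative sketch only: a hypothetical environmental-data report sent from
# an automated maintenance device to the automation coordinator (cf. Examples 52-57).
import json
import time

def build_environmental_report(device_id, sensors):
    """Package ambient-condition measurements (temperature, humidity,
    air quality, pressure) for transmission to the automation coordinator."""
    return json.dumps({
        "device_id": device_id,
        "timestamp": time.time(),
        "temperature_c": sensors.get("temperature_c"),
        "humidity_pct": sensors.get("humidity_pct"),
        "air_quality_aqi": sensors.get("air_quality_aqi"),
        "pressure_kpa": sensors.get("pressure_kpa"),
    })

print(build_environmental_report(
    "amd-01", {"temperature_c": 24.5, "humidity_pct": 41.0,
               "air_quality_aqi": 12, "pressure_kpa": 101.3}))
```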
Example 58 is a computer-readable storage medium storing instructions that, when executed, cause an automated maintenance device to perform a method according to any of Examples 1 to 57.
Example 59 is an automated maintenance device, comprising processing circuitry and computer-readable storage media storing instructions for execution by the processing circuitry to cause the automated maintenance device to perform a method according to any of Examples 1 to 57.
Example 60 is a method for coordination of automated data center maintenance, comprising identifying, by processing circuitry, a maintenance task to be performed in a data center, determining to initiate automated performance of the maintenance task, selecting an automated maintenance device to which to assign the maintenance task, and sending an automation command to cause the automated maintenance device to perform an automated maintenance procedure associated with the maintenance task.
Example 61 is the method of Example 60, comprising identifying the maintenance task based on telemetry data associated with one or more physical resources of the data center.
Example 62 is the method of Example 61, comprising receiving the telemetry data via a telemetry framework of the data center.
Example 63 is the method of any of Examples 61 to 62, the telemetry data to include one or more telemetry metrics associated with a physical compute resource.
Example 64 is the method of any of Examples 61 to 63, the telemetry data to include one or more telemetry metrics associated with a physical accelerator resource.
Example 65 is the method of any of Examples 61 to 64, the telemetry data to include one or more telemetry metrics associated with a physical memory resource.
Example 66 is the method of any of Examples 61 to 65, the telemetry data to include one or more telemetry metrics associated with a physical storage resource.
Example 67 is the method of any of Examples 60 to 66, comprising identifying the maintenance task based on environmental data received from one or more automated maintenance devices of the data center.
Example 68 is the method of Example 67, the environmental data to include one or more temperature measurements.
Example 69 is the method of any of Examples 67 to 68, the environmental data to include one or more humidity measurements.
Example 70 is the method of any of Examples 67 to 69, the environmental data to include one or more air quality measurements.
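By way of non-limiting illustration only, the telemetry-based task identification of Examples 61 to 66 above might be sketched as follows; the metric names and thresholds are invented for illustration and are not part of the described embodiments.

```python
# Illustrative sketch only: deriving maintenance tasks from telemetry data
# associated with physical resources (cf. Examples 61-66).
def identify_tasks(telemetry):
    """Scan per-resource telemetry metrics and emit candidate maintenance tasks."""
    tasks = []
    for resource in telemetry:
        if resource.get("uncorrectable_errors", 0) > 0:
            tasks.append({"task_code": "COMPONENT_REPLACE",
                          "resource_id": resource["id"]})
        elif resource.get("temperature_c", 0) > 85:
            tasks.append({"task_code": "COMPONENT_SERVICE",
                          "resource_id": resource["id"]})
    return tasks

print(identify_tasks([{"id": "mem-7", "uncorrectable_errors": 2},
                      {"id": "cpu-3", "temperature_c": 91}]))
```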
Example 71 is the method of any of Examples 67 to 70, the environmental data to include one or more pressure measurements.
Example 72 is the method of any of Examples 60 to 71, comprising adding the maintenance task to a pending task queue following identification of the maintenance task.
Example 73 is the method of Example 72, comprising determining to initiate automated performance of the maintenance task based on a determination that the maintenance task constitutes a highest priority task among one or more maintenance tasks comprised in the pending task queue.
Example 74 is the method of any of Examples 60 to 73, comprising selecting the automated maintenance device from among one or more automated maintenance devices in a candidate device pool.
Example 75 is the method of any of Examples 60 to 74, comprising selecting the automated maintenance device based on one or more capabilities of the automated maintenance device.
Example 76 is the method of any of Examples 60 to 75, comprising selecting the automated maintenance device based on position data received from the automated maintenance device.
Example 77 is the method of any of Examples 60 to 76, the automation command to comprise a maintenance task code indicating a task type associated with the maintenance task.
Example 78 is the method of any of Examples 60 to 77, the automation command to comprise location information associated with the maintenance task.
Example 79 is the method of Example 78, the location information to include a rack identifier (ID) associated with a rack within the data center.
Example 80 is the method of any of Examples 78 to 79, the location information to include a sled space identifier (ID) associated with a sled space within the data center.
Example 81 is the method of any of Examples 78 to 80, the location information to include a slot identifier (ID) associated with a connector socket on a sled within the data center.
Example 82 is the method of any of Examples 60 to 81, the automation command to comprise a sled identifier (ID) associated with a sled within the data center.
Example 83 is the method of any of Examples 60 to 82, the automation command to comprise a physical resource identifier (ID) associated with a physical resource within the data center.
Example 84 is the method of any of Examples 60 to 81, the maintenance task to comprise replacement of a sled.
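By way of non-limiting illustration only, the task queuing, device selection, and command dispatch of Examples 60 and 72 to 78 above might be sketched as follows; the class and field names are hypothetical.

```python
# Illustrative sketch only: a hypothetical automation coordinator that queues
# maintenance tasks by priority and assigns the highest-priority task to a
# capable device (cf. Examples 60, 72-78).
import heapq

class AutomationCoordinator:
    def __init__(self, candidate_pool):
        self.pending = []            # priority queue of (priority, seq, task)
        self.pool = candidate_pool   # candidate device pool (cf. Example 74)
        self._seq = 0

    def add_task(self, priority, task):
        # Lower number = higher priority; seq breaks ties FIFO (cf. Example 72).
        heapq.heappush(self.pending, (priority, self._seq, task))
        self._seq += 1

    def dispatch_next(self):
        # Initiate the highest-priority pending task (cf. Example 73) by
        # building an automation command with a task code and location
        # information (cf. Examples 77-78).
        _, _, task = heapq.heappop(self.pending)
        device = next(d for d in self.pool
                      if task["task_code"] in d["capabilities"])  # cf. Example 75
        return {"to": device["id"], "task_code": task["task_code"],
                "location": task.get("location")}

coord = AutomationCoordinator([{"id": "amd-01", "capabilities": {"SLED_REPLACE"}}])
coord.add_task(1, {"task_code": "SLED_REPLACE",
                   "location": {"rack_id": "R7", "sled_space_id": "S3"}})
print(coord.dispatch_next())
```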
Example 85 is the method of Example 84, the sled to comprise a compute sled, an accelerator sled, a memory sled, or a storage sled.
Example 86 is the method of any of Examples 60 to 81, the maintenance task to comprise replacement of one or more components of a sled.
Example 87 is the method of any of Examples 60 to 81, the maintenance task to comprise repair of one or more components of a sled.
Example 88 is the method of any of Examples 60 to 81, the maintenance task to comprise testing of one or more components of a sled.
Example 89 is the method of any of Examples 60 to 81, the maintenance task to comprise cleaning of one or more components of a sled.
Example 90 is the method of any of Examples 60 to 81, the maintenance task to comprise power cycling one or more memory modules.
Example 91 is the method of any of Examples 60 to 81, the maintenance task to comprise power cycling one or more non-volatile storage devices.
Example 92 is the method of any of Examples 60 to 81, the maintenance task to comprise storing a compute state of a compute sled, replacing the compute sled with a second compute sled, and transferring the stored compute state to the second compute sled.
Example 93 is the method of any of Examples 60 to 81, the maintenance task to comprise replacing one or more cache memory modules of a processor.
Example 94 is a computer-readable storage medium storing instructions that, when executed by an automation coordinator for a data center, cause the automation coordinator to perform a method according to any of Examples 60 to 93.
Example 95 is an apparatus, comprising processing circuitry and computer-readable storage media storing instructions for execution by the processing circuitry to perform a method according to any of Examples 60 to 93.
Example 96 is a method for automated data center maintenance, comprising identifying, by processing circuitry of an automated maintenance device, a collaborative maintenance procedure to be performed in a data center, identifying a second automated maintenance device with which to collaborate during performance of the collaborative maintenance procedure, and sending interdevice coordination information to the second automated maintenance device to initiate the collaborative maintenance procedure.
Example 97 is the method of Example 96, comprising identifying the collaborative maintenance procedure based on telemetry data associated with one or more physical resources of the data center.
Example 98 is the method of Example 97, the telemetry data to include one or more telemetry metrics associated with a physical compute resource.
Example 99 is the method of any of Examples 97 to 98, the telemetry data to include one or more telemetry metrics associated with a physical accelerator resource.
Example 100 is the method of any of Examples 97 to 99, the telemetry data to include one or more telemetry metrics associated with a physical memory resource.
Example 101 is the method of any of Examples 97 to 100, the telemetry data to include one or more telemetry metrics associated with a physical storage resource.
Example 102 is the method of any of Examples 96 to 101, comprising identifying the collaborative maintenance procedure based on environmental data comprising measurements of one or more aspects of ambient conditions within the data center.
Example 103 is the method of Example 102, comprising one or more sensors to generate the measurements comprised in the environmental data.
Example 104 is the method of any of Examples 102 to 103, the environmental data to comprise one or more temperature measurements.
Example 105 is the method of any of Examples 102 to 104, the environmental data to comprise one or more humidity measurements.
Example 106 is the method of any of Examples 102 to 105, the environmental data to comprise one or more air quality measurements.
Example 107 is the method of any of Examples 102 to 106, the environmental data to comprise one or more pressure measurements.
Example 108 is the method of Example 96, comprising identifying the collaborative maintenance procedure based on an automation command received from an automation coordinator for the data center.
Example 109 is the method of Example 108, comprising identifying the collaborative maintenance procedure based on a maintenance task code comprised in the received automation command.
Example 110 is the method of any of Examples 96 to 109, comprising selecting the second automated maintenance device from among a plurality of automated maintenance devices in a candidate device pool for the data center.
Example 111 is the method of any of Examples 96 to 110, comprising identifying the second automated maintenance device based on a parameter comprised in a command received from an automation coordinator for the data center.
Example 112 is the method of any of Examples 96 to 111, the collaborative maintenance procedure to comprise replacing a sled.
Example 113 is the method of Example 112, the sled to comprise a compute sled.
Example 114 is the method of Example 113, the collaborative maintenance procedure to comprise removing the compute sled from a sled space, removing a memory card from a connector slot of the compute sled, inserting the memory card into a connector slot of a replacement compute sled, and inserting the replacement compute sled into the sled space.
Example 115 is the method of Example 114, the memory card to store a compute state of the compute sled.
Example 116 is the method of Example 115, the collaborative maintenance procedure to comprise initiating a restoration of the stored compute state on the replacement compute sled.
Example 117 is the method of Example 112, the sled to comprise an accelerator sled, a memory sled, or a storage sled.
Example 118 is the method of any of Examples 96 to 111, the collaborative maintenance procedure to comprise replacing a component on a sled.
Example 119 is the method of Example 118, the component to comprise a processor.
Example 120 is the method of Example 118, the component to comprise a field-programmable gate array (FPGA).
Example 121 is the method of Example 118, the component to comprise a memory module.
Example 122 is the method of Example 118, the component to comprise a non-volatile storage device.
Example 123 is the method of Example 122, the non-volatile storage device to comprise a solid-state drive (SSD).
Example 124 is the method of Example 123, the SSD to comprise a three-dimensional (3D) NAND SSD.
Example 125 is the method of any of Examples 96 to 111, the collaborative maintenance procedure to comprise replacing one or more cache memory modules of a processor on a sled.
Example 126 is the method of Example 125, the collaborative maintenance procedure to comprise removing a heat sink from atop the processor, removing the processor from a socket to facilitate access to one or more cache memory modules underlying the processor, removing the one or more cache memory modules, inserting one or more replacement cache memory modules, reinserting the processor into the socket, and reinstalling the heat sink.
Example 127 is the method of any of Examples 96 to 111, the collaborative maintenance procedure to comprise servicing a component on a sled.
Example 128 is the method of Example 127, the collaborative maintenance procedure to comprise removing the sled from a sled space of a rack.
Example 129 is the method of any of Examples 127 to 128, the collaborative maintenance procedure to comprise removing the component from the sled.
Example 130 is the method of any of Examples 127 to 129, the collaborative maintenance procedure to comprise testing the component.
Example 131 is the method of any of Examples 127 to 130, the collaborative maintenance procedure to comprise cleaning the component.
Example 132 is the method of any of Examples 127 to 131, the collaborative maintenance procedure to comprise power-cycling the component.
Example 133 is the method of any of Examples 127 to 132, the collaborative maintenance procedure to comprise capturing one or more images of the component.
Example 134 is the method of any of Examples 127 to 133, the component to comprise a processor.
Example 135 is the method of any of Examples 127 to 133, the component to comprise a field-programmable gate array (FPGA).
Example 136 is the method of any of Examples 127 to 133, the component to comprise a memory module.
Example 137 is the method of any of Examples 127 to 133, the component to comprise a non-volatile storage device.
Example 138 is the method of Example 137, the non-volatile storage device to comprise a solid-state drive (SSD).
Example 139 is the method of Example 138, the SSD to comprise a three-dimensional (3D) NAND SSD.
Example 140 is the method of any of Examples 96 to 139, the interdevice coordination information to comprise a rack identifier (ID) associated with a rack within the data center.
Example 141 is the method of any of Examples 96 to 140, the interdevice coordination information to comprise a sled space identifier (ID) associated with a sled space within the data center.
Example 142 is the method of any of Examples 96 to 141, the interdevice coordination information to comprise a slot identifier (ID) associated with a connector socket on a sled within the data center.
Example 143 is the method of any of Examples 96 to 142, the interdevice coordination information to comprise a sled identifier (ID) associated with a sled within the data center.
Example 144 is the method of any of Examples 96 to 143, the interdevice coordination information to comprise a component identifier (ID) associated with a component on a sled within the data center.
Example 145 is a computer-readable storage medium storing instructions that, when executed, cause an automated maintenance device to perform a method according to any of Examples 96 to 144.
Example 146 is an automated maintenance device, comprising processing circuitry and computer-readable storage media storing instructions for execution by the processing circuitry to cause the automated maintenance device to perform a method according to any of Examples 96 to 144.
Example 147 is an automated maintenance device, comprising means for receiving an automation command from an automation coordinator for a data center, means for identifying an automated maintenance procedure based on the received automation command, and means for performing the identified automated maintenance procedure.
Example 148 is the automated maintenance device of Example 147, the identified automated maintenance procedure to comprise a sled replacement procedure.
Example 149 is the automated maintenance device of Example 148, the sled replacement procedure to comprise removing a compute sled from a sled space, removing a memory card from a connector slot of the compute sled, inserting the memory card into a connector slot of a replacement compute sled, and inserting the replacement compute sled into the sled space.
Example 150 is the automated maintenance device of Example 149, the memory card to store a compute state of the compute sled.
Example 151 is the automated maintenance device of Example 150, the sled replacement procedure to comprise initiating a restoration of the stored compute state on the replacement compute sled.
Example 152 is the automated maintenance device of Example 148, the sled replacement procedure to comprise replacing an accelerator sled, a memory sled, or a storage sled.
Example 153 is the automated maintenance device of Example 147, the identified automated maintenance procedure to comprise a component replacement procedure.
Example 154 is the automated maintenance device of Example 153, the component replacement procedure to comprise removing a component from a socket of a sled, and inserting a replacement component into the socket.
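By way of non-limiting illustration only, the interdevice coordination information of Examples 96 and 140 to 144 above might be sketched as follows; the field names are assumptions made purely for illustration.

```python
# Illustrative sketch only: hypothetical interdevice coordination information
# sent by one automated maintenance device to a second device to initiate a
# collaborative maintenance procedure (cf. Examples 96 and 140-144).
import json

def build_coordination_info(procedure, rack_id, sled_space_id,
                            slot_id=None, sled_id=None, component_id=None):
    """Bundle the location/identity fields of Examples 140-144 into a message."""
    info = {"procedure": procedure, "rack_id": rack_id,
            "sled_space_id": sled_space_id}
    if slot_id is not None:
        info["slot_id"] = slot_id
    if sled_id is not None:
        info["sled_id"] = sled_id
    if component_id is not None:
        info["component_id"] = component_id
    return json.dumps(info)

# e.g., ask a peer device to help replace a compute sled at a given location:
print(build_coordination_info("SLED_REPLACE", "R7", "S3", sled_id="sled-42"))
```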
Example 155 is the automated maintenance device of Example 154, the component to comprise a processor, a field-programmable gate array (FPGA), a memory module, or a solid-state drive (SSD).
Example 156 is the automated maintenance device of Example 153, the component replacement procedure to comprise a cache memory replacement procedure.
Example 157 is the automated maintenance device of Example 156, the cache memory replacement procedure to comprise replacing one or more cache memory modules of a processor on a sled.
Example 158 is the automated maintenance device of Example 157, the cache memory replacement procedure to comprise removing a heat sink from atop the processor, removing the processor from a socket to facilitate access to one or more cache memory modules underlying the processor, removing the one or more cache memory modules, inserting one or more replacement cache memory modules, reinserting the processor into the socket, and reinstalling the heat sink.
Example 159 is the automated maintenance device of Example 147, the identified automated maintenance procedure to comprise a component servicing procedure.
Example 160 is the automated maintenance device of Example 159, the component servicing procedure to comprise servicing a component on a sled.
Example 161 is the automated maintenance device of Example 160, the component servicing procedure to comprise removing the sled from a sled space of a rack.
Example 162 is the automated maintenance device of any of Examples 160 to 161, the component servicing procedure to comprise removing the component from the sled.
Example 163 is the automated maintenance device of any of Examples 160 to 162, the component servicing procedure to comprise testing the component.
Example 164 is the automated maintenance device of any of Examples 160 to 163, the component servicing procedure to comprise cleaning the component.
Example 165 is the automated maintenance device of any of Examples 160 to 164, the component servicing procedure to comprise power-cycling the component.
Example 166 is the automated maintenance device of any of Examples 160 to 165, the component servicing procedure to comprise capturing one or more images of the component.
Example 167 is the automated maintenance device of any of Examples 160 to 166, the component to comprise a processor, a field-programmable gate array (FPGA), a memory module, or a solid-state drive (SSD).
Example 168 is the automated maintenance device of any of Examples 147 to 167, comprising means for identifying the automated maintenance procedure based on a maintenance task code comprised in the received automation command.
Example 169 is the automated maintenance device of any of Examples 147 to 168, comprising means for performing the identified automated maintenance procedure based on one or more maintenance task parameters.
Example 170 is the automated maintenance device of Example 169, the one or more maintenance task parameters to be comprised in the received automation command.
Example 171 is the automated maintenance device of Example 169, at least one of the one or more maintenance task parameters to be comprised in a second automation command received from the automation coordinator.
Example 172 is the automated maintenance device of any of Examples 169 to 171, the one or more maintenance task parameters to include one or more location parameters.
Example 173 is the automated maintenance device of Example 172, the one or more location parameters to include a rack identifier (ID) associated with a rack within the data center.
Example 174 is the automated maintenance device of any of Examples 172 to 173, the one or more location parameters to include a sled space identifier (ID) associated with a sled space within the data center.
Example 175 is the automated maintenance device of any of Examples 172 to 174, the one or more location parameters to include a slot identifier (ID) associated with a connector socket on a sled within the data center.
Example 176 is the automated maintenance device of any of Examples 169 to 175, the one or more maintenance task parameters to include a sled identifier (ID) associated with a sled within the data center.
Example 177 is the automated maintenance device of any of Examples 169 to 176, the one or more maintenance task parameters to include a component identifier (ID) associated with a component on a sled within the data center.
Example 178 is the automated maintenance device of any of Examples 147 to 177, the automation command to be comprised in signals received via a communication interface of the automated maintenance device.
Example 179 is the automated maintenance device of Example 178, the communication interface to comprise a radio frequency (RF) interface, the signals to comprise RF signals.
Example 180 is the automated maintenance device of any of Examples 147 to 179, comprising means for sending a message to the automation coordinator to acknowledge the received automation command.
Example 181 is the automated maintenance device of any of Examples 147 to 180, comprising means for sending a message to the automation coordinator to report a result of the automated maintenance procedure.
Example 182 is the automated maintenance device of any of Examples 147 to 181, comprising means for sending position data to the automation coordinator, the position data to indicate a position of the automated maintenance device within the data center.
Example 183 is the automated maintenance device of any of Examples 147 to 182, comprising means for sending assistance data to the automation coordinator, the assistance data to comprise an image of a component that is to be manually replaced or serviced.
Example 184 is the automated maintenance device of any of Examples 147 to 183, comprising means for sending environmental data to the automation coordinator, the environmental data to comprise measurements of one or more aspects of ambient conditions within the data center.
Example 185 is the automated maintenance device of Example 184, comprising means for generating the measurements comprised in the environmental data.
Example 186 is the automated maintenance device of any of Examples 184 to 185, the environmental data to comprise one or more temperature measurements.
Example 187 is the automated maintenance device of any of Examples 184 to 186, the environmental data to comprise one or more humidity measurements.
Example 188 is the automated maintenance device of any of Examples 184 to 187, the environmental data to comprise one or more air quality measurements.
Example 189 is the automated maintenance device of any of Examples 184 to 188, the environmental data to comprise one or more pressure measurements.
Example 189 is an apparatus for coordination of automated data center maintenance, comprising means for identifying a maintenance task to be performed in a data center, means for determining to initiate automated performance of the maintenance task, means for selecting an automated maintenance device to which to assign the maintenance task, and means for sending an automation command to cause the automated maintenance device to perform an automated maintenance procedure associated with the maintenance task.
Example 190 is the apparatus of Example 189, comprising means for identifying the maintenance task based on telemetry data associated with one or more physical resources of the data center.
Example 191 is the apparatus of Example 190, comprising means for receiving the telemetry data via a telemetry framework of the data center.
Example 192 is the apparatus of any of Examples 190 to 191, the telemetry data to include one or more telemetry metrics associated with a physical compute resource.
Example 193 is the apparatus of any of Examples 190 to 192, the telemetry data to include one or more telemetry metrics associated with a physical accelerator resource.
Example 194 is the apparatus of any of Examples 190 to 193, the telemetry data to include one or more telemetry metrics associated with a physical memory resource.
Example 195 is the apparatus of any of Examples 190 to 194, the telemetry data to include one or more telemetry metrics associated with a physical storage resource.
Example 196 is the apparatus of any of Examples 189 to 195, comprising means for identifying the maintenance task based on environmental data received from one or more automated maintenance devices of the data center.
Example 197 is the apparatus of Example 196, the environmental data to include one or more temperature measurements.
Example 198 is the apparatus of any of Examples 196 to 197, the environmental data to include one or more humidity measurements.
Example 199 is the apparatus of any of Examples 196 to 198, the environmental data to include one or more air quality measurements.
Example 200 is the apparatus of any of Examples 196 to 199, the environmental data to include one or more pressure measurements.
Example 201 is the apparatus of any of Examples 189 to 200, comprising means for adding the maintenance task to a pending task queue following identification of the maintenance task.
Example 202 is the apparatus of Example 201, comprising means for determining to initiate automated performance of the maintenance task based on a determination that the maintenance task constitutes a highest priority task among one or more maintenance tasks comprised in the pending task queue.
Example 203 is the apparatus of any of Examples 189 to 202, comprising means for selecting the automated maintenance device from among one or more automated maintenance devices in a candidate device pool.
Example 204 is the apparatus of any of Examples 189 to 203, comprising means for selecting the automated maintenance device based on one or more capabilities of the automated maintenance device.
Example 205 is the apparatus of any of Examples 189 to 204, comprising means for selecting the automated maintenance device based on position data received from the automated maintenance device.
Example 206 is the apparatus of any of Examples 189 to 205, the automation command to comprise a maintenance task code indicating a task type associated with the maintenance task.
Example 207 is the apparatus of any of Examples 189 to 206, the automation command to comprise location information associated with the maintenance task.
Example 208 is the apparatus of Example 207, the location information to include a rack identifier (ID) associated with a rack within the data center.
Example 209 is the apparatus of any of Examples 207 to 208, the location information to include a sled space identifier (ID) associated with a sled space within the data center.
Example 210 is the apparatus of any of Examples 207 to 209, the location information to include a slot identifier (ID) associated with a connector socket on a sled within the data center.
Example 211 is the apparatus of any of Examples 189 to 210, the automation command to comprise a sled identifier (ID) associated with a sled within the data center.
Example 212 is the apparatus of any of Examples 189 to 211, the automation command to comprise a physical resource identifier (ID) associated with a physical resource within the data center.
Example 213 is the apparatus of any of Examples 189 to 212, the maintenance task to comprise replacement of a sled.
Example 214 is the apparatus of Example 213, the sled to comprise a compute sled, an accelerator sled, a memory sled, or a storage sled.
Example 215 is the apparatus of any of Examples 189 to 212, the maintenance task to comprise replacement of one or more components of a sled.
Example 216 is the apparatus of any of Examples 189 to 212, the maintenance task to comprise repair of one or more components of a sled.
Example 217 is the apparatus of any of Examples 189 to 212, the maintenance task to comprise testing of one or more components of a sled.
Example 218 is the apparatus of any of Examples 189 to 212, the maintenance task to comprise cleaning of one or more components of a sled.
Example 219 is the apparatus of any of Examples 189 to 212, the maintenance task to comprise power cycling one or more memory modules.
Example 220 is the apparatus of any of Examples 189 to 212, the maintenance task to comprise power cycling one or more non-volatile storage devices.
Example 221 is the apparatus of any of Examples 189 to 212, the maintenance task to comprise storing a compute state of a compute sled, replacing the compute sled with a second compute sled, and transferring the stored compute state to the second compute sled.
Example 222 is the apparatus of any of Examples 189 to 212, the maintenance task to comprise replacing one or more cache memory modules of a processor.
Example 223 is an automated maintenance device, comprising means for identifying a collaborative maintenance procedure to be performed in a data center, means for identifying a second automated maintenance device with which to collaborate during performance of the collaborative maintenance procedure, and means for sending interdevice coordination information to the second automated maintenance device to initiate the collaborative maintenance procedure.
Example 224 is the automated maintenance device of Example 223, comprising means for identifying the collaborative maintenance procedure based on telemetry data associated with one or more physical resources of the data center.
Example 225 is the automated maintenance device of Example 224, the telemetry data to include one or more telemetry metrics associated with a physical compute resource.
Example 226 is the automated maintenance device of any of Examples 224 to 225, the telemetry data to include one or more telemetry metrics associated with a physical accelerator resource.
Example 227 is the automated maintenance device of any of Examples 224 to 226, the telemetry data to include one or more telemetry metrics associated with a physical memory resource.
Example 228 is the automated maintenance device of any of Examples 224 to 227, the telemetry data to include one or more telemetry metrics associated with a physical storage resource.
Example 229 is the automated maintenance device of any of Examples 223 to 228, comprising means for identifying the collaborative maintenance procedure based on environmental data comprising measurements of one or more aspects of ambient conditions within the data center.
Example 230 is the automated maintenance device of Example 229, comprising one or more sensors to generate the measurements comprised in the environmental data.

Example 231 is the automated maintenance device of any of Examples 229 to 230, the environmental data to comprise one or more temperature measurements.

Example 232 is the automated maintenance device of any of Examples 229 to 231, the environmental data to comprise one or more humidity measurements.

Example 233 is the automated maintenance device of any of Examples 229 to 232, the environmental data to comprise one or more air quality measurements.

Example 234 is the automated maintenance device of any of Examples 229 to 233, the environmental data to comprise one or more pressure measurements.

Example 235 is the automated maintenance device of Example 223, comprising means for identifying the collaborative maintenance procedure based on an automation command received from an automation coordinator for the data center.

Example 236 is the automated maintenance device of Example 235, comprising means for identifying the collaborative maintenance procedure based on a maintenance task code comprised in the received automation command.

Example 237 is the automated maintenance device of any of Examples 223 to 236, comprising means for selecting the second automated maintenance device from among a plurality of automated maintenance devices in a candidate device pool for the data center.

Example 238 is the automated maintenance device of any of Examples 223 to 237, comprising means for identifying the second automated maintenance device based on a parameter comprised in a command received from an automation coordinator for the data center.

Example 239 is the automated maintenance device of any of Examples 223 to 238, the collaborative maintenance procedure to comprise replacing a sled.

Example 240 is the automated maintenance device of Example 239, the sled to comprise a compute sled.

Example 241 is the automated maintenance device of Example 240, the collaborative maintenance procedure to comprise removing the compute sled from a sled space, removing a memory card from a connector slot of the compute sled, inserting the memory card into a connector slot of a replacement compute sled, and inserting the replacement compute sled into the sled space.

Example 242 is the automated maintenance device of Example 241, the memory card to store a compute state of the compute sled.
Example 243 is the automated maintenance device of Example 242, the collaborative maintenance procedure to comprise initiating a restoration of the stored compute state on the replacement compute sled.

Example 244 is the automated maintenance device of Example 239, the sled to comprise an accelerator sled, a memory sled, or a storage sled.

Example 245 is the automated maintenance device of any of Examples 223 to 238, the collaborative maintenance procedure to comprise replacing a component on a sled.

Example 246 is the automated maintenance device of Example 245, the component to comprise a processor, a field-programmable gate array (FPGA), a memory module, or a solid-state drive (SSD).

Example 247 is the automated maintenance device of any of Examples 223 to 238, the collaborative maintenance procedure to comprise replacing one or more cache memory modules of a processor on a sled.

Example 248 is the automated maintenance device of Example 247, the collaborative maintenance procedure to comprise removing a heat sink from atop the processor, removing the processor from a socket to facilitate access to one or more cache memory modules underlying the processor, removing the one or more cache memory modules, inserting one or more replacement cache memory modules, reinserting the processor into the socket, and reinstalling the heat sink.

Example 249 is the automated maintenance device of any of Examples 223 to 238, the collaborative maintenance procedure to comprise servicing a component on a sled.

Example 250 is the automated maintenance device of Example 249, the collaborative maintenance procedure to comprise removing the sled from a sled space of a rack.

Example 251 is the automated maintenance device of any of Examples 249 to 250, the collaborative maintenance procedure to comprise removing the component from the sled.

Example 252 is the automated maintenance device of any of Examples 249 to 251, the collaborative maintenance procedure to comprise testing the component.

Example 253 is the automated maintenance device of any of Examples 249 to 252, the collaborative maintenance procedure to comprise cleaning the component.

Example 254 is the automated maintenance device of any of Examples 249 to 253, the collaborative maintenance procedure to comprise power-cycling the component.

Example 255 is the automated maintenance device of any of Examples 249 to 254, the collaborative maintenance procedure to comprise capturing one or more images of the component.
Example 256 is the automated maintenance device of any of Examples 249 to 255, the component to comprise a processor, a field-programmable gate array (FPGA), a memory module, or a solid-state drive (SSD).

Example 257 is the automated maintenance device of any of Examples 223 to 256, the interdevice coordination information to comprise a rack identifier (ID) associated with a rack within the data center.

Example 258 is the automated maintenance device of any of Examples 223 to 257, the interdevice coordination information to comprise a sled space identifier (ID) associated with a sled space within the data center.

Example 259 is the automated maintenance device of any of Examples 223 to 258, the interdevice coordination information to comprise a slot identifier (ID) associated with a connector socket on a sled within the data center.

Example 260 is the automated maintenance device of any of Examples 223 to 259, the interdevice coordination information to comprise a sled identifier (ID) associated with a sled within the data center.

Example 261 is the automated maintenance device of any of Examples 223 to 260, the interdevice coordination information to comprise a component identifier (ID) associated with a component on a sled within the data center.

Numerous specific details have been set forth herein to provide a thorough understanding of the embodiments. It will be understood by those skilled in the art, however, that the embodiments may be practiced without these specific details. In other instances, well-known operations, components, and circuits have not been described in detail so as not to obscure the embodiments. It can be appreciated that the specific structural and functional details disclosed herein may be representative and do not necessarily limit the scope of the embodiments.

Some embodiments may be described using the expressions "coupled" and "connected" along with their derivatives. These terms are not intended as synonyms for each other. For example, some embodiments may be described using the terms "connected" and/or "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.

Unless specifically stated otherwise, it may be appreciated that terms such as "processing," "computing," "calculating," "determining," or the like refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical quantities (e.g., electronic) within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices. The embodiments are not limited in this context.

It should be noted that the methods described herein do not have to be executed in the order described, or in any particular order. Moreover, various activities described with respect to the methods identified herein can be executed in serial or parallel fashion.

Although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments.
It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. Thus, the scope of various embodiments includes any other applications in which the above compositions, structures, and methods are used.

It is emphasized that the Abstract of the Disclosure is provided to comply with 37 C.F.R. § 1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate preferred embodiment. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Moreover, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. |
Examples may include techniques for allocating configurable computing resources from a pool of configurable computing resources to a logical server or virtual machine. The logical server or virtual machine may use allocated configurable computing resources to implement, execute or run a workload. |
1. A device, comprising:
a circuit for a controller of a configurable computing resource system;
a request component executed by the circuit, the request component to receive a request to allocate the configurable computing resources to a logical server to implement or execute a workload;
a scoring component executed by the circuit, the scoring component to determine a first weighted sum allocation score for a first portion of the configurable computing resources available for allocation to the logical server and to determine a second weighted sum allocation score for a second portion of the configurable computing resources available for allocation to the logical server;
a ranking component executed by the circuit, the ranking component to compare the first and second weighted sum allocation scores; and
an allocation component executed by the circuit, the allocation component to allocate the first portion or the second portion to the logical server based on the comparison.

2. The device of claim 1, the allocation component to update a resource directory to indicate that the first or second portion is allocated to the logical server.

3. The device of claim 1, the configurable computing resource system comprising the configurable computing resources maintained in a plurality of racks.

4. The device of claim 3, comprising:
the first portion and the second portion of the configurable computing resources comprising respective first and second configurations, each of the first and second configurations having a plurality of separate physical elements belonging to one or more types;
the scoring component to determine the first and second weighted sum allocation scores based on separate physical elements of the same type physically located in different ones of the plurality of racks; and
the allocation component to allocate the first portion or the second portion of the configurable computing resources based on a comparison of the first and second weighted sum allocation scores by the ranking component and based on the request indicating that the allocation is to meet high availability requirements.

5. The device of claim 4, the one or more types comprising a central processing unit type, a memory type, a storage type, or a network input/output type.

6. The device of claim 1, the first and second weighted sum allocation scores to be weighted based on whether the request indicates that the configurable computing resources are to be weighted based on one of: a cost sensitive template, a performance sensitive template, a high availability template, or a balanced template.

7. The device of claim 6, the first and second weighted sum allocation scores each determined by the scoring component based on a plurality of allocation attributes of each configurable computing resource included in the respective first and second portions of the configurable computing resources.

8. The device of claim 7, the plurality of allocation attributes including operating temperature, power/energy consumption, total uptime in hours, or unit cost.

9. The device of claim 8, the cost sensitive template to cause unit cost to have the highest weight among the plurality of allocation attributes, or the performance sensitive template and the high availability template to cause total uptime to have the highest weight among the plurality of allocation attributes.

10. The device of claim 1, comprising:
the allocation component to allocate the first portion of the configurable computing resources to the logical server to implement or execute the workload;
a monitoring component executed by the circuit, the monitoring component to monitor a plurality of operational attributes of each configurable computing resource included in the first portion while the logical server implements or executes the workload;
the scoring component to determine a first weighted sum running score for the first portion based on the plurality of operational attributes monitored by the monitoring component;
the ranking component to rank the first weighted sum running score via a comparison with one or more historical weighted sum running scores determined for one or more other portions of the configurable computing resources previously allocated to implement or execute the workload; and
the allocation component to modify which configurable computing resources are included in the first portion based on the ranking.

11. The device of claim 10, the first weighted sum running score weighted based on a service level agreement for the workload.

12. The device of claim 1, the first portion and the second portion of the configurable computing resources comprising respective first and second configurations, each of the first and second configurations having a plurality of separate physical elements belonging to one or more types, the one or more types including one of a central processing unit type, a memory type, a storage type, or a network input/output type.

13. The device of claim 1, including a digital display coupled to the circuit to present a user interface view.

14. A method, comprising:
receiving, at a resource manager for a configurable computing resource system, a request to allocate the configurable computing resources to a logical server to implement or execute a workload;
determining a first weighted sum allocation score for a first portion of the configurable computing resources available for allocation to the logical server;
determining a second weighted sum allocation score for a second portion of the configurable computing resources available for allocation to the logical server;
comparing the first and second weighted sum allocation scores to rank the first portion relative to the second portion; and
allocating the first portion or the second portion to the logical server based on the ranking.

15. The method of claim 14, comprising:
updating a resource directory to indicate that the first or second portion is allocated to the logical server.

16. The method of claim 14, comprising:
allocating the first portion of the configurable computing resources to the logical server to implement or execute the workload;
monitoring a plurality of operational attributes of each configurable computing resource included in the first portion while the logical server implements or executes the workload;
determining a first weighted sum running score for the first portion based on the plurality of monitored operational attributes;
comparing the first weighted sum running score with one or more historical weighted sum running scores determined for one or more other portions of the configurable computing resources previously allocated to implement or execute the workload, to rank the first weighted sum running score; and
modifying which configurable computing resources are included in the first portion based on the ranking.

17. The method of claim 16, wherein the first weighted sum running score is weighted based on a service level agreement for the workload.

18. The method of claim 14, the first portion and the second portion of the configurable computing resources comprising respective first and second configurations, each of the first and second configurations having a plurality of separate physical elements belonging to one or more types, the one or more types including one of a central processing unit type, a memory type, a storage type, or a network input/output type.

19. An apparatus comprising means for performing the method of any one of claims 14 to 18.

20. At least one machine readable medium comprising a plurality of instructions that, in response to execution by circuitry located with a configurable computing resource system, cause the circuitry to:
receive a request to allocate the configurable computing resources to a logical server to implement or execute a workload;
determine a first weighted sum allocation score for a first portion of the configurable computing resources available for allocation to the logical server;
determine a second weighted sum allocation score for a second portion of the configurable computing resources available for allocation to the logical server;
compare the first and second weighted sum allocation scores to rank the first portion relative to the second portion; and
allocate the first portion or the second portion to the logical server based on the ranking.

21. The at least one machine readable medium of claim 20, the configurable computing resource system comprising the configurable computing resources maintained in a plurality of racks.

22. The at least one machine readable medium of claim 21, the first portion and the second portion of the configurable computing resources comprising respective first and second configurations, each of the first and second configurations having a plurality of separate physical elements belonging to one or more types, the instructions to further cause the circuitry to:
determine the first and second weighted sum allocation scores based on separate physical elements of the same type physically located in different ones of the plurality of racks; and
allocate the first portion or the second portion of the configurable computing resources based on a comparison of the first and second weighted sum allocation scores and based on the request indicating that the allocation is to satisfy a high availability requirement.

23. The at least one machine readable medium of claim 22, the one or more types comprising a central processing unit type, a memory type, a storage type, or a network input/output type.

24. The at least one machine readable medium of claim 20, the first and second weighted sum allocation scores to be weighted based on whether the request indicates that the configurable computing resources are to be weighted based on one of: a cost sensitive template, a performance sensitive template, a high availability template, or a balanced template.

25. The at least one machine readable medium of claim 24, the first and second weighted sum allocation scores based on a plurality of allocation attributes of each configurable computing resource included in the respective first and second portions of the configurable computing resources.

26. The at least one machine readable medium of claim 25, the plurality of allocation attributes including operating temperature, power/energy consumption, total uptime in hours, or unit cost. |
Technology for Allocating Configurable Computing Resources

Related Case

The present application claims priority to U.S. Provisional Patent Application No. 61/945,753, the entire disclosure of which is incorporated herein by reference.

Technical Field

The examples described herein relate generally to pooled or configurable computing resources.

Background

Technological advances in networking have led to an increase in the use of pooled and/or configurable computing resources. The pooled and/or configurable computing resources may include a physical infrastructure for a cloud computing network. The physical infrastructure may include one or more computing systems having processors, memory, storage, networks, and the like. Management entities of these cloud computing networks may allocate portions of the pooled and/or configurable computing resources in order to place or compose logical servers or virtual machines (VMs) for implementing, executing, or running a workload (such as some type of application). Various types of applications or application workloads can utilize this allocated infrastructure in a shared manner by accessing these placed or composed logical servers.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 shows an example system.
Figure 2 shows an example data center/rack management structure.
Figure 3 shows an example allocation score and ranking.
Figure 4 shows an example first logic flow.
Figure 5 shows an example second logic flow.
Figure 6 shows an example workload template.
Figure 7 shows an example block diagram of a device.
Figure 8 shows an example third logic flow.
Figure 9 shows an example storage medium.
Figure 10 shows an example computing platform.

DETAILED DESCRIPTION

As contemplated in this disclosure, various types of applications or application workloads may utilize a shared infrastructure by accessing a logical server or VM that is placed or composed, where the infrastructure may be composed from pooled resources. These pooled resources may include configurable computing resources made up of separate physical elements or components belonging to one or more types such as, but not limited to, a central processing unit (CPU) type, a memory type, a storage type, or a network input/output (NW I/O) type. The easiest way to allocate pooled resources from these separate physical elements to compose a logical server or VM is to use a round-robin approach, which evens out wear and helps ensure a long service life for the allocated separate physical elements. There is currently no known method for allocating pooled resources in a holistic manner, such that any combination of key performance indicators (KPIs) is taken into account: not only resource utilization, but also energy consumption, financial cost, or performance of the resources.

A technological innovation known as rack scale architecture (RSA) involves logically composing servers from pools of separate physical elements to implement or execute incoming workload requests. These RSA servers can be deployed in large data centers, but face at least two problems. First, separate physical elements must be initially selected and composed into a logical server or VM to implement, execute, or run a workload such that the requirements of different stakeholders or users (e.g., power, performance, maintenance, cost, etc.) are met. Second, the composed logical server or VM must maintain the initially allocated performance for the KPIs that are needed or enforced during continued execution of the workload.
Since the composed logical servers or VMs can also be part of a software defined infrastructure (SDI), an SDI-enabled data center can include RSA servers that are dynamically composed to implement or execute workloads. Because composition is dynamic, not only the initial allocation but also continuous optimization or adjustment at runtime is needed to execute the workload. These continuous optimizations or adjustments can also be based on meeting the requirements of different stakeholders or users. It is with respect to these and/or other challenges that the examples described herein are needed.

In some examples, techniques for allocating configurable computing resources can be implemented to include receiving, at a resource manager for a configurable computing resource system, a request to allocate the configurable computing resources to a logical server to implement or execute a workload. The techniques can also include determining a first weighted sum allocation score for a first portion of the configurable computing resources available for allocation to the logical server, and determining a second weighted sum allocation score for a second portion of the configurable computing resources available for allocation to the logical server. The techniques can also include comparing the first and second weighted sum allocation scores to rank the first portion relative to the second portion, and then, based on the ranking, allocating the first portion or the second portion to the logical server.

FIG. 1 illustrates an example system 100. As shown in FIG. 1, system 100 includes racks 110, a resource pool 120, and an arrangement 130. In some examples, as shown in FIG. 1, racks 110 can include racks 112-1, 112-2 through 112-n, where "n" is any positive integer greater than two. Each rack can include a variety of configurable computing resources. These configurable computing resources can include various types of separate physical elements. Types of separate physical elements may include, but are not limited to, a CPU type, a memory type (e.g., random access memory (RAM)), a storage type (e.g., hard disk or solid state drive), an NW I/O type (e.g., network interface card), a power type (e.g., power conversion box), a cooling type (e.g., fan or chiller), or other resource types (e.g., network switch type). These configurable computing resources can be made available in a resource pool (such as resource pool 120) (e.g., available to a resource manager or controller).

According to some examples, the logic and/or features of a resource manager, controller, or scheduler for a system (e.g., system 100) may be able to score and then rank each configurable computing resource included in a resource pool (e.g., resource pool 120) for allocation to a logical server or VM, as described further below. A logical server or VM, for example, can be composed for implementing or executing a workload. The allocation score and subsequent ranking may be used to allocate at least a portion (e.g., a configuration) of the available configurable computing resources in the resource pool to support the placement or composition of logical servers or VMs (such as those assigned to arrangement 130). As shown in FIG. 1, arrangement 130 includes logical servers/VMs 132-1 through 132-m, where "m" is any positive integer greater than three.
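As a rough illustration of this disaggregated model, the following minimal Python sketch (not part of the patent text; all names, field shapes, and the attribute set are illustrative assumptions) models separate physical elements in racks and a logical server composed from them:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PhysicalElement:
    """One disaggregated element in a rack (hypothetical shape)."""
    uuid: str                # e.g. "cpu-2", "mem-1"
    kind: str                # "cpu" | "mem" | "stor" | "nwio"
    rack: str                # rack identifier, e.g. "112-1"
    attrs: Dict[str, float]  # e.g. {"t": 45.0, "p": 0.35, "u": 8760, "c": 900}

@dataclass
class LogicalServer:
    """A logical server/VM composed from pooled elements."""
    elements: List[PhysicalElement] = field(default_factory=list)
```

Later sketches in this description reuse this `PhysicalElement` shape when illustrating the scoring and ranking steps.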
The ranking may, for example, be an attempt to meet power, performance, cost, availability, or maintenance requirements, and also to allow the system to balance, to some extent, long-term operation under the dynamic demand of various operational scenarios. Such scenarios may result in modifications to the allocated portion of configurable computing resources; thus, modifications to the allocated portion may be required.

In some examples, as described more below, the logic and/or features of a resource manager, controller, or scheduler for a system (e.g., system 100) may also be capable of monitoring various operational attributes of each configurable computing resource allocated to a logical server or VM while the logical server or VM implements, runs, or executes a workload. For these examples, the logic and/or features may score the running configurable computing resources and then compare that running score with one or more historical running scores determined for one or more other portions of the configurable computing resources that were previously allocated for implementing or executing the workload, in order to rank that running score. Modifications to those configurable computing resources that are allocated may or may not be based on this ranking. For example, if the ranking of the first allocated configurable computing resources is lower than the historical running scores of other configurable computing resources, the first allocated configurable computing resources can be replaced with new configurable computing resources.

According to some examples, each logical server (such as those shown for arrangement 130 in FIG. 1) may include one or more VMs. For these examples, each of the one or more VMs may be assigned a portion of the allocated configurable computing resources. In other examples, the allocated configurable computing resources can be directly assigned to a given VM.

FIG. 2 illustrates an example data center/rack management structure 200. In some examples, as shown in FIG. 2, rack management structure 200 includes various managers and application programming interfaces (APIs) for managing a data center having elements similar to system 100 shown in FIG. 1. For example, a universal cloud service 210 can communicate with POD manager 230 through the service coordination interface shown in FIG. 2 as universal service API 220. The POD manager 230 may be capable of managing multiple racks including various types of separate physical elements.

According to some examples, POD manager 230 may include a resource manager 201 that includes logic and/or features capable of scoring, ranking, and allocating configurable computing resources made up of these separate physical elements in response to a request from universal cloud service 210 to compose a logical server or VM for implementing or executing a workload that can be associated with the universal cloud service 210. The workload may be, for example, an application workload such as, but not limited to, video processing, encryption/decryption, a web server, content distribution, or a database. As described more below, resource manager 201 can maintain a resource catalog 203 to track which configurable computing resources have been allocated and which configurable computing resources may be available for allocation in response to subsequent requests from universal cloud service 210.

In some examples, as shown in FIG. 2, the POD manager 230 can have an RSA management service API 240 for coupling to rack control plane management (RCPM) 250 via Representational State Transfer (REST) APIs 252-1 through 252-4. REST APIs 252-1 through 252-4 may be part of an infrastructure coordination interface maintained between RCPM 250 and one or more POD managers, including POD manager 230, for providing access to configurable computing resources at the rack level. Such access may include access to separate physical elements maintained at the rack and to metadata for technologies deployed in the racks, where the metadata may include aggregated operational attributes of the separate physical elements. According to some examples, RCPM 250 may also provide access to the physical and logical asset landscapes or maps through a local control management database (CMDB) 256 to speed up identification of available assets and to allocate configurable computing resources in response to requests to compose or place a logical server or VM to implement or execute a workload.

According to some examples, the RCPM 250 can provide a rack-level user interface to implement several basic functions such as discovery, reservation, polling, monitoring, scheduling, and usage. Also, for these examples, the RCPM 250 can be utilized to assemble higher-level computing resources in a multi-rack architecture (e.g., for executing workloads).

In some examples, RCPM 250 may report assets under its management to POD manager 230, which includes resource manager 201. For these examples, the resource manager 201 can include logic and/or features that can assist the POD manager 230 in aggregating the overall physical asset landscape structure from all of the racks included in the POD of racks managed by the POD manager 230 into a single multi-rack asset view. According to some examples, RCPM 250 may also receive and/or respond to requests from POD manager 230 via REST APIs 252-1 through 252-4.

The RCPM 250 can also interface with configurable computing resources including various types of separate physical elements through firmware (FW) APIs 254-1 through 254-4. For example, the various types of separate physical elements are shown in FIG. 2 as network I/O 260-1, CPU 260-2, storage 260-3, and memory 260-4. Controllers 262-1 through 262-4 can interface with corresponding FW APIs 254-1 through 254-4 to facilitate or enable communication between RCPM 250 and these various types of separate physical elements. In some examples, controllers 262-1 through 262-4 can include, but are not limited to, a service processor or a baseboard management controller (BMC).

According to some examples, POD manager 230 may receive a request to allocate a portion of the configurable computing resources maintained in a plurality of racks, such as racks 112-1 through 112-n of system 100. For these examples, the POD manager 230 can receive the request through universal service API 220 in a standardized protocol format, such as the Open Virtualization Format (OVF). The OVF can include hints (e.g., metadata) about the type of workload. The POD manager 230 may be able to determine what hardware configuration is required to place or compose a logical server or VM to implement or execute the workload. The POD manager 230 can then forward the request and indicate to the resource manager 201 the hardware configuration that may be needed.
For example, the hardware configuration may be a configuration of configurable computing resources including various types of separate physical elements (such as CPUs, memory, storage, and NW I/O) required to implement, run, or execute the workload.

In some examples, the logic and/or features of resource manager 201 may be able to score and then rank the available configurable computing resources included in a resource pool (such as resource pool 120 shown in FIG. 1) for allocation. These available configurable computing resources meet the separate physical element configurations needed for implementing, running, or executing the workload. For these examples, the allocation score can be based on a weighted sum determined using example equation (1).

Equation (1):

\[ s = \sum_{i} m_{i}\,\frac{r_{i}}{r_{i,max}} \]

For example equation (1), the allocation score s of a configurable computing resource is the sum, over the resource's attributes r_i (for CPU, memory, storage, NW I/O, power, cooling, etc.), of each attribute normalized to its corresponding maximum value r_{i,max} and multiplied by a weight m_i. The weight m_i may allow certain attributes of the configurable computing resource to be prioritized over other attributes (e.g., by a user). In some examples, r_{i,max} may be based on, but not limited to, one or more service level agreements (SLAs), maximum specifications from the manufacturer, or tested operational parameters or attributes. In some examples, the value r_{i,max} may be automatically obtained for the resource r_i (e.g., from a basic input/output system (BIOS), a suitable manufacturer read-only memory (ROM), or an SLA). In other examples, r_{i,max} can be dynamically adjusted to reflect functionality, for example, by reducing the r_{i,max} of a storage resource such as a hard disk drive when a sensor deployed in a rack (for example, a SMART sensor) predicts a fault or end-of-life value.

According to some examples, the configurable computing resources included in the resource pool may be separate physical elements maintained in different trays and/or racks. For these examples, RCPM 250 may be able to track the attributes of each configurable computing resource in real time, for example, one or more NW I/O devices included in network I/O 260-1, one or more CPUs included in CPU 260-2, one or more storage devices included in storage 260-3, or one or more memory devices included in memory 260-4. The attributes may include, but are not limited to, temperature (t, in degrees Celsius), power/energy consumption (p, in kilovolt-amperes), total uptime (u, in hours), or unit cost (c, in US dollars).

In some examples, a request can be received from the universal cloud service 210 to allocate configurable computing resources to a logical server or VM to implement or execute a workload. For these examples, the logic and/or features at the POD manager 230 can determine that the workload will require 1 CPU, 2 gigabytes (GB) of RAM, 1 terabyte (TB) of storage, and a 10 gigabit (Gb) capable NW I/O device or NIC. Also, the request may indicate a template that may cause each of these attributes of the configurable computing resources to be weighted in a specific manner.
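As a rough illustration, the following Python sketch (not from the patent; the function name and data shapes are assumptions, reusing the `PhysicalElement` shape sketched earlier) computes an equation (1)-style allocation score for one element:

```python
# Tracked attributes from the text: temperature, power, uptime, unit cost.
ATTRS = ("t", "p", "u", "c")

def allocation_score(elem, maxima, weights):
    """Equation (1)-style score: s = sum_i m_i * (r_i / r_i,max).

    Lower scores rank higher in the FIG. 3 example that follows."""
    return sum(weights[i] * elem.attrs[i] / maxima[i] for i in ATTRS)
```

Here `maxima` would hold the r_i,max values (e.g., obtained from a BIOS, manufacturer ROM, or SLA, as described above) and `weights` would hold the template multipliers m_i; both are assumed inputs.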
According to some examples, the template may include, but is not limited to, a "cost sensitive" template, a "performance sensitive" template, a "high availability" template, or a "balanced" template. The template indicated in the request may set a weight or multiplier (m, Σm = 1) for each attribute. The cost sensitive template may have weights of m_t = 0.2, m_p = 0.2, m_u = 0.1, and m_c = 0.5; it thus causes the unit cost c to have the highest weight. The performance sensitive template may have weights of m_t = 0.2, m_p = 0.1, m_u = 0.6, and m_c = 0.1; it causes the total uptime u to have the highest weight, while the unit cost c and the power/energy consumption p have lower relative weights. The high availability template may have weights of m_t = 0.1, m_p = 0.1, m_u = 0.7, and m_c = 0.1; it causes the total uptime u to have the highest weight while giving all other attributes lower relative weights. The balanced template may have equal weights of m_t = 0.25, m_p = 0.25, m_u = 0.25, and m_c = 0.25.

In some examples, the resource manager 201 at the POD manager 230 can include logic and/or features for determining a weighted sum allocation score for each available configurable computing resource using example equation (2) and taking the lowest score over the available resources.

Equation (2):

\[ k^{*} = \arg\min_{k}\; s_{k}, \qquad s_{k} = \sum_{i \in \{t,\,p,\,u,\,c\}} m_{i}\,\frac{r_{k,i}}{r_{k,i,max}} \]

Figure 3 shows an example allocation score and ranking 300. In some examples, CPU ranking 310, memory ranking 320, storage device ranking 330, and network ranking 340 may be generated using example equation (2) based on the values of the CPU, memory, storage, and nw I/O attributes (t, p, u, and c). For these examples, the lowest allocation score for each type of separate physical element is in bold, indicating the highest ranking among available configurable computing resources of the same type. For example, the CPU with the lowest score/highest rank has a universally unique identifier (UUID) of cpu-2, the memory with the lowest allocation score/highest rank has a UUID of mem-1, the storage device with the lowest allocation score/highest rank has a UUID of stor-4, and the network I/O with the lowest allocation score/highest rank has a UUID of nw I/O-4.

According to some examples, the above request determined to require 1 CPU, 2 GB of RAM, 1 TB of storage, and a 10 Gb capable NW I/O device or NIC would result in the configuration including cpu-2, mem-1, stor-4, and nw I/O-4 being the highest ranked configuration of available configurable computing resources, which would therefore be used to compose a logical server or VM to implement or execute the workload associated with the request. However, as mentioned further below, in addition to pure ranking, other considerations may result in selecting a configuration with configurable computing devices from separate racks (e.g., to meet high availability requirements).

In some examples, the allocated configurable computing resources may be marked as reserved or unavailable when assigned to a logical server or VM that is composed to implement or execute a workload. For these examples, a resource manager (e.g., resource manager 201) can maintain a resource catalog (e.g., resource catalog 203) to track which portion or portions of the pooled resources have been allocated.
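A minimal sketch of this template-weighted, per-type selection (hypothetical names; the weights are the ones listed above, and `allocation_score` is the earlier sketch):

```python
# Template weights from the text; each set sums to 1.
TEMPLATES = {
    "cost_sensitive":    {"t": 0.20, "p": 0.20, "u": 0.10, "c": 0.50},
    "performance":       {"t": 0.20, "p": 0.10, "u": 0.60, "c": 0.10},
    "high_availability": {"t": 0.10, "p": 0.10, "u": 0.70, "c": 0.10},
    "balanced":          {"t": 0.25, "p": 0.25, "u": 0.25, "c": 0.25},
}

def rank_pool_by_type(pool, maxima_by_kind, template):
    """Equation (2)-style selection: lowest score per element type wins."""
    weights, best = TEMPLATES[template], {}
    for elem in pool:
        s = allocation_score(elem, maxima_by_kind[elem.kind], weights)
        if elem.kind not in best or s < best[elem.kind][0]:
            best[elem.kind] = (s, elem.uuid)
    return best  # e.g. {"cpu": (0.31, "cpu-2"), "mem": (0.27, "mem-1"), ...}
```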
Figure 4 shows an example first logic flow. As shown in FIG. 4, the first logic flow includes a process 400. In some examples, elements of system 100 and data center/rack management structure 200 as shown in FIGS. 1 and 2, the ranking shown in FIG. 3, or example equations (1) or (2) above may be used to demonstrate example operations related to process 400. The example operations described, however, are not limited to implementations on system 100, data center/rack management structure 200, the ranking shown in FIG. 3, or example equations (1) or (2).

Moving from start to block 410 (receive resource allocation request), the logic or features at the resource manager may receive a resource allocation request (e.g., from a POD manager) for allocating available configurable computing resources. For example, resource manager 201 can receive a request from POD manager 230 to allocate resources from resource pool 120 to place one or more logical servers or VMs. The request may also indicate a template, such as a cost sensitive template, a performance sensitive template, a high availability template, or a balanced template, that may have been indicated from the universal cloud service 210 to the POD manager 230.

Moving from block 410 to block 420 (process request), the logic or features at the resource manager can rank the available configurable computing resources based on the hardware configuration indicated in the resource allocation request. In some examples, the logic or features of resource manager 201 may rank available resources from resource pool 120 using example equations (1) and (2). Likewise, the template indicated in the request from the universal cloud service 210 may cause the logic or features to weight each of these available configurable computing resources as described above.

Moving from block 420 to decision block 430 (high availability requested?), the logic or features at the resource manager can determine whether the received allocation request indicates a request for high availability. If a request for high availability is indicated, the process moves to block 450. Otherwise, the process moves to block 440.

In some examples, the request from the universal cloud service 210 may have indicated high availability through a high availability template. For these examples, the allocation request may have been initially received by the POD manager 230 in an OVF format, the format including a flag indicator indicating the need for high availability resources. For these examples, POD manager 230 can forward this indicator to resource manager 201. The resource manager 201 can then apply the high availability template to cause the logic or features to weight each operational attribute of the available configurable computing resources.

Moving from decision block 430 to block 440 (allocate based on scoring from anywhere), the logic or features at the resource manager can allocate a portion of the configurable resources from any of the racks, since high availability was not indicated. In some examples, the logic and/or features at resource manager 201 may allocate resources from resource pool 120, which may be pulled from available resources in one or more of racks 112-1 through 112-n. According to some examples, the resource manager may include logic or features for updating the resource catalog to indicate the allocation of configurable computing resources. For example, the resource manager 201 can update the resource catalog 203.

Moving from decision block 430 to block 450 (allocate best scores in different racks), the logic or features at the resource manager allocate the configurable resources having the best scores from different racks, since high availability was indicated. In some examples, resource manager 201 can allocate resources from different racks to avoid the possibility of rack-level hardware failure. The resources may even be assigned to another data center, based on existing SLAs and on the storage exposure in the aforementioned racks. In this way, highly available services and input/output (IO) limited jobs for placed or composed logical servers or VMs can be prioritized across separate racks, thereby avoiding overloading the same storage device and maximizing performance. The process can then end.
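One way the block 450 branch could be realized is sketched below (hypothetical; the rack-diversity policy and names are assumptions building on the earlier sketches):

```python
def allocate_high_availability(pool, maxima_by_kind, required_kinds):
    """Pick the best-scoring element of each required kind, forcing each
    pick onto a rack not already used (block 450-style behavior)."""
    weights = TEMPLATES["high_availability"]
    chosen, used_racks = [], set()
    for kind in required_kinds:
        candidates = sorted(
            (e for e in pool if e.kind == kind and e.rack not in used_racks),
            key=lambda e: allocation_score(e, maxima_by_kind[kind], weights),
        )
        if not candidates:
            raise RuntimeError(f"no rack-diverse element of kind {kind!r}")
        chosen.append(candidates[0])
        used_racks.add(candidates[0].rack)
    return chosen
```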
In some examples, the resource manager can include logic or features for updating the resource catalog to indicate the allocation of configurable computing resources. For example, the resource manager 201 can update the resource catalog 203.

According to some examples, example equation (3) may be used to determine when a configurable computing resource is exceeded or is operating above a maximum operating condition.

Equation (3):

\[ \frac{r_{i}}{r_{i,max}} > 1 \]

For these examples, depending on its sign, the weight m_i can be interpreted as either a reward or a penalty for each resource, and in principle can be a way to inform the resource manager to reallocate resources elsewhere, for example, a dynamic migration due to resource depletion or planned maintenance. In the case of disaggregated hardware in a data center, migration can be equivalent to simply exchanging consumed resources for new resources. In this scenario, dynamic migration can potentially be less disruptive.

Figure 5 shows an example second logic flow. As shown in FIG. 5, the second logic flow includes a process 500. In some examples, elements of system 100 and data center/rack management structure 200 as shown in FIGS. 1 and 2, example equations (1) through (3) described above, or flow 400 shown in FIG. 4 may be used to demonstrate example operations related to process 500. However, the example operations described are not limited to implementations on system 100, data center/rack management structure 200, the ranking shown in FIG. 3, example equations (1) through (3) above, or flow 400.

Moving from start to block 510 (initial allocation), the logic or features at the resource manager may perform the initial allocation of the configurable computing resources to the logical server or VM to implement, execute, or run the workload, as described above with respect to FIG. 4 and process 400. For example, resource manager 201 can allocate configurable computing resources from resource pool 120, which can include multiple separate physical elements of one or more types (such as a CPU type, a storage type, a memory type, or an nw I/O type) that can reside in one or more racks among racks 110.

Moving from block 510 to block 520 (monitor attributes), the logic or features at the resource manager may be able to monitor operational attributes of each configurable computing resource that is allocated to the logical server or VM to implement, execute, or run the workload. In some examples, a controller co-located with the separate physical elements belonging to each type (shown in FIG. 2 as controllers 262-1 to 262-4 for nw I/O 260-1, CPU 260-2, storage device 260-3, and memory 260-4) may have a hardware monitoring profile, which may be configured by the resource manager 201. The hardware monitoring profiles can be configured by the resource manager 201 to specify to the controllers which operational attributes are to be monitored while the logical server or VM implements, executes, or runs the workload.
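Such a hardware monitoring profile might look like the following minimal sketch (field names, polling intervals, and the `configure` call are assumptions, not the patent's API):

```python
# Hypothetical per-type monitoring profiles pushed to controllers 262-1..262-4.
MONITORING_PROFILE = {
    "nwio": {"metrics": ["data_throughput"], "poll_seconds": 30},
    "cpu":  {"metrics": ["utilization"],     "poll_seconds": 30},
    "mem":  {"metrics": ["reads", "writes"], "poll_seconds": 60},
    "stor": {"metrics": ["io_latency"],      "poll_seconds": 60},
}

def configure_monitoring(controllers_by_kind, profile=MONITORING_PROFILE):
    for kind, controller in controllers_by_kind.items():
        # e.g. delivered over the FW APIs 254-1..254-4 shown in FIG. 2
        controller.configure(profile[kind])
```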
For example, for performance-related run attributes, controller 262-1 can monitor the data throughput of nw I/O 260-1, controller 262-2 can monitor the CPU utilization of CPU 260-2, controller 262-3 can monitor the input/output latency of storage device 260-3, and controller 262-4 can monitor the number of memory reads/writes for memory 260-4. These monitored operational attributes can then be collected by resource manager 201 in response to an event, a warning, or periodic polling.

Moving from block 520 to block 530 (generate score), the logic or features at the resource manager may be able to generate or determine a first weighted sum running score for the allocated configurable computing resources based on the operational attributes monitored while the logical server or VM implements or runs the workload. In some examples, the first weighted sum running score S_1 may be defined over the performance of workload W_j running on its allocated set of configurable computing resources, where the subscript 1 represents the configuration of configurable computing resources composing the logical server or VM for implementing, executing, or running the workload j. Example equation (4) can be used to determine the weighted sum running score S_1(W_j).

Equation (4):

\[ S_{i}(W_{j}) = M_{1}\,U_{i}(W_{j}) + M_{2}\,P_{i}(W_{j}) - M_{3}\,C_{i}(W_{j}) - M_{4}\,E_{i}(W_{j}) \]

For example equation (4), U_i(W_j) may represent resource utilization and P_i(W_j) may represent configured performance attributes, both of which should be maximized. Also for example equation (4), C_i(W_j) may represent cost and E_i(W_j) may represent the energy consumption associated with running workload W_j, both of which should be minimized. The associated weights (M_1, M_2, M_3, M_4) may be user-defined multipliers that may prioritize certain allowed attribute scores. These relative weights can be derived from the SLA of the workload, similar to the relative weights m_1, m_2, m_3, m_4 described above (M, ΣM = 1).

According to some examples, the performance P_i(W_j) may be a dedicated metric, such as the number of transactions per second, the latency, or any suitable KPI associated with a given configuration i for a given workload j.

In some examples, U_i(W_j) may be determined by example equation (5).

Equation (5):

\[ U_{i}(W_{j}) = \frac{CPU_{i} + mem_{i} + nwI/O_{i} + stor_{i}}{4} \]

where CPU_i, mem_i, nw I/O_i, and stor_i may represent the average utilization rates of the separate physical elements of the CPU, memory, nw I/O, and storage types included in a given configuration i for a given workload j.

According to some examples, the energy consumption E_i(W_j) may be the aggregate power required to run a given workload j, calculated per logical server or VM.
The energy consumption E_i(W_j) can be determined by example equation (6).

Equation (6):

\[ E_{i}(W_{j}) = 100 \times \left( \left( \frac{CPU_{i}}{CPU_{max}}\,VA_{CPU} + \frac{mem_{i}}{mem_{max}}\,VA_{mem} + \frac{nwI/O_{i}}{nwI/O_{max}}\,VA_{nwI/O} + \frac{stor_{i}}{stor_{max}}\,VA_{stor} \right) \div 4 \right) \]

Here, the utilization of each configurable computing resource is multiplied by the maximum volt-amperes (VA) consumed by each of the different types of separate physical elements included in a given configuration i for a given workload j. In some examples, the aggregated power can be obtained from the controller for a given type of separate physical element. For example, the aggregated power for CPU 260-2 can be obtained from controller 262-2.

In some examples, C_i(W_j) may be determined using example equation (7) when running a given workload j while using the configurable computing resources included in a given configuration i.

Equation (7):

\[ C_{i}(W_{j}) = \left( \frac{v_{CPU}}{v_{CPU,max}} + \frac{v_{mem}}{v_{mem,max}} + \frac{v_{nwI/O}}{v_{nwI/O,max}} + \frac{v_{stor}}{v_{stor,max}} \right) \div 4 \]

where v is the monetary value of a configurable computing resource (e.g., in US dollars). For example equation (7), the unit currency values v_CPU, v_mem, v_nwI/O, and v_stor of the multiple types of separate physical elements, such as the CPU, memory, nw I/O, and storage device, are used; for normalization purposes, v_CPU,max and so on are also used. According to some examples, the resource catalog maintained by the resource manager may be updated over time to reflect a depreciation policy. The depreciation policy may depend on the type of separate physical element. The price can be obtained online using the part number of each separate physical element or can be provided by the supplier.
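Pulling equations (4) through (7) together, a rough Python sketch follows; the sign convention (rewarding U and P, penalizing C and E) and the data shapes are assumptions carried over from the reconstructions above:

```python
KINDS = ("cpu", "mem", "nwio", "stor")

def utilization(u):                        # equation (5): mean utilization
    return sum(u[k] for k in KINDS) / 4.0

def energy(u, u_max, va):                  # equation (6): VA-weighted draw
    return 100.0 * sum(u[k] / u_max[k] * va[k] for k in KINDS) / 4.0

def cost(v, v_max):                        # equation (7): normalized unit cost
    return sum(v[k] / v_max[k] for k in KINDS) / 4.0

def run_score(M, U, P, C, E):              # equation (4): reward U and P,
    return M[0]*U + M[1]*P - M[2]*C - M[3]*E   # penalize C and E

# The score for the current configuration would then be ranked against the
# historical scores S_i(W_j) of earlier configurations for the same workload.
```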
Moving from block 530 to decision block 540 (performance good?), the logic or features at the resource manager can rank the determined first running score S_1(W_j) from example equation (4) against one or more historical weighted sum running scores S_i(W_j), the one or more historical weighted sum running scores having been determined for one or more other configurations i of the configurable computing resources used by the logical server or VM to implement, execute, or run the workload j. For these examples, good performance may be based on meeting an associated KPI, which may be part of the SLA for the workload and used to ensure proper performance and resource reallocation as new configurable computing resources are added to or subtracted from the system or data center. The process ends if good performance is determined (e.g., the associated KPI is successfully met or the ranking compares favorably with the historical configurations). Otherwise, the process moves to block 550.

Moving from decision block 540 to block 550 (modify allocation), the logic or features at the resource manager can modify the allocation of configurable computing resources based on the first weighted sum running score S_1(W_j) having an unfavorable ranking compared to the historical configurations and/or not satisfying the associated KPI. Modifying may include selecting a different CPU, memory, nw I/O, or storage device, performing the monitoring again, generating a second weighted sum running score, and then again comparing the second running score with the running scores of the historical configurations until the running score improves.

FIG. 6 shows an example workload template 600. In some examples, as shown in FIG. 6, workload template 600 includes templates 610, 620, 630, 640, and 650 for corresponding application workloads for video processing, encryption/decryption, web server, content distribution network, or database. The disclosure is not limited to these examples of application workloads; other application workloads are contemplated.

For the example templates shown in FIG. 6, resource allocations for specific application workloads may be arranged in a workload template that reflects those separate physical elements that have consistently generated a high-ranking weighted sum running score. For example, for template 610 and video processing workloads, cpu-3, cpu-8, mem-1, and nw I/O-5 may represent the configuration having the highest or best ranked weighted sum running score for a logical server or VM to implement, execute, or run a video processing application workload, based on historical running attributes. Thus, template 610 can be used when a workload request for video processing is received. In another example, for template 640 and a content distribution network, cpu-5, cpu-11, mem-1, stor-1, stor-2, nw I/O-7, and nw I/O-8 may represent the configuration having the highest or best ranked weighted sum running score for a logical server or VM to implement, execute, or run a content distribution network application workload, based on historical operational attributes. Thus, template 640 can be used when a workload request for a content distribution network application is received.
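A FIG. 6-style workload template could be held as a simple mapping from workload type to the historically best-ranked element UUIDs; this sketch and its fallback behavior are assumptions:

```python
WORKLOAD_TEMPLATES = {
    "video_processing":     ["cpu-3", "cpu-8", "mem-1", "nw I/O-5"],
    "content_distribution": ["cpu-5", "cpu-11", "mem-1", "stor-1",
                             "stor-2", "nw I/O-7", "nw I/O-8"],
}

def preferred_configuration(workload_type):
    """Return the historically best-ranked UUIDs, or None to fall back
    to fresh equation (1)/(2) scoring and ranking."""
    return WORKLOAD_TEMPLATES.get(workload_type)
```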
FIG. 7 shows an example block diagram of device 700. While the device 700 shown in FIG. 7 has a limited number of components in a certain topology, it will be appreciated that device 700 may include more or fewer components in alternate topologies as contemplated for a given implementation.

Device 700 may be supported by circuitry 720 maintained at a computing device that includes logic or features for supporting a resource manager or controller to allocate configurable computing resources. Circuitry 720 can be arranged to execute one or more software or firmware implemented modules or components 722-a. It is worth noting that "a", "b", "c", and similar designators as used herein are intended to be variables representing any positive integer. Thus, for example, if an implementation sets the value a = 5, then a complete set of software or firmware components 722-a may include components 722-1, 722-2, 722-3, 722-4, or 722-5. The examples presented are not limited in this context, and the different variables used throughout may represent the same or different integer values.

According to some examples, circuitry 720 can include a processor or processor circuit. Circuitry 720 can be part of computing device circuitry that includes processing cores (e.g., used as a central processing unit (CPU)). The circuitry including one or more processing cores can be any of various commercially available processors, including but not limited to AMD Athlon and Duron processors; ARM application, embedded and secure processors; Qualcomm Snapdragon processors; IBM and Motorola DragonBall and PowerPC processors; Nvidia GPUs and processors; IBM and Sony Cell processors; Intel Celeron, Core (2) Duo, Core i3, Core i5, Core i7, Itanium, Pentium, and Xeon processors; and similar processors. Dual processors, multi-core processors, and other multi-processor architectures may also be employed as part of circuitry 720. According to some examples, circuitry 720 may also be an application specific integrated circuit (ASIC), and at least some of components 722-a may be implemented as hardware elements of the ASIC.

According to some examples, device 700 can include request component 722-1. Request component 722-1 can be executed by circuitry 720 to receive a request to allocate configurable computing resources to a logical server or VM for implementing, executing, or running a workload. For these examples, the request may be included in request 705 and may indicate a hardware configuration that may be needed to place or compose the one or more logical servers or VMs. Request 705 may also indicate a template for setting weights or multipliers for various attributes of the hardware configuration. For example, a cost sensitive template, a performance sensitive template, a high availability template, or a balanced template may be included in request 705, each of which may differently weight configurable computing resource attributes such as, but not limited to, temperature, power, utilization, or cost.

According to some examples, device 700 may also include scoring component 722-2. Scoring component 722-2 can be executed by circuitry 720 to determine a first weighted sum allocation score for a first portion of the configurable computing resources available for allocation to the logical server or VM and a second weighted sum allocation score for a second portion of the configurable computing resources available for allocation to the logical server or VM. For these examples, the weighted sum allocation scores can be based on example equations (1) and (2) described above. Available configurable computing resources may be drawn, for example, from separate physical components, such as CPUs, memory, nw I/Os, or storage devices, maintained in one or more racks of the data center (e.g., racks 110 of system 100). Resource pool information 710 may include an indication of those available configurable computing resources, and may also include information regarding configurable computing resource attributes such as, but not limited to, temperature, power, utilization, or cost. In some examples, the temperature, power, utilization, or cost attributes are maintained for each available CPU, memory, nw I/O, or storage device in the one or more racks.
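Before turning to the ranking component, the following is a minimal sketch, under stated assumptions, of the kind of weighted sum allocation score scoring component 722-2 might compute. The per-template weights and the convention that attributes are normalized to [0, 1] with higher values being more favorable are illustrative assumptions, not the disclosed equations (1) and (2).

```python
# Hypothetical per-template weights over the four attributes named in the
# text: temperature, power, utilization, and cost.
TEMPLATE_WEIGHTS = {
    "cost sensitive":        {"temp": 0.10, "power": 0.20, "util": 0.20, "cost": 0.50},
    "performance sensitive": {"temp": 0.20, "power": 0.10, "util": 0.60, "cost": 0.10},
    "balanced":              {"temp": 0.25, "power": 0.25, "util": 0.25, "cost": 0.25},
}

def allocation_score(attrs: dict, template: str) -> float:
    """Weighted sum over attribute values assumed normalized to [0, 1],
    where higher means more favorable for allocation."""
    weights = TEMPLATE_WEIGHTS[template]
    return sum(weights[k] * attrs[k] for k in weights)

# Two candidate portions of the resource pool (fabricated attribute values):
first  = {"temp": 0.4, "power": 0.5, "util": 0.7, "cost": 0.3}
second = {"temp": 0.6, "power": 0.4, "util": 0.5, "cost": 0.2}

s1 = allocation_score(first, "cost sensitive")    # 0.43
s2 = allocation_score(second, "cost sensitive")   # 0.34
print("first" if s1 >= s2 else "second", "portion ranks higher")
```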
In some examples, device 700 can also include ranking component 722-3. Ranking component 722-3 can be executed by circuitry 720 to compare the first and second weighted sum allocation scores determined by scoring component 722-2 for the available configurable computing resources. The comparison may include, for example, ranking component 722-3 using the first and second weighted sum allocation scores to rank the first portion of configurable computing resources relative to the second portion of configurable computing resources.

According to some examples, device 700 may also include allocation component 722-4. Allocation component 722-4 can be executed by circuitry 720 to allocate the first portion or the second portion to the logical server or VM based on the comparison by ranking component 722-3. For these examples, allocation component 722-4 can indicate the allocation in allocation 715, which may be sent to the POD manager and/or to one or more RCPMs associated with the rack(s) having the allocated resources. Allocation component 722-4 can also update the resource catalog to indicate that the first or second portion has been allocated to the logical server or VM.

In some examples, device 700 can also include monitoring component 722-5. Monitoring component 722-5 can be executed by circuitry 720 to monitor a plurality of operating attributes of each configurable computing resource included in the allocated first or second portion while the logical server or VM implements, executes, or runs the workload.

According to some examples, the first portion of configurable computing resources may be allocated by allocation component 722-4 to a logical server or VM for implementing, executing, or running a workload. Monitoring component 722-5 can then monitor the various operating attributes of each configurable computing resource included in the first portion while the logical server or VM implements, executes, or runs the workload. Scoring component 722-2 may then determine a first weighted sum run score for the first portion based on the plurality of operating attributes monitored by monitoring component 722-5. For these examples, scoring component 722-2 may determine the first weighted sum run score using example equations (4) through (7). Ranking component 722-3 can then rank the first weighted sum run score against one or more historical weighted sum run scores determined for one or more other portions of the configurable computing resources that were previously allocated for implementing, executing, or running the workload. Allocation component 722-4 can then modify which configurable computing resources are included in the first portion based on the ranking. For these examples, allocation component 722-4 can indicate any modification of the aforementioned allocations in allocation 715 sent to the POD manager and/or to the one or more RCPMs associated with the rack(s) having the allocated or previously allocated resources.
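To illustrate the allocate, monitor, score, rank, and modify sequence just described, here is a small self-contained Python sketch. The scoring rule, the "beats the historical median" ranking test, and all data values are fabricated for illustration and are not the disclosed equations.

```python
# Historical weighted sum run scores S_i(W_j) for prior configurations of
# this workload (fabricated values).
HISTORY = [72.0, 68.5, 75.2]

def weighted_run_score(attrs: dict, weights: dict) -> float:
    """Weighted sum run score over monitored operating attributes."""
    return sum(weights[k] * attrs[k] for k in weights)

def ranks_well(score: float, history: list) -> bool:
    """Here, a run score 'ranks well' if it beats the historical median."""
    return score >= sorted(history)[len(history) // 2]

def monitor(portion: dict) -> dict:
    # Stand-in for monitoring component 722-5; real attribute values would
    # come from controllers on the allocated physical components.
    return {"util": portion["util"], "power": portion["power"]}

weights = {"util": 100.0, "power": -20.0}   # hypothetical SLA-derived weights
portion = {"name": "first", "util": 0.60, "power": 0.50}
alternates = [{"name": "alt-1", "util": 0.85, "power": 0.45}]

# Monitor, score, and rank; modify the allocation while the score ranks poorly.
while not ranks_well(weighted_run_score(monitor(portion), weights), HISTORY):
    portion = alternates.pop(0)             # allocation component swaps resources
print("kept configuration:", portion["name"])   # -> alt-1
```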
Included herein is a set of logic flows representative of example methodologies for performing novel aspects of the disclosed architecture. While, for purposes of simplicity of explanation, the one or more methodologies shown herein are shown and described as a series of acts, those skilled in the art will understand and appreciate that the methodologies are not limited by the order of the acts. Some acts may occur in a different order and/or concurrently with other acts shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all acts illustrated in a methodology may be required for a novel implementation.

A logic flow may be implemented in software, firmware, and/or hardware. In software and firmware embodiments, a logic flow may be implemented by computer executable instructions stored on at least one non-transitory computer readable medium or machine readable medium, such as an optical, magnetic, or semiconductor storage device. The embodiments are not limited in this context.

FIG. 8 shows an example of a logic flow. As shown in FIG. 8, the logic flow includes logic flow 800. Logic flow 800 may be representative of some or all of the operations executed by one or more of the logic, features, or devices described herein, such as device 700. More particularly, logic flow 800 may be implemented by at least request component 722-1, scoring component 722-2, ranking component 722-3, or allocation component 722-4.

According to some examples, logic flow 800 at block 802 may receive, at a resource manager for a system of configurable computing resources, a request to allocate the configurable computing resources to a logical server to implement or execute a workload. For these examples, the request may be received by request component 722-1.

In some examples, logic flow 800 at block 804 may determine a first weighted sum allocation score for a first portion of the configurable computing resources available for allocation to the logical server. For these examples, scoring component 722-2 can determine the first weighted sum allocation score.

According to some examples, logic flow 800 at block 806 may determine a second weighted sum allocation score for a second portion of the configurable computing resources available for allocation to the logical server. For these examples, scoring component 722-2 can determine the second weighted sum allocation score.

In some examples, logic flow 800 at block 808 may compare the first and second weighted sum allocation scores to rank the first portion relative to the second portion. For these examples, ranking component 722-3 can compare the first and second weighted sum allocation scores to rank the portions relative to each other.

In some examples, logic flow 800 at block 810 may allocate the first portion or the second portion to the logical server based on the ranking. For these examples, allocation component 722-4 can cause or implement the allocation.
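Expressed as straight-line code, blocks 802 through 810 of logic flow 800 might look like the following minimal sketch. The score function and all names are hypothetical stand-ins rather than the patented scoring equations.

```python
def score(portion: dict, template: str) -> float:
    # Hypothetical stand-in scorer: more headroom and lower cost rank higher.
    return portion["headroom"] - portion["cost"]

def logic_flow_800(request: dict, first: dict, second: dict) -> dict:
    # Block 802: request received at the resource manager (request 705).
    template = request["template"]
    # Blocks 804/806: weighted sum allocation scores for each portion.
    s1, s2 = score(first, template), score(second, template)
    # Block 808: compare the scores to rank the first portion vs. the second.
    chosen = first if s1 >= s2 else second
    # Block 810: allocate the higher-ranked portion to the logical server.
    return {"logical_server": request["logical_server"], "allocated": chosen}

request = {"template": "balanced", "logical_server": "ls-0"}
p1 = {"name": "first", "headroom": 0.7, "cost": 0.3}
p2 = {"name": "second", "headroom": 0.5, "cost": 0.2}
print(logic_flow_800(request, p1, p2)["allocated"]["name"])  # -> first
```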
FIG. 9 shows an example of a storage medium 900. Storage medium 900 may comprise an article of manufacture. In some examples, storage medium 900 may include any non-transitory computer readable medium or machine readable medium, such as an optical, magnetic, or semiconductor storage device. Storage medium 900 may store various types of computer executable instructions, such as instructions to implement logic flow 800. Examples of a computer readable or machine readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of computer executable instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. The examples are not limited in this context.

FIG. 10 shows an example computing platform 1000. In some examples, as shown in FIG. 10, computing platform 1000 may include a processing component 1040, other platform components 1050, or a communications interface 1060. According to some examples, computing platform 1000 may be implemented in a computing device, such as a server in a system such as a data center or server farm that supports a POD manager and/or resource manager for allocating configurable computing resources as mentioned above.

According to some examples, processing component 1040 may execute processing operations or logic for device 700 and/or storage medium 900. Processing component 1040 may include various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, processor circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software elements may include software components, programs, applications, computer programs, application programs, device drivers, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints, as desired for a given example.

In some examples, other platform components 1050 may include common computing elements, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components (e.g., digital displays), power supplies, and so forth.
Examples of memory units may include without limitation various types of computer readable and machine readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, arrays of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices (e.g., USB memory), solid state drives (SSD), and any other type of storage media suitable for storing information.

In some examples, communications interface 1060 may include logic and/or features to support a communication interface. For these examples, communications interface 1060 may include one or more communication interfaces that operate according to various communication protocols or standards to communicate over direct or network communication links. Direct communications may occur via use of communication protocols or standards described in one or more industry standards (including progenies and variants), such as those associated with the PCI specification. Network communications may occur via use of communication protocols or standards such as those described in one or more Ethernet standards promulgated by the Institute of Electrical and Electronics Engineers (IEEE). For example, one such Ethernet standard may include IEEE 802.3-2008, Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications, published in December 2008 (hereinafter "IEEE 802.3"). Network communication may also occur according to one or more OpenFlow specifications, such as the OpenFlow Hardware Abstraction API Specification. Network communications may also occur according to the InfiniBand Architecture Specification, Volume 1, Release 1.2.1, published in November 2007 ("the InfiniBand Architecture Specification").

Computing platform 1000 may be part of a computing device that may be, for example, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, a multiprocessor system, a processor-based system, or some combination thereof. Accordingly, functions and/or specific configurations of computing platform 1000 described herein may be included or omitted in various embodiments of computing platform 1000, as suitably desired.

The components and features of computing platform 1000 may be implemented using any combination of discrete circuitry, application specific integrated circuits (ASICs), logic gates, and/or single chip architectures. Further, the features of computing platform 1000 may be implemented using microcontrollers, programmable logic arrays, and/or microprocessors, or any combination of the foregoing where suitably appropriate.
It should be noted that hardware, firmware, and/or software elements may be collectively or individually referred to herein as "logic" or "circuit."

It should be appreciated that the exemplary computing platform 1000 shown in the block diagram of FIG. 10 may represent one functionally descriptive example of many potential implementations. Accordingly, division, omission, or inclusion of block functions depicted in the accompanying figures does not imply that the hardware components, circuits, software, and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments.

One or more aspects of at least one example may be implemented by representative instructions stored on at least one machine readable medium that represents various logic within a processor, which when read by a machine, computing device, or system causes the machine, computing device, or system to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Various examples may be implemented using hardware elements, software elements, or a combination of both. In some examples, hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. In some examples, software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints, as desired for a given implementation.

Some examples may include an article of manufacture or at least one computer readable medium. A computer readable medium may include a non-transitory storage medium to store logic. In some examples, the non-transitory storage medium may include one or more types of computer readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth.
In some examples, the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.

According to some examples, a computer readable medium may include a non-transitory storage medium to store or maintain instructions that when executed by a machine, computing device, or system cause the machine, computing device, or system to perform methods and/or operations in accordance with the described examples. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The instructions may be implemented according to a predefined computer language, manner, or syntax, for instructing a machine, computing device, or system to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled, and/or interpreted programming language.

Some examples may be described using the expression "in one example" or "an example" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the example is included in at least one example. The appearances of the phrase "in one example" in various places in the specification are not necessarily all referring to the same example.

Some examples may be described using the expression "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, descriptions using the terms "connected" and/or "coupled" may indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other.

The following examples pertain to additional examples of the technologies disclosed herein.

Example 1. An example device can include circuitry for a controller of a system of configurable computing resources. The device can also include a request component for execution by the circuitry to receive a request to allocate the configurable computing resources to a logical server to implement or execute a workload. The device can also include a scoring component for execution by the circuitry to determine a first weighted sum allocation score for a first portion of the configurable computing resources available for allocation to the logical server and to determine a second weighted sum allocation score for a second portion of the configurable computing resources available for allocation to the logical server. The device can also include a ranking component for execution by the circuitry to compare the first and second weighted sum allocation scores. The device can also include an allocation component for execution by the circuitry to allocate the first portion or the second portion to the logical server based on the comparison.

Example 2. The device of example 1, the allocation component can update a resource catalog to indicate that the first or second portion is allocated to the logical server.

Example 3.
The device of example 1, the system of configurable computing resources can include the configurable computing resources maintained in a plurality of racks.

Example 4. The device of example 3, the first and second portions of configurable computing resources can include respective first and second configurations, each of the first and second configurations having multiple ones of one or more types of separate physical components. The scoring component can determine the first and second weighted sum scores based on separate physical components of a same type being physically located in different racks of the plurality of racks. The allocation component can allocate the first portion or the second portion of the configurable computing resources based on the ranking component's comparison of the first and second weighted sum scores and based on the request indicating that the allocation is to meet a high availability requirement.

Example 5. The device of example 4, the one or more types may include a central processing unit type, a memory type, a storage type, or a network input/output type.

Example 6. The device of example 1, the first and second weighted sum scores may be weighted based on whether the request indicates that the configurable computing resources are to be allocated based on one of a cost sensitive template, a performance sensitive template, a high availability template, or a balanced template.

Example 7. The device of example 6, the first and second weighted sum allocation scores may be determined by the scoring component based on a plurality of allocation attributes of each configurable computing resource included in the respective first and second portions of the configurable computing resources.

Example 8. The device of example 7, the plurality of allocation attributes may include operating temperature, power/energy consumption, total uptime in hours, or unit cost.

Example 9. The device of example 8, the cost sensitive template may cause unit cost to have a highest weight among the plurality of allocation attributes, or the performance sensitive and high availability templates may cause total uptime to have a highest weight among the plurality of allocation attributes.

Example 10. The device of example 1, the allocation component can allocate the first portion of the configurable computing resources to the logical server to implement or execute a workload. The device can also include a monitoring component for execution by the circuitry to monitor a plurality of operating attributes of each configurable computing resource included in the first portion while the logical server implements or executes the workload. The scoring component can determine a first weighted sum run score for the first portion based on the plurality of operating attributes monitored by the monitoring component. The ranking component can rank the first weighted sum run score against one or more historical weighted sum run scores determined for one or more other portions of the configurable computing resources previously allocated for implementing or executing the workload. The allocation component can modify which configurable computing resources are included in the first portion based on the ranking.

Example 11. The device of example 10, the first weighted sum run score may be weighted based on a service level agreement for the workload.

Example 12.
The device of example 1, the first and second portions of configurable computing resources can include respective first and second configurations, each of the first and second configurations having multiple ones of one or more types of separate physical components. The one or more types may include a central processing unit type, a memory type, a storage type, or a network input/output type.

Example 13. The device of example 1 can also include a digital display coupled to the circuitry to present a user interface view.

Example 14. An example method can include receiving, at a resource manager for a system of configurable computing resources, a request to allocate the configurable computing resources to a logical server to implement or execute a workload. The method can also include determining a first weighted sum allocation score for a first portion of the configurable computing resources available for allocation to the logical server. The method can also include determining a second weighted sum allocation score for a second portion of the configurable computing resources available for allocation to the logical server. The method can also include comparing the first and second weighted sum allocation scores to rank the first portion relative to the second portion. The method can also include allocating the first portion or the second portion to the logical server based on the ranking.

Example 15. The method of example 14 can also include updating a resource catalog to indicate allocation of the first or second portion to the logical server.

Example 16. The method of example 14, the system of configurable computing resources can include the configurable computing resources maintained in a plurality of racks.

Example 17. The method of example 16, the first and second portions of configurable computing resources can include respective first and second configurations, each of the first and second configurations having multiple ones of one or more types of separate physical components. The method can also include determining the first and second weighted sum scores based on separate physical components of a same type being physically located in different racks of the plurality of racks. The method can also include allocating the first portion or the second portion of the configurable computing resources based on the comparison of the first and second weighted sum scores and based on the request indicating that the allocation is to meet a high availability requirement.

Example 18. The method of example 17, the one or more types may include a central processing unit type, a memory type, a storage type, or a network input/output type.

Example 19. The method of example 14, the first and second weighted sum scores may be weighted based on whether the request indicates that the configurable computing resources are to be allocated based on one of a cost sensitive template, a performance sensitive template, a high availability template, or a balanced template.

Example 20. The method of example 19, the first and second weighted sum allocation scores may be based on a plurality of allocation attributes of each configurable computing resource included in the respective first and second portions of the configurable computing resources.

Example 21. The method of example 20, the plurality of allocation attributes may include operating temperature, power/energy consumption, total uptime in hours, or unit cost.

Example 22.
The method of example 21, the cost sensitive template may cause unit cost to have a highest weight among the plurality of allocation attributes, or the performance sensitive and high availability templates may cause total uptime to have a highest weight among the plurality of allocation attributes.

Example 23. The method of example 14 can also include allocating the first portion of the configurable computing resources to the logical server to implement or execute a workload. The method can also include monitoring a plurality of operating attributes of each configurable computing resource included in the first portion while the logical server implements or executes the workload. The method can also include determining a first weighted sum run score for the first portion based on the plurality of monitored operating attributes. The method can also include ranking the first weighted sum run score against one or more historical weighted sum run scores determined for one or more other portions of the configurable computing resources previously allocated for implementing or executing the workload. The method can also include modifying which configurable computing resources are included in the first portion based on the ranking.

Example 24. The method of example 23, the first weighted sum run score may be weighted based on a service level agreement for the workload.

Example 25. The method of example 14, the first and second portions of configurable computing resources can include respective first and second configurations, each of the first and second configurations having multiple ones of one or more types of separate physical components, the one or more types including a central processing unit type, a memory type, a storage type, or a network input/output type.

Example 26. An example at least one machine readable medium can include a plurality of instructions that in response to being executed by a system cause the system to implement the method of any one of examples 14 to 25.

Example 27. An example device can include means for performing the method of any one of examples 14 to 25.

Example 28. An example at least one machine readable medium can include a plurality of instructions that in response to being executed by circuitry located with a system of configurable computing resources cause the circuitry to receive a request to allocate the configurable computing resources to a logical server to implement or execute a workload. The instructions may also cause the circuitry to determine a first weighted sum allocation score for a first portion of the configurable computing resources available for allocation to the logical server. The instructions may also cause the circuitry to determine a second weighted sum allocation score for a second portion of the configurable computing resources available for allocation to the logical server. The instructions may also cause the circuitry to compare the first and second weighted sum allocation scores to rank the first portion relative to the second portion. The instructions may also cause the circuitry to allocate the first portion or the second portion to the logical server based on the ranking.

Example 29. The at least one machine readable medium of example 28, the instructions may further cause the circuitry to update a resource catalog to indicate allocation of the first or second portion to the logical server.

Example 30.
The at least one machine readable medium of example 28, the system of configurable computing resources can include the configurable computing resources maintained in a plurality of racks.

Example 31. The at least one machine readable medium of example 30, the first and second portions of configurable computing resources can include respective first and second configurations, each of the first and second configurations having multiple ones of one or more types of separate physical components. The instructions may further cause the circuitry to determine the first and second weighted sum scores based on separate physical components of a same type being physically located in different racks of the plurality of racks. The instructions may also cause the circuitry to allocate the first portion or the second portion of the configurable computing resources based on the comparison of the first and second weighted sum scores and based on the request indicating that the allocation is to meet a high availability requirement.

Example 32. The at least one machine readable medium of example 31, the one or more types may include a central processing unit type, a memory type, a storage type, or a network input/output type.

Example 33. The at least one machine readable medium of example 28, the first and second weighted sum scores may be weighted based on whether the request indicates that the configurable computing resources are to be allocated based on one of a cost sensitive template, a performance sensitive template, a high availability template, or a balanced template.

Example 34. The at least one machine readable medium of example 33, the first and second weighted sum allocation scores may be based on a plurality of allocation attributes of each configurable computing resource included in the respective first and second portions of the configurable computing resources.

Example 35. The at least one machine readable medium of example 34, the plurality of allocation attributes may include operating temperature, power/energy consumption, total uptime in hours, or unit cost.

Example 36. The at least one machine readable medium of example 35, the cost sensitive template may cause unit cost to have a highest weight among the plurality of allocation attributes, or the performance sensitive and high availability templates may cause total uptime to have a highest weight among the plurality of allocation attributes.

Example 37. The at least one machine readable medium of example 28, the instructions may further cause the circuitry to allocate the first portion of the configurable computing resources to the logical server to implement or execute a workload. The instructions may also cause the circuitry to monitor a plurality of operating attributes of each configurable computing resource included in the first portion while the logical server implements or executes the workload. The instructions may also cause the circuitry to determine a first weighted sum run score for the first portion based on the plurality of monitored operating attributes. The instructions may also cause the circuitry to rank the first weighted sum run score against one or more historical weighted sum run scores determined for one or more other portions of the configurable computing resources previously allocated for implementing or executing the workload. The instructions may also cause the circuitry to modify which configurable computing resources are included in the first portion based on the ranking.

Example 38.
The at least one machine readable medium of example 37, the first weighted sum run score may be weighted based on a service level agreement for the workload.

Example 39. The at least one machine readable medium of example 28, the first and second portions of configurable computing resources can include respective first and second configurations, each of the first and second configurations having multiple ones of one or more types of separate physical components, the one or more types including a central processing unit type, a memory type, a storage type, or a network input/output type.

It is emphasized that the Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single example for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed example. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate example. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Moreover, the terms "first," "second," "third," and so forth are used merely as labels, and are not intended to impose numerical requirements on their objects.

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. |
An embodiment of the invention is a Schottky diode 22 having a semiconductor substrate 3, a first metal 24, a barrier layer 26, and a second metal 28. Another embodiment of the invention is a method of manufacturing a Schottky diode 22 that includes providing a semiconductor substrate 3, forming a barrier layer 26 over the semiconductor substrate 3, forming a first metal layer 23 over the semiconductor substrate 3, annealing the semiconductor substrate 3 to form areas 24 of reacted first metal and areas 23 of un-reacted first metal, and removing selected areas 23 of the un-reacted first metal. The method further includes forming a second metal layer 30 over the semiconductor substrate 3 and annealing the semiconductor substrate 3 to form areas 28 of reacted second metal and areas 30 of un-reacted second metal. |
CLAIMS:

1. A Schottky diode comprising: a semiconductor substrate; a first metal area coupled to said semiconductor substrate; a barrier layer coupled to said first metal area; and a second metal area coupled to said barrier layer.

2. A Schottky diode comprising: a semiconductor substrate; a first metal area coupled to said semiconductor substrate; and a second metal area coupled to said first metal.

3. An integrated circuit comprising: a semiconductor substrate; a first Schottky diode coupled to said semiconductor substrate, said first Schottky diode having a first amount of a first metal coupled to said semiconductor substrate, a first barrier layer coupled to said first amount of a first metal, and a second amount of a second metal coupled to said first barrier layer; and a second Schottky diode coupled to said semiconductor substrate, said second Schottky diode having a third amount of said first metal coupled to said semiconductor substrate, a second barrier layer coupled to said third amount of said first metal, and a fourth amount of said second metal coupled to said second barrier layer; wherein said first amount is at least 0.1% more than said third amount and said second amount is at least 0.1% more than said fourth amount.

4. An integrated circuit comprising: a semiconductor substrate; a first Schottky diode coupled to said semiconductor substrate, said first Schottky diode having a first amount of a first metal coupled to said semiconductor substrate and a second amount of a second metal coupled to said first amount of a first metal; and a second Schottky diode coupled to said semiconductor substrate, said second Schottky diode having a third amount of said first metal coupled to said semiconductor substrate and a fourth amount of said second metal coupled to said third amount of said first metal; wherein said first amount is at least 0.1% more than said third amount and said second amount is at least 0.1% more than said fourth amount.

5. The Schottky diode of any of Claims 1 - 4, wherein said first metal area includes PtSi.

6. The Schottky diode of any of Claims 1 - 5, wherein said second metal area includes TiSi2.

7. The Schottky diode of any of Claims 1 - 6, wherein said semiconductor substrate includes Si.

8. The Schottky diode of Claim 1 or 3, or any one of Claims 5 - 7 which are dependent on Claim 1 or 3, wherein said barrier layer comprises at least one of SiO2 and SiN.

9. A method of manufacturing a Schottky diode comprising: providing a semiconductor substrate; forming a barrier layer over said semiconductor substrate; forming a first metal layer over said semiconductor substrate; annealing said semiconductor substrate to form areas of reacted first metal and areas of un-reacted first metal; removing selected areas of said un-reacted first metal; forming a second metal layer over said semiconductor substrate; and annealing said semiconductor substrate to form areas of reacted second metal and areas of un-reacted second metal.

10.
A method of manufacturing a Schottky diode comprising: providing a semiconductor substrate; forming a first metal layer over said semiconductor substrate; annealing said semiconductor substrate to form areas of reacted first metal and areas of un-reacted first metal; removing selected areas of said un-reacted first metal; forming a second metal layer over said semiconductor substrate; and annealing said semiconductor substrate to form areas of reacted second metal and areas of un-reacted second metal.

11. A method of manufacturing an integrated circuit comprising: providing a semiconductor substrate; and forming at least a first Schottky diode and a second Schottky diode, said method of forming said first Schottky diode and said second Schottky diode comprising the following steps in the sequence set forth: forming a barrier layer over said semiconductor substrate; forming a first patterned photoresist layer over said semiconductor substrate, said first patterned photoresist layer exposing different portions of a first Schottky diode and a second Schottky diode locations; forming a first metal layer over said semiconductor substrate; removing said first patterned photoresist layer; annealing said semiconductor substrate to form areas of reacted first metal and areas of un-reacted first metal; removing selected areas of said un-reacted first metal; forming a second patterned photoresist layer over said semiconductor substrate, said second patterned photoresist layer exposing different portions of said first Schottky diode and said second Schottky diode locations; forming a second metal layer over said semiconductor substrate; removing said second patterned photoresist layer; and annealing said semiconductor substrate to form areas of reacted second metal and areas of un-reacted second metal.

12. A method of manufacturing an integrated circuit comprising: providing a semiconductor substrate; and forming at least a first Schottky diode and a second Schottky diode, said method of forming said first Schottky diode and said second Schottky diode comprising the following steps in the sequence set forth: forming a first patterned photoresist layer over said semiconductor substrate, said first patterned photoresist layer exposing different portions of a first Schottky diode and a second Schottky diode locations; forming a first metal layer over said semiconductor substrate; removing said first patterned photoresist layer; annealing said semiconductor substrate to form areas of reacted first metal and areas of un-reacted first metal; removing selected areas of said un-reacted first metal; forming a second patterned photoresist layer over said semiconductor substrate, said second patterned photoresist layer exposing different portions of said first Schottky diode and said second Schottky diode locations; forming a second metal layer over said semiconductor substrate; removing said second patterned photoresist layer; and annealing said semiconductor substrate to form areas of reacted second metal and areas of un-reacted second metal.

13. The method of any of Claims 9 - 12, further comprising a step of forming a contact coupled to said areas of un-reacted second metal.

14. The method of any of Claims 9 - 13, further comprising a step of removing selected areas of said un-reacted second metal.

15.
The method of Claim 9 or 10, or Claim 13 or 14 dependent on Claim 9 or 10, further comprising a step of annealing said semiconductor substrate following said step of removing selected areas of un-reacted first metal.

16. An integrated circuit, including a first Schottky diode having a voltage drop more than 1% different than a voltage drop of a second Schottky diode, manufactured in accordance with the method of Claim 11 or 12. |
DUAL METAL SCHOTTKY DIODE

BACKGROUND OF THE INVENTION This invention relates to the structure and method of making a dual metal Schottky diode.

BRIEF DESCRIPTION OF THE DRAWINGS FIGS. 1A-1B are cross-section views of a partial integrated circuit in accordance with a first embodiment of the present invention. FIGS. 2-5 are cross-sectional diagrams of a process for forming the dual metal Schottky diode shown in FIG. 1B. FIGS. 6-7 are cross-sectional diagrams of a process for forming a dual metal Schottky diode in accordance with a second embodiment of the present invention. FIGS. 8-10 are cross-sectional diagrams of a process for forming a dual metal Schottky diode in accordance with a third embodiment of the present invention.

Detailed Description of the Invention Referring to the drawings, FIG. 1A is a cross-section view of a partial integrated circuit 2 in accordance with a first embodiment of the present invention. The integrated circuit is divided into two parts based on the fabrication or process flow: the Front-End-Of-Line (FEOL) section 4 and the Back-End-Of-Line (BEOL) section 5. The section that includes the silicon substrate 3 is called the FEOL of the integrated circuit 2. In general, the FEOL section 4 is the transistor layer formed on (and within) the semiconductor substrate 3. The partial FEOL 4 shown in FIGS. 1A and 1B includes a dual metal Schottky diode 22 of the present invention plus a transistor having a gate oxide 6, a gate electrode 7, and source/drain 8, 9; however, it is within the scope of the invention to have any form of logic within the FEOL section 4. Immediately above the Schottky diode 22 and the transistor is a layer of dielectric insulation 10 containing metal contacts 11 that electrically tie the Schottky diode 22 and the transistor to the other logic elements (not shown) of the FEOL section 4. Preferably, the dielectric insulation 10 is comprised of SiO2 and the contacts 11 are comprised of W. However, the dielectric insulation 10 may be comprised of any suitable material such as SiN, SiC, SiON, or a low-k dielectric. In addition, the contacts may be comprised of any suitable material such as Al, Ti, or Cu.

The BEOL section 5 contains a single damascene metal layer 12 and at least one dual damascene metal layer 13. However, it is within the scope of the invention to have an integrated circuit 2 with only one (single or dual damascene) metal layer. Layers 12 and 13 contain metal lines 14, 15 that properly route electrical signals and power throughout the electronic device. Layer 13 also contains vias 16 that properly connect the metal lines of one metal layer (e.g. 14) to the metal lines of another metal layer (e.g. 15). The metal lines 14, 15 may be comprised of any suitable material such as Al. Furthermore, metal lines 14, 15 may be formed by any suitable process such as deposition, plating, or growth. The single damascene metal layer 12 has dielectric material 17 and possibly a dielectric barrier layer 18 that electrically insulates the metal lines 14. Similarly, the dual damascene layer 13 contains dielectric material 19 and possibly a dielectric barrier layer 20 that electrically insulates metal lines 15 and vias 16. In accordance with a preferred embodiment of the present invention, the integrated circuit 2 has a dual metal Schottky diode 22, shown in FIG. 1B.
The Schottky diode 22 consists of a lightly doped semiconductor substrate 3, a metal area (or metal islands) 24, a barrier layer 26, a metal area (or metal layer) 28, and a metal area (or metal layer) 30. The semiconductor substrate 3 may be comprised of any suitable material such as Si, GaAs, or InP (or a composite or layers of those elements). In addition, the barrier layer 26 may be a SiO2 or SiN dielectric film. However, other materials such as a deposited SiC or a spin-on-glass ("SOG") could be used for the barrier layer 26. Furthermore, the barrier layer 26 may be removed during the process of fabricating the Schottky diode 22. Preferably, the metal islands 24 are comprised of PtSi, the metal layer 28 is comprised of TiSi2, and the metal layer 30 is comprised of Ti. However, it is within the scope of the invention to have metal layers 24 and 28 comprised of any suitable materials such as CoSi2, VSi2, NiSi, NiSi2, ZrSi2, WSi2, TaSi2, MoSi2, or NbSi. Moreover, it is within the scope of the invention to omit barrier layer 26 and/or metal layer 30.

Referring again to the drawings, FIGS. 2-5 show the process for manufacturing the dual metal Schottky diode 22 shown in FIG. 1B. Before the dual metal Schottky diode is fabricated, a layer of photoresist (not shown) is applied and patterned using a lithography process. The openings in this photoresist layer define the locations and size of the dual metal Schottky diodes. In a preferred application, a barrier layer 26 is now formed over the entire substrate. The barrier layer may be formed using any manufacturing process such as Chemical Vapor Deposition ("CVD") or Plasma-Enhanced Chemical Vapor Deposition ("PECVD"). Any standard manufacturing tool, such as the Centura (from AMAT) or Concept (from Novellus), may be used to create the barrier layer 26. In addition, the barrier layer 26 may be formed chemically by reacting the silicon surface with an oxidizer (such as hydrogen peroxide or nitric acid). In this example application, the barrier layer 26 is comprised of SiO2 and is 20 A (2 nm) thick. However, it is within the scope of the invention to have any suitable barrier layer thickness appropriate for the composition of the dual metal layers 24, 28, the barrier composition, and the desired voltage drop Vf of the final dual metal Schottky diode.

Also as shown in FIG. 2, a first metal layer 23 is formed over the barrier layer 26. In a preferred application, the first metal layer 23 contains Pt and is approximately 300 A thick. However, the thickness of the Pt layer 23 may be anything above 150 A. In addition, the thickness of the first metal layer 23 may vary depending on the metal composition used. In the example application, the first metal layer is deposited by any well-known manufacturing tool, such as an Endura (from AMAT), a MAC/TEL (from Eclipse), or a Perkin Elmer 4400 series machine. The semiconductor wafer is now annealed. In the example application, a rapid thermal process ("RTP") is used to heat the wafer to approximately 575 C for 30-60 seconds in an O2 and N2 ambient. A Centura RTP by AMAT may be used for this anneal; however, other standard process tools and process parameters may be used. For example, a horizontal or vertical furnace may be used to heat the wafer to 500 C for 20 minutes in an O2 or N2 ambient. During the anneal process the barrier layer 26 will limit the diffusion of Pt from the first metal layer 23 into the Si substrate 3.
After annealing, islands of PtSi 24 are formed within the semiconductor substrate 3, as shown in FIG. 3. It is to be noted that the temperature for the anneal process is selected so that the first metal layer 23 reacts with the semiconductor substrate 3 but not with other materials such as the field oxides or gate oxides. In a preferred application, the unreacted Pt layer 23 is now removed with an isotropic chemical etch process. More specifically, a standard chemical bench tool is used to etch Pt layer 23 (e.g. in a chemistry of H2O:HCl:HNO3 for 10 minutes at 75 C). However, it is within the scope of the invention to use any method to remove the unreacted portions of the unreacted first metal layer 23. In addition, it is within the scope of the invention to perform an additional anneal after the removal of the unreacted Pt layer 23.

As shown in FIG. 4, a second metal layer 30 is formed over the semiconductor wafer (i.e. over the barrier layer 26 if the barrier layer is not removed, or over the silicon and first metal islands if the barrier layer is removed). In a preferred application, the second metal layer 30 contains Ti and is approximately 400 A thick. However, the thickness of the Ti layer 30 may range from 300-800 A. In addition, the thickness of the second metal layer 30 may vary depending on the type of metal used. In the example application, the second metal layer is deposited by any well-known manufacturing tool, such as an Endura (from AMAT), a MAC/TEL (from Eclipse), or a Perkin Elmer 4400 series machine. The semiconductor wafer is now annealed. In the example application, a rapid thermal process ("RTP") is used to heat the wafer to approximately 625-750 C for 20-40 seconds in a N2 ambient. A Centura RTP by AMAT may be used for this anneal; however, other standard processes and tools may be used. For example, a horizontal or vertical furnace may be used to heat the wafer to 600-675 C for 30-60 minutes in a N2 ambient. After annealing, the Si from the semiconductor substrate 3 diffuses into the second metal layer 30 and forms a layer 28 of TiSi2, as shown in FIG. 5. It is to be noted that the temperature for the anneal process is selected so that the second metal layer 30 reacts with the semiconductor substrate 3 but not with other materials such as the field oxides or gate oxides.

In the example application, a step of etching the unreacted second metal layer 30 (or selected portions of that layer) is optional. If the second metal layer 30 is not removed then it may be used as an electrical contact for the dual metal Schottky diode 22. If the second metal layer 30 is removed, any well-known etch process may be used. For example, sulfuric based (piranha) chemistry or a chemistry of H2O/H2O2 (5:1 ratio) at 40-60 C for 30-60 minutes may be used to strip the unreacted Ti (or the selected portions of Ti). In the example application, a second anneal is now performed; however, this additional anneal is optional. Any standard process may be used for the second anneal. For example, a Centura RTP could be used at 820-910 C for 10-30 seconds in a N2 ambient, or a furnace could be used to heat the wafer to 750-850 C in a N2 ambient for 30-60 minutes. At this point, the fabrication of the semiconductor wafer continues until the integrated circuit is complete. That fabrication process would include the formation of contacts 11 shown in FIGS. 1A and 1B that electrically connect the dual metal Schottky diode 22 to the proper components of the integrated circuit 2.
It is within the scope of the invention to use any suitable metal for the first metal area 24 and the second metal area 28 of the dual metal Schottky diode 22. As stated above, the metal components 24, 28 of the dual metal Schottky diode may be any suitable metal composition such as PtSi, TiSi2, CoSi2, VSi2, NiSi, ZrSi2, WSi2, TaSi2, MoSi2, or NbSi. It is also within the scope of the invention to use one or more masks to create a dual metal Schottky diode 22 in any one of many configurations. An example variation of the dual metal Schottky diode 22 is shown in FIGS. 6-7. A first mask is used to form the areas (i.e. a first amount) of a Pt first metal 32 and a second mask is used to form the areas (i.e. a second amount) of a Ti second metal 34 shown in FIG. 6. After the anneal and subsequent etch of the unreacted metal 32 and 34, the final dual metal Schottky diode structure 22 would contain areas of PtSi 36 and areas of TiSi2 38, as shown in FIG. 7. Alternatively, a lithography process could be used to create a patterned photoresist mask layer that is then used to create sections of a Pt first metal 42, as shown in FIG. 8. After removing the exposed metal and ashing the semiconductor wafer to remove the photoresist layer 40, the semiconductor wafer is annealed to form areas of reacted PtSi 44, as shown in FIG. 9. In this alternative embodiment, a Ti second metal layer 46 is deposited and the wafer is then annealed to form a layer of reacted TiSi2 48, as shown in FIG. 10. When one or more masks are used to fabricate the Schottky diode in accordance with this invention, it is within the scope of the invention to use a dual metal Schottky diode with the barrier layer 26 removed. If such a diode is desired, then the dual metal Schottky diode is fabricated without a barrier layer 26, or the barrier layer 26 is eliminated with the removal of the first unreacted metal or after the removal of the first unreacted metal. Moreover, it is within the scope of the invention to use photoresist masks to create different dual metal Schottky diodes 22 throughout the integrated circuit 2. For example, patterned photoresist layers could be used throughout the fabrication process to form the dual metal Schottky diode 22 of FIG. 5 and the dual metal Schottky diode 22 of FIG. 10 at different locations within the same integrated circuit 2. It is to be noted that a variety of structures and metals can be used to create a dual metal Schottky diode having a Vf that is anywhere between the Vf of a Schottky diode containing the first metal and the Vf of a Schottky diode containing the second metal. Specifically, by using a barrier layer to limit the interaction of the first metal with the substrate, or by using a mask to apportion the area of the diode between the first and second metals, a Schottky diode can be fabricated to have any desired Vf between the Vf levels obtained with Schottky diodes comprised of single metals. The use of one or more photoresist masks during wafer fabrication also facilitates the incorporation of dual metal Schottky diodes having different voltage drops at different locations throughout the integrated circuit 2.
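The Vf apportioning described above can be illustrated numerically. The following Python sketch (not part of the original disclosure) treats the two silicide regions as ideal thermionic-emission diodes in parallel and solves for the forward voltage at a fixed current; the barrier heights, ideality factor, area, and current are assumed illustrative values, not figures taken from this document.

import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
A_STAR = 112.0   # approximate Richardson constant for n-Si, A/(cm^2*K^2)
T = 300.0        # temperature, K

def saturation_current_density(phi_b_ev):
    # Thermionic-emission saturation current density, A/cm^2
    return A_STAR * T ** 2 * math.exp(-phi_b_ev / (K_B * T))

def forward_voltage(i_total, area_cm2, frac_metal1, phi1_ev, phi2_ev, n=1.05):
    # Solve I = (A1*Js1 + A2*Js2) * (exp(V/(n*kT)) - 1) for V by bisection
    js = (frac_metal1 * saturation_current_density(phi1_ev) +
          (1.0 - frac_metal1) * saturation_current_density(phi2_ev)) * area_cm2
    lo, hi = 0.0, 1.5
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if js * (math.exp(mid / (n * K_B * T)) - 1.0) < i_total:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Assumed barrier heights: PtSi ~0.85 eV, TiSi2 ~0.60 eV on n-Si
for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    vf = forward_voltage(1e-3, 1e-4, frac, 0.85, 0.60)
    print("PtSi area fraction %.2f: Vf ~ %.3f V" % (frac, vf))

Because the lower-barrier silicide dominates the parallel current in this simple model, Vf moves only gradually with the area split, which is consistent with the text's point that intermediate Vf values are obtainable by apportioning.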
Various modifications to the invention as described above are within the scope of the claimed invention. As an example, instead of placing the dual metal Schottky diode 22 immediately above the semiconductor substrate 3 as described above, the dual metal Schottky diode 22 may be placed in any location (or various locations simultaneously) within the front end section 4 or back end section of the integrated circuit. Also, the present invention may be used in any integrated circuit configuration, including integrated circuits having different semiconductor substrates, metal layers, barrier layers, dielectric layers, device structures, active elements, passive elements, etc. In addition, barrier layer 26 may be a metal barrier film (TiSiN, TiN, TaN) instead of a dielectric barrier film. Furthermore, the invention can be used on a non-semiconductor substrate by using a deposited silicide formed by Chemical Vapor Deposition (using WSi), Physical Vapor Deposition (using a composite target), or by reactive sputtering. Moreover, the invention is applicable to other semiconductor technologies such as BiCMOS, bipolar, SOI, strained silicon, pyroelectric sensors, opto-electronic devices, microelectromechanical systems ("MEMS"), or SiGe. |
A semiconductor processor is described. The semiconductor processor includes logic circuitry to perform a logical reduction instruction. The logic circuitry has swizzle circuitry to swizzle a vector's elements so as to form a swizzle vector. The logic circuitry also has vector logic circuitry to perform a vector logic operation on said vector and said swizzle vector. |
Claims 1. A method, comprising: executing a logical reduction instruction in a semiconductor processor, said executing comprising the following: storing a vector having multiple elements into a register; swizzling the elements of the vector with swizzle circuitry to form a first swizzled vector; performing a vector logic operation with vector logic circuitry on the vector and the first swizzled vector to form a first intermediate vector; swizzling said first intermediate vector's elements with swizzle circuitry to form a second swizzled vector; performing said vector logic operation with vector logic circuitry to form a second intermediate vector; and, performing said logic operation on less than all of said second intermediate vector's elements. 2. The method of claim 1 wherein said vector logic operation and said logic operation are a vector AND operation and an AND operation. 3. The method of claim 1 wherein said vector logic operation and said logic operation are a vector OR operation and an OR operation. 4. The method of claim 1 wherein said vector logic operation and said logic operation are a vector XOR operation and an XOR operation. 5. The method of claim 1 wherein said logic operation that is performed on less than all of said second intermediate vector's elements is a vector logic operation. 6. The method of claim 1 wherein said swizzle circuitry that forms said first swizzled vector and said swizzle circuitry that forms said second swizzled vector is the same swizzle circuitry. 7. A semiconductor processor, comprising: logic circuitry to perform a logical reduction instruction, said logic circuitry comprising: swizzle circuitry to swizzle a vector's elements so as to form a swizzle vector; vector logic circuitry to perform a vector logic operation on said vector and said swizzle vector. 8. The semiconductor processor of claim 7 wherein said logic circuitry further comprises second swizzle circuitry coupled to a register that stores a resulting intermediate vector produced by said vector logic operation, said second swizzle logic circuitry to swizzle said intermediate vector. 9. The semiconductor processor of claim 7 wherein said swizzle circuitry comprises multiplexers. 10. The semiconductor processor of claim 7 wherein said swizzle circuitry comprises demultiplexers. 11. The semiconductor processor of claim 7 wherein a data-path exists from an output of said vector logic circuitry to an input of said vector logic circuitry. 12. The semiconductor processor of claim 11 further comprising a ROM to store micro-ops that are used to implement said logical reduction instruction. 13. The semiconductor processor of claim 11 further comprising: second swizzle circuitry coupled to an output of said vector logic circuitry to produce a second swizzle vector, said second swizzle circuitry to swizzle an intermediate value vector from said vector logic circuitry; second vector logic circuitry coupled to an output of said vector logic circuitry and said second swizzle circuitry to perform a vector logic operation on said intermediate value vector and said second swizzle vector. 14. The semiconductor processor of claim 11 wherein said vector logic operation is one of: a vector AND; a vector OR; a vector XOR. 15. 
A computing system, comprising: a semiconductor processor, said semiconductor processor having logic circuitry to perform a logical reduction instruction, said logic circuitry comprising: swizzle circuitry to swizzle an input vector's elements so as to form a swizzle vector; vector logic circuitry to perform a vector logic operation on said input vector and said swizzle vector; a graphics processor; and, a liquid crystal display coupled to said graphics processor. 16. The computing system of claim 15 wherein said logic circuitry further comprises second swizzle circuitry coupled to a register that stores a resulting intermediate vector produced by said vector logic operation, said second swizzle logic circuitry to swizzle said intermediate vector. 17. The computing system of claim 15 wherein a data-path exists from an output of said vector logic circuitry to an input of said vector logic circuitry. 18. The computing system of claim 17 further comprising a ROM to store micro-ops that are used to implement said logical reduction instruction. 19. The computing system of claim 15 further comprising: second swizzle circuitry coupled to an output of said vector logic circuitry to produce a second swizzle vector, said second swizzle circuitry to swizzle an intermediate value vector from said vector logic circuitry; second vector logic circuitry coupled to an output of said vector logic circuitry and said second swizzle circuitry to perform a vector logic operation on said intermediate value vector and said second swizzle vector. 20. The computing system of claim 15 wherein said vector logic operation is one of: a vector AND; a vector OR; a vector XOR. |
VECTOR LOGICAL REDUCTION OPERATION IMPLEMENTED ON A SEMICONDUCTOR CHIP Field of Invention The field of invention relates generally to computer systems, and, more specifically, to a processor architecture for performing a vector logical reduction. Background Two types of processor architectures are widely recognized in the field of computer science: "scalar" and "vector". A scalar processor is designed to execute instructions that perform operations on a single set of data, whereas a vector processor is designed to execute instructions that perform operations on multiple sets of data. Figs. 1A and 1B present a comparative example that demonstrates the basic difference between a scalar processor and a vector processor. Fig. 1A shows an example of a scalar AND instruction in which a single operand set, A and B, is ANDed together to produce a singular (or "scalar") result C (i.e., AB=C). By contrast, Fig. 1B shows an example of a vector AND instruction in which two operand sets, A/B and D/E, are respectively ANDed together in parallel to simultaneously produce a vector result C, F (i.e., AB=C and DE=F). As is well known in the art, typically, both input operands and the output result are stored in dedicated registers. For example, many instructions will have two input operands. Therefore, two distinct input registers will be used to temporarily store the respective input operands. Moreover, these same instructions will produce an output value which will be temporarily stored in a third (result) register. Respective input registers 101a,b and 102a,b and result registers 103a,b are observed in Figs. 1A and 1B. Notably, the "scalar" vs. "vector" characterizations are readily discernable. That is, input registers 101a and 102a of the scalar design of Fig. 1A are observed holding only scalar values (A and B, respectively). Likewise, the result register 103a of the scalar design of Fig. 1A is also observed holding only a scalar value (C). By contrast, the input registers 101b and 102b of the vector system of Fig. 1B are observed holding vectors (A,D in register 101b and B,E in register 102b). Likewise, the result register 103b of the vector system of Fig. 1B is also observed holding a vector value (C,F). As a matter of terminology, the contents of each of the registers 101b, 102b and 103b of the vector system of Fig. 1B can be globally referred to as a "vector", and each of the individual scalar values within the vector can be referred to as an "element". Thus, for example, register 101b is observed to be storing "vector" A, D which is composed of "element" A and "element" D. Some computer systems, regardless of whether the underlying processor is of scalar or vector design, effectively require a logical operation across elements of a single vector. In the case of, for example, an eight input AND operation (the logical diagram of which is shown in Fig. 2A), eight separate inputs (A, B, C, D, E, F, G, H) are ANDed together to produce a final scalar result (R). In the case of scalar processors, loop operations have to be written in software that accumulate the result over seven iterations of a scalar AND instruction (the pseudo-code for which is shown in Fig. 2B). Thus, in the case of a scalar processor, the multiple iterations require multiple executions of the scalar AND instruction in order to perform the calculation. By contrast, a vector processor can entertain the prospect of implementing such an operation with the execution of a single instruction designed to perform the logical operation outright.
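For readers who want the scalar baseline in executable form, the following Python sketch mirrors the loop approach of Fig. 2B; the function name and the sample values are illustrative assumptions, since the original pseudo-code is not reproduced here.

def scalar_and_reduce(elements):
    # Seven scalar AND operations accumulate the eight-input result
    result = elements[0]
    for element in elements[1:]:
        result &= element
    return result

print(scalar_and_reduce([0b1111, 0b1101, 0b1011, 0b0111,
                         0b1111, 0b1101, 0b1011, 0b0111]))  # prints 1 (0b0001)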
Figures The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which: Figs. 1A and 1B show scalar and vector logic operations; Figs. 2A and 2B show a logic diagram of an eight input AND function and corresponding pseudo code with a scalar AND instruction; Fig. 3 shows a process to be performed by a semiconductor processor for performing a logical reduction operation; Fig. 4 shows a first embodiment of the process of Fig. 3; Fig. 5 shows a second embodiment of the process of Fig. 3; Fig. 6a shows a third embodiment of the process of Fig. 3; Fig. 6b shows an embodiment in which the swizzle operations are the same; Fig. 7 shows a design of an electronic circuit that can perform the process of Fig. 3; Fig. 8 shows a diagram of a semiconductor processor; Fig. 9 shows a diagram of a computing system. Detailed Description Fig. 3 shows a methodology for performing a logical operation across elements of a vector, also referred to as a "logical reduction", on a processor capable of executing vector instructions. Fig. 4 shows an example of an eight input AND function that conforms to the methodology of Fig. 3. Reference will be made to both Figs. 3 and 4 to assist the reader's understanding of the methodology of Fig. 3. In the example of Fig. 4, the vector input 400 has the elements (A, B, C, D, E, F, G, H) that are to be ANDed together by way of the eight input AND to produce output result R = ABCDEFGH. According to the methodology of Fig. 3, a first swizzle operation is performed 301, 401 on the vector input 400 to produce a first swizzle vector 402. In the example of Fig. 4, the first swizzle operation 401 is a dual swizzle operation in which the locations of neighboring pairs of elements are swapped as observed in the pattern shown at inset 420. A vector logic operation of the reduction's logical operation is then performed 303, 403 using the vector input 400 and the first swizzle vector 402 as input vectors. In the example of Fig. 4, because the logical reduction corresponds to an eight input AND function, logical operation 303, 403 corresponds to a vector AND operation. It is noteworthy, however, that other logical reductions and corresponding logical operations having a commutative operation (such as OR, "add" (ADD) and "multiply" (MUL)) can be made to conform to the approach of Fig. 3. The result of the logical operation 303, 403 produces a first intermediate result 404. A second swizzle operation 305, 405 that is different than the first swizzle operation is performed on the first intermediate result 404 to produce a second swizzle vector 406. In the example of Fig. 4, the second swizzle operation 405 is a single swizzle operation in which the locations of neighboring elements are swapped as observed in the pattern shown at inset 430.
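To make the two swizzle patterns concrete, the sketch below expresses them as index permutations in Python. The exact element orders of insets 420 and 430 are inferred from the prose ("neighboring pairs" and "neighboring elements" swapped), so treat the orderings as assumptions rather than transcriptions of the figures.

def single_swizzle(vec):
    # Swap neighboring elements (inset 430): ABCDEFGH -> BADCFEHG
    return [vec[i ^ 1] for i in range(len(vec))]

def dual_swizzle(vec):
    # Swap neighboring pairs (inset 420): ABCDEFGH -> CDABGHEF
    return [vec[i ^ 2] for i in range(len(vec))]

print("".join(dual_swizzle(list("ABCDEFGH"))))    # CDABGHEF
print("".join(single_swizzle(list("ABCDEFGH"))))  # BADCFEHG

XOR-ing an element index by a power of two swaps blocks of that size with their neighbors, which is why a single comprehension covers both patterns (and a quad swizzle would use i ^ 4).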
Another vector logic operation of the reduction's logical operation is then performed 307, 407 using the first intermediate result 404 and the second swizzle vector 406 as vector inputs. Again, because the example of Fig. 4 corresponds to a logical AND reduction, logical operation 407 of Fig. 4 corresponds to a vector AND operation. The result of the second vector logical operation 307, 407 produces a second intermediate result 408. A logic operation of the reduction's logical operation is then performed 309, 409 on selected elements of the second intermediate result 408 to produce the sought-for reduction result 410. In the example of Fig. 4, the selected elements of the second intermediate result correspond to the elements in the first and eighth positions of the second intermediate result 408. However, inspection of the second intermediate result 408 reveals that the selection of any one of the 1st through 4th elements, and any one of the 5th through 8th elements, will produce the correct reduction result. In order to prevent the design of specialized logic and/or micro-code operations to perform the last logical operation 309, 409, some formatting steps may be performed on the second intermediate result 408 so that the same vector logic operation used in steps 303, 403 and 307, 407 is used to implement operation 309, 409 (i.e., in the case of the example of Fig. 4, a vector AND operation). For example, a vector may be constructed that conforms to one of the selected elements being placed in the same vector location as the other selected element and padding the remaining vector element values with 0s (e.g., in the example of Fig. 4, formatting vector 408 to create constructed vector [0, 0, 0, 0, 0, 0, 0, ACBD]). Performing a vector AND operation on the constructed vector and the second intermediate result 408 produces the desired logical reduction result 410 in the same vector location of the output vector where the selected operand is found in the constructed vector (i.e., using the aforementioned constructed vector example, R = 0, 0, 0, 0, 0, 0, 0, ACBDHFGE). It is pertinent to note that the sequence of different swizzle operations, as well as the swizzle operations themselves, may vary from embodiment to embodiment. For example, Fig. 5 corresponds to the example of Fig. 4 with the single swizzle pattern 530 being performed before the dual swizzle pattern 520. Comparing the examples of Figs. 4 and 5, suitable terms are still produced in the second intermediate vector 508 to obtain the correct result. It is also pertinent to note that different swizzle patterns besides the single and dual swizzle patterns 420/520, 430/530 discussed above may also be used. For example, Fig. 6a shows an example of a logical AND reduction on a 16 element vector that uses a quad swizzle pattern in which the locations of neighboring quadruplets of elements are swapped in a pattern as observed at inset 640. Also, the swizzling patterns themselves need not be different. For example, Fig. 6b shows a logical AND reduction where the same swizzle pattern is utilized end-to-end through the operation. For any embodiment, those of ordinary skill will be able to determine appropriate swizzle patterns, corresponding selection criteria for the second intermediate vector, and any associated formatting prior to a last vector logic operation.
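Putting the steps together, the following self-contained Python sketch walks an eight-element vector through the Fig. 3/Fig. 4 flow. It is an illustrative software model of the methodology, not the instruction's hardware implementation, and the test values are arbitrary; the final step selects the 1st and 8th elements, matching the observation above.

from functools import reduce

def swizzle(vec, mask):
    # mask=2: dual swizzle (swap neighboring pairs); mask=1: single swizzle
    return [vec[i ^ mask] for i in range(len(vec))]

def vector_and(a, b):
    return [x & y for x, y in zip(a, b)]

def and_reduction(vec):
    i1 = vector_and(vec, swizzle(vec, 2))  # steps 301/401 and 303/403
    i2 = vector_and(i1, swizzle(i1, 1))    # steps 305/405 and 307/407
    return i2[0] & i2[7]                   # step 309/409 on selected elements

vec = [0b1111, 0b1101, 0b1011, 0b0111, 0b1111, 0b1101, 0b1011, 0b0111]
assert and_reduction(vec) == reduce(lambda x, y: x & y, vec)
print(bin(and_reduction(vec)))  # 0b1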
Moreover, although the examples above have emphasized AND reductions, the same principles can also be applied to effect any reduction having a commutative operation, such as logical OR, add, and multiply. As discussed above, the logical reduction algorithm may be implemented as an instruction within the instruction set of a semiconductor processor. Fig. 7 shows a possible data path that may be implemented as logic circuitry within the execution units of a processor. According to the circuit diagram of Fig. 7, the input vector having the elements to be logically reduced through the logical reduction is stored in register 701. The output of register 701 flows to the input of first swizzle circuitry 702 and the input of the first vector logic circuitry 704 that performs the first vector logic operation (e.g., a vector AND, vector OR, or vector XOR). The output of the first swizzle circuitry 702 flows into a first swizzle register 703. The output of the first swizzle register 703 flows into the first vector logic circuitry 704. As such, the first vector logic circuitry 704 accepts a first input vector from register 701 and a second input vector from register 703. First intermediate value register 705 holds the output vector produced by the first vector logic circuitry 704. The contents of register 705 are then provided to second swizzle logic circuitry 706 and second vector logic circuitry 708 that performs the second vector logic operation. The output of the second swizzle logic circuitry 706 is provided to a second swizzle register 707, which provides its output to second vector logic circuitry 708. Second vector logic circuitry 708 provides its output to second intermediate register 709. Selection and formatting logic 710 selects (and may format any of) the elements of the vector within the second intermediate register 709 that are needed as the operands for the final vector logic operation that is performed by third vector logic circuitry 711. The result of third vector logic circuitry 711 corresponds to the final result (the logical reduction) and is stored in result register 712. Note that, to implement the algorithm of Fig. 3, another stage of swizzle circuitry, intermediate vector register and vector logic circuitry (not shown) may be additionally incorporated into the circuitry of Fig. 7. Various alternate logic designs that use some of the components of Fig. 7 yet still perform the logical reduction are also possible. For instance, if the circuitry for the logical reduction is dedicated to the execution of the logical reduction instruction in a "straight line" data path (e.g., without a plurality of micro-ops), any of registers 703, 705, 707, 709 may be eliminated. By contrast, if the logical reduction instruction is to be performed via micro-code with a number of corresponding micro-ops, a number of elements of Fig. 7 may be eliminated while others may be reused. For instance, the first and second vector logic operations can be performed with vector logic 704 (such that vector logic 708 is eliminated) if the respective outputs of register 705 and register 707 are fed back as inputs to logic 704 (here it is understood that the micro-ops control multiplexers or other data path control circuits to properly move the data in conformity with the algorithm). Vector logic 711 can further be eliminated if the selection and formatting logic 710 accepts an input from register 705 and provides its output back to vector logic 704 (which also performs the third and final logic operation).
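The register-to-register flow of Fig. 7 can likewise be mimicked in software. The sketch below is a hedged model only: the variable names mirror the figure's register numbers, the swizzle patterns reuse the XOR formulation above, and the selection/formatting step follows the zero-padded constructed-vector example given earlier; none of this should be read as the actual circuit design.

def swizzle(vec, mask):
    return [vec[i ^ mask] for i in range(len(vec))]

def vector_and(a, b):
    return [x & y for x, y in zip(a, b)]

def fig7_and_reduction(input_vec):
    r701 = list(input_vec)                # input vector register 701
    r703 = swizzle(r701, 2)               # swizzle circuitry 702 -> register 703
    r705 = vector_and(r701, r703)         # vector logic 704 -> register 705
    r707 = swizzle(r705, 1)               # swizzle circuitry 706 -> register 707
    r709 = vector_and(r705, r707)         # vector logic 708 -> register 709
    # selection/formatting logic 710: place one selected element in the last
    # lane and pad the other lanes with 0s (the constructed-vector example)
    constructed = [0] * (len(r709) - 1) + [r709[0]]
    r712 = vector_and(r709, constructed)  # vector logic 711 -> result register 712
    return r712[-1]

print(fig7_and_reduction([0b1111, 0b1101, 0b1011, 0b0111,
                          0b1111, 0b1101, 0b1011, 0b0111]))  # prints 1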
The first and second swizzle circuits 702, 706 can also be merged into a common bank of multiplexers and/or de-multiplexers that switch between the correct swizzle patterns based on the state of their respective channel select input values. That is, the channel select inputs of the multiplexers and/or de-multiplexers receive a first input value that corresponds to the first swizzle pattern, and receive a second input value that corresponds to the second swizzle pattern. The multiplexers and/or de-multiplexers form datapaths in response to the channel select values to effect the desired swizzle transfer. In an extended implementation, less than all of the elements of the input vector stored in register 701 can be logically reduced with formatting circuitry (either preceding or following register 701) that forces a benign value into the input vector for those elements that are not to be considered for the logical reduction. For instance, if the logical reduction is to be a logical reduction of only elements A, B, C, D of input vector A, B, C, D, E, F, G, H, then formatting logic would insert values of all 1s for each of elements E, F, G and H such that the vector A, B, C, D, [all 1s], [all 1s], [all 1s], [all 1s] would be processed as the input vector for the reduction. For OR and XOR logical reductions, benign values correspond to all 0s rather than all 1s.
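A minimal sketch of the benign-value masking just described, again in illustrative Python: elements excluded from the reduction are forced to the identity value of the operation so they cannot affect the result (all 1s for AND; all 0s for OR and XOR). Four-bit elements are assumed.

def mask_for_reduction(vec, keep, op="and", bits=4):
    # Force excluded lanes to the benign (identity) value for the operation
    benign = (1 << bits) - 1 if op == "and" else 0
    return [v if i in keep else benign for i, v in enumerate(vec)]

vec = [0b1111, 0b1101, 0b1011, 0b0111, 0b0000, 0b0000, 0b0000, 0b0000]
print(mask_for_reduction(vec, keep={0, 1, 2, 3}, op="and"))
# [15, 13, 11, 7, 15, 15, 15, 15]: lanes E-H replaced by all 1s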
As discussed in reference to Fig. 7 above, the algorithm may be implemented within a vector logical reduction instruction that is executed by the execution units of a semiconductor processor. Fig. 8 shows a generic processing core 800 that is believed to describe many different types of processing core architectures such as Complex Instruction Set (CISC), Reduced Instruction Set (RISC) and Very Long Instruction Word (VLIW). The generic processing core 800 of Figure 8 includes: 1) a fetch unit 803 that fetches instructions (e.g., from cache and/or memory); 2) a decode unit 804 that decodes instructions; 3) a schedule unit 805 that determines the timing and/or order of instruction issuance to the execution units 806 (notably the scheduler is optional); 4) execution units 806 that execute the instructions (typical instruction execution units include branch execution units, integer arithmetic execution units (e.g., ALUs), floating point arithmetic execution units (e.g., FPUs), and memory access execution units); and 5) a retirement unit 807 that signifies successful completion of an instruction. Notably, the processing core 800 may or may not employ microcode 808. In the case of micro-coded processors, the micro-ops are typically stored in a non-volatile machine readable medium (such as a Read Only Memory (ROM)) within the semiconductor chip that the processor is constructed on and cause the execution units within the processor to perform the desired function called out by the instruction. A processor having a logical reduction instruction can be implemented into various computing systems as well. Fig. 9 shows an embodiment of a computing system (e.g., a computer). The exemplary computing system of Figure 9 includes: 1) one or more processors 901 that may be designed to include a vector logical reduction instruction; 2) a memory control hub (MCH) 902; 3) a system memory 903 (of which different types exist such as DDR RAM, EDO RAM, etc.); 4) a cache 904; 5) an I/O control hub (ICH) 905; 6) a graphics processor 906; 7) a display/screen 907 (of which different types exist such as Cathode Ray Tube (CRT), Thin Film Transistor (TFT), Liquid Crystal Display (LCD), DLP, etc.); and 8) one or more I/O devices 908. The one or more processors 901 execute instructions in order to perform whatever software routines the computing system implements. The instructions frequently involve some sort of operation performed upon data. Both data and instructions are stored in system memory 903 and cache 904. Cache 904 is typically designed to have shorter latency times than system memory 903. For example, cache 904 might be integrated onto the same silicon chip(s) as the processor(s) and/or constructed with faster SRAM cells whilst system memory 903 might be constructed with slower DRAM cells. By tending to store more frequently used instructions and data in the cache 904 as opposed to the system memory 903, the overall performance efficiency of the computing system improves. System memory 903 is deliberately made available to other components within the computing system. For example, the data received from various interfaces to the computing system (e.g., keyboard and mouse, printer port, LAN port, modem port, etc.) or retrieved from an internal storage element of the computing system (e.g., hard disk drive) are often temporarily queued into system memory 903 prior to their being operated upon by the one or more processor(s) 901 in the implementation of a software program. Similarly, data that a software program determines should be sent from the computing system to an outside entity through one of the computing system interfaces, or stored into an internal storage element, is often temporarily queued in system memory 903 prior to its being transmitted or stored. The ICH 905 is responsible for ensuring that such data is properly passed between the system memory 903 and its appropriate corresponding computing system interface (and internal storage device if the computing system is so designed). The MCH 902 is responsible for managing the various contending requests for system memory 903 access amongst the processor(s) 901, interfaces and internal storage elements that may proximately arise in time with respect to one another. One or more I/O devices 908 are also implemented in a typical computing system. I/O devices generally are responsible for transferring data to and/or from the computing system (e.g., a networking adapter); or, for large scale non-volatile storage within the computing system (e.g., hard disk drive). ICH 905 has bi-directional point-to-point links between itself and the observed I/O devices 908. In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
Various arrangements for identifying a location of a hand of a person are presented. A group of pixels may be identified in an image of a scene as including the person. A reference point may be set for the group of pixels identified as the person. The hand may be identified using a local distance maximum from the reference point. An indication, such as coordinates, of the location of the hand may be output based on the local distance maximum. |
WHAT IS CLAIMED IS: 1. A method for identifying a location of a hand of a person, comprising: identifying a group of pixels in an image of a scene as representing the person; setting a reference point for the group of pixels identified as representing the person; identifying a local distance maximum from the reference point within the group of pixels identified as representing the person; and outputting an indication of the location of the hand of the person based on the identified local distance maximum. 2. The method for identifying the location of the hand of the person of claim 1, further comprising: defining a plane positioned and oriented based on coordinates of the group of pixels identified as representing the person. 3. The method for identifying the location of the hand of the person of claim 2, wherein identifying the location of the hand of the person based on the local distance maximum from the reference point within the group of pixels identified as representing the person comprises ignoring one or more additional local distance maximums that are within a threshold distance of the plane. 4. The method for identifying the location of the hand of the person of claim 1, wherein identifying the group of pixels in the image as including the person comprises: performing a principal component analysis on the group of pixels to identify the group of pixels as representing the person based on a presence of a shape resembling a head and shoulders in the group of pixels. 5. The method for identifying the location of the hand of the person of claim 1, further comprising: prior to identifying the group of pixels in the image as representing the person, identifying a plurality of pixels of the image of the scene as background, wherein the plurality of pixels of the image are not used when identifying the group of pixels as representing the person. 6. The method for identifying the location of the hand of the person of claim 1, further comprising: creating a foreground model for a pixel based on the pixel being in the group of pixels, wherein the foreground model indicates a depth and intensity. 7. The method for identifying the location of the hand of the person of claim 1, further comprising: identifying a second group of pixels in the image of the scene; and excluding the second group of pixels from being identified as representing any person based on a size of the second group of pixels. 8. The method for identifying the location of the hand of the person of claim 1, wherein the indication of the location of the hand comprises three dimensional coordinates. 9. The method for identifying the location of the hand of the person of claim 1, wherein identifying the group of pixels in the image of the scene as representing the person comprises: analyzing a history of pixel groups to determine that a first group of pixels and a second group of pixels are to be treated as the group of pixels. 10. The method for identifying the location of the hand of the person of claim 1, further comprising: determining a second group of pixels in the image of the scene does not correspond to any person based on the second group of pixels being smaller than a predefined minimum size threshold. 11. The method for identifying the location of the hand of the person of
claim 1, further comprising: prior to identifying the group of pixels in the image as representing the person, receiving the image of the scene, wherein each pixel of the image has depth data and intensity data. 12. A system for identifying a location of a hand of a person, the system comprising: a processor; and a memory communicatively coupled with and readable by the processor and having stored therein processor-readable instructions which, when executed by the processor, cause the processor to: identify a group of pixels in an image of a scene as representing the person; set a reference point for the group of pixels identified as representing the person; identify a local distance maximum from the reference point within the group of pixels identified as representing the person; and output an indication of the location of the hand of the person based on the identified local distance maximum. 13. The system for identifying the location of the hand of the person of claim 12, wherein the processor-readable instructions further comprise processor-readable instructions which, when executed by the processor, cause the processor to: define a plane positioned and oriented based on coordinates of the group of pixels identified as representing the person. 14. The system for identifying the location of the hand of the person of claim 13, wherein the processor-readable instructions which, when executed by the processor, cause the processor to identify the location of the hand of the person based on the local distance maximum from the reference point within the group of pixels identified as representing the person comprise processor-readable instructions which, when executed by the processor, cause the processor to: ignore one or more additional local distance maximums that are within a threshold distance of the plane. 15. The system for identifying the location of the hand of the person of claim 12, wherein the processor-readable instructions which, when executed by the processor, cause the processor to identify the group of pixels in the image as including the person comprise processor-readable instructions which, when executed by the processor, cause the processor to: perform a principal component analysis on the group of pixels to identify the group of pixels as representing the person based on a presence of a shape resembling a head and shoulders in the group of pixels. 16. The system for identifying the location of the hand of the person of claim 12, wherein the processor-readable instructions further comprise processor-readable instructions which, when executed by the processor, cause the processor to: prior to identifying the group of pixels in the image as representing the person, identify a plurality of pixels of the image of the scene as background, wherein: the plurality of pixels of the image are not used when identifying the group of pixels as representing the person. 17. The system for identifying the location of the hand of the person of claim 12, wherein the processor-readable instructions further comprise processor-readable instructions which, when executed by the processor, cause the processor to: identify a second group of pixels in the image of the scene; and exclude the second group of pixels from being identified as representing any person based on a size of the second group of pixels. 18. The system for identifying the location of the hand of the person of claim 12, wherein the indication of the location of the hand comprises three dimensional coordinates. 19.
The system for identifying the location of the hand of the person of claim 12, wherein the processor-readable instructions further comprise processor-readable instructions which, when executed by the processor, cause the processor to: prior to identifying the group of pixels in the image as representing the person, receive the image of the scene, wherein each pixel of the image has depth data and intensity data. 20. A non-transitory computer-readable medium having computer-readable instructions stored thereon, the computer-readable instructions being configured to cause a computer to: identify a group of pixels in an image of a scene as representing the person; set a reference point for the group of pixels identified as representing the person; identify a local distance maximum from the reference point within the group of pixels identified as representing the person; and output an indication of the location of the hand of the person based on the identified local distance maximum. 21. The non-transitory computer-readable medium of claim 20, wherein the computer-readable instructions further comprise computer-readable instructions configured to cause the computer to: define a plane positioned and oriented based on coordinates of the group of pixels identified as representing the person. 22. The non-transitory computer-readable medium of claim 21, wherein the computer-readable instructions configured to cause the computer to identify the location of the hand of the person based on the local distance maximum from the reference point within the group of pixels identified as representing the person comprise computer-readable instructions configured to cause the computer to: ignore one or more additional local distance maximums that are within a threshold distance of the plane. 23. The non-transitory computer-readable medium of claim 20, wherein the computer-readable instructions configured to cause the computer to identify the group of pixels in the image as including the person comprise computer-readable instructions configured to cause the computer to: perform a principal component analysis on the group of pixels to identify the group of pixels as representing the person based on a presence of a shape resembling a head and shoulders in the group of pixels. 24. The non-transitory computer-readable medium of claim 20, wherein the computer-readable instructions further comprise computer-readable instructions configured to cause the computer to: prior to identifying the group of pixels in the image as representing the person, identify a plurality of pixels of the image of the scene as background, wherein: the plurality of pixels of the image are not used when identifying the group of pixels as representing the person. 25. The non-transitory computer-readable medium of claim 20, wherein the computer-readable instructions further comprise computer-readable instructions configured to cause the computer to: identify a second group of pixels in the image of the scene; and exclude the second group of pixels from being identified as representing any person based on a size of the second group of pixels. 26. The non-transitory computer-readable medium of claim 20, wherein the indication of the location of the hand comprises three dimensional coordinates. 27.
The non-transitory computer-readable medium of claim 20, wherein the computer-readable instructions further comprise computer-readable instructions configured to cause the computer to: prior to identifying the group of pixels in the image as representing the person, receive the image of the scene, wherein each pixel of the image has depth data and intensity data. 28. An apparatus for identifying a location of a hand of a person, the apparatus comprising: means for identifying a group of pixels in an image of a scene as representing the person; means for setting a reference point for the group of pixels identified as representing the person; means for identifying a local distance maximum from the reference point within the group of pixels identified as representing the person; and means for outputting an indication of the location of the hand of the person based on the identified local distance maximum. 29. The apparatus for identifying the location of the hand of the person of claim 28, further comprising: means for defining a plane positioned and oriented based on coordinates of the group of pixels identified as representing the person. 30. The apparatus for identifying the location of the hand of the person of claim 29, wherein the means for identifying the location of the hand of the person based on the local distance maximum from the reference point within the group of pixels identified as representing the person comprises: means for ignoring one or more additional local distance maximums that are within a threshold distance of the plane. 31. The apparatus for identifying the location of the hand of the person of claim 28, wherein the means for identifying the group of pixels in the image as including the person comprises: means for performing a principal component analysis on the group of pixels to identify the group of pixels as representing the person based on a presence of a shape resembling a head and shoulders in the group of pixels. 32. The apparatus for identifying the location of the hand of the person of claim 28, further comprising: means for identifying a plurality of pixels of the image of the scene as background prior to identifying the group of pixels in the image as representing the person, wherein: the plurality of pixels of the image are not used when identifying the group of pixels as representing the person. 33. The apparatus for identifying the location of the hand of the person of claim 28, further comprising: means for identifying a second group of pixels in the image of the scene; and means for excluding the second group of pixels from being identified as representing any person based on a size of the second group of pixels. 34. The apparatus for identifying the location of the hand of the person of claim 28, wherein the indication of the location of the hand comprises three dimensional coordinates. 35. The apparatus for identifying the location of the hand of the person of claim 28, further comprising: means for receiving the image of the scene prior to identifying the group of pixels in the image as representing the person, wherein each pixel of the image has depth data and intensity data. 36. The apparatus for identifying the location of the hand of the person of claim 28, further comprising: means for creating a foreground model for a pixel based on the pixel being in the group of pixels, wherein the foreground model indicates a depth and intensity.
37. The apparatus for identifying the location of the hand of the person of claim 28, further comprising: means for determining a second group of pixels in the image of the scene does not correspond to any person based on the second group of pixels being smaller than a predefined minimum size threshold. 38. The apparatus for identifying the location of the hand of the person of claim 28, wherein the means for identifying the group of pixels in the image of the scene as representing the person comprises: means for analyzing a history of pixel groups to determine that a first group of pixels and a second group of pixels are to be treated as the group of pixels. |
HAND DETECTION, LOCATION, AND/OR TRACKING BACKGROUND [0001] A person's movements may be used to control electronic devices. A hand movement or movement of another part of the person's body can be detected by an electronic device and used to determine a command to be executed by the device (e.g., provided to an interface being executed by the device) or to be output to an external device. Such movements by a person may be referred to as a gesture. Gestures may not require the person to physically manipulate an input device. Rather, one or more images of the person may be captured to identify the gesture being performed. As an example, when watching television, a person may use gestures to change the channel, raise and lower the volume, and/or shut off the television. A hand or some other part of a person's body may be used to perform each gesture. Similarly, an object held or controlled by the person may be used to perform the gesture. [0002] Gestures may be useful to control devices. However, reliably detecting gestures, or, more generally, determining a position of a part of a person's body, may be difficult and/or computationally expensive. SUMMARY [0003] In some embodiments, a method for identifying a location of a hand of a person is presented. The method may include identifying a group of pixels in an image of a scene as representing the person. The method may include setting a reference point for the group of pixels identified as representing the person. The method may include identifying a local distance maximum from the reference point within the group of pixels identified as representing the person. The method may include outputting an indication of the location of the hand of the person based on the identified local distance maximum. [0004] Embodiments of such a method may include one or more of the following: The method may include defining a plane positioned and oriented based on coordinates of the group of pixels identified as representing the person. Identifying the location of the hand of the person based on the local distance maximum from the reference point within the group of pixels identified as representing the person may include ignoring one or more additional local distance maximums that are within a threshold distance of the plane. Identifying the group of pixels in the image as including the person may include performing a principal component analysis on the group of pixels to identify the group of pixels as representing the person based on a presence of a shape resembling a head and shoulders in the group of pixels. The method may include, prior to identifying the group of pixels in the image as representing the person, identifying a plurality of pixels of the image of the scene as background, wherein the plurality of pixels of the image are not used when identifying the group of pixels as representing the person. The method may include creating a foreground model for a pixel based on the pixel being in the group of pixels, wherein the foreground model indicates a depth and intensity. The method may include identifying a second group of pixels in the image of the scene. The method may include excluding the second group of pixels from being identified as representing any person based on a size of the second group of pixels. The indication of the location of the hand may include three dimensional coordinates.
Identifying the group of pixels in the image of the scene as representing the person may include analyzing a history of pixel groups to determine that a first group of pixels and a second group of pixels are to be treated as the group of pixels. The method may include determining a second group of pixels in the image of the scene does not correspond to any person based on the second group of pixels being smaller than a predefined minimum size threshold. The method may include, prior to identifying the group of pixels in the image as representing the person, receiving the image of the scene, wherein each pixel of the image has depth data and intensity data. [0005] In some embodiments, a system for identifying a location of a hand of a person may be presented. The system may include a processor. The system may include a memory communicatively coupled with and readable by the processor and having stored therein processor-readable instructions. When executed by the processor, the processor-readable instructions cause the processor to identify a group of pixels in an image of a scene as representing the person. When executed, the processor-readable instructions cause the processor to set a reference point for the group of pixels identified as representing the person. When executed, the processor-readable instructions cause the processor to identify a local distance maximum from the reference point within the group of pixels identified as representing the person. When executed, the processor-readable instructions cause the processor to output an indication of the location of the hand of the person based on the identified local distance maximum. [0006] Embodiments of such a system may include one or more of the following: When executed, the processor-readable instructions cause the processor to define a plane positioned and oriented based on coordinates of the group of pixels identified as representing the person. The processor-readable instructions which, when executed by the processor, cause the processor to identify the location of the hand of the person based on the local distance maximum from the reference point within the group of pixels identified as representing the person may include processor-readable instructions which, when executed by the processor, cause the processor to ignore one or more additional local distance maximums that are within a threshold distance of the plane. The processor-readable instructions which, when executed by the processor, cause the processor to identify the group of pixels in the image as including the person may include processor-readable instructions which, when executed by the processor, cause the processor to perform a principal component analysis on the group of pixels to identify the group of pixels as representing the person based on a presence of a shape resembling a head and shoulders in the group of pixels. When executed, the processor-readable instructions cause the processor to, prior to identifying the group of pixels in the image as representing the person, identify a plurality of pixels of the image of the scene as background, wherein the plurality of pixels of the image are not used when identifying the group of pixels as representing the person. When executed, the processor-readable instructions cause the processor to identify a second group of pixels in the image of the scene.
When executed, the processor-readable instructions cause the processor to exclude the second group of pixels from being identified as representing any person based on a size of the second group of pixels. The indication of the location of the hand may include three dimensional coordinates. When executed, the processor-readable instructions cause the processor to, prior to identifying the group of pixels in the image as representing the person, receive the image of the scene, wherein each pixel of the image has depth data and intensity data. [0007] In some embodiments, a computer program product residing on a computer-readable storage medium for identifying a location of a hand of a person is presented. The computer program product may include computer-readable instructions configured to cause a computer to identify a group of pixels in an image of a scene as representing the person. The computer-readable instructions may be configured to cause the computer to set a reference point for the group of pixels identified as representing the person. The computer-readable instructions may be configured to cause the computer to identify a local distance maximum from the reference point within the group of pixels identified as representing the person. The computer-readable instructions may be configured to cause the computer to output an indication of the location of the hand of the person based on the identified local distance maximum. [0008] Embodiments of such a computer program product may include one or more of the following: The computer-readable instructions may further comprise computer-readable instructions which, when executed by the computer, cause the computer to define a plane positioned and oriented based on coordinates of the group of pixels identified as representing the person. The computer-readable instructions which, when executed by the computer, cause the computer to identify the location of the hand of the person based on the local distance maximum from the reference point within the group of pixels identified as representing the person may include computer-readable instructions which, when executed by the computer, cause the computer to ignore one or more additional local distance maximums that are within a threshold distance of the plane. The computer-readable instructions which, when executed by the computer, cause the computer to identify the group of pixels in the image as including the person may include computer-readable instructions which, when executed by the computer, cause the computer to perform a principal component analysis on the group of pixels to identify the group of pixels as representing the person based on a presence of a shape resembling a head and shoulders in the group of pixels. The computer-readable instructions may be configured to cause the computer to, prior to identifying the group of pixels in the image as representing the person, identify a plurality of pixels of the image of the scene as background. The plurality of pixels of the image may not be used when identifying the group of pixels as representing the person. The computer-readable instructions may be configured to cause the computer to identify a second group of pixels in the image of the scene.
The computer-readable instructions may be configured to cause the computer to exclude the second group of pixels from being identified as representing any person based on a size of the second group of pixels. The indication of the location of the hand may include three dimensional coordinates. The computer-readable instructions may include computer-readable instructions which, when executed by the computer, cause the computer to, prior to identifying the group of pixels in the image as representing the person, receive the image of the scene, wherein each pixel of the image has depth data and intensity data. [0009] In some embodiments, an apparatus for identifying a location of a hand of a person is presented. The apparatus may include means for identifying a group of pixels in an image of a scene as representing the person. The apparatus may include means for setting a reference point for the group of pixels identified as representing the person. The apparatus may include means for identifying a local distance maximum from the reference point within the group of pixels identified as representing the person. The apparatus may include means for outputting an indication of the location of the hand of the person based on the identified local distance maximum. [0010] Embodiments of such an apparatus may include one or more of the following: The apparatus may include means for defining a plane positioned and oriented based on coordinates of the group of pixels identified as representing the person. The means for identifying the location of the hand of the person based on the local distance maximum from the reference point within the group of pixels identified as representing the person may include means for ignoring one or more additional local distance maximums that are within a threshold distance of the plane. The means for identifying the group of pixels in the image as including the person may include means for performing a principal component analysis on the group of pixels to identify the group of pixels as representing the person based on a presence of a shape resembling a head and shoulders in the group of pixels. The apparatus may include means for identifying a plurality of pixels of the image of the scene as background prior to identifying the group of pixels in the image as representing the person. The plurality of pixels of the image may not be used when identifying the group of pixels as representing the person. The apparatus may include means for identifying a second group of pixels in the image of the scene. The apparatus may include means for excluding the second group of pixels from being identified as representing any person based on a size of the second group of pixels. The indication of the location of the hand may include three dimensional coordinates. The apparatus may include means for receiving the image of the scene prior to identifying the group of pixels in the image as representing the person, wherein each pixel of the image has depth data and intensity data. [0011] Some embodiments may provide a method for identifying a position of a control object associated with a person.
The method may comprise identifying a group of pixels in an image as representing at least a portion of the person, setting a reference point for the group of pixels identified as representing the person, determining distance from the reference point to at least one pixel in each of a plurality of pixel neighborhoods within the group of pixels identified as representing the person, and outputting an indication of the position of the control object based at least in part on the determined distances. BRIEF DESCRIPTION OF THE DRAWINGS [0012] A further understanding of the nature and advantages of various embodiments may be realized by reference to the following figures. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. [0013] FIG. 1 illustrates an embodiment of a system for determining a gesture performed by a person. [0014] FIG. 2 illustrates an embodiment of a system for tracking a position of a person's hand. [0015] FIG. 3 illustrates an embodiment of an image of a scene captured by an image capture module. [0016] FIG. 4 illustrates an embodiment of a point cloud of a scene captured by an image capture module. [0017] FIG. 5 illustrates an embodiment of an image created from an image of a scene created using multiple background models and/or multiple foreground models. [0018] FIG. 6 illustrates an embodiment of a method for creating background models for individual pixels. [0019] FIG. 7A illustrates an embodiment of a method for creating foreground models for individual pixels. [0020] FIG. 7B illustrates an embodiment of a method for creating background and foreground models for individual pixels. [0021] FIG. 8 illustrates an embodiment of a method for modeling a scene using background and/or foreground models. [0022] FIG. 9 illustrates another embodiment of a method for modeling a scene using background and/or foreground models. [0023] FIG. 10A illustrates an embodiment of a depth segmented image wherein a person's hand does not occlude at least a portion of the person's arm. [0024] FIG. 10B illustrates an embodiment of a depth segmented image wherein a person's hand occludes at least a portion of the person's arm. [0025] FIG. 11 illustrates an embodiment of an image following depth segmentation. [0026] FIG. 12 illustrates an embodiment of a plane fit to an image of a person. [0027] FIG. 13 illustrates an embodiment of an image with a calculated center-of-gravity and local distance maximums. [0028] FIG. 14 illustrates an embodiment of a system that performs depth segmentation and hand detection/tracking functions. [0029] FIG. 15A illustrates an embodiment of a method for determining a position of a hand. [0030] FIG. 15B illustrates another embodiment of a method for determining a position of a hand. [0031] FIG. 16 illustrates another embodiment of a method for determining a position of a hand. [0032] FIG. 17 illustrates an embodiment of a method for determining a seed pixel and creating a pixel blob based on a pixel identified as a local distance maximum. [0033] FIG.
18 illustrates an embodiment of a method for analyzing a pixel blob to determine if it likely contains a hand and determine associated coordinates. [0034] FIG. 19 illustrates an embodiment of a computer system. DETAILED DESCRIPTION [0035] A position of a portion of a person's body, such as a hand, may be tracked for various reasons. As an example, in order to detect a gesture being performed by a person, it may be useful to track a location of a portion of a person's body. For instance, if a gesture is performed by a hand, detecting the gesture may involve determining the position of the person's hand in multiple images. The position of a person's hand may be tracked using images from an image capture device. An image capture device may be used to capture multiple images of a scene. This scene may at times have none, one, or more than one person present within it. Rather than analyzing the entirety of each image to determine if a person is performing a gesture, it may be possible to discard portions of some images as unlikely to contain a person and focus analysis on one or more portions of the images likely to contain a person, who may perform a gesture. [0036] By not analyzing portions of images, the total amount of processing necessary to determine a location of a portion of a person's body may be decreased. As a simple example, if a person, table, chair, and bookcase are present within a scene being captured by an image capture device, it may be useful to ignore portions of the image containing the table, chair, and bookcase. Since only a location of a part of a person's body is desired, only the portions of the image containing the person may be worthwhile to process. As such, the portions of the scene where the table, chair, and bookcase are present may be ignored. This may result in only a smaller portion of the image requiring additional processing to determine a location of a portion of the person's body. Accordingly, the total amount of processing may be decreased by only analyzing for location portions of the image that may be part of a foreground that includes persons present in the scene. Further, not only may processing resources be conserved, but objects that are unlikely to provide a desired input (for example, people walking by the camera, things going on behind the user, etc.) may be ignored in some embodiments. Moreover, embodiments detailed herein may permit a more accurate identification of foreground objects to be performed, which may enable accurate gesture detection. [0037] One or more background models and foreground models may be created for a scene. Such models may be created and used on a pixel-by-pixel basis. A particular pixel may have one or more background models. Each of these background models may define one or more values, such as an intensity value and a depth value. As such, pixels of an image may have three-dimensional information. If the intensity value and/or depth value of a pixel has not changed over a significant period of time, it may be determined that the pixel likely corresponds to a background object. Common background objects include walls, furniture, the floor, lighting appliances, etc. for an indoor scene. [0038] Multiple background models may be present for some pixels. While a background object may be less likely to move or otherwise change, such change may occur frequently enough that having multiple background models for a pixel is useful.
For example, a cabinet present in a scene may typically be closed; thus, a background model may be created for pixels that correspond to the closed cabinet. However, if the cabinet is left open for a substantial period of time, a second background model (which may have a different depth value and/or intensity value) may be created for each pixel that corresponds to the open cabinet. In a later-captured image, if values of a pixel sufficiently correspond to either of the pixel's background models, it may be determined that the object represented by the pixel is part of the background of the scene. [0039] In addition to one or more background models being created for individual pixels, foreground models may be created for individual pixels. Some pixels may have no models, a foreground model only, a background model only, multiple background models only, a background model and a foreground model, or multiple background models and a foreground model. A foreground model may be created for a pixel if it is determined that part of a person is represented by the pixel. For gesture detection, since only people may perform a gesture, a foreground model only corresponds to locations of persons. More generally, if a location of part of a person's body is desired, the foreground model may be desired to only represent the person. An indication of pixels corresponding to persons may be provided by a hardware-based or software-based module configured to identify a person using techniques such as a head and shoulder principal component analysis. A control object may be used to perform a gesture or otherwise be tracked by the system. The control object may be, for example, a person's hand or something held or worn by the user. As an example, a wand may be a control object. [0040] When a new image of a scene is received, which may happen multiple times per second, pixels of the image may be compared on a pixel-by-pixel basis with one or more background models for the pixel, if present, and a foreground model for the pixel, if present. Since it may take at least some time before a background model can be created for a pixel (because the pixel may need to remain approximately the same in intensity and depth for a time for the background model to be created), no background model may be present for the pixel. Based on a probability analysis, it may be determined whether a pixel is likely part of the background, part of the foreground, or part of an uncertain category. [0041] If a pixel is determined to be part of the background, it may be ignored for further processing. Pixels that are uncertain or are part of the foreground may be subjected to further processing to find and track a location of part of the person (such as the person's hand). [0042] FIG. 1 illustrates an embodiment of a system 100 for determining a gesture performed by a person. More generally, system 100 may be used for tracking a specific portion of a person. For instance, system 100 may be used for tracking a person's hands. System 100 may be configured to track one or both hands of a person simultaneously. Further, system 100 may be configured to track hands of multiple persons simultaneously. While system 100 is described herein as being used to track the location of persons' hands, it should be understood that system 100 may be configured to track other parts of persons, such as heads, shoulders, torsos, legs, etc. The hand tracking of system 100 may be useful for detecting gestures performed by the one or more persons.
System 100 itself may not determine a gesture performed by the person or may not perform the actual hand identification or tracking in some embodiments; rather, system 100 may output a position of one or more hands, or may simply output a subset of pixels likely to contain foreground objects. The position of one or more hands may be provided to and/or determined by another piece of hardware or software for gestures, which might be performed by one or more persons. [0043] System 100 may include image capture module 110, processing module 120, computer-readable storage medium 130, and gesture analysis module 140. Additional components may also be present. For instance, system 100 may be incorporated as part of a computer system, or, more generally, a computerized device. Computer system 1900 of FIG. 19 illustrates an exemplary computer system which may be incorporated with system 100 of FIG. 1. [0044] Image capture module 110 may be configured to capture multiple images. Image capture module 110 may be a camera, or, more specifically, a video camera. Image capture module 110 may capture a series of images in the form of video frames. These images may be captured periodically, such as 30 times per second. The images captured by image capture module 110 may include intensity and depth values for each pixel of the images generated by image capture module 110. Image capture module 110 may project radiation, such as infrared radiation (IR), out into its field-of-view (e.g., onto the scene). The intensity of the returned infrared radiation may be used for determining an intensity value for each pixel of image capture module 110 represented in each captured image. The projected radiation may also be used to determine depth information. As such, image capture module 110 may be configured to capture a three-dimensional image of a scene. Each pixel of the images created by image capture module 110 may have a depth value and an intensity value. In some embodiments, an image capture module may not project radiation, but may instead rely on light (or, more generally, radiation) present in the scene to capture an image. For depth information, the image capture module 110 may be stereoscopic (that is, image capture module 110 may capture two images and combine them into a single image having depth information) or may use other techniques for determining depth. [0045] The images captured by image capture module 110 may be provided to processing module 120. Processing module 120 may be configured to acquire images from image capture module 110. Processing module 120 may analyze some or all of the images acquired from image capture module 110 to determine the location of one or more hands belonging to one or more persons present in one or more of the images. Processing module 120 may include software, firmware, and/or hardware. Further detail of processing module 120 is provided in reference to FIG. 2. Processing module 120 may be in communication with computer-readable storage medium 130. Computer-readable storage medium 130 may be used to store information related to background models and/or foreground models created for individual pixels of the images captured by image capture module 110. If the scene captured in images by image capture module 110 is static, it can be expected that a pixel at the same location in the first image and the second image corresponds to the same object.
As an example, if a couch is present at a particular pixel in a first image, in the second image, the same particular pixel of the second image may be expected to also correspond to the couch. Background models and/or foreground models may be created for some or all of the pixels of the acquired images. Computer-readable storage medium 130 may also be configured to store additional information used by processing module 120 to determine a position of a hand (or some other part of a person's body). For instance, computer-readable storage medium 130 may contain information on thresholds (which may be used in determining the probability that a pixel is part of a foreground or background model) and/or may contain information used in conducting a principal component analysis (PCA), described in greater detail later in this document. Further, computer-readable storage medium 130 may store instructions for executing one or more methods or functions, as described in greater detail below, for example one or more of the methods 600, 700A, 700B, 800, and/or 900. [0046] Processing module 120 may provide an output to another module, such as gesture analysis module 140. Processing module 120 may output two-dimensional coordinates and/or three-dimensional coordinates to another software module, hardware module, or firmware module, such as gesture analysis module 140. The coordinates output by processing module 120 may indicate the location of a detected hand (or some other part of the person's body). If more than one hand is detected (of the same person or of different persons), more than one set of coordinates may be output. Two-dimensional coordinates may be image-based coordinates, wherein an x-coordinate and y-coordinate correspond to pixels present in the image. Three-dimensional coordinates may incorporate depth information. Coordinates may be output by processing module 120 for each image in which at least one hand is located. Further, the processing module 120 may output one or more subsets of pixels having likely background elements extracted and/or likely to include foreground elements for further processing. [0047] Gesture analysis module 140 may be any one of various types of gesture determination systems. Gesture analysis module 140 may be configured to use the two- or three-dimensional coordinates output by processing module 120 to determine a gesture being performed by a person. While processing module 120 may output only coordinates of one or more hands, determining an actual gesture and/or what function should be performed in response to the gesture may be performed by gesture analysis module 140. It should be understood that gesture analysis module 140 is illustrated in FIG. 1 for example purposes only. Other possibilities, besides gestures, exist for reasons as to why one or more hands of one or more users may be desired to be tracked. As such, some other module besides gesture analysis module 140 may receive locations of parts of persons' bodies. [0048] FIG. 2 illustrates an embodiment of a system 200 for tracking a position of a person's hand. System 200 of FIG. 2 may be a subsystem of system 100 of FIG. 1. For instance, system 200 may be partially or wholly performed by processing module 120 of FIG. 1. Data stored within system 200 may be stored by computer-readable storage medium 130 of system 100. System 200 may also be incorporated as part of some type of gesture-detection system other than system 100. System 200 may be used for some purpose other than gesture detection.
System 200 may output one or more subsets of pixels having likely background elements extracted and/or likely to include foreground elements for further processing. In some embodiments, system 200 may output locations of one or more hands of one or more persons, and such locations may be used for various purposes. Locations of other parts of a person may also be output. System 200 may include: image acquisition module 210, depth segmentation module 220, background modeling module 230, foreground modeling module 240, background/foreground extraction module 250, and hand detection/tracking module 260. [0049] Image acquisition module 210 may acquire images from an image capture device, such as image capture module 110 of system 100. Images acquired by image acquisition module 210 may be acquired periodically, such as 30 times per second. As such, the images acquired by image acquisition module 210 may be video frames. Each image may contain multiple pixels and each pixel may have a depth value and an intensity value. The depth value and intensity value may be collectively referred to as a feature vector. The feature vector may be created by the image acquisition module 210 from the raw image data acquired from the image capture device. [0050] Depth segmentation module 220 may be configured to segment an image into multiple objects based on the depth information associated with each pixel. When system 200 is initially operated, no background models and no foreground models may be present for pixels. As such, background/foreground extraction module 250 may not yet be functional. Accordingly, depth segmentation module 220 may initially receive images from image acquisition module 210 without any pixels having been extracted by background/foreground extraction module 250. Depth segmentation module 220 may determine which pixels present within acquired images are connected and should be treated as a single object, perform a principal component analysis to identify one or more persons, and perform a body parameter estimate. Indications of which pixels are determined to correspond to a person may be output to foreground modeling module 240. The pixels output to foreground modeling module 240 by depth segmentation module 220 may include the feature vector of the pixel having a depth value and an intensity value. Further detail of the performance of depth segmentation module 220 is provided later in this document. [0051] Background modeling module 230 may create one or more background models for one or more pixels in the images acquired by image acquisition module 210. Background models created by background modeling module 230 are intended to correspond to objects within the scene of the images acquired by image acquisition module 210 that remain unchanged for at least a threshold period of time. Since a function of system 200 is to determine the location of one or more hands of one or more persons, objects other than persons are desired to be treated as background. Since static objects do not often move, the depth and intensity of pixels within acquired images that correspond to static objects may remain approximately constant in value for lengthy periods of time. [0052] As an example of objects that may be associated with background models, consider a typical living room: a couch may face a television. To either side of the couch may be end tables. Upon each end table may be a lamp and a family picture. Behind the couch may be a wall with one or more pictures, bookcases, etc.
In front of the couch may be a coffee table. Typically, each of these objects may not be moved. For instance, significant periods of time (e.g., days, weeks, months, years) may elapse without the couch, lamps, tables, or pictures being moved. Accordingly, in each image acquired by image acquisition module 210, the lamp, for example, may appear in the same location in the images. Therefore, the same pixel in multiple images may represent a portion of the lamp. Since the lamp's position is not changing, the intensity value and depth value of this pixel is unlikely to substantively change from one image to the next. [0053] Background models may be created on a pixel-by-pixel basis. Accordingly, a background model may correspond to a particular pixel across multiple images acquired by image acquisition module 210. If a feature vector of a pixel does not substantively change for a period of time, it may be determined that the pixel represents at least a portion of an object that is part of the background. Typically, when a person is present in a scene, the person exhibits some level of movement. For example, a person watching television may periodically leave the scene or shift in position. As such, to build background models for pixels, the period of time over which a pixel's feature vector is required to remain at least approximately unchanged may be multiple hours. Since over a period of multiple hours it can be expected that a person will exhibit some level of motion, the person will not be taken as part of the background model. [0054] To create a background model for a pixel, the feature vector of a pixel present in images acquired by image acquisition module 210 is monitored for at least a pre-defined threshold period of time (such as 5 hours) by background modeling module 230. If the feature vector of the pixel has remained unchanged (within a predefined threshold range for intensity and depth to account for measurement errors), the pixel may be determined by background modeling module 230 to correspond to a background object. A background model may be created using the feature vector (D1, I1) of the pixel (pixel 1) that has remained unchanged for at least the threshold period of time. [0055] Using the feature vector, a Gaussian mixture model (GMM) may be generated by background modeling module 230 for the pixel. The mean for the GMM may be (D1, I1) with a variance of (VarD1, VarI1). Each GMM may be stored as the background model for the pixel. Background models may be created by background modeling module 230 for none, some, or all pixels at the same or at different times. A pixel may not have a background model initially and/or if the pixel's feature vector has not remained unchanged for at least the threshold period of time. Background models created by background modeling module 230 may be provided to background/foreground extraction module 250. The Gaussian components of the GMM for each background model may be stored along with an indication of the corresponding pixel (e.g., a two-dimensional coordinate that may be used to locate the pixel in images acquired by image acquisition module 210). [0056] Multiple background models may be created for one or more of the pixels by background modeling module 230. While objects in the background of a scene may be expected not to change, that is not to say such objects never change. As an example, consider a scene having a cabinet. Often, the cabinet is closed for hours at a time.
A background model may be created for each pixel that represents the closed cabinet. A person may also leave the cabinet open for hours at a time. Additional background models may be created for each pixel that represents the open cabinet. As such, a separate background model may be present for the same pixel for the cabinet whether open or closed. When the feature vector of a pixel remains unchanged for at least a predefined threshold period of time, a background model may be created for the pixel, regardless of whether another background model has previously been created for the particular pixel. Further, having a plurality of background models may account for slight variations in camera position in some embodiments. For example, while a portion of a couch may generally be expected at a certain pixel, that pixel may correspond to a portion of a wall when the camera has been rotated slightly. [0057] In some embodiments, a pixel may have a maximum number of background models, such as 2, 3, or 4. If a pixel already has the maximum number of background models and a new background model is created for the pixel, the oldest background model for the pixel may be deleted. [0058] Foreground modeling module 240 may create foreground models for individual pixels independently from the background models created by background modeling module 230 for individual pixels. As such, a pixel that has zero, one, or more than one background model may or may not have a foreground model. The presence or lack of a background model for a pixel may not affect the creation of a foreground model for the same pixel; likewise, the presence or lack of a foreground model for the pixel may not affect the creation of a background model for the pixel. A foreground model for a pixel may be created if it has been determined that a person is represented by the pixel. In some embodiments, that is the only time the foreground model is created. An indication of which pixels represent a person may be provided to foreground modeling module 240 by depth segmentation module 220. Which pixels correspond to a person may be determined based on a principal component analysis (PCA) conducted by a module. The principal component analysis may be used to identify an object that likely corresponds to a head and shoulders of a person. Other ways of detecting a person may involve facial detection or an anatomical model. Foreground modeling module 240 may be used to determine the depths at which a person is likely to be detected. For instance, in a scene where a couch is positioned behind a coffee table, it may be significantly more likely that a person will be detected sitting on the couch than sitting on the coffee table. The likelihood that a person is present at particular depths and/or locations within the images of a scene may be used in assisting to extract the background from images by background/foreground extraction module 250. [0059] For each pixel that foreground modeling module 240 has been notified corresponds to a person, a voting array may be created. The voting array may be of length L. L may be determined according to Equation 1: L = R/δ (Eq. 1) [0060] In Equation 1, δ represents the depth resolution of the images and R represents the maximum depth range of depth values acquired by image acquisition module 210. When a pixel is determined to be occupied by a person at a particular depth, the depth may receive a "vote" in the pixel's array at the element corresponding to the depth.
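As a concrete illustration of the voting arrays of paragraphs [0059] and [0060], the following Python sketch maintains one array of length L = R/δ per pixel and casts a vote into the depth bin at which a person was observed. The resolution, range, image dimensions, and helper name are illustrative assumptions for the sketch rather than values taken from this description:

import numpy as np

# Illustrative parameters; the description does not fix these values.
DELTA = 0.05                # depth resolution of the images (delta), in meters
R_MAX = 8.0                 # maximum depth range R, in meters
L = int(R_MAX / DELTA)      # array length per Equation 1: L = R / delta

HEIGHT, WIDTH = 240, 320    # assumed image dimensions
# One voting array of length L for every pixel of the image.
voting_arrays = np.zeros((HEIGHT, WIDTH, L), dtype=np.uint32)

def cast_vote(row, col, depth_m):
    # The element of the pixel's array corresponding to the observed depth
    # receives one "vote" when the pixel is occupied by a person there.
    bin_index = min(int(depth_m / DELTA), L - 1)
    voting_arrays[row, col, bin_index] += 1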
Over time, one or more local maximums may develop within an array (that is, one or more elements within the array that are greater in magnitude than neighboring elements) and one or more local minimums may develop within the array (that is, one or more elements within the array that are smaller in magnitude than neighboring elements). The width, in elements, of local maximums may be determined based on the location of adjacent local minimums. For each of the local maximums for the pixel, a Gaussian mixture model may be generated using the pixel's feature vector, having the form (Di, Ii), (VarDi, VarIi), if a GMM has not previously been generated for the pixel. In order to preserve processing power, the arrays for pixels may be populated while a person is present within images acquired by image acquisition module 210; however, the GMM for individual pixels for foreground models may only be computed by foreground modeling module 240 when no person is detected within the scene in images acquired by image acquisition module 210. [0061] The foreground models created by foreground modeling module 240 and the background models created by background modeling module 230 may be provided to (or may be accessible by) background/foreground extraction module 250. Collectively, creating the foreground models and background models by foreground modeling module 240 and background modeling module 230, respectively, may be referred to as environmental modeling. As the number of images acquired by image acquisition module 210 increases, the number of pixels having background models and/or foreground models may increase, thus providing a more detailed environmental model. Such a more detailed environmental model may permit a greater number of pixels to be categorized as background and ignored from additional processing to determine a location of part of a person's body. [0062] Once at least one background model has been created for one or more pixels, background/foreground extraction module 250 may be used to determine portions of images acquired by image acquisition module 210 that may be discarded. When background/foreground extraction module 250 has at least one background model, image acquisition module 210 may not pass acquired images in full to depth segmentation module 220. On a pixel-by-pixel basis, background/foreground extraction module 250 may analyze acquired images. If one or more background models are available for a pixel, a probability (PB) that the pixel in the acquired image corresponds to one of the background models may be calculated. Similarly, if a foreground model is available for the pixel, a probability (PF) that the pixel in the acquired image corresponds to the foreground model may be calculated. It may then be determined whether it is more likely that the pixel corresponds to the background model or the foreground model, that is, whether PB > PF or PB < PF. [0063] If PB > PF and PB is greater than a pre-defined threshold probability level (T), the pixel may be classified as background by background/foreground extraction module 250. If PF > PB and PF is greater than the pre-defined threshold probability level (T), the pixel may be classified as foreground by background/foreground extraction module 250. If a pixel is classified as neither background nor foreground (that is, T > PF and/or T > PB, or no background or foreground model is available), the pixel may be classified as uncertain by background/foreground extraction module 250.
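The classification rule of paragraphs [0062] and [0063] can be sketched directly. In the Python fragment below, p_b and p_f stand in for PB and PF, which would in practice be obtained by evaluating the pixel against its stored Gaussian mixture models; a missing model is represented as a probability of zero. This is an illustrative sketch under those assumptions, not the reference implementation:

BACKGROUND, FOREGROUND, UNCERTAIN = "background", "foreground", "uncertain"

def classify_pixel(p_b, p_f, t):
    # p_b: probability the pixel matches one of its background models (PB);
    # p_f: probability it matches its foreground model (PF); a missing
    # model contributes a probability of 0.0. t is the threshold T.
    if p_b > p_f and p_b > t:
        return BACKGROUND
    if p_f > p_b and p_f > t:
        return FOREGROUND
    # Neither probability exceeds T, or no model was available.
    return UNCERTAIN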
The greater the threshold value T, the less variance from the foreground and background models may be tolerated. Increasing T may result in an increase in the number of pixels classified as uncertain. [0064] Pixels that have been labeled as background may not be passed to depth segmentation module 220 for additional processing. Accordingly, only pixels identified as foreground or uncertain are passed to depth segmentation module 220 for additional processing. Therefore, if at least one pixel is identified as background, the size of the images (e.g., the number of pixels) processed by depth segmentation module 220 may be reduced. Accordingly, the amount of processing required to be performed by depth segmentation module 220 may be reduced, thus possibly resulting in faster processing and/or less processing resources being needed. [0065] Images received by depth segmentation module 220 from background/foreground extraction module 250 may be reduced in size, with various pixels having been removed. These pixels may have been identified by background/foreground extraction module 250 as representing a background object. As an example, consider a scene where a person is watching television. In the scene, the person is seated on a couch, with end tables at either side of the couch, and each end table supporting a lamp. Behind the couch may be a wall. If system 200 has been activated for a substantial period of time, such as several days, a background model may be present for a significant number of pixels of the images acquired by image acquisition module 210. Pixels that represent the couch, end tables, lamps, and wall may all be extracted by background/foreground extraction module 250 as part of the background. As such, depth segmentation module 220 may receive only a substantially smaller portion of the image for processing. This smaller image may include the person watching television and, possibly, objects that were moved by the person, such as a throw pillow, and/or cushions of the couch affected by the person's presence (e.g., weight upon the couch). [0066] In some embodiments, results of the scene modeling and/or foreground and/or background determinations may be output to a hand detection/tracking module, for example the hand detection/tracking module 260. The hand detection/tracking module may be separate from or included in the system 200. The hand detection/tracking module may receive input from depth segmentation module 220. Depth segmentation module 220 may identify the location of one or more persons, if any, present in the reduced images received from background/foreground extraction module 250. The hand detection/tracking module may serve to locate and track a position of one or both of the person's hands (or of multiple persons' hands, if multiple persons are present). The output from the hand detection/tracking module may be three-dimensional and/or two-dimensional coordinates that indicate a position of a hand. If multiple hands are detected (whether belonging to the same person or multiple persons), multiple sets of coordinates may be output. This output may be provided to another hardware, firmware, and/or software module, such as gesture analysis module 140 of FIG. 1. In some embodiments, the hand detection/tracking module is omitted.
For example, results of the scene modeling and/or foreground and/or background determinations may be saved without performing a hand detection thereon, or the results of the scene modeling and/or foreground and/or background determinations may be input directly into a gesture analysis module, for example the gesture analysis module 140. [0067] FIG. 3 illustrates an embodiment of an image 300 of a scene captured by an image capture module. Image 300 may represent an image captured by image capture module 110 of FIG. 1 and acquired by image acquisition module 210 of system 200 of FIG. 2. Each pixel present in image 300 may include depth and intensity data. In the two-dimensional representation of image 300 (as illustrated), only the intensity data is illustrated. Image 300 is of a scene having a lamp 310 located behind a couch 320. Upon couch 320, a person 330 is seated with his hand raised. In front of couch 320 is a coffee table 340 supporting a mug 350 and a small object 360. [0068] In image 300, since different objects may have similar intensity, the objects may appear as a single object. For instance, referring to person 330 and couch 320, the person's torso may be substantially indistinguishable from couch 320 using intensity values alone. Image 300 may represent an image that may be passed by image acquisition module 210 to depth segmentation module 220, background/foreground extraction module 250, and/or background modeling module 230. Such an image may be acquired 30 times every second or at some other interval. Ideally, since the embodiments of the system and methods detailed herein are directed to identifying the location of a person (and, more specifically, a part of a person, such as a hand), background objects are ignored. Objects such as lamp 310, some or all of couch 320, and coffee table 340 may be extracted and ignored from processing by depth segmentation module 220 if a background model is present for the pixels that correspond to each of these objects. [0069] Other objects in image 300 may not be excluded using background models for particular pixels. For example, referring to mug 350 and small object 360, the person (or someone else) may have recently placed these objects on table 340. As such, these objects may not have been present in the scene for a long enough period of time for a background model to be created for the corresponding pixels. As such, pixels of images that correspond to mug 350 and small object 360 may be categorized as uncertain by background/foreground extraction module 250 of system 200. [0070] FIG. 4 illustrates an embodiment of a point cloud 400 of the scene captured by the image capture module. Point cloud 400 illustrates each pixel of image 300 based on each pixel's depth value. As such, point cloud 400 is a three-dimensional representation of the pixels of image 300. Point cloud 400 does not illustrate the intensity data presented in image 300. Objects that appeared as a single object in image 300 may be more clearly distinguishable when depth data for each pixel is analyzed. Referring to person 330 and couch 320, in image 300, the person's torso and couch 320 may be difficult to distinguish. However, if depth data for each pixel is analyzed, the body of person 330 may extend outward from the surface of couch 320 and can be distinguished. [0071] FIG.
5 illustrates an embodiment of an image 500 created from an image of a scene created using one or more background models and/or one or more foreground models to extract pixels determined to correspond to the background. Image 500 may be created based on image 300 of FIG. 3. Each pixel of image 500 may include a depth value and an intensity value. Image 300 of FIG. 3 may be acquired by image acquisition module 210 of FIG. 2. Background models and/or foreground models for at least some pixels were created by background modeling module 230 and foreground modeling module 240, respectively, of system 200. [0072] In image 500 of FIG. 5, pixels pertaining to person 330, mug 350, and small object 360 were not extracted from image 300 by background/foreground extraction module 250. These pixels did not match (or did not have) a background model for the pixel. Each pixel that was discarded, such as pixels corresponding to lamp 310 of FIG. 3, was determined to sufficiently match a background model for the pixel. Image 500 contains pixels that were determined as sufficiently matching a foreground model and/or were classified as uncertain (not sufficiently matching a foreground model or a background model). Image 500 contains fewer pixels than image 300 of FIG. 3. Image 500 may be passed by background/foreground extraction module 250 to depth segmentation module 220 of system 200 for additional processing. The processing performed by depth segmentation module 220 may be less computationally expensive because fewer pixels need to be processed. [0073] Systems 100 and 200 of FIGS. 1 and 2, respectively, may be used to perform various methods. FIG. 6 illustrates an embodiment of a method 600 for creating a background model. Method 600 may be performed by a processing device, such as processing module 120 of FIG. 1. As such, means for performing method 600 may include one or more computer systems (which may include one or more processors). Means for performing method 600 may include components of system 100 of FIG. 1. More specifically, steps of method 600 may be performed by background modeling module 230, image acquisition module 210, and/or background/foreground extraction module 250 of system 200. As such, means for performing each step of method 600 may include system 200 and, more specifically, background modeling module 230. [0074] At step 610, images may be acquired. Each image may include a plurality of pixels, each pixel having an intensity value and a depth value. In some embodiments, intensity and depth values are not both present. Color values may be present instead or in addition. Referring to system 200 and system 100, each image may be acquired by image acquisition module 210 from image capture module 110, which may be a camera. Each image may be of the same scene. For example, the image capture module may be pointed at the contents of a room. The image capture module may be left stationary such that the scene in the image capture module's field-of-view does not substantially change. Means for performing step 610 may include one or more processors, an image acquisition module, an image capture module, and/or any of the means discussed generally in reference to method 600. [0075] For each image acquired at step 610, some or all pixels of the image may be individually analyzed to create a background model for that pixel at step 620. A particular pixel may be present in each image acquired at step 610.
For example, a pixel within a first image acquired at step 610 is present at the same coordinates in subsequent images acquired at step 610. A background model for a particular pixel may be unaffected by other pixels, including those pixels adjacent to the particular pixel. Analyzing an individual pixel may include monitoring the intensity and/or depth value for the pixel across multiple images acquired at step 610. For instance, the depth and/or intensity values of a particular pixel may be monitored to see if the values each remain constant, within a threshold range, over a period of time. Such a period of time may be defined to be several minutes, hours, or even days. A lengthy period of time over which individual pixels are analyzed to create a background model may result in the background model being more likely to accurately represent a background object that corresponds to the particular pixel. Means for performing step 620 may include one or more processors, a background modeling module, and/or any of the means discussed generally in reference to method 600. [0076] If a particular pixel is analyzed and is determined to have remained constant, within a threshold range, for a threshold period of time in intensity and/or depth across the images acquired during the period of time, a background model may be created at step 630 for the pixel. Whether a background model is created for a particular pixel may be irrespective of whether a background model was previously created for the pixel. The background model may be a Gaussian Mixture Model (GMM) having the form (Di, Ii), (VarDi, VarIi). (Di, Ii) may represent the observed constant depth and intensity of the pixel over the period of time. (VarDi, VarIi) may represent a predetermined amount of variance that is used for each pixel's background model(s) or may represent variances that are calculated based on slight variances in measured depth and measured intensity during the period of time when the pixel remained approximately constant. Means for performing step 630 may include one or more processors, a background modeling module, and/or any of the means discussed generally in reference to method 600. [0077] At step 640, the background model for the pixel may be stored, such as at computer-readable storage medium 130 of FIG. 1. The background model may be stored with an indication of the associated pixel and may be made available to background/foreground extraction module 250 of FIG. 2. Means for performing step 640 may include one or more processors, a background modeling module, a (non-transitory) computer-readable storage medium, and/or any of the means discussed generally in reference to method 600. [0078] While step 630 and step 640 of method 600 focus on the creation of a background model for a single pixel, background models may also be created for other pixels. As such, some or all pixels may have an associated background model. A pixel may not have a background model if the pixel has not remained constant long enough in intensity and/or depth for a background model to be created. Systems 100 and 200 may be continuously acquiring images. As such, creating background models for each pixel may be performed continuously. Each pixel may be analyzed in each acquired image to determine if the pixel has remained constant for long enough for a background model to be created. More than one background model may be present for individual pixels.
As such, zero, one, or more than one background model may exist for a particular pixel. A maximum number of background models for a pixel may exist. For example, a maximum number of five background models per pixel may be established. If a pixel has five background models and a sixth background model is created, the oldest background model for the pixel may be discarded (e.g., a first-in, first-out arrangement). [0079] The analyzing of pixels of images at step 620 and the creation of background models at step 630 may be performed by background modeling module 230 concurrently with the same image being processed by background/foreground extraction module 250. Therefore, while background models are created by background modeling module 230, the background models are used by background/foreground extraction module 250 to determine whether pixels should be extracted from an image received from image acquisition module 210. [0080] While method 600 focused on the creation of background models for individual pixels, FIG. 7A illustrates an embodiment of a method 700A for creating a foreground model for individual pixels. Method 700A may be performed by a processing device, such as processing module 120 of FIG. 1. As such, means for performing method 700A may include one or more computer systems (which may include one or more processors). Means for performing method 700A may include components of system 100 of FIG. 1. More specifically, steps of method 700A may be performed by foreground modeling module 240, image acquisition module 210, depth segmentation module 220, and/or background/foreground extraction module 250 of system 200. As such, means for performing each step of method 700A may include system 200 and, more specifically, foreground modeling module 240. [0081] At step 710, images may be acquired. Each image may include a plurality of pixels, each pixel having an intensity value and a depth value. Referring to system 200 and system 100, each image may be acquired by image acquisition module 210 from image capture module 110, which may be a camera. Each image may be of the same scene. For example, the image capture module may be pointed at the contents of a room. The image capture module may be left stationary such that the scene in the image capture module's field-of-view does not substantially change. Means for performing step 710 may include one or more processors, an image acquisition module, an image capture module, and/or any of the means discussed generally in reference to method 700A. [0082] The images acquired at step 710 may be processed by background/foreground extraction module 250 and/or background modeling module 230. Depth segmentation module 220 may process an image (which may have had pixels identified as corresponding to a background model extracted). The depth segmentation module 220, upon identifying one or more persons, may output the pixels corresponding to the one or more persons to foreground modeling module 240. As such, at step 720, foreground modeling module 240 may receive indications of pixels that are determined to correspond to one or more persons. These pixels may or may not have a background model. Since foreground models are created independently of background models, the existence of one or more background models for a pixel may be irrelevant to the creation of a foreground model for the pixel. Additional information as to how depth segmentation module 220 identifies the presence of a person is detailed later in this document.
Means for performing step 720 may include one or more processors, a depth segmentation module, a foreground modeling module, and/or any of the means discussed generally in reference to method 700A. [0083] At step 730, for each pixel that was received at step 720, a voting array may be created (if one does not already exist) or the voting array may be modified (if a voting array already exists). As previously described in relation to Equation 1, δ represents the depth resolution of the images and R represents the maximum depth range of depth values acquired at step 710. When a pixel is determined to be occupied by a person at a particular depth, the depth may receive a "vote" in the pixel's array at the array element corresponding to the depth. Over time, one or more local maximums may develop within a pixel's voting array (that is, one or more elements within the array that are greater in magnitude than other elements) and one or more local minimums may develop within the array (that is, one or more elements within the array that are smaller in magnitude than other elements). The width, in elements, of local maximums may be determined based on the location of adjacent local minimums. For each of the local maximums for a pixel, a Gaussian mixture model (GMM) may be generated for the pixel's feature vector, having the form (Di, Ii), (VarDi, VarIi). This model may be used as the foreground model for the pixel. A pixel may be restricted to having one foreground model or may have multiple foreground models. [0084] In order to preserve processing power, the arrays for pixels may be populated while a person is present within images being acquired; however, the Gaussian mixture models for individual pixels for foreground models may only be computed by a foreground modeling module when no person is detected within the scene of acquired images. For example, step 710 may be performed continuously, with 30 images per second being captured. For each (or some) of these acquired images, indications may be received by the foreground modeling module of which pixels correspond to a person in the scene. While such indications of pixels are being received, the voting arrays of individual pixels may be updated, but the Gaussian mixture models created using the arrays may not be calculated until pixels that indicate the presence of a person have not been received for a threshold period of time (e.g., one minute). Such an arrangement may prevent the foreground models for pixels from continually being calculated and potentially consuming excessive processing resources. Means for performing step 730 may include one or more processors, a foreground modeling module, and/or any of the means discussed generally in reference to method 700A. [0085] At step 740, the foreground models, which may be Gaussian mixture models, created for individual pixels may be stored. These foreground models may be transmitted to and/or made available to a background/foreground extraction module. By having foreground models for pixels, a person present at the pixel may be less likely to be incorrectly identified as background if a foreground model is available. Typically, a person does not appear at random depths within a scene. Referring to image 300 of FIG. 3, a person may be more likely to be seated on the couch than seated on the table in front of the couch.
Since the GMM for a pixel is created based on depth and intensity data when the pixel is known to be occupied by a person, the GMM can be expected to accurately model future occurrences of a person at the pixel (for example, if a person sits on a couch, it is likely someone else will also sit on the couch in the future). Means for performing step 740 may include one or more processors, one or more (non-transitory) computer-readable storage mediums, a foreground modeling module, and/or any of the means discussed generally in reference to method 700A. [0086] Method 700A may be performed concurrently with method 600 of FIG. 6. For instance, while foreground models are being created for one or more pixels, background models may be created for other pixels. A particular pixel may at one point, when its intensity and/or depth has not varied substantially for a period of time, have a background model created for it, while at another time, when a person is determined to correspond to the pixel, the pixel may have a foreground model created for it. [0087] Creation of background and/or foreground models may be an on-going process. As such, additional background models for a pixel may be created to supplement or replace other background models for that pixel. Likewise, a foreground model for a pixel may be supplemented or replaced with a new foreground model after a period of time. Similarly, background and/or foreground models may be removed from the set of models for a scene. In this way, one or more time-evolving models may be generated and/or maintained. As discussed herein, time-evolving background and/or foreground models may be used to determine a likelihood or probability that a point in an image, for example a depth image, comprises an element in the relevant foreground of the image. [0088] While method 600 focused on the creation of background models for individual pixels and method 700A focused on creating a foreground model for individual pixels, FIG. 7B illustrates an embodiment of a method 700B for creating both background and foreground models for pixels. Method 700B may be performed by a processing device, such as processing module 120 of FIG. 1. As such, means for performing method 700B may include one or more computer systems (which may include one or more processors). Means for performing method 700B may include components of system 100 of FIG. 1. More specifically, steps of method 700B may be performed by foreground modeling module 240, image acquisition module 210, depth segmentation module 220, and/or background/foreground extraction module 250 of system 200. As such, means for performing each step of method 700B may include system 200 and, more specifically, background modeling module 230 and foreground modeling module 240. [0089] At step 750, images may be acquired. Each image may include a plurality of pixels, each pixel having an intensity value and a depth value. Referring to system 200 and system 100, each image may be acquired or received by image acquisition module 210 from image capture module 110, which may be a camera. Each image may be of the same scene. For example, the image capture module may be pointed at the contents of a room and configured to capture images over a period of time. The image capture module may be left stationary such that the scene in the image capture module's field-of-view does not substantially change.
Means for performing step 750 may include one or more processors, an image acquisition module, an image capture module, and/or any of the means discussed generally in reference to method 700B. [0090] A background model may be created at step 760 for a pixel. In one embodiment, if a particular pixel remains constant in intensity and/or depth, within a threshold range, for a threshold period of time across the images acquired during the period of time, a background model may be created at step 760 for the pixel. Whether a background model is created for a particular pixel may be irrespective of whether a background model was previously created for the pixel. The background model may be a Gaussian Mixture Model (GMM) having the form (Di, Ii), (VarDi, VarIi). (Di, Ii) may represent the observed constant depth and intensity of the pixel over the period of time. (VarDi, VarIi) may represent a predetermined amount of variance that is used for each pixel's background model(s) or may represent variances that are calculated based on slight variances in measured depth and measured intensity during the period of time when the pixel remained approximately constant. In some embodiments, a plurality of background models are created at step 760. At least one background model may be created for each pixel in the images in some embodiments. The background models may be indicative of the scene over the period of time. Means for performing step 760 may include one or more processors, a background modeling module, and/or any of the means discussed generally in reference to method 700B. Step 760 may be performed for multiple pixels in the acquired images. [0091] At step 770, a foreground model for a pixel may be created. In some embodiments, a plurality of foreground models are created using the images. A foreground model may be created for each pixel of at least a first subset of the pixels in the images, and/or the foreground models may be indicative of the scene over the period of time. In some embodiments, for some or all pixels acquired at step 750, a voting array may be created (if one does not already exist) or the voting array may be modified (if a voting array already exists). As previously described in relation to Equation 1, δ represents the depth resolution of the images and R represents the maximum depth range of depth values acquired at step 750. When a pixel is determined to be occupied by a person at a particular depth, the depth may receive a "vote" in the pixel's array at the array element corresponding to the depth. Over time, one or more local maximums may develop within a pixel's voting array (that is, one or more elements within the array that are greater in magnitude than other elements) and one or more local minimums may develop within the array (that is, one or more elements within the array that are smaller in magnitude than other elements). The width, in elements, of local maximums may be determined based on the location of adjacent local minimums. For each of the local maximums for a pixel, a Gaussian mixture model (GMM) may be generated for the pixel's feature vector, having the form (Di, Ii), (VarDi, VarIi). This model may be used as the foreground model for the pixel. A pixel may be restricted to having one foreground model or may have multiple foreground models. Step 770 may be performed for multiple pixels in the acquired images.
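As an illustration of steps 760 and 770, the following Python sketch creates a background model from a pixel whose feature vector stayed within a tolerance over the observed period, and derives foreground Gaussian components from the local maximums of a pixel's voting array. The tolerances, the choice of variance, and the function names are assumptions made for the sketch, not values prescribed by this description:

import numpy as np

def make_background_model(depths, intensities, depth_tol=0.05, intensity_tol=5.0):
    # Step 760 sketch: if the pixel's depth and intensity each remained
    # within a threshold range over the period, return a Gaussian component
    # ((mean depth, mean intensity), (variance of each)); otherwise the
    # pixel does not yet qualify for a background model.
    d = np.asarray(depths, dtype=float)
    i = np.asarray(intensities, dtype=float)
    if np.ptp(d) > depth_tol or np.ptp(i) > intensity_tol:
        return None
    return (d.mean(), i.mean()), (d.var(), i.var())

def make_foreground_components(votes, delta):
    # Step 770 sketch: each local maximum of the voting array yields one
    # Gaussian component centered on the depth of that array element.
    components = []
    for k in range(1, len(votes) - 1):
        if votes[k] > votes[k - 1] and votes[k] > votes[k + 1]:
            components.append((k * delta, delta ** 2))  # (mean depth, assumed variance)
    return components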
[0092] In order to preserve processing power, the arrays for pixels may be populated while a person is present within images being acquired; however, in some embodiments, the Gaussian mixture models for individual pixels' foreground models may only be computed by a foreground modeling module when no person is detected within the scene of acquired images. For example, step 750 may be performed continuously, with 30 images per second being captured. For each (or some) of these acquired images, indications may be received by the foreground modeling module of which pixels correspond to a person in the scene. While such indications of pixels are being received, the voting arrays of individual pixels may be updated, but the Gaussian mixture models created using the arrays may not be calculated until indications of pixels corresponding to a person have not been received for a threshold period of time (e.g., one minute), as sketched below. Such an arrangement may prevent the foreground models for pixels from continually being recalculated and potentially consuming excessive processing resources. Means for performing step 770 may include one or more processors, a foreground modeling module, and/or any of the means discussed generally in reference to methods 700A and 700B.
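The deferral described in the preceding paragraph might be organized with control flow along the following lines. This is a hedged skeleton only: the class name, the one-minute idle window, and the stubbed methods are assumptions standing in for the voting-array and Gaussian-fitting logic sketched earlier.

```python
# Skeleton of deferred foreground-model computation: voting arrays are
# updated on every frame in which a person is detected, but the
# Gaussians are recomputed only after the scene has been empty for
# IDLE_SECONDS. Names and the idle window are assumptions.
import time

IDLE_SECONDS = 60.0   # e.g., one minute without a detected person

class DeferredForegroundModeler:
    def __init__(self):
        self.last_person_seen = None
        self.models_stale = False

    def on_frame(self, person_pixels, now=None):
        now = time.monotonic() if now is None else now
        if person_pixels:                     # a person is in this frame
            self.update_voting_arrays(person_pixels)
            self.last_person_seen = now
            self.models_stale = True
        elif (self.models_stale and self.last_person_seen is not None
              and now - self.last_person_seen >= IDLE_SECONDS):
            self.recompute_gaussians()        # scene is idle: cheap time
            self.models_stale = False

    def update_voting_arrays(self, person_pixels):
        pass   # accumulate votes, as in the earlier voting-array sketch

    def recompute_gaussians(self):
        pass   # fit per-pixel Gaussians from the accumulated arrays
```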
[0093] It should be understood that in addition to the steps of the illustrated embodiment of method 700B, other embodiments of method 700B may include additional steps from method 600 of FIG. 6 and/or method 700A of FIG. 7A and/or may include other steps which are not illustrated. [0094] FIG. 8 illustrates an embodiment of a method 800 for modeling a scene using a background and/or a foreground model. Method 800 may involve pixels of an image of a scene being extracted that are determined to correspond to the background (and are unlikely to correspond to a person). Method 800 may be performed by a processing device, such as processing module 120 of FIG. 1. As such, means for performing method 800 may include one or more computer systems (which may include one or more processors and computer-readable storage mediums). Means for performing method 800 may include components of system 100 of FIG. 1. More specifically, steps of method 800 may be performed by foreground modeling module 240, image acquisition module 210, depth segmentation module 220, and/or background/foreground extraction module 250 of system 200. As such, means for performing each step of method 800 may include the modules of system 200. When method 800 is performed, method 600 and/or method 700A or method 700B may have been previously performed for at least some pixels. [0095] At step 810, images may be acquired. Each image may include a plurality of pixels, each pixel having an intensity value and a depth value. Referring to system 200 and system 100, each image may be acquired by image acquisition module 210 from image capture module 110, which may be a camera. Each image may be of the same scene. For example, the image capture module may be pointed at the contents of a room. The image capture module may be left stationary such that the scene in the image capture module's field-of-view does not substantially change. Means for performing step 810 may include one or more processors, an image acquisition module, an image capture module (e.g., a camera), and/or any of the means discussed generally in reference to method 800. [0096] At step 820, each pixel of the image may be compared to one or more background models of the pixel (if available) and one or more foreground models of the pixel (if available). This process may be repeated for each pixel of the image. A pixel may be classified as either background, foreground, or uncertain. As part of step 820, a pixel may first be determined to more likely match a foreground or background model of the pixel. If a type of model for the pixel is not available, the probability of the missing model is taken as zero. Once it is determined whether the pixel more likely matches a background model or a foreground model, the probability of a match to the determined model is compared to a threshold. If the probability exceeds the threshold, the pixel is considered to match the model; if the probability does not exceed the threshold, the pixel is classified as uncertain. Means for performing step 820 may include one or more processors, one or more computer-readable storage mediums, a background/foreground extraction module, and/or any of the means discussed generally in reference to method 800. [0097] At step 830, only pixels that are classified as foreground or uncertain may be output. The output may be to a depth segmentation module. Referring to system 200, background/foreground extraction module 250 may output the foreground and uncertain pixels to depth segmentation module 220. The pixels classified as background may be extracted such that they are not provided to depth segmentation module 220. Means for performing step 830 may include one or more processors, one or more computer-readable storage mediums, a background/foreground extraction module, a depth segmentation module, and/or any of the means discussed generally in reference to method 800. In some embodiments, the pixels that are classified as uncertain may not be output. Thus, in these embodiments, only pixels representative of likely foreground elements may be output. [0098] FIG. 9 illustrates another embodiment of a method 900 for modeling a scene using a background and/or a foreground model. Method 900 may involve pixels of an image of a scene being extracted that are determined to correspond to the background and are unlikely to correspond to a person. Method 900 may be performed by a processing device, such as processing module 120 of FIG. 1. As such, means for performing method 900 may include one or more computer systems (which may include one or more processors and computer-readable storage mediums). Means for performing method 900 may include components of system 100 of FIG. 1. More specifically, steps of method 900 may be performed by image acquisition module 210, foreground modeling module 240, depth segmentation module 220, and/or background/foreground extraction module 250 of system 200. As such, means for performing each step of method 900 may include the modules of system 200. When method 900 is performed, method 600 and/or method 700A or method 700B may have been previously performed for at least some pixels. Method 900 may represent a more detailed embodiment of method 800. [0099] At step 910, images may be acquired. Each image may include a plurality of pixels, each pixel having an intensity value and a depth value. Referring to system 200 and system 100, each image may be acquired by image acquisition module 210 from image capture module 110, which may be a camera. Each image may be of the same scene. For example, the image capture module may be pointed at the contents of a room.
The image capture module may be left stationary such that the scene in the image capture module's field-of-view does not substantially change. Means for performing step 910 may include one or more processors, an image acquisition module, an image capture module (e.g., a camera), and/or any of the means discussed generally in reference to method 900. The image acquired at step 910 may also be provided to a background modeling module for creation of background models that correspond to pixels present across images. [0100] At step 920, for a particular pixel of the image acquired at step 910, it is determined whether a probability of the pixel matching a foreground model (if available) for the pixel is greater than the probability of the pixel matching a background model (if available) for the pixel. Therefore, it may be determined whether P B > P F or P B < P F, where P B is the probability that the pixel corresponds to the background model and P F is the probability that the pixel corresponds to the foreground model. If multiple models of one type are available, such as multiple background models, it may first be evaluated which background model is more likely a match for the pixel, and the probability of the pixel matching that background model may then be compared with the probability of the pixel matching a foreground model. If a particular type of model is not available, the probability of matching that type of model may be taken as zero. [0101] If, at step 920, a pixel is determined to more likely match an available foreground model of the pixel than a background model of the pixel (or no background model is available), method 900 proceeds to step 930. At step 930, the probability of the pixel matching the foreground model of the pixel is compared to a predefined threshold value (T). This threshold value may be preselected and may serve to determine how closely a pixel is required to match the foreground model for the pixel to be considered foreground. If P F exceeds T, the pixel may be categorized as foreground at step 940. If T exceeds P F, the pixel may be categorized as uncertain at step 970. [0102] If, at step 920, a pixel is determined to more likely match an available background model of the pixel than a foreground model of the pixel (or no foreground model is available), method 900 may proceed to step 950. At step 950, the probability of the pixel matching the background model of the pixel is compared to a predefined threshold value (T). This threshold value may be preselected and may serve to determine how closely a pixel is required to match the background model for the pixel to be considered background. The same threshold value may be used as at step 930, or a different predefined threshold value may be used. If P B exceeds T, the pixel may be categorized as background at step 960. If T exceeds P B, the pixel may be categorized as uncertain at step 970. Although T is used to describe the threshold value against which both P F and P B are compared, those of skill in the art will appreciate that P F and P B may be compared against different threshold values. In some embodiments, however, both P F and P B are compared against the same threshold value. A compact sketch of this classification logic follows.
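The decision structure of steps 920 through 970 might be expressed as follows; the threshold value used here and the function name are illustrative assumptions.

```python
# Sketch of the per-pixel test of steps 920-970: choose the more likely
# model, then require its probability to exceed a threshold T.
def classify_pixel(p_background, p_foreground, t=0.7):
    """p_background / p_foreground may be None when no model of that
    type exists for the pixel; a missing model counts as probability 0."""
    p_b = 0.0 if p_background is None else p_background
    p_f = 0.0 if p_foreground is None else p_foreground
    if p_f > p_b:                                         # step 920
        return "foreground" if p_f > t else "uncertain"   # steps 930/940/970
    return "background" if p_b > t else "uncertain"       # steps 950/960/970

# Example: background pixels are extracted; the rest are passed on.
pixels = {"a": (0.9, 0.1), "b": (0.2, 0.8), "c": (None, 0.4)}
passed_on = {}
for name, (p_b, p_f) in pixels.items():
    label = classify_pixel(p_b, p_f)
    if label != "background":            # step 980: extract background
        passed_on[name] = label
print(passed_on)   # {'b': 'foreground', 'c': 'uncertain'}
```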
[0103] At step 980, if the pixel was categorized as either foreground or uncertain, the pixel may be output. The output may be provided to a depth segmentation module for detection of a person, if present, in the image. The output may or may not indicate whether the pixels output are foreground or are uncertain. If the pixel is categorized as background, the pixel is not output. Rather, the pixel is extracted such that it is not output to a depth segmentation module. Steps 920 through 970 may be repeated for each pixel of the image acquired at step 910, such that an image is output at step 980. As such, at step 980, a reduced image may be output that contains fewer pixels than the image acquired at step 910. The image output may contain only foreground and uncertain pixels; thus, static objects in the background of the acquired image may have been removed. Referring to FIG. 3 and FIG. 5, image 300 of FIG. 3 may represent the image acquired at step 910, while image 500 may represent the image output at step 980 with pixels identified as background extracted. The image output at step 980 may contain an indication of which pixels are foreground. This may be useful to limit the search for a control object or input, or to allow these pixels to be provided a higher priority when searching for a person in the output image than the uncertain pixels. In some embodiments, the pixels that are classified as uncertain may not be output at step 980. Thus, in these embodiments, only pixels representative of likely foreground elements may be output at step 980. [0104] FIGS. 5 through 9 were directed to the use of background and foreground models to determine whether individual pixels were likely part of the background (and thus did not need to be further analyzed), part of the foreground, or uncertain (with foreground and uncertain pixels being additionally analyzed). FIGS. 10 through 18 are directed to analyzing the image remaining after the background pixels have been removed, and identifying the locations of one or more hands of one or more persons present within the image. While the embodiments of FIGS. 10 through 15B are directed to images in which one hand of one person is detected and tracked, it should be understood that the embodiments detailed herein may be applied to situations where the image contains multiple persons and/or multiple hands. Certain of those embodiments are explicitly described herein, while other embodiments will be apparent to those of skill in the art based on the materials herein. [0105] FIG. 10A illustrates an embodiment of a depth segmented image 1000A. Depth segmented image 1000A may provide a top view of image 500 based on the depth data present for the pixels of image 500. Image 500 does not contain pixels that were identified by a background/foreground extraction module (such as background/foreground extraction module 250 of FIG. 2) as background. As such, depth segmented image 1000A may include pixels that were classified as foreground or uncertain. Referring to system 200 of FIG. 2, depth segmented image 1000A may be created by depth segmentation module 220 using the image output by background/foreground extraction module 250 that has background pixels removed (or has background pixels designated as background). [0106] Ideally, just pixels corresponding to a person would be classified as foreground or uncertain. However, objects in a scene may be moved or added to the scene, such as by the person. Since the background is based on the depth value and/or intensity value of a pixel remaining unchanged for a significant period of time (e.g., several hours), objects (or entities, such as pets) that have recently entered the scene may cause pixels not associated with a person to be classified as uncertain or foreground.
Accordingly, further processing may be used to determine which foreground and/or uncertain pixels correspond to a person. In image 500, three entities are present that are associated with pixels that were identified as uncertain or foreground: person 330, mug 350, and small object 360. While person 330 is sitting on a couch (as can be seen in the initially received image 300 of FIG. 3), mug 350 and small object 360 are located on a coffee table positioned in front of the couch. As such, the depth values associated with mug 350 and small object 360 can be expected to indicate a smaller distance to the image capture device (e.g., camera). In FIG. 10A, three groups of pixels are present: pixel group 1010A, pixel group 1020, and pixel group 1030. Pixel group 1010A corresponds to person 330, pixel group 1020 corresponds to mug 350, and pixel group 1030 corresponds to small object 360. [0107] Due to mug 350 being a distance in front of person 330, pixel group 1020 is a separate pixel group and is in front of pixel group 1010A. Similarly, due to small object 360 being a distance in front of person 330, pixel group 1030 is a separate pixel group and is in front of pixel group 1010A. Pixel group 1020 and pixel group 1030 may have approximately the same depth values because they are approximately equidistant from the image capture device. Accordingly, from image 500, three distinct groups of pixels can be identified based on depth. The process of identifying these distinct groups of pixels may be referred to as a depth segmentation process. At least some of these pixel groups may be dismissed as not being a person based on size. For instance, pixel groups that are too small or too large may be dismissed as not likely to correspond to a person. Accordingly, a minimum size threshold (and/or a maximum size threshold) for groups of pixels may be predefined and may be stored or may be accessible by the device or component performing the depth segmentation process. [0108] Each group of pixels identified during a depth segmentation process may be analyzed to determine if it qualifies within minimum and/or maximum size threshold constraints. Referring to image 1000A, pixel groups 1010A, 1020, and 1030 may each be analyzed. It should be understood that various pixels of pixel groups 1010A, 1020, and 1030 may not be visible in FIG. 10A because pixels with the same x-axis coordinate and z-axis depth value would appear on top of each other in the top view of FIG. 10A. Determining whether a group of pixels qualifies within minimum and/or maximum size threshold constraints may include using pixels that are part of the pixel groups not visible in FIG. 10A. [0109] Pixel group 1020, corresponding to mug 350, may not exceed a minimum predefined threshold size. The size of a pixel group may be based on the number of pixels within the pixel group. Based on the number of pixels in pixel group 1020, mug 350 may be dismissed as unlikely to correspond to a person. Similarly, based on the number of pixels in pixel group 1030, small object 360 may be dismissed as unlikely to correspond to a person. No additional processing may be performed on pixel groups 1020 and 1030, and these pixel groups may be ignored from further processing or deleted from an image constructed from the pixels. A sketch of this depth segmentation and size filtering follows.
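One plausible realization of this depth segmentation and size filtering is sketched below. The 4-connectivity rule, the depth-gap tolerance, and the size thresholds are assumptions chosen for illustration.

```python
# Sketch of depth segmentation: group foreground/uncertain pixels into
# connected components whose neighbors are close in depth, then drop
# groups outside assumed size thresholds (as with the mug and the
# small object above).
import numpy as np
from collections import deque

def segment_by_depth(depth, valid, depth_gap=0.1):
    """depth: HxW float array; valid: HxW bool mask of foreground or
    uncertain pixels. Returns a list of pixel groups as (row, col) lists."""
    h, w = depth.shape
    seen = np.zeros((h, w), dtype=bool)
    groups = []
    for r in range(h):
        for c in range(w):
            if not valid[r, c] or seen[r, c]:
                continue
            group, queue = [], deque([(r, c)])
            seen[r, c] = True
            while queue:
                y, x = queue.popleft()
                group.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and valid[ny, nx]
                            and not seen[ny, nx]
                            and abs(depth[ny, nx] - depth[y, x]) < depth_gap):
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            groups.append(group)
    return groups

def filter_by_size(groups, min_pixels=500, max_pixels=50000):
    """Dismiss groups too small or too large to be a person."""
    return [g for g in groups if min_pixels <= len(g) <= max_pixels]
```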
[0110] Pixel group 1010A, which includes pixel group 1010A-1 (the person's torso and head) connected with pixel group 1010A-2 (the person's hand) via the person's arm, may exceed the minimum predefined size threshold (and may meet other qualifications, such as being less than a maximum predefined size threshold). Accordingly, pixel group 1010A may be considered eligible to correspond to a person. While pixel groups 1020 and 1030 were eliminated based on threshold size conditions, pixel group 1010A may be maintained as a candidate group for corresponding to a person based on the threshold conditions. While not illustrated in FIG. 10A, a plurality of pixel groups may be maintained, for example when multiple people are present in the image or multiple items that are sized similar to a person are present. [0111] In some embodiments, additionally or alternatively to minimum and/or maximum size threshold conditions, dimensions of pixel groups along the x-axis, y-axis, and/or z-axis may be used to disqualify pixel groups as potentially corresponding to a person. In some embodiments, a minimum and/or maximum distance from the image capture device may be used to disqualify pixel groups. For instance, if a group of pixels is identified as being beyond a maximum threshold distance from the image capture device, it may be considered unlikely that the entity to which the group of pixels corresponds is a person likely attempting to interact with the detection system; as such, such pixel groups may be disqualified. Similarly, if a group of pixels is identified as closer than a minimum threshold distance from the image capture device, the group of pixels may be disqualified because a person may be unlikely to be positioned so close to the image capture device. For example, a person may be likely to be sitting on a couch, but not standing immediately in front of the image capture device. It should be understood that variations on these thresholds may be implemented; for example, if a portion of a group of pixels exceeds the minimum or maximum threshold, the group of pixels may be disqualified. One or more thresholds may be user-defined. For example, if a user knows his couch is 10 feet from the television and the user always sits on his couch when using the television, the user may set a minimum threshold of 8 feet such that a person walking in front of the couch is disqualified and cannot provide input. Continuing with the same example, the user may want to specify a maximum distance of 12 feet, such that a person walking in the same room behind the couch is disqualified and cannot provide input. In some embodiments, one or more thresholds are learned, for example based on data acquired over time, or one or more thresholds could be set based on an initial configuration, for example based on an image captured of an empty room during a calibration procedure. [0112] When a person's hand is held in front of the person's body, such as to perform a gesture, the person's hand may occlude some or all of the person's arm. Accordingly, the person's hand may appear as a separate pixel group from the person's head, shoulders, and torso. FIG. 10B illustrates an embodiment of a depth segmented image 1000B wherein a person's hand occludes at least a portion of the person's arm, resulting in the person being associated with two pixel groups: pixel group 1010B-1 and pixel group 1010B-2.
Image 1000B may also represent a top view of image 500 using the depth data present in the pixels of image 500, similar to image 1000A, except that at least a portion of the person's arm is occluded from the image capture device by the person's hand. [0113] In order to reduce or eliminate the occurrences of a person's extended hand occluding the person's arm in a captured image (and showing that the person's hand is connected with the person's body), the image capture device (e.g., camera) may be placed at an angle to the scene such that a person present in the scene will be less likely to occlude the person's arm with their hand while performing a gesture. For example, if a person typically sits on a couch facing a television, the image capture device may be above the television and/or off to a side of the television, such that a gesture made by the person in the direction of the television is less likely to occlude the person's arm from the image capture device. [0114] In some embodiments, a history of pixel groups from previous images may be used to determine if separate pixel groups should be treated as part of a single pixel group (referred to as a compound pixel group) because the pixel groups likely correspond to the same object. Referring to FIG. 10B, pixel group 1010B-2 corresponds to a person's hand and is a separate pixel group from pixel group 1010B-1, which corresponds to the person's shoulders, head, and torso. FIG. 10B may represent a depth segmentation image created some time after the depth segmentation image of FIG. 10A. In FIG. 10A, pixel group 1010A is a single pixel group, because the person's arm is not occluded from the image capture device. However, in FIG. 10B, the person's arm has become occluded by the person's hand. Based on a stored history of pixel groups, it may be determined that both pixel group 1010B-2 and pixel group 1010B-1 should be treated as a compound pixel group corresponding to the same pixel group because these pixel groups were previously determined to be part of a single pixel group (e.g., pixel group 1010A of FIG. 10A). Determining that two or more pixel groups should be treated as a compound pixel group may be based on location, size, shape, and/or movement of the pixel groups. Distance may also be used to determine if two or more pixel groups should be treated as a compound pixel group. For example, a second pixel group close to a first pixel group of a user may be likely to be part of the user. A pixel group directly in front of a pixel group associated with a user may be considered likely to represent part of the user. [0115] Following the size threshold analysis, only pixel group 1010A, or pixel groups 1010B-1 and 1010B-2 (which may be treated as a compound pixel group), may remain for analysis. FIG. 11 illustrates an embodiment of image 1100, which represents only pixel group 1010A. As such, image 1100 represents image 500 of FIG. 5 with the pixel groups corresponding to mug 350 (pixel group 1020) and small object 360 (pixel group 1030) removed. The only pixels present in FIG. 11 are the pixels corresponding to pixel group 1010A. It should be understood that in other embodiments more than one pixel group may qualify under a threshold analysis (e.g., an image of a scene with multiple people present). As such, an image created based on the qualifying pixel groups may contain multiple entities. A sketch of the history-based merging of split pixel groups follows.
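The history-based treatment of split groups might look like the following sketch; the centroid-distance and size-similarity tolerances are assumptions, and matching on time, shape, or movement, as described above, could be added in the same way.

```python
# Sketch of forming a compound pixel group: if two current groups
# together resemble a recently tracked group (by centroid and size),
# treat them as one group. Tolerances are assumptions.
import numpy as np

def centroid(group):
    """group: list of (row, col) pixels."""
    return np.mean(np.asarray(group, dtype=float), axis=0)

def merge_with_history(groups, previous_group, dist_tol=30.0, size_tol=0.3):
    """Return groups with one pair merged into a compound group when the
    pair jointly matches previous_group; otherwise return groups as-is."""
    prev_c, prev_n = centroid(previous_group), len(previous_group)
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            combined = groups[i] + groups[j]
            close = np.linalg.norm(centroid(combined) - prev_c) < dist_tol
            similar = abs(len(combined) - prev_n) / prev_n < size_tol
            if close and similar:
                rest = [g for k, g in enumerate(groups) if k not in (i, j)]
                return [combined] + rest
    return groups
```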
[0116] For each pixel group present in image 1100, a principal component analysis (PCA) may be conducted. In the illustrated embodiment, since only one pixel group is present, the PCA may only be performed once. A PCA may involve the use of a set of training observations to determine if a pixel group likely corresponds to a person. Previously, a large number (e.g., tens, hundreds, thousands, etc.) of images of people's upper bodies may have been captured. Each such sample may be converted into a binary silhouette and normalized in a fixed direction. These samples may include samples in which the upper body (e.g., head and shoulders) of the persons is rotated along the x-axis, y-axis, and/or z-axis. This may be useful because a person in the scene may not have their head and shoulders directly facing the image capture device, such as a person lying on a couch or sitting or standing at an angle to the image capture device. Based on the samples, a PCA is conducted to compute the covariance matrix of all the samples. The model created may consist of the N largest eigenvectors of the covariance matrix. In some embodiments, the 7 largest vectors (also referred to as principal components) may be used for the PCA of pixel groups in an image being analyzed. Accordingly, the principal components may be predetermined and may be stored on the system performing the analysis. It should be understood that greater or fewer vectors may also be used for the model. The principal components may be used in conducting a PCA on each remaining pixel group to determine if a pixel group likely corresponds to a person. Besides conducting a PCA, other techniques may be used, such as a Kullback-Leibler divergence (KLD). [0117] Pixel groups on which a PCA is conducted that are determined to not contain a head and shoulders may be disqualified as candidates for corresponding to a person. Referring to FIG. 11, a PCA of the pixel group in image 1100 is analyzed using the predetermined principal components. The pixel group of image 1100 qualifies because a head and shoulder combination is detected using the PCA, as highlighted by head/shoulder 1110. At this point, it has been determined that each pixel of the pixel group of image 1100 corresponds to a person. Accordingly, an indication of each pixel, which may include the pixel's depth and/or intensity value, may be output to a foreground modeling module. Referring to FIG. 2, depth segmentation module 220 may output an indication of each pixel of the pixel group of image 1100 to foreground modeling module 240. This may occur for each image in which a group of pixels is determined to correspond to a person according to a head and shoulders PCA. As previously detailed in this document, foreground modeling module 240 may use the pixels provided by depth segmentation module 220 to create foreground models for individual pixels, which are provided to background/foreground extraction module 250. A minimal sketch of such a PCA-based test follows.
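The following hedged sketch shows one way such a test could work: candidate silhouettes are projected onto precomputed principal components (the largest eigenvectors of the training covariance matrix), and a low reconstruction error is taken to suggest a head-and-shoulders shape. The training data, the component count, and the error threshold are assumptions.

```python
# Sketch of a PCA-based head-and-shoulders test. The silhouettes are
# assumed to be flattened, normalized binary vectors of equal length.
import numpy as np

def fit_principal_components(training_silhouettes, n_components=7):
    """training_silhouettes: (num_samples, num_features) array.
    Returns the sample mean and the n_components largest eigenvectors."""
    x = np.asarray(training_silhouettes, dtype=float)
    mean = x.mean(axis=0)
    cov = np.cov((x - mean).T)                   # feature covariance
    _, vecs = np.linalg.eigh(cov)                # ascending eigenvalues
    return mean, vecs[:, -n_components:]         # N largest eigenvectors

def looks_like_head_and_shoulders(silhouette, mean, components, max_err=0.1):
    """Project onto the components and measure reconstruction error; a
    small relative error suggests the shape lies in the trained subspace."""
    v = np.asarray(silhouette, dtype=float) - mean
    reconstruction = components @ (components.T @ v)
    err = np.linalg.norm(v - reconstruction) / max(np.linalg.norm(v), 1e-9)
    return err < max_err
```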
[0118] For each group of pixels that is determined to correspond to a person (such as following a PCA), a plane may be fit to the group of pixels. This plane may be used, as detailed later in this document, for determining the location of a hand of the person corresponding to the group of pixels. Referring to FIG. 11, a plane may be fit to the group of pixels in image 1100. This plane may be aligned with the torso, shoulders, and head of the group of pixels corresponding to the person. As illustrated in FIG. 11, the group of pixels may be associated with a person that is extending his or her hand, such as to perform a gesture. If the average depth of pixels within the group of pixels is used to position the plane, the extension of the person's hand may influence the position of the plane away from the person's torso, shoulders, and head. [0119] To position the plane while limiting the effect of a possible extended hand and arm (as is present in image 1100), a plane may initially be fit to the entire group of pixels. This plane may be oriented in three-dimensional space. For instance, as a simple example, a person sitting may slouch; thus, along the y-axis the plane may extend away from the image capture device. As another example, a person sitting or standing at an angle to the image capture device may result in the plane not being parallel to the x-axis. To determine the initial position of the plane, the x, y, and z (depth) coordinates of the pixels of the pixel group may be used. [0120] The plane may be fit to the group of pixels to initially minimize a total amount of fitting error for the pixels of the group of pixels. The fitting error for a pixel is a function of the distance of the three-dimensional point associated with the pixel to the plane. [0121] The position of the plane may then be refined. Based on a factor such as the mean amount of fitting error for all the pixels of the pixel group, a threshold fitting error value may be calculated. Since the initial location and/or orientation of the plane may be affected by an outstretched hand and arm, the plane may be located in front of the person's torso, head, and shoulders. However, since the person's hand is smaller than the torso, head, and shoulders (combined), it may be assumed the plane will be closer to the person's torso, head, and shoulders than the person's hand. Accordingly, pixels with a fitting error greater than a threshold fitting error value may be eliminated from use in determining a refined position and orientation of the plane. Since the person's hand and arm likely correspond to at least some of the pixels with farther coordinates from the plane, some or all of these pixels will likely be eliminated from use in calculating the refined position of the plane. The location and orientation of the plane may then be recalculated and best fit to the coordinates of the pixels that were not eliminated. This new position/orientation of the plane may be used as the final position of the plane, or the process may be repeated additional times (with additional pixels being eliminated) to further refine the position and/or orientation of the plane. In some embodiments, only the initial estimate of the plane position and/or orientation is used. An illustrative fit-and-refine sketch follows.
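An illustrative fit-and-refine pass is sketched below: the plane is fit to all points by least squares, points whose fitting error exceeds a multiple of the mean error are dropped, and the plane is refit. The rejection factor and iteration count are assumptions.

```python
# Sketch of plane fitting with outlier rejection. An outstretched hand
# and arm, lying far from the torso plane, are the points most likely
# to be rejected between iterations.
import numpy as np

def fit_plane(points):
    """points: (n, 3) array of (x, y, z). Returns (unit_normal, d) for
    the plane defined by unit_normal . p = d, fit through the centroid."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - center)
    normal = vt[-1]                       # direction of least variance
    return normal, float(normal @ center)

def fit_plane_refined(points, reject_factor=1.5, iterations=1):
    pts = np.asarray(points, dtype=float)
    normal, d = fit_plane(pts)
    for _ in range(iterations):
        err = np.abs(pts @ normal - d)    # point-to-plane distances
        keep = err <= reject_factor * err.mean()
        if keep.sum() < 3:                # too few inliers to refit
            break
        pts = pts[keep]
        normal, d = fit_plane(pts)
    return normal, d
```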
[0122] FIG. 12 illustrates an embodiment of an image 1200 having a plane 1210 fit to the coordinates of pixels of a group of pixels. Image 1200 may represent a point cloud representation of the pixels of image 1100 of FIG. 11 with plane 1210 fit to the pixel group. As such, the coordinates of the pixels may correspond to pixels that have previously been determined to correspond to a person via a PCA. Plane 1210 has been fit to the coordinates of the pixels as detailed above. In FIG. 12, it can be seen that the person's hand extends in front of the person's body. The plane may be initially fit to the person's torso, head, hand, and arm; then, following refinement of the position and/or orientation of the plane, the pixel coordinates associated with some or all of the person's hand and/or arm may be eliminated from use in determining the position and/or orientation of the plane, as detailed above. The position and/or orientation of the plane may be stored by a system, for example in a memory of system 100 and/or system 200. Referring to system 200 of FIG. 2, the fitting of the plane may be performed by depth segmentation module 220. [0123] Following a PCA being used to determine that a group of pixels corresponds to a person and a plane being fit to the group of pixels, a location of a hand of the person may be determined. Referring to FIG. 2, determination of the location of the hand may be performed by hand detection/tracking module 260. Depth segmentation module 220 may pass data to hand detection/tracking module 260, including an indication of one or more planes and/or pixels corresponding to the pixel groups determined to correspond to person(s). Hand detection/tracking module 260 may use these inputs to output two-dimensional and/or three-dimensional coordinates of one or more hand locations. Such coordinates may be output for each image received by image acquisition module 210 in which at least one person (and a hand of the person) is identified. [0124] Hand detection/tracking module 260 may analyze the one or more pixel groups received from depth segmentation module 220. A reference point for each pixel group may be established. This reference point may be the "center-of-gravity" of the pixel group. As such, an average coordinate may be calculated based on the x, y, and z coordinates of each pixel of the pixel group. Once the location of a hand has been determined, another technique may be employed for tracking the hand. In some embodiments, hand detection/tracking module 260 may repeat the detection process in order to track the position of the hand. Coordinates output for the hand position over a period of time may be used to determine if the hand has performed a gesture, such as a swipe, circle, etc. [0125] Next, a number of pixels that are local distance maximums from the reference point within each group of pixels may be determined. FIG. 13 illustrates an embodiment of an image 1300 illustrating a center-of-gravity 1310 and local distance maximum pixels 1320 with respect to image 1100. A local distance maximum pixel may be a pixel that, based on its three-dimensional coordinates, is farther from the center-of-gravity than the pixel's neighbors that are also part of the pixel group. In the illustrated embodiment of FIG. 13, at least some local distance maximums of local distance maximum pixels 1320 are illustrated from center-of-gravity 1310. Each local distance maximum pixel 1320 is illustrated in combination with an imaginary dotted line from center-of-gravity 1310 to show the distance from center-of-gravity 1310. Each of local distance maximum pixels 1320 may be treated as a candidate for representing the person's hand. Distance may be calculated based on the three-dimensional coordinates of a pixel and center-of-gravity 1310. A sketch of this reference point and local maximum computation follows.
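A sketch of the reference point and local-distance-maximum computation follows, together with the plane-based candidate filter discussed next; the neighborhood definition and the plane-distance threshold are assumptions.

```python
# Sketch of finding hand candidates: compute the group's
# center-of-gravity, keep pixels farther from it than all in-group
# neighbors, and dismiss candidates near (or behind) the torso plane.
import numpy as np

def local_distance_maxima(coords):
    """coords: dict mapping (row, col) -> (x, y, z) for one pixel group.
    Returns pixels that are local maxima in distance from the center."""
    center = np.mean(list(coords.values()), axis=0)    # center-of-gravity
    dist = {p: float(np.linalg.norm(np.subtract(xyz, center)))
            for p, xyz in coords.items()}
    maxima = []
    for (r, c), d in dist.items():
        neighbors = [dist[(r + dy, c + dx)]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy or dx) and (r + dy, c + dx) in dist]
        if neighbors and d >= max(neighbors):
            maxima.append((r, c))
    return maxima

def drop_near_plane(maxima, coords, normal, d, min_dist=0.15):
    """Dismiss candidates within min_dist of the torso plane; an
    extended hand is expected to lie well in front of the plane."""
    return [p for p in maxima
            if abs(float(np.dot(coords[p], normal)) - d) > min_dist]
```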
[0126] As can be seen in FIG. 13, all but two of local distance maximum pixels 1320 do not represent a hand location. As such, ideally, each of the local distance maximum pixels that do not correspond to the person's hand location is eliminated as a candidate and ignored from additional processing. In order to determine which local distance maximum pixels of local distance maximum pixels 1320 should be ignored, the previously defined plane 1210 of FIG. 12 may be used. As previously noted, plane 1210 is expected to be at least approximately aligned with the head, shoulders, and torso of the person. The local distance maximum pixels of local distance maximum pixels 1320 within a threshold distance of plane 1210 may be ignored from being considered as candidates for being a hand. In some embodiments, pixels behind the plane may be ignored. These local distance maximum pixels are likely due to the person's shoulders, head, and torso, not a hand of the person. Referring to the plane of FIG. 12 and local distance maximum pixels 1320 of FIG. 13, multiple local distance maximum pixels are likely within the threshold distance of the plane and can be eliminated as potential candidates for being a hand, including: local distance maximum pixel 1320-1, local distance maximum pixel 1320-2, local distance maximum pixel 1320-3, local distance maximum pixel 1320-4, local distance maximum pixel 1320-5, local distance maximum pixel 1320-6, local distance maximum pixel 1320-7, local distance maximum pixel 1320-8, and local distance maximum pixel 1320-11. Each of these local distance maximum pixels corresponds to the head of the person, shoulders of the person, torso of the person, or possibly a portion of the couch deformed by the person sitting down. Image 1300 of FIG. 13 may include depth information. As such, local distance maximum pixels 1320 may be in three dimensions. Accordingly, local distance maximum pixels 1320-9 and 1320-10 may extend away from the plane a distance along the z-axis. [0127] If a person is performing a gesture, the person's hand is likely extended a distance in front of the person, and thus would be a greater distance from the plane than the person's head, shoulders, or parts of the person's torso, as illustrated by the person's hand corresponding to local distance maximum pixels 1320-9 and 1320-10. The threshold distance from the plane that is used to determine whether a local distance maximum pixel should be dismissed as a candidate for corresponding to a hand may be predefined. Following this application of the plane, at least some of the local distance maximum pixels may be dismissed as candidates for representing a hand of the person. [0128] For the remaining candidates, such as local distance maximum pixels 1320-9 and 1320-10, a region growing analysis may be conducted. To do this, a window (e.g., a number of pixels in each direction) around each remaining candidate local distance maximum pixel may be analyzed. Within the window, a depth variation for each pixel in comparison to its neighboring pixels may be calculated. A pixel within the window that has a small (e.g., the smallest) depth variation from other pixels within the window or its direct neighbors may be designated as a seed pixel. As such, a single seed pixel may be designated within a window around each remaining candidate local distance maximum. The seed pixel may be required to be part of the pixel group. [0129] From a seed pixel selected for each remaining local distance maximum pixel, a region growing analysis may be conducted. Pixels bordering the seed pixel may be analyzed on the basis of depth.
If a pixel bordering the seed pixel is within a depth threshold of the seed pixel's depth (either closer to or farther from the image capture device), this pixel may be added to a pixel "blob" associated with the seed pixel. Pixels that border the pixel added to the blob may in turn be analyzed according to the depth threshold of the seed pixel's depth to determine if these pixels should be added to the pixel blob. If a pixel is outside the depth threshold based on the seed pixel, this pixel may not be added to the pixel blob and its neighboring pixels may not be analyzed. Rather than initially only comparing the depth of directly neighboring pixels to the seed pixel, a grid-based neighborhood of the seed pixel may be used, such as pixels in a five-by-five grid around the seed pixel. [0130] The pixel blob may continue to be grown until either a maximum permitted size of the blob (e.g., a maximum number of pixels) is reached or the blob is completely surrounded by a depth discontinuity that exceeds the depth threshold established based on the seed pixel. Such a pixel blob may be created using a seed pixel for each local distance maximum pixel that was not previously eliminated as a candidate for being a person's hand. After a pixel blob has been grown, the pixel blob may contain multiple local distance maximum pixels. For instance, referring to FIG. 13, a blob grown based on local distance maximum pixel 1320-9 may also contain local distance maximum pixel 1320-10. In such instances, a single blob may be used for multiple local distance maximum pixels. This may be especially useful if multiple local distance maximum pixels represent multiple fingers of a person's hand. A sketch of this seed selection and region growing follows.
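The seed selection and region growing just described might be sketched as follows; the window size, depth tolerance, and maximum blob size are assumptions.

```python
# Sketch of seed selection and depth-bounded region growing. depth is
# a dict mapping (row, col) -> depth; group is a set of (row, col).
import numpy as np
from collections import deque

def pick_seed(depth, group, candidate, half_window=2):
    """Within a (2*half_window+1)^2 window around the candidate, pick
    the in-group pixel whose depth varies least from its 4-neighbors."""
    r0, c0 = candidate
    best, best_var = candidate, float("inf")
    for r in range(r0 - half_window, r0 + half_window + 1):
        for c in range(c0 - half_window, c0 + half_window + 1):
            if (r, c) not in group:
                continue
            nbrs = [depth[p] for p in ((r + 1, c), (r - 1, c),
                                       (r, c + 1), (r, c - 1)) if p in group]
            if nbrs:
                var = float(np.var([depth[(r, c)]] + nbrs))
                if var < best_var:
                    best, best_var = (r, c), var
    return best

def grow_blob(depth, group, seed, depth_tol=0.08, max_pixels=4000):
    """Grow a blob from the seed while depth stays within depth_tol of
    the seed's depth; stop at depth discontinuities or max_pixels."""
    blob, queue, seen = [], deque([seed]), {seed}
    seed_depth = depth[seed]
    while queue and len(blob) < max_pixels:
        r, c = queue.popleft()
        blob.append((r, c))
        for p in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (p in group and p not in seen
                    and abs(depth[p] - seed_depth) <= depth_tol):
                seen.add(p)
                queue.append(p)
    return blob
```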
[0131] Referring to FIG. 13, at this point a single pixel blob is present that contains local distance maximum pixels 1320-9 and 1320-10. This pixel blob is then analyzed to determine if it is likely to represent a person's hand. In other embodiments, if multiple pixel blobs were created based on local distance maximum pixels, each of these pixel blobs may be analyzed to determine if they are likely to represent a hand. Analysis of determining whether a pixel blob based on a local distance maximum likely represents a hand is detailed in accordance with method 1700 of FIG. 17. [0132] In system 200 of FIG. 2, depth segmentation module 220 receives an image that may have one or more pixels removed that were determined to be background. Depth segmentation module 220 and hand detection/tracking module 260 may include multiple components that may include software, hardware, and/or firmware. FIG. 14 illustrates an embodiment of a system 1400 that performs depth segmentation and hand detection/tracking functions. System 1400 may represent a more detailed embodiment of depth segmentation module 220 and hand detection/tracking module 260. It should be understood that in other embodiments the modules of system 200 and system 1400 may be divided and/or combined differently. [0133] System 1400 may include: depth projection module 1410, connected component detection module 1420, principal component analysis (PCA) module 1430, plane positioning and orientation module 1440, reference point determination module 1450, local distance maximum analysis module 1460, seed extraction and region growing module 1470, and hand detection and location module 1480. It should be understood that these modules may be combined into fewer modules or divided into a greater number of modules in other embodiments. Further, the distinction between which modules are considered part of depth segmentation module 220 and which modules are considered part of hand detection/tracking module 260 may be arbitrary. Each module may be implemented using software, firmware, and/or hardware. For example, the functions of each module may be implemented using a computerized device. An exemplary computer system 1900 is presented in FIG. 19. [0134] Depth projection module 1410 of depth segmentation module 220 may receive an image from background/foreground extraction module 250. This received image may have one or more pixels removed that were determined by background/foreground extraction module 250 to correspond to the background of a scene. If background models are available for a significant number of pixels, a large percentage of pixels of the image may be classified as background and ignored from further processing by system 1400. Each pixel present in the image received by depth projection module 1410 may have been categorized by background/foreground extraction module 250 as either a foreground pixel or an uncertain pixel. Depth projection module 1410, using the depth information associated with each pixel present, may identify various pixel groups that are likely to correspond to a particular object. [0135] If the image capture device that captured the image has its view partially occluded, an object may be identified as multiple pixel groups by depth projection module 1410. Connected component detection module 1420 may be used to determine that separate pixel groups identified by depth projection module 1410 should be considered part of the same pixel group (called a compound pixel group). A common situation where this may occur is if a person's hand is extended generally toward the image capture device, occluding at least a portion of the person's arm, such that depth projection module 1410 identified separate pixel groups for the person's hand and the person's head, shoulders, and/or torso. Connected component detection module 1420 may determine if multiple pixel groups identified by depth projection module 1410 should be treated as a compound pixel group based on a history of pixel groups maintained from previously captured images. For example, referring to FIG. 10A, pixel group 1010A is detected as a single pixel group because the person's arm is not fully occluded from the image capture device. If image 1000B of FIG. 10B was captured at a later time, pixel groups 1010B-1 and 1010B-2 may be considered to be part of a compound pixel group because pixel groups 1010B-1 and 1010B-2 are similar to pixel group 1010A based on time (e.g., within a certain number of captured images), location (e.g., similar coordinates), depth, size, and/or shape. [0136] For each pixel group (including compound pixel groups), a threshold size analysis may be performed by pixel group size threshold module 1425 to determine if the pixel group is greater than a minimum size threshold and/or smaller than a maximum size threshold. Pixel groups that do not meet the threshold size qualifications may be discarded from further analysis by pixel group size threshold module 1425. Other pixel groups may be passed to PCA module 1430. [0137] PCA module 1430 may perform a PCA on each pixel group to identify pixel groups that include a head and shoulders. Only pixel groups (and compound pixel groups) that are determined to contain a head and shoulders may be passed to plane positioning and orientation module 1440.
Besides a PCA being performed, some other technique may be used to determine if a pixel group likely corresponds to a person. [0138] Plane positioning and orientation module 1440 may fit a plane to each pixel group (and compound pixel group) received by plane positioning and orientation module 1440. A plane may be positioned and oriented based on the location and depth of each pixel of a pixel group. The plane may be fit to the group of pixels to initially minimize a total amount of fitting error of the pixels of the group of pixels. The fitting error for a pixel is a function of the shortest distance from the plane to the three-dimensional coordinate of the pixel. [0139] The position of the plane may then be refined. Based on a factor such as the mean amount of fitting error for all the pixels of the pixel group, a threshold fitting error value may be calculated. Since the initial location and/or orientation of the plane may be affected by an outstretched hand and arm, the plane may be located in front of the person's torso, head, and shoulders. However, since the person's hand is smaller than the torso, head, and shoulders (combined), it can be assumed the plane may be closer to the person's torso, head, and shoulders than the person's hand, because the person's hand will have less of an effect on the fitting error due to its size compared to the person's head, shoulders, and torso. Pixels with a fitting error greater than a threshold fitting error value may be eliminated from use in determining a refined position and orientation of the plane. Since the person's hand and arm likely correspond to at least some of the pixels with coordinates outside the threshold distance from the plane, these pixels will likely be eliminated from use in calculating a refined position of the plane. The location and orientation of the plane may then be recalculated and best fit to the coordinates of the pixels that were not eliminated. This new position of the plane may be used as the final position of the plane. This process may be repeated additional times by plane positioning and orientation module 1440 to further refine the location of the plane. [0140] Once a plane has been positioned for each pixel group (and compound pixel group), reference point determination module 1450 may be used to determine a reference point for the group of pixels. This may represent the center point of the group of pixels in three-dimensional coordinates, referred to as a center-of-gravity. [0141] Local distance maximum analysis module 1460 may identify pixels within the pixel group (or compound pixel group) that represent a local distance maximum from the determined reference point. Each of these local distance maximum pixels may be used as a candidate for representing a person's hand. For a pixel to be a local distance maximum, the pixel may be farther away from the reference point than neighboring pixels within the pixel group. The distances between pixels and the reference point may be determined in three dimensions. Local distance maximum analysis module 1460 may also dismiss certain local distance maximum pixels from being candidates for corresponding to a hand based on proximity to the plane or location behind the plane with respect to the image capture device. The plane's orientation and location may have been previously determined by plane positioning and orientation module 1440.
Pixels that are identified as local distance maximums but are within a threshold distance of the plane or behind the plane may be dismissed as candidates for representing a person's hand. [0142] Seed extraction and region growing module 1470 may be used to identify a person's hand/arm from the remaining candidates. A window (e.g., a number of pixels in each direction) around each remaining candidate local distance maximum within the pixel group may be analyzed to determine a seed pixel. Within the window, a depth variation for each pixel may be calculated. A pixel of the pixel group within the window that has a small (e.g., the smallest) depth variation from neighboring pixels within the window may be designated as the seed pixel. This seed pixel may be used for a region growing analysis. [0143] From each seed pixel selected for each remaining local distance maximum pixel, the region growing analysis may be conducted. Pixels bordering or in the neighborhood of the seed pixel may be analyzed on the basis of depth. Intensity for each pixel may be ignored because pixels' intensity values may tend to be noisier than pixels' depth values. If the depth value of a pixel bordering the seed pixel is within a threshold distance of the seed pixel's depth, this pixel may be added to a pixel blob associated with the seed pixel. Pixels that border the added pixel may in turn be analyzed to determine if these pixels should be added to the pixel blob. If a pixel's coordinates are outside the depth threshold established based on the seed pixel, this pixel may not be added to the pixel blob and its neighboring pixels may not be analyzed. Rather than initially only comparing the depth of directly neighboring pixels to the seed pixel, a grid-based neighborhood may be used, such as pixels in a five-by-five grid around the seed pixel and/or each pixel added to the pixel blob. [0144] Each pixel blob created by seed extraction and region growing module 1470 may be analyzed to determine if the pixel blob likely represents a hand (or hand/arm combination). A pixel blob may be determined to represent a person's hand (hand/arm) in a plurality of ways. For example, if the pixel blob represents an elongated object (e.g., longer in one direction than the other by at least a certain ratio) and, possibly, one end of the elongated object is determined to be open (not connected to another object) and one end of the elongated object is determined to be closed (connected to another object), the pixel blob may be determined to represent a person's hand and arm. As another example, if the pixel blob is determined likely to correspond to a previous pixel blob identified as a hand or hand/arm combination based on location, shape, and/or time, the pixel blob may be determined to correspond to a hand. Pixel blobs that are not identified as a hand or hand/arm combination, for example based on being an elongated object or likely representing a previously detected hand or hand/arm combination, may be dismissed as candidates for representing a hand. Pixel blobs may also be filtered based on threshold blob sizes. In some embodiments, a model of a hand may be used to determine if a blob corresponds to a hand. Other techniques are also possible. [0145] Coordinate calculation and output module 1490 may determine a set of two-dimensional and/or three-dimensional coordinates to be output based on the one or more pixel blobs determined to correspond to a person's hand or hand/arm combination by hand detection and location module 1480. Coordinates for a pixel blob determined to contain a person's hand may be determined based on a weighted average of the pixels of the pixel blob. The closer a pixel of the pixel blob is to the image capture device (that is, the smaller the depth value of the pixel), the greater the weight given to the pixel. The coordinates based on the weighted average may be output to another component, module, device, or system. For example, these coordinates may be used for determining a gesture being performed by a person's hand. In some embodiments, a bounding box surrounding the blob and/or hand or a portion thereof may be output, instead of or in addition to the coordinates, based on the pixel blob. A sketch of such a weighted-average coordinate computation follows.
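The weighted average described above might be computed as in the following sketch; the inverse-depth weighting shown is one plausible choice consistent with giving nearer pixels greater weight, not necessarily the exact formula contemplated here.

```python
# Sketch of hand-coordinate output: average the blob's coordinates,
# weighting pixels nearer to the camera (smaller depth) more heavily.
import numpy as np

def hand_coordinates(blob_xyz):
    """blob_xyz: (n, 3) array of (x, y, depth) for one hand blob.
    Returns a weighted-average coordinate for the hand."""
    pts = np.asarray(blob_xyz, dtype=float)
    weights = 1.0 / np.maximum(pts[:, 2], 1e-6)   # nearer -> heavier
    return (pts * weights[:, None]).sum(axis=0) / weights.sum()

print(hand_coordinates([[100, 120, 1.4], [102, 118, 1.5], [98, 121, 1.6]]))
```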
[0146] Various methods may be used to perform the analysis described in relation to FIGS. 10A through 13. System 100 of FIG. 1, system 200 of FIG. 2, and/or system 1400 of FIG. 14 may be used to perform various methods. FIG. 15A illustrates an embodiment of a method 1500A for determining a location of a person's hand. Method 1500A may be performed using system 100, system 200, system 1400, or some other system that is configured to capture images of a scene, locate a person's hand, and output coordinates of the person's hand. Method 1500A may be performed using a computerized device, such as computer system 1900 of FIG. 19. Various steps of method 1500A may be implemented using software, hardware, and/or firmware. Means for performing method 1500A may include computerized devices, components of system 100, components of system 200, and/or components of system 1400. [0147] At step 1510, a group of pixels in an image of a scene may be identified as a person. The image of the scene used at step 1510 may have had one or more pixels removed. The image of the scene used may be the image output from method 800 of FIG. 8 at step 830 or method 900 of FIG. 9 at step 980. The pixels that were removed may have been identified as background, and thus are unlikely to correspond to a person. Rather than removing the pixels, the pixels may be classified as background. The pixels analyzed at step 1510 may have been classified as either foreground or may have received an uncertain classification. At step 1510, based on the depth of pixels, pixels may be grouped into one or more pixel groups. Accordingly, pixels that are proximate to each other and have a similar depth may be determined to likely correspond to the same object. These pixels may be grouped into the same pixel group. Each pixel group may be analyzed to determine if the pixel group likely contains a person. This may be accomplished by performing a principal component analysis (PCA). The PCA may be used to determine if a pixel group contains a head and shoulders. A pixel group can contain more than one head and shoulders (e.g., a pixel group may correspond to two or more persons). Step 1510 may be performed by modules 1410-1430 of system 1400 of FIG. 14. [0148] For each group of pixels, a plane may be positioned and oriented at step 1515 to minimize the total amount of fitting error between pixels of the group of pixels and the plane. Ideally, this plane may be aligned with the torso, shoulders, and head of the group of pixels likely corresponding to the person. To position the plane while limiting the effect of a possible extended hand and arm (as is present in image 1100), a plane may initially be fit to the entire group of pixels. This plane may be in various orientations in three-dimensional space.
The plane may be fit to the group of pixels to minimize a total amount of fitting error for pixels of the group. The fitting error for an individual pixel may be a function of the shortest distance from the plane to the three-dimensional coordinate of the pixel. As such, the distance may be determined along a line extending perpendicularly from the plane (the distance is zero if the point associated with the pixel falls on the plane). Step 1515 may be performed by module 1440 of system 1400 of FIG. 14. [0149] The position of the plane may then be refined. Based on a factor such as the mean amount of fitting error for all the pixels of the pixel group or a predefined threshold amount, a threshold fitting error value may be calculated. Since the initial location and/or orientation of the plane may be affected by an outstretched hand and arm, the plane may be located in front of the person's torso, head, and shoulders. However, since the person's hand is smaller than the torso, head, and shoulders (the hand is associated with fewer pixels), it can be assumed the plane may be closer to the person's torso, head, and shoulders than the person's hand because the total amount of fitting error is used to fit the plane. Accordingly, pixels with a fitting error greater than a threshold fitting error value may be eliminated from use in determining a next iteration of the position and orientation of the plane. Since a person's hand and arm likely correspond to at least some of the pixels with farther coordinates from the plane (than the person's torso, head, or shoulders), the pixels associated with an outstretched hand and/or arm will likely be eliminated from use in calculating a refined position and orientation of the plane. The position and orientation of the plane may then be recalculated and best fit to the coordinates of the pixels that were not eliminated to minimize an amount of fitting error. This new position of the plane may be used as the final position of the plane, or the process may be repeated additional times to further refine the position and orientation of the plane. In some embodiments, only the initial estimate of the plane location and orientation is used. [0150] At step 1520, a reference point, which may be referred to as the center of gravity, may be set at the center of the group of pixels. The reference point may be determined by taking an average of the x-value, y-value, and z-value (depth value) of each pixel in the pixel group. In some embodiments, a weighted average may be used to determine a reference point. For instance, a pixel closer to the image capture device (having a smaller depth value) may be afforded greater weight than pixels with a greater depth value. A reference point other than the average coordinates of the pixel group may be used in some embodiments. Step 1520 may be performed by module 1450 of system 1400 of FIG. 14. [0151] At step 1530, local distance maximum pixels may be determined for the group of pixels. Each local distance maximum may be a pixel of the group of pixels that is a greater distance away from the reference point than the pixel's neighboring pixels (that are also part of the pixel group). As such, local distance maximum pixels may be expected to be located at extremities of the group of pixels. Referring, for example, to FIG. 13, local distance maximums may occur at pixels corresponding to a person's head, shoulders, and fingers. Step 1530 may be performed by module 1460 of system 1400 of FIG. 14.
[0152] At step 1535, the plane aligned with the group of pixels from step 1515 may be used to eliminate pixels identified as local distance maximums from the reference point as being candidates for representing a person's hand. If a pixel that was determined to be a local distance maximum from the reference point is within a threshold distance of the plane (on either side of the plane), the pixel may be dismissed as a candidate for representing a person's hand. Since the plane is expected to be approximately aligned with the person's head, shoulders, and torso, if a person is performing a gesture, the person's hand is typically extended away from the person's body (where the plane is likely located) and thus would be outside the threshold distance to the plane. Thus, a local distance maximum pixel associated with the person's hand may be unlikely to be eliminated as a candidate based on the plane. Step 1535 may be performed by modules 1460-1480 of system 1400 of FIG. 14. [0153] At step 1540, two dimensional and/or three dimensional coordinates may be output that indicate the position of a person's hand based on a local distance maximum pixel outside of the threshold distance from the plane. In some embodiments, if only a single local distance maximum pixel remains after eliminating candidates using the plane, the coordinates of this remaining local distance maximum pixel may be used for identifying the location of the person's hand. In other embodiments, one or more local distance maximum pixels that have not been eliminated as candidates for being a person's hand may be further analyzed and used to output coordinates. Step 1540 may be performed by module 1490 of system 1400 of FIG. 14. It should be understood that coordinates for multiple hands may be output instead of coordinates for a single hand. For example, coordinates for two hands of a single person or hands of different persons may be output. In some embodiments, if multiple hands are detected, only coordinates for a particular hand may be output. For example, the hand closest to the image capture device may be given priority. In some embodiments, the larger hand is given priority (e.g., a parent's hand movement overrides a child's). In some embodiments, hands in one or more regions of a scene are given priority over any hands present in other regions of the scene (e.g., a detected hand of a person sitting on a couch overrides a hand position of a person standing behind the couch). [0154] FIG. 15B illustrates an embodiment of a method 1500B for determining a location of a person's hand. Method 1500B may be performed using system 100, system 200, system 1400, or some other system that is configured to capture or receive images of a scene, locate a person's hand, and output coordinates of the person's hand. Method 1500B may be performed using a computerized device, such as computer system 1900 of FIG. 19. Various steps of method 1500B may be implemented using software, hardware, and/or firmware. Means for performing method 1500B may include computerized devices, components of system 100, components of system 200, and/or components of system 1400. It should be understood that method 1500B may also include additional steps of method 1500A and/or method 1600 of FIG. 16 and/or may include steps which are not illustrated. [0155] At step 1550, a group of pixels in an image of a scene may be identified as a person or as representing a person. The image of the scene used at step 1550 may have had one or more pixels removed.
The image of the scene received may be the image output from method 800 of FIG. 8 at step 830 or method 900 of FIG. 9 at step 980. The pixels that were removed may have been identified as background, and thus are unlikely to correspond to a person. Rather than removing the pixels, the pixels may be classified as background. The pixels analyzed at step 1550 may have been classified as either foreground or may have received an uncertain classification. At step 1550, based on the depth of pixels, pixels may be grouped into one or more pixel groups. Accordingly, pixels that are proximate to each other and have a similar depth may be determined to likely correspond to the same object. These pixels may be grouped into the same pixel group. Each pixel group may be analyzed to determine if the pixel group likely contains a person. This may be accomplished by performing a principal component analysis (PCA). The PCA may be used to determine if a pixel group contains a head and shoulders. A pixel group can contain more than one head and shoulders (e.g., a pixel group may correspond to two or more persons). Step 1550 may be performed by modules 1410-1430 of system 1400 of FIG. 14. [0156] At step 1560, a reference point may be set for a group of pixels identified as representing the person. In some embodiments, a reference point, which may be referred to as the center of gravity, may be set at the center of the pixel group. The reference point may be determined by taking an average of the x-value, y-value, and z-value (depth value) of each pixel in the pixel group. In some embodiments, a weighted average may be used to determine a reference point. For instance, a pixel closer to the image capture device (having a smaller depth value) may be afforded greater weight than pixels with a greater depth value. A reference point other than the average coordinates of the pixel group may be used in some embodiments. In some embodiments, a reference point, which may be set at the center of gravity, may be set for each group identified at step 1550. Step 1560 may be performed by module 1450 of system 1400 of FIG. 14. [0157] At step 1570, a local distance maximum from the reference point may be identified. For example, local distance maximum pixels may be determined for each group of pixels identified at step 1550. Each local distance maximum may be a pixel of the group of pixels that is a greater distance away from the reference point than the pixel's neighboring pixels (that are also part of the pixel group). As such, local distance maximum pixels may be expected to be located at extremities of the group of pixels. Referring, for example, to FIG. 13, local distance maximums may occur at pixels corresponding to a person's head, shoulders, and fingers. Step 1570 may be performed by module 1460 of system 1400 of FIG. 14. [0158] At step 1580, two dimensional and/or three dimensional coordinates may be output that indicate the position of a person's hand based on the identified local distance maximum. For example, an indication of a position of the hand may be output based on a pixel that is a local maximum in distance from a reference point.
In some embodiments, only a single local distance maximum pixel may be present, and the coordinates of this local distance maximum pixel may be used for identifying the location of the person's hand. In other embodiments, one or more local distance maximum pixels that have not been eliminated as candidates for being a person's hand may be further analyzed and/or used to output coordinates. Step 1580 may be performed by module 1490 of system 1400 of FIG. 14. It should be understood that coordinates for multiple hands may be output instead of coordinates for a single hand. For example, coordinates for two hands of a single person or hands of different persons may be output, for example when a plurality of groups of pixels were identified at step 1550. In some embodiments, if multiple hands are detected, only coordinates for a particular hand may be output. For example, the hand closest to the image capture device may be given priority. In some embodiments, the larger hand is given priority (e.g., a parent's hand movement overrides a child's). In some embodiments, hands in one or more regions of a scene are given priority over any hands present in other regions of the scene (e.g., a detected hand of a person sitting on a couch overrides a hand position of a person standing behind the couch). [0159] FIG. 16 illustrates an embodiment of a method 1600 for determining a position of a hand. Method 1600 may be performed using system 100, system 200, system 1400, or some other system that is configured to receive images of a scene, locate a person's hand, and output coordinates of the person's hand. Method 1600 may be performed using a computerized device, such as computer system 1900 of FIG. 19. Various steps of method 1600 may be implemented using software, hardware, and/or firmware. Means for performing method 1600 may include computerized devices, components of system 100, components of system 200, and/or components of system 1400. Method 1600 may represent a more detailed embodiment of method 1500A. [0160] At step 1605, an image of a scene may be received. The image of the scene received at step 1605 may have had one or more pixels removed. The image of the scene received at step 1605 may be the image output from method 800 of FIG. 8 at step 830 or method 900 of FIG. 9 at step 980. These pixels that were removed may have been designated as background, and thus are unlikely to represent a person. In some embodiments, rather than removing the pixels, the pixels may be classified as background. The pixels received at step 1605 may have been classified as either foreground or received an uncertain classification. Referring to system 200 of FIG. 2, background/foreground extraction module 250 may have removed some pixels or designated some pixels as background in an image received from image acquisition module 210. If background/foreground extraction module 250 has insufficient information to determine if a pixel is likely part of the foreground or background, depth segmentation module 220 may receive an image with no pixels removed or designated as background. Step 1605 may be performed by module 1410 of system 1400 of FIG. 14. [0161] At step 1610, based on the depth of pixels, pixels may be grouped into one or more pixel groups. Accordingly, pixels that are proximate to each other and have a similar depth may be determined to likely correspond to the same object. These pixels may be grouped into the same pixel group. Referring to FIG.
10A, for example, three pixel groups are present: pixel group 1010A, pixel group 1020, and pixel group 1030. Step 1610 may be performed by module 1420 of system 1400 of FIG. 14. [0162] In some embodiments, pixels that are initially grouped into different pixel groups may be treated as being part of the same pixel group (referred to as a compound pixel group). This may be based on two (or more) pixel groups likely previously being part of a single pixel group. A single pixel group may become two pixel groups if a portion of the object that the pixel groups represent becomes occluded. For example, referring to FIG. 10B, a person's hand, represented by pixel group 1010B-2, may occlude the person's arm that connects the hand to the person's body of pixel group 1010B-1. If the depth segmentation image of FIG. 10A was created based on an image captured prior to the depth segmentation image of FIG. 10B, based on the amount of time elapsed between the images being captured, the shape, and/or the location of pixel groups, both pixel groups 1010B-1 and 1010B-2 may be determined to correspond to pixel group 1010A of FIG. 10A. Accordingly, pixel groups 1010B-1 and 1010B-2 may be an example of a compound pixel group. [0163] At step 1615, one or more groups of pixels may be eliminated from being candidates to correspond to a person based on size and/or distance from the image capture device. If a group of pixels is too small, too large, too close, or too far from the image capture device, the group of pixels may be eliminated as a candidate for containing a person. Whether a group of pixels is too small, too large, too close, or too far may be determined based on stored threshold values. Referring to FIG. 10B, pixel groups 1020 and 1030 may be eliminated as candidates for containing a person. Step 1615 may be performed by module 1425 of system 1400 of FIG. 14. [0164] At step 1620, a principal component analysis (PCA) may be performed on the remaining candidate pixel groups to identify one or more sets of a head with shoulders. Previously, a large number (e.g., tens, hundreds, thousands, etc.) of images of people's upper bodies may be captured. Each such sample image may be converted into a binary silhouette, normalized in a fixed direction. These samples may include samples where the upper bodies (e.g., head and shoulders) of the persons are rotated along the x-axis, y-axis, and/or z-axis. Based on the samples, a PCA is conducted to compute the covariance matrix of all the samples. The model created may consist of the N largest eigenvectors of the covariance matrix. In some embodiments, the 7 largest vectors (also referred to as principal components) may be used for the PCA of pixel groups in an image being analyzed. The principal components may be predetermined and may be stored on the system performing the analysis. It should be understood that greater or fewer vectors may also be used for the model. The predetermined principal components may be used in conducting a PCA to determine if a pixel group likely corresponds to a person because it appears to have at least one set of a head and shoulders. At step 1625, based on the PCA of each remaining candidate pixel group, one or more pixel groups may be identified as corresponding to a person. Pixel groups without a head and shoulders may be dismissed and not analyzed further. As such, following step 1625, each remaining pixel group is considered to contain a person. Step 1620 may be performed by module 1430 of system 1400 of FIG. 14.
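A minimal sketch of how the stored principal components might be applied at steps 1620-1625 follows, assuming the model (a mean silhouette vector and the N orthonormal eigenvectors) was computed offline from the training silhouettes as described. The reconstruction-error decision rule and the threshold below are assumptions added for illustration; the disclosure does not specify how the projection onto the principal components is scored.

    import numpy as np

    def looks_like_head_and_shoulders(silhouette, mean_shape, components,
                                      max_error=0.25):
        # silhouette: normalized binary silhouette, any shape; flattened below.
        # components: (n_components, n_pixels) array with orthonormal rows,
        # e.g., the 7 largest eigenvectors of the sample covariance matrix.
        centered = silhouette.astype(float).ravel() - mean_shape
        coeffs = components @ centered           # project onto the PCA subspace
        reconstruction = components.T @ coeffs   # back-project
        # If the silhouette is well explained by the head-and-shoulders model,
        # the residual energy outside the subspace is small.
        residual = np.linalg.norm(centered - reconstruction)
        return residual / max(np.linalg.norm(centered), 1e-9) < max_error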
[0165] At step 1630, an indication of each pixel determined to correspond to a person may be output. Each pixel that is part of a pixel group that was determined to have a head and shoulders at step 1625 may be output at step 1630. These pixels may be referred to as foreground pixels. The indication of these pixels may include the pixel's coordinates, depth, and/or intensity. Referring to system 200 of FIG. 2, indications of the foreground pixels may be output by depth segmentation module 220 and provided to foreground modeling module 240. Foreground modeling module 240 may use the indications of the foreground pixels to create a foreground model for use by background/foreground extraction module 250 as previously detailed. Step 1630 may be performed by module 1430 of system 1400 of FIG. 14. [0166] At step 1635, for each group of pixels that was determined to correspond to at least one person, a plane may be defined. For each group of pixels, a plane may be positioned and oriented to minimize the fitting error between some or all of the pixels of the group of pixels and the plane. Ideally, this plane may be aligned with the torso, shoulders, and head of the pixels corresponding to the person. To position the plane while limiting the effect of a possible extended hand and arm (as is present in image 1100 of FIG. 11), a plane may initially be fit to the entire group of pixels. This plane may be in various orientations in three dimensional space. The fitting error for an individual pixel may be a function of the shortest distance from the plane to the three dimensional coordinate of the pixel. As such, the shortest distance is along a line extending perpendicularly from the plane (the distance is zero if the point associated with the pixel falls exactly on the plane). Step 1635 may be performed by module 1440 of system 1400 of FIG. 14. [0167] After initially being positioned, the position of the plane may then be refined. Based on a factor such as the mean amount of fitting error for all the pixels of the pixel group, a threshold fitting error value may be calculated. A predefined threshold fitting error value may also be used. Since the initial location and/or orientation of the plane may be affected by an outstretched hand and arm (such as if the person is performing a gesture), the plane may be located in front of the person's torso, head, and shoulders. However, since the person's hand is smaller than the torso, head, and shoulders (the hand is associated with fewer pixels), it can be assumed that the plane may be closer to the person's torso, head, and shoulders than the person's hand because the total amount of fitting error is used to fit the plane. Accordingly, pixels with a fitting error greater than a determined or predefined threshold fitting error value may be eliminated from use in determining the next iteration of the position and orientation of the plane. Since a person's outstretched hand and arm will likely correspond to at least some of the pixels of the pixel group with the farthest coordinates from the plane, the pixels associated with an outstretched hand and/or arm will likely be eliminated from use in calculating the next or subsequent iterations of the plane's position and orientation. The position and orientation of the plane may be recalculated and best fit to the coordinates of the pixels that were not eliminated to minimize an amount of fitting error.
This new position/orientation of the plane may be used as the final position of the plane, or the process may be repeated for additional iterations of positioning and orienting the plane. In some embodiments, only the initial estimate of the plane location and orientation is used. [0168] At step 1640, a reference point, which may be referred to as a center-of-gravity, may be calculated for each remaining group of pixels. The reference point may be determined by taking an average of the x-value, y-value, and z-value (depth value) of each pixel in the pixel group. In some embodiments, a weighted average may be used to determine a reference point. For instance, a pixel closer to the image capture device (having a smaller depth value) may be afforded greater weight than a pixel with a greater depth value. In other embodiments, a reference point may be determined in a different way. Step 1640 may be performed by module 1450 of system 1400 of FIG. 14. [0169] At step 1645, pixels that are local distance maximums may be determined for the pixel groups remaining. Each local distance maximum pixel may be a pixel of the group of pixels that is a greater distance away from the reference point than the pixel's neighboring pixels (that are also part of the pixel group). As such, local distance maximums may be located at extremities of the group of pixels. Referring, for example, to FIG. 13, local distance maximum pixels may correspond to a person's head, shoulders, and fingers. Local distance maximum pixels may be in three-dimensional space. As such, in FIG. 13, the person's hand may be extended in the general direction of the image capture device. Each pixel identified as a local distance maximum may be used as a candidate for a hand of the person. Step 1645 may be performed by module 1460 of system 1400 of FIG. 14. [0170] At step 1650, for each remaining pixel group, the plane aligned with the group of pixels (from step 1635) may be used to eliminate pixels identified as local distance maximums as being candidates for representing a hand of the person. If a pixel that is a local distance maximum is within a predefined threshold distance of the plane, the pixel may be dismissed as a candidate for representing a person's hand. Since the plane is expected to be approximately aligned with the person's head, shoulders, and torso, if a person is performing a gesture, the person's hand is typically extended away from the person's body (where the plane is likely located) and thus would be outside the threshold distance from the plane. Referring to FIG. 12, plane 1210 is approximately aligned with the person's torso, head, and shoulders; however, the person's hand and arm extend beyond the plane. Step 1650 may be performed by module 1460 of system 1400 of FIG. 14. [0171] Following step 1650, one or more local distance maximum pixels within each group of pixels may remain as candidates for representing a person's hand. (If no local distance maximum pixels remain, it may be determined that the person's hand is not outstretched, and the method may end.) To determine whether a local distance maximum pixel is likely to correspond to a person's hand, a seed pixel may be determined based on the local distance maximum pixel and/or a region growing analysis may be conducted at step 1655. Determination of the seed pixel and performance of the region growing analysis may be conducted in accordance with method 1700 of FIG. 17. Step 1655 may be performed by module 1470 of system 1400 of FIG. 14.
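The center-of-gravity and local-distance-maximum determinations of steps 1640 and 1645 might be sketched as follows. This is a sketch under stated assumptions, not the reference implementation: the inverse-depth weighting is one possible weighting consistent with the statement that closer pixels may be afforded greater weight, and the 8-neighbor comparison is an illustrative reading of "a greater distance away from the reference point than the pixel's neighboring pixels."

    import numpy as np

    def local_distance_maxima(coords):
        # coords: dict mapping (row, col) -> (x, y, depth) for each pixel in
        # the group. Returns the weighted reference point and the pixels that
        # are local maxima in distance from it.
        pts = np.array(list(coords.values()), dtype=float)
        weights = 1.0 / np.maximum(pts[:, 2], 1e-6)    # favor closer pixels
        reference = (pts * weights[:, None]).sum(axis=0) / weights.sum()

        def dist(p):
            return np.linalg.norm(np.asarray(coords[p], dtype=float) - reference)

        maxima = []
        for (r, c) in coords:
            neighbors = [(r + dr, c + dc)
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0) and (r + dr, c + dc) in coords]
            # A local distance maximum is farther from the reference point
            # than every neighboring pixel that is also in the group.
            if neighbors and all(dist((r, c)) > dist(n) for n in neighbors):
                maxima.append((r, c))
        return reference, maxima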
[0172] At step 1660, an elongated object analysis may be conducted. When a person has his or her arm extended, it may be expected that the person's hand and at least some of the person's forearm will be a similar distance from the image capture device. The presence of an elongated object following the region growing analysis of step 1655 may signal the presence of a person's extended hand and forearm. Method 1800 may be performed to determine if a hand is present following the region growing analysis of step 1655. Step 1660 may be performed by module 1480 of system 1400 of FIG. 14. Other techniques besides an elongated object analysis may be performed to determine if an object comprises a hand. [0173] At step 1665, two dimensional and/or three dimensional coordinates may be output. These coordinates may be determined to correspond to the location of a hand in the image received at step 1605. If no hand is determined to be present, no coordinates may be output at step 1665. Conversely, if multiple hands are determined to be present, more than one set of coordinates may be output. For each image received at step 1605, a set of coordinates may be output at step 1665 if a hand is determined to be present. Step 1665 may be performed by module 1490 of system 1400 of FIG. 14. [0174] Such coordinates may be used for determining a gesture being performed by a person. At step 1670, a gesture performed by the person (via the person's hand) may be determined using the coordinates output at step 1665. In addition to gestures, the coordinates of the person's hand may have other uses, such as for manipulating a cursor on a screen. [0175] FIG. 17 illustrates an embodiment of a method 1700 for determining a seed pixel and creating a pixel blob based on a pixel identified as a local distance maximum. Method 1700 may be performed using system 100, system 200, system 1400, or some other system that is configured to receive images of a scene, locate a person's hand, and output coordinates of the person's hand. Method 1700 may be performed using a computerized device, such as computer system 1900 of FIG. 19. Various steps of method 1700 may be implemented using software, hardware, and/or firmware. Means for performing method 1700 may include computerized devices, components of system 100, components of system 200, and/or components of system 1400. Method 1700 may be performed as part of another method, such as at step 1655 of method 1600 of FIG. 16. Each step of method 1700 may be performed by module 1470 of system 1400 of FIG. 14. [0176] At step 1710, for each pixel that is a local distance maximum and that has not been otherwise eliminated as a candidate for being a hand of a person, a window of pixels around the local distance maximum pixel may be examined. Since the local distance maximum pixel is likely located at a boundary between an object and open space, such as at a fingertip of the person, intensity and/or depth measurements of the local distance maximum pixel may tend to be noisy. A pixel having noisy values may not be effective to serve as a seed pixel for a region growing analysis. As such, another pixel in the vicinity of the local distance maximum pixel may be selected to serve as a seed pixel that is used as the baseline for a region growing analysis. A window of pixels around the local distance maximum pixel may be determined. This window may be each neighboring pixel to the local distance maximum pixel. In some embodiments, a 3x3, 4x4, or 5x5 neighborhood of pixels is used.
Other sized pixel neighborhoods may also be used. [0177] From within the window determined at step 1710, a seed pixel, which will serve as the baseline pixel for a region growing analysis, may be determined at step 1720. For use as a seed pixel, a pixel with little depth (and/or intensity) noise may be desired. From within the window, a pixel that has the least amount of variance in depth value from the average value of its neighboring pixels (or other pixels within the window) may be used as the seed pixel. As such, each pixel within the window may be analyzed to determine which pixel's depth varies the least from its neighboring pixels. Following step 1720, a seed pixel may be selected for each local distance maximum pixel. In some embodiments, the seed pixel may be the local distance maximum pixel. [0178] At step 1730, each neighboring pixel (which may include pixels located diagonally) to the seed pixel may be compared based on each pixel's depth value. If a neighboring pixel has a depth value within a threshold amount of the depth value of the seed pixel, the neighboring pixel may be added to a pixel "blob" that includes the seed pixel. A small depth threshold value may be used, such as an inch. If a neighboring pixel does not have a depth value within a threshold amount of the depth value of the seed pixel, this neighboring pixel is not added to the pixel blob. In some embodiments, rather than using only the directly neighboring pixels of the seed pixel, a larger neighborhood may be used, such as a 5x5 or 7x7 neighborhood. Other sized neighborhoods may also be used. [0179] At step 1740, for each pixel added to the pixel blob at step 1730, each of its neighboring pixels may, in turn, be analyzed in comparison to the depth value of the seed pixel and the neighboring pixel. As such, the global variation (from the seed pixel) and a local variation (for continuity) may be analyzed. If any of these neighboring pixels have a depth value within a threshold amount of the depth value of the seed pixel, the pixel within the threshold depth value may be added to the pixel blob. Pixels that do not have a depth value within a threshold amount of the depth value of the seed pixel may not be added to the pixel blob. Again, in some embodiments, rather than using only directly neighboring pixels, a larger neighborhood may be used, such as a 5x5 or 7x7 neighborhood. Other sized neighborhoods may also be used. In many embodiments, an odd number is used for defining the neighborhood for symmetry; as such, the seed pixel can be located at the center of the neighborhood. [0180] The pixel blob may continue to be grown according to this method. For each pixel added to the pixel blob determined to be within a threshold depth of the seed pixel, its neighboring pixels may in turn be analyzed. This may continue until no neighboring pixels within the threshold depth value of the seed pixel's depth value are identified. At this point, the pixel blob may be complete. [0181] The pixel blob may grow substantially enough that the pixel blob combines with one or more other pixel blobs that are based on other local distance maximum pixels. If two or more pixel blobs incorporate one or more of the same pixels or adjacent pixels, these pixel blobs may be treated as a single pixel blob.
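The seed selection and blob growth of method 1700 might be sketched as below. The 3x3 seed window and the one-inch threshold follow the examples given in the text (the depth units are an assumption); for brevity the sketch checks only the global variation from the seed pixel, omitting the local continuity comparison mentioned at step 1740.

    from collections import deque
    import numpy as np

    def pick_seed(depth, start):
        # Choose, from the 3x3 window around the local distance maximum
        # `start`, the pixel whose depth deviates least from its own
        # neighborhood average, i.e., the least noisy candidate.
        rows, cols = depth.shape
        best, best_dev = start, float("inf")
        for r in range(max(start[0] - 1, 0), min(start[0] + 2, rows)):
            for c in range(max(start[1] - 1, 0), min(start[1] + 2, cols)):
                patch = depth[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
                dev = abs(depth[r, c] - patch.mean())
                if dev < best_dev:
                    best, best_dev = (r, c), dev
        return best

    def grow_blob(depth, seed, threshold=1.0):  # e.g., one inch, units assumed
        # Breadth-first growth: admit 8-connected neighbors whose depth is
        # within the threshold of the seed pixel's depth.
        rows, cols = depth.shape
        seed_depth = depth[seed]
        blob, frontier = {seed}, deque([seed])
        while frontier:
            r, c = frontier.popleft()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    p = (r + dr, c + dc)
                    if (p not in blob and 0 <= p[0] < rows and 0 <= p[1] < cols
                            and abs(depth[p] - seed_depth) <= threshold):
                        blob.add(p)
                        frontier.append(p)
        return blob

Growth stops exactly when no neighboring pixel within the depth threshold remains, matching the completion condition described above; merging of overlapping blobs would be handled by the caller.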
If each pixel blob is grown individually, and a pixel blob grows such that it incorporates a second local distance maximum pixel (other than the first local distance maximum pixel that the pixel blob's seed pixel is based on), a separate pixel blob for the second local distance maximum pixel may not be created. Rather, the pixel blob may be used for both local distance maximum pixels. Following step 1740, each of the one or more created pixel blobs may be analyzed to determine if each pixel blob is likely to correspond to a person's hand or not. Referring to method 1600 of FIG. 16, such an analysis may occur at step 1660. [0182] FIG. 18 illustrates an embodiment of a method 1800 for analyzing a pixel blob to determine if it likely contains a hand and to determine associated coordinates. Method 1800 may be performed using system 100, system 200, system 1400, or some other system that is configured to receive images of a scene, locate a person's hand, and output coordinates of the person's hand. Method 1800 may be performed using a computerized device, such as computer system 1900 of FIG. 19. Various steps of method 1800 may be implemented using software, hardware, and/or firmware. Means for performing method 1800 may include computerized devices, components of system 100, components of system 200, and/or components of system 1400. Method 1800 may be performed as part of another method, such as at step 1660 of method 1600 of FIG. 16. Method 1800 may be performed for each pixel blob that is present following step 1655 of method 1600. [0183] At step 1810, the size of a pixel blob may be compared with various thresholds. If a pixel blob is greater than a maximum threshold size or smaller than a minimum threshold size, it may be eliminated as a candidate for containing a hand. Such thresholds may be predefined and/or previously stored. [0184] At step 1820, if the pixel blob qualified under the size conditions of step 1810, a determination may be made as to whether the pixel blob constitutes an elongated shape. An elongated shape may be defined as being at least twice as long as it is wide (other definitions of an elongated shape or other types of shapes may also be used). When a person is performing a gesture, typically, the gesture may begin with the person's hand raised such that the person's hand is substantially coplanar with at least some of the person's forearm. Therefore, a pixel blob may appear longer in one direction (from the person's fingertips to part of the person's forearm) than in a perpendicular direction (across the person's hand or forearm). Detection of an elongated shape may be used to differentiate a pixel blob containing a hand from a pixel blob based on some other object or part of the person's body. If an elongated pixel blob is detected, method 1800 may proceed to step 1830. [0185] At step 1830, an "open" end of the pixel blob may be determined. An open end may be defined as an end of the pixel blob not connected to any other object (e.g., part of the person's body). A person's hand would be at the open end of a pixel blob, while a forearm would be part of a closed end, because the forearm is connected with the person's upper arm. To determine which end of the pixel blob is the open end, a Chamfer distance analysis may be conducted. A Chamfer distance analysis may be conducted using pixels along the border of the pixel blob.
These border pixels may be analyzed to determine the difference in depth with pixels outside of the pixel blob (e.g., the pixels outside the pixel blob that neighbor the pixel blob). Since a person's hand is at the open end of the elongated pixel blob and is not connected to another object, it can be expected that pixels along the border of the open end will be a greater distance (as measured using the depth value) from neighboring pixels outside of the pixel blob than pixels of the closed end associated with the person's forearm. Using a predefined threshold distance, a number of neighbors can be found for either end of the elongated pixel blob. The end with the fewest neighbors within the threshold distance may be considered the open end, and thus may be considered to represent a hand. [0186] For pixels of the identified open end of the pixel blob, a weight may be assigned at step 1840. Pixels with the smallest depths may tend to be the most accurately measured pixel values; thus, these pixels may be favored in determining coordinates for the hand. For pixels associated with the open end (e.g., pixels within a threshold distance of the edge of the open end), a weighted average of the pixels' coordinates (in two or three dimensions) may be calculated at step 1850. The weighted average may weight pixels with smaller depth values greater than pixels farther from the image capture device. [0187] If a pixel blob is not elongated, this does not necessarily mean the pixel blob is not associated with a hand. For instance, a hand outstretched toward the image capture device may occlude the person's forearm, and thus may appear as a non-elongated shape in captured images. Such pixel blobs may still be determined to be a hand if, at step 1860, the pixel blob is considered likely to represent the same object as a pixel blob previously identified as an elongated object. Such an analysis may be based on time, location, shape, and/or movement of the elongated pixel blob and the non-elongated pixel blob. [0188] If at step 1860 the pixel blob is determined to correspond to a previously identified elongated pixel blob, a weight may be assigned to each pixel of the non-elongated pixel blob at step 1870. A weighted average of the pixels' coordinates (in two or three dimensions) may be calculated for the non-elongated pixel blob at step 1850. The weighted average may weight pixels with smaller depth values greater than pixels farther from the image capture device. Returning to step 1860, if the non-elongated pixel blob is not determined to correspond to a previously-identified elongated shape, the pixel blob may be discarded and no coordinates may be calculated for the pixel blob. [0189] Following method 1800, returning to method 1600, the two and/or three dimensional coordinates determined may be output at step 1665 to one or more other modules, components, or devices. Coordinates may only be output when a pixel blob determined to be associated with a hand is present. Such other modules, components, or devices may use the coordinates to determine a gesture being performed by the person. The position of a person's hand may also be tracked for other reasons.
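The elongation test and weighted coordinate output of method 1800 might be sketched as follows. The principal-axis aspect ratio stands in for the "at least twice as long as it is wide" test, and the camera-proximity rule for picking the open end is a deliberate simplification standing in for the Chamfer distance analysis described above; the percentile split and all thresholds are illustrative assumptions.

    import numpy as np

    def blob_hand_coordinates(points, min_aspect=2.0):
        # points: (N, 3) array of (x, y, depth) for the blob's pixels.
        # Returns weighted-average hand coordinates, or None if the blob
        # does not pass the elongation test.
        if len(points) < 4:
            return None
        centered = points - points.mean(axis=0)
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        if s[1] == 0 or s[0] / s[1] < min_aspect:  # not ~twice as long as wide
            return None
        projection = centered @ vt[0]              # position along the long axis
        end = points[projection >= np.percentile(projection, 75)]
        other = points[projection <= np.percentile(projection, 25)]
        # Simplified open-end choice: take the end nearer the camera. The
        # disclosure instead selects the end with fewer close neighbors
        # outside the blob (the Chamfer distance analysis of step 1830).
        hand = end if end[:, 2].mean() < other[:, 2].mean() else other
        weights = 1.0 / np.maximum(hand[:, 2], 1e-6)  # favor smaller depths
        return (hand * weights[:, None]).sum(axis=0) / weights.sum()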
[0190] A computer system as illustrated in FIG. 19 may be incorporated as part of the previously described computerized devices. For example, computer system 1900 can represent some of the components of the systems discussed in this application. FIG. 19 provides a schematic illustration of one embodiment of a computer system 1900 that can perform the methods provided by various other embodiments, as described herein, and/or can function as components of system 100, system 200, and/or system 1400. It should be noted that FIG. 19 is meant only to provide a generalized illustration of various components, any or all of which may be utilized as appropriate. FIG. 19, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner. [0191] The computer system 1900 is shown comprising hardware elements that can be electrically coupled via a bus 1905 (or may otherwise be in communication, as appropriate). The hardware elements may include one or more processors 1910, including without limitation one or more general-purpose processors and/or one or more special-purpose processors (such as digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 1915, which can include without limitation a mouse, a keyboard, and/or the like; and one or more output devices 1920, which can include without limitation a display device, a printer, and/or the like. Input devices 1915 may comprise the image capture module 110 of system 100 in some embodiments. Processors 1910 may comprise processing module 120 in some embodiments. Storage devices 1925 may include computer-readable storage medium 130. [0192] Similarly, various components of system 200 may be performed by components of computer system 1900. For example, each module of system 200 may be performed by processors 1910 and storage devices 1925 of computer system 1900. Further, various components of system 1400 of FIG. 14 may be performed by components of computer system 1900. For example, each module of system 1400 may be performed by processors 1910 and storage devices 1925 of computer system 1900. [0193] The computer system 1900 may further include (and/or be in communication with) one or more non-transitory storage devices 1925, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device, such as a random access memory ("RAM") and/or a read-only memory ("ROM"), which can be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any appropriate data stores, including without limitation, various file systems, database structures, and/or the like. [0194] The computer system 1900 might also include a communications subsystem 1930, which can include without limitation a modem, a network card (wireless or wired), an infrared communication device, a wireless communication device, and/or a chipset (such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, cellular communication facilities, etc.), and/or the like. The communications subsystem 1930 may permit data to be exchanged with a network (such as the network described below, to name one example), other computer systems, and/or any other devices described herein. In many embodiments, the computer system 1900 will further comprise a working memory 1935, which can include a RAM or ROM device, as described above.
[0195] The computer system 1900 also can comprise software elements, shown as being currently located within the working memory 1935, including an operating system 1940, device drivers, executable libraries, and/or other code, such as one or more application programs 1945, which may comprise computer programs provided by various embodiments, and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods. [0196] A set of these instructions and/or code might be stored on a non-transitory computer-readable storage medium, such as the non-transitory storage device(s) 1925 described above. In some cases, the storage medium might be incorporated within a computer system, such as computer system 1900. In other embodiments, the storage medium might be separate from a computer system (e.g., a removable medium, such as a compact disc), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon. These instructions might take the form of executable code, which is executable by the computer system 1900, and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer system 1900 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.), then takes the form of executable code. [0197] It will be apparent to those skilled in the art that substantial variations may be made in accordance with specific requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed. [0198] As mentioned above, in one aspect, some embodiments may employ a computer system (such as the computer system 1900) to perform methods in accordance with various embodiments of the invention. According to a set of embodiments, some or all of the procedures of such methods are performed by the computer system 1900 in response to processor 1910 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 1940 and/or other code, such as an application program 1945) contained in the working memory 1935. Such instructions may be read into the working memory 1935 from another computer-readable medium, such as one or more of the non-transitory storage device(s) 1925. Merely by way of example, execution of the sequences of instructions contained in the working memory 1935 might cause the processor(s) 1910 to perform one or more procedures of the methods described herein. Processor(s) 1910 may be used to implement the processing module 120 in some embodiments.
[0199] The terms "machine-readable medium" and "computer-readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using the computer system 1900, various computer-readable media might be involved in providing instructions/code to processor(s) 1910 for execution and/or might be used to store and/or carry such instructions/code. In many implementations, a computer-readable medium is a physical and/or tangible storage medium. Such a medium may take the form of a non-volatile media or volatile media. Non-volatile media include, for example, optical and/or magnetic disks, such as the non-transitory storage device(s) 1925. Volatile media include, without limitation, dynamic memory, such as the working memory 1935. [0200] Common forms of physical and/or tangible computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read instructions and/or code. [0201] Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 1910 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer system 1900. [0202] The communications subsystem 1930 (and/or components thereof) generally will receive signals, and the bus 1905 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 1935, from which the processor(s) 1910 retrieves and executes the instructions. The instructions received by the working memory 1935 may optionally be stored on a non-transitory storage device 1925 either before or after execution by the processor(s) 1910. Non-transitory storage device 1925 may function as a computer-readable storage medium 130 of FIG. 1 in some examples. [0203] Those having skill in the art will appreciate that the terms foreground and background do not limit the models, objects, or positions of objects described herein. Thus, an object in the "background" of a scene may actually be closer to a sensor or camera than an object in a "foreground" of the scene. In certain embodiments described above, background extraction is described as removing objects behind a user, for example a couch or wall. In some embodiments, however, the background extraction may be used to remove an object in front of a user, for example a table, rug, or ottoman. The user may thus still be identified as being in the "foreground" of the scene and foreground models generated to describe a potential location of the user when the user is located behind one or more objects. [0204] The methods, systems, and devices discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods may be performed in an order different from that described, and/or various stages may be added, omitted, and/or combined.
Also, features described with respect to certain configurations may be combined in various other configurations. Different aspects and elements of the configurations may be combined in a similar manner. Also, technology evolves and, thus, many of the elements are examples and do not limit the scope of the disclosure or claims. [0205] Specific details are given in the description to provide a thorough understanding of example configurations (including implementations). However, configurations may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configurations of the claims. Rather, the preceding description of the configurations will provide those skilled in the art with an enabling description for implementing described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure. [0206] Also, configurations may be described as a process which is depicted as a flow diagram or block diagram. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure. Furthermore, examples of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a non-transitory computer-readable medium such as a storage medium. Processors may perform the described tasks. [0207] Having described several example configurations, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may be components of a larger system, wherein other rules may take precedence over or otherwise modify the application of the invention. Also, a number of steps may be undertaken before, during, or after the above elements are considered. Accordingly, the above description does not bound the scope of the claims. |
A snoop request cache maintains records of previously issued snoop requests. Upon writing shared data, a snooping entity performs a lookup in the cache. If the lookup hits (and, in some embodiments, includes an identification of a target processor), the snooping entity suppresses the snoop request. If the lookup misses (or hits but the hitting entry lacks an identification of the target processor), the snooping entity allocates an entry in the cache (or sets an identification of the target processor) and directs a snoop request to the target processor, to change the state of a corresponding line in the processor's L1 cache. When the processor reads shared data, it performs a snoop request cache lookup, and invalidates a hitting entry in the event of a hit (or clears its processor identification from the hitting entry), so that other snooping entities will not suppress snoop requests to it. |
CLAIMS What is claimed is: 1. A method of filtering a data cache snoop request to a target processor having a data cache, by a snooping entity, comprising: performing a snoop request cache lookup in response to a data store operation; and suppressing the data cache snoop request in response to a hit. 2. The method of claim 1 wherein suppressing the data cache snoop request in response to a hit further comprises suppressing the data cache snoop request in response to an identification of the snooping entity in a hitting cache entry. 3. The method of claim 1 wherein suppressing the data cache snoop request in response to a hit further comprises suppressing the data cache snoop request in response to an identification of the target processor in a hitting cache entry. 4. The method of claim 1 further comprising allocating an entry in the snoop request cache in response to a miss. 5. The method of claim 4 further comprising forwarding the data cache snoop request to the target processor in response to a miss. 6. The method of claim 4 wherein allocating an entry in the snoop request cache comprises including in the snoop request cache entry an identification of the snooping entity. 7. The method of claim 4 wherein allocating an entry in the snoop request cache comprises including in the snoop request cache entry an identification of the target processor. 8. The method of claim 1 further comprising forwarding the data cache snoop request to the target processor in response to a hit wherein the target processor's identification is not set in the hitting cache entry; and setting the identification of the target processor in the hitting cache entry. 9. The method of claim 1 wherein the snooping entity is a processor having a data cache, further comprising performing a snoop request cache lookup in response to a data load operation. 10. The method of claim 9 further comprising, in response to a hit, invalidating the hitting snoop request cache entry. 11. The method of claim 9 further comprising, in response to a hit, removing the processor's identification from the hitting cache entry. 12. The method of claim 1 wherein the snoop request cache lookup is performed only for data store operations on data having a predetermined attribute. 13. The method of claim 12 wherein the predetermined attribute is that the data is shared. 14. The method of claim 1 wherein the data cache snoop request is operative to change the cache state of a line in the target processor's data cache. 15. The method of claim 14 wherein the data cache snoop request is a snoop kill request operative to invalidate a line from the target processor's data cache. 16. A computing system, comprising: memory; a first processor having a data cache; a snooping entity operative to direct a data cache snoop request to the first processor upon writing to memory data having a predetermined attribute; and at least one snoop request cache comprising at least one entry, each valid entry indicative of a prior data cache snoop request; wherein the snooping entity is further operative to perform a snoop request cache lookup prior to directing a data cache snoop request to the first processor, and to suppress the data cache snoop request in response to a hit. 17. The system of claim 16 wherein the snooping entity is further operative to allocate a new entry in the snoop request cache in response to a miss. 18. 
The system of claim 16 wherein the snooping entity is further operative to suppress the data cache snoop request in response to an identification of the snooping entity in a hitting cache entry. 19. The system of claim 16 wherein the snooping entity is further operative to suppress the data cache snoop request in response to an identification of the first processor in a hitting cache entry. 20. The system of claim 19 wherein the snooping entity is further operative to set the first processor's identification in a hitting entry in which the first processor's identification is not set. 21. The system of claim 16 wherein the predetermined attribute indicates shared data. 22. The system of claim 16 wherein the first processor is further operative to perform a snoop request cache lookup upon reading from memory data having a predetermined attribute, and to alter a hitting snoop request cache entry in response to a hit. 23. The system of claim 22 wherein the first processor is operative to invalidate the hitting snoop request cache entry. 24. The system of claim 22 wherein the first processor is operative to clear from the hitting snoop request cache entry an identification of itself. 25. The system of claim 16 wherein the at least one snoop request cache comprises a single snoop request cache in which both the first processor and the snooping entity perform lookups upon writing to memory data having a predetermined attribute. 26. The system of claim 16 wherein the at least one snoop request cache comprises: a first snoop request cache in which the first processor is operative to perform lookups upon writing to memory data having a predetermined attribute; and a second snoop request cache in which the snooping entity is operative to perform lookups upon writing to memory data having a predetermined attribute. 27. The system of claim 26 wherein the first processor is further operative to perform lookups in the second snoop request cache upon reading from memory data having a predetermined attribute. 28. The system of claim 26 further comprising: a second processor having a data cache; and a third snoop request cache in which the snooping entity is operative to perform lookups upon writing to memory data having a predetermined attribute. |
SNOOP FILTERING USING A SNOOP REQUEST CACHE
BACKGROUND
The present invention relates in general to cache coherency in multiprocessor computing systems, and in particular to a snoop request cache to filter snoop requests. Many modern software programs are written as if the computer executing them had a very large (ideally, unlimited) amount of fast memory. Most modern processors simulate that ideal condition by employing a hierarchy of memory types, each having different speed and cost characteristics. The memory types in the hierarchy vary from very fast and very expensive at the top, to progressively slower but more economical storage types in lower levels. Due to the spatial and temporal locality characteristics of most programs, the instructions and data executing at any given time, and those in the address space near them, are statistically likely to be needed in the very near future, and may be advantageously retained in the upper, high-speed hierarchical layers, where they are readily available. A representative memory hierarchy may comprise an array of very fast General Purpose Registers (GPRs) in the processor core at the top level. Processor registers may be backed by one or more cache memories, known in the art as Level-1 or L1 caches. L1 caches may be formed as memory arrays on the same integrated circuit as the processor core, allowing for very fast access, but limiting the L1 cache's size. Depending on the implementation, a processor may include one or more on- or off-chip Level-2 or L2 caches. L2 caches are often implemented in SRAM for fast access times, and to avoid the performance-degrading refresh requirements of DRAM. Because there are fewer restraints on L2 cache size, L2 caches may be several times the size of L1 caches, and in multi-processor systems, one L2 cache may underlie two or more L1 caches. High performance computing processors may have additional levels of cache (e.g., L3). Below all the caches is main memory, usually implemented in DRAM or SDRAM for maximum density and hence lowest cost per bit. [0004] The cache memories in a memory hierarchy improve performance by providing very fast access to small amounts of data, and by reducing the data transfer bandwidth between one or more processors and main memory. The caches contain copies of data stored in main memory, and changes to cached data must be reflected in main memory. In general, two approaches have developed in the art for propagating cache writes to main memory: write-through and copy-back. In a write-through cache, when a processor writes modified data to its L1 cache, it additionally (and immediately) writes the modified data to lower-level cache and/or main memory. Under a copy-back scheme, a processor may write modified data to an L1 cache, and defer updating the change to lower-level memory until a later time. For example, the write may be deferred until the cache entry is replaced in processing a cache miss, a cache coherency protocol requests it, or under software control. In addition to assuming large amounts of fast memory, modern software programs execute in a conceptually contiguous and largely exclusive virtual address space. That is, each program assumes it has exclusive use of all memory resources, with specific exceptions for expressly shared memory space. Modern processors, together with sophisticated operating system software, simulate this condition by mapping virtual addresses (those used by programs) to physical addresses (which address actual hardware, e.g., caches and main memory).
The mapping and translation of virtual to physical addresses is known as memory management. Memory management allocates resources to processors and programs, defines cache management policies, enforces security, provides data protection, enhances reliability, and provides other functionality by assigning attributes to segments of main memory called pages. Many different attributes may be defined and assigned on a per-page basis, such as supervisor/user, read-write/read-only, exclusive/shared, instruction/data, cache write-through/copy-back, and many others. Upon translating virtual addresses to physical addresses, data take on the attributes defined for the physical page.

[0006] One approach to managing multi-processor systems is to allocate a separate "thread" of program execution, or task, to each processor. In this case, each thread is allocated exclusive memory, which it may read and write without concern for the state of memory allocated to any other thread. However, related threads often share some data, and accordingly are each allocated one or more common pages having a shared attribute. Updates to shared memory must be visible to all of the processors sharing it, raising a cache coherency issue. Accordingly, shared data may also have the attribute that it must "write-through" an L1 cache to an L2 cache (if the L2 cache backs the L1 cache of all processors sharing the page) or to main memory. Additionally, to alert other processors that the shared data has changed (and hence their own L1-cached copy, if any, is no longer valid), the writing processor issues a request to all sharing processors to invalidate the corresponding line in their L1 cache. Inter-processor cache coherency operations are referred to herein generally as snoop requests, and the request to invalidate an L1 cache line is referred to herein as a snoop kill request or simply snoop kill. Snoop kill requests arise, of course, in scenarios other than the one described above.

Upon receiving a snoop kill request, a processor must invalidate the corresponding line in its L1 cache. A subsequent attempt to read the data will miss in the L1 cache, forcing the processor to read the updated version from a shared L2 cache or main memory. Processing the snoop kill, however, incurs a performance penalty, as it consumes processing cycles that would otherwise be used to service loads and stores at the receiving processor. In addition, the snoop kill may require a load/store pipeline to reach a state where data hazards that are complicated by the snoop are known to have been resolved, stalling the pipeline and further degrading performance.

[0008] Various techniques are known in the art to reduce the number of processor stall cycles incurred by a processor being snooped. In one such technique, a duplicate copy of the L1 tag array is maintained for snoop accesses. When a snoop kill is received, a lookup is performed in the duplicate tag array. If this lookup misses, there is no need to invalidate the corresponding entry in the L1 cache, and the penalty associated with processing the snoop kill is avoided. However, this solution incurs a large penalty in silicon area, as the entire tag array for each L1 cache must be duplicated, increasing the minimum die size and also power consumption. Additionally, a processor must update two copies of the tag every time the L1 cache is updated.
[0009] Another known technique to reduce the number of snoop kill requests that a processor must handle is to form "snooper groups" of processors that may potentially share memory. Upon updating an L1 cache with shared data (with write-through to a lower level memory), a processor sends a snoop kill request only to the other processors within its snooper group. Software may define and maintain snooper groups, e.g., at a page level or globally. While this technique reduces the global number of snoop kill requests in a system, it still requires that each processor within each snooper group process a snoop kill request for every write of shared data by any other processor in the group.

Yet another known technique to reduce the number of snoop kill requests is store gathering. Rather than immediately executing each store instruction by writing small amounts of data to the L1 cache, a processor may include a gather buffer or register bank to collect store data. When a cache line, half-line, or other convenient quantity of data is gathered, or when a store occurs to a different cache line or half-line than the one being gathered, the gathered store data is written to the L1 cache all at once. This reduces the number of write operations to the L1 cache, and consequently the number of snoop kill requests that must be sent to another processor. This technique requires additional on-chip storage for the gather buffer or buffers, and may not work well when store operations are not localized to the extent covered by the gather buffers.

Still another known technique is to filter snoop kill requests at the L2 cache by making the L2 cache fully inclusive of the L1 cache. In this case, a processor writing shared data performs a lookup in the other processor's L2 cache before snooping the other processor. If the L2 lookup misses, there is no need to snoop the other processor's L1 cache, and the other processor does not incur the performance degradation of processing a snoop kill request. This technique reduces the total effective cache size by consuming L2 cache memory to duplicate one or more L1 caches. Additionally, this technique is ineffective if two or more processors backed by the same L2 cache share data, and hence must snoop each other.

SUMMARY

According to one or more embodiments described and claimed herein, one or more snoop request caches maintain records of snoop requests. Upon writing data having a shared attribute, a processor performs a lookup in a snoop request cache. If the lookup misses, the processor allocates an entry in the snoop request cache and directs a snoop request (such as a snoop kill) to one or more processors. If the snoop request cache lookup hits, the processor suppresses the snoop request. When a processor reads shared data, it also performs a snoop request cache lookup, and invalidates a hitting entry in the event of a hit.

One embodiment relates to a method of issuing a data cache snoop request to a target processor having a data cache, by a snooping entity. A snoop request cache lookup is performed in response to a data store operation, and the data cache snoop request is suppressed in response to a hit.

Another embodiment relates to a computing system. The system includes memory and a first processor having a data cache. The system also includes a snooping entity operative to direct a data cache snoop request to the first processor upon writing to memory data having a predetermined attribute.
The system further includes at least one snoop request cache comprising at least one entry, each valid entry indicative of a prior data cache snoop request. The snooping entity is further operative to perform a snoop request cache lookup prior to directing a data cache snoop request to the first processor, and to suppress the data cache snoop request in response to a hit.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a functional block diagram of a shared snoop request cache in a multi-processor computing system.

Figure 2 is a functional block diagram of multiple dedicated snoop request caches per processor in a multi-processor computing system.

Figure 3 is a functional block diagram of a multi-processor computing system including a non-processor snooping entity.

Figure 4 is a functional block diagram of a single snoop request cache associated with each processor in a multi-processor computing system.

Figure 5 is a flow diagram of a method of issuing a snoop request.

DETAILED DESCRIPTION

Figure 1 depicts a multi-processor computing system, indicated generally by the numeral 100. The computer 100 includes a first processor 102 (denoted P1) and its associated L1 cache 104. The computer 100 additionally includes a second processor 106 (denoted P2) and its associated L1 cache 108. Both L1 caches are backed by a shared L2 cache 110, which transfers data across a system bus 112 to and from main memory 114. The processors 102, 106 may include dedicated instruction caches (not shown), or may cache both data and instructions in the L1 and L2 caches. Whether the caches 104, 108, 110 are dedicated data caches or unified instruction/data caches has no impact on the embodiments described herein, which operate with respect to cached data. As used herein, a "data cache" operation, such as a data cache snoop request, refers equally to an operation directed to a dedicated data cache and one directed to data stored in a unified cache.

Software programs executing on processors P1 and P2 are largely independent, and their virtual addresses are mapped to respective exclusive pages of physical memory. However, the programs do share some data, and at least some addresses are mapped to a shared memory page. To ensure that each processor's L1 cache 104, 108 contains the latest shared data, the shared page has the additional attribute of L1 write-through. Accordingly, any time P1 or P2 updates a shared memory address, the L2 cache 110, as well as the processor's L1 cache 104, 108, is updated. Additionally, the updating processor 102, 106 sends a snoop kill request to the other processor 102, 106, to invalidate a possible corresponding line in the other processor's L1 cache 104, 108. This incurs performance degradation at the receiving processor 102, 106, as explained above.

A snoop request cache 116 caches previous snoop kill requests, and may obviate superfluous snoop kills, improving overall performance. Figure 1 diagrammatically depicts this process. At step 1, processor P1 writes data to a memory location having a shared attribute. As used herein, the term "granule" refers to the smallest cacheable quantum of data in the computer system 100. In most cases, a granule is the smallest L1 cache line size (some L2 caches have segmented lines, and can store more than one granule per line). Cache coherency is maintained on a granule basis.
The shared attribute (or alternatively, a separate write-through attribute) of the memory page containing the granule forces P1 to write its data to the L2 cache 110, as well as its own L1 cache 104.

[0023] At step 2, the processor P1 performs a lookup in the snoop request cache 116. If the snoop request cache 116 lookup misses, the processor P1 allocates an entry in the snoop request cache 116 for the granule associated with P1's store data, and sends a snoop kill request to processor P2 to invalidate any corresponding line (or granule) in P2's L1 cache 108 (step 3). If the processor P2 subsequently reads the granule, it will miss in its L1 cache 108, forcing an L2 cache 110 access, and the latest version of the data will be returned to P2.

If processor P1 subsequently updates the same granule of shared data, it will again perform a write-through to the L2 cache 110 (step 1). P1 will additionally perform a snoop request cache 116 lookup (step 2). This time, the snoop request cache 116 lookup will hit. In response, the processor P1 suppresses the snoop kill request to the processor P2 (step 3 is not executed). The presence of an entry in the snoop request cache 116, corresponding to the granule to which it is writing, assures processor P1 that a previous snoop kill request already invalidated the corresponding line in P2's L1 cache 108, and any read of the granule by P2 will be forced to access the L2 cache 110. Thus, the snoop kill request is not necessary for cache coherency, and may be safely suppressed.

However, the processor P2 may read data from the same granule in the L2 cache 110 - and change its corresponding L1 cache line state to valid - after the processor P1 allocates an entry in the snoop request cache 116. In this case, the processor P1 should not suppress a snoop kill request to the processor P2 if P1 writes a new value to the granule, since that would leave different values in processor P2's L1 cache and the L2 cache. To "enable" snoop kills issued by the processor P1 to reach the processor P2 (i.e., not be suppressed), upon reading the granule at step 4, the processor P2 performs a lookup on the granule in the snoop request cache 116, at step 5. If this lookup hits, the processor P2 invalidates the hitting snoop request cache entry. When the processor P1 subsequently writes to the granule, it will issue a new snoop kill request to the processor P2 (by missing in the snoop request cache 116). In this manner, the two L1 caches 104, 108 maintain coherency for processor P1 writes and processor P2 reads, with the processor P1 issuing the minimum number of snoop kill requests required to do so.

On the other hand, if the processor P2 writes the shared granule, it too must do a write-through to the L2 cache 110. In performing a snoop request cache 116 lookup, however, it may hit an entry that was allocated when processor P1 previously wrote the granule. In this case, suppressing a snoop kill request to the processor P1 would leave a stale value in P1's L1 cache 104, resulting in non-coherent L1 caches 104, 108. Accordingly, in one embodiment, upon allocating a snoop request cache 116 entry, the processor 102, 106 performing the write-through to the L2 cache 110 includes an identifier in the entry. Upon subsequent writes, the processor 102, 106 should only suppress a snoop kill request if a hitting entry in the snoop request cache 116 includes that processor's identifier. Similarly, when performing a snoop request cache 116 lookup upon reading the granule, a processor 102, 106 must only invalidate a hitting entry if it includes a different processor's identifier.
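For illustration only, the store-side and load-side lookups just described may be sketched in software as follows. This is a minimal, hypothetical model - the direct-mapped organization, 32-byte granule, entry count, and the send_snoop_kill() hook are assumptions introduced here, not part of the disclosed hardware.

```c
#include <stdbool.h>
#include <stdint.h>

#define GRANULE_BITS 5                   /* assumed 32-byte granule (L1 line)  */
#define SRC_ENTRIES  256                 /* assumed snoop request cache size   */

typedef struct {
    bool     valid;
    uint32_t tag;                        /* upper bits of the granule address  */
    unsigned src_id;                     /* processor that allocated the entry */
} src_entry_t;

static src_entry_t src_cache[SRC_ENTRIES];

/* Hypothetical hook into the coherency fabric: sends a snoop kill for the
 * granule containing 'addr' on behalf of processor 'from_id'.               */
extern void send_snoop_kill(uint32_t addr, unsigned from_id);

static unsigned src_index(uint32_t addr) { return (addr >> GRANULE_BITS) % SRC_ENTRIES; }
static uint32_t src_tag(uint32_t addr)   { return addr >> GRANULE_BITS; }

/* Write of shared (write-through) data by processor 'me': steps 1-3 of Fig. 1. */
void on_shared_store(unsigned me, uint32_t addr)
{
    src_entry_t *e = &src_cache[src_index(addr)];
    if (e->valid && e->tag == src_tag(addr) && e->src_id == me)
        return;                          /* hit on own entry: suppress the kill */
    e->valid  = true;                    /* miss, or another processor's entry: */
    e->tag    = src_tag(addr);           /* allocate (or overwrite) the entry   */
    e->src_id = me;
    send_snoop_kill(addr, me);           /* and issue the snoop kill (step 3)   */
}

/* Read of shared data by processor 'me': steps 4-5 of Fig. 1. */
void on_shared_load(unsigned me, uint32_t addr)
{
    src_entry_t *e = &src_cache[src_index(addr)];
    if (e->valid && e->tag == src_tag(addr) && e->src_id != me)
        e->valid = false;                /* re-enable kills directed at 'me'    */
}
```

Note that overwriting another processor's entry on a store miss is harmless under the replacement property discussed below: at worst a later, superfluous kill is sent to a processor whose line is already invalid.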
In one embodiment, each cache 116 entry includes an identification flag for each processor in the system that may share data, and processors inspect, set, or clear the identification flags as required upon a cache hit.

The snoop request cache 116 may assume any cache organization or degree of associativity known in the art. The snoop request cache 116 may also adopt any cache entry replacement strategy known in the art. The snoop request cache 116 offers performance benefits if a processor 102, 106 writing shared data hits in the snoop request cache 116 and suppresses snoop kill requests to one or more other processors 102, 106. However, if a valid snoop request cache 116 entry is replaced due to the number of valid entries exceeding available cache 116 space, no erroneous operation or cache non-coherency results - at worst, a subsequent snoop kill request may be issued to a processor 102, 106 for which the corresponding L1 cache line is already invalid.

In one or more embodiments, tags to the snoop request cache 116 entries are formed from the most significant bits of the granule address and a valid bit, similar to the tags in the L1 caches 104, 108. In one embodiment, the "line," or data, stored in a snoop request cache 116 entry is simply a unique identifier of the processor 102, 106 that allocated the entry (that is, the processor 102, 106 issuing a snoop kill request), which may for example comprise an identification flag for each processor in the system 100 that may share data. In another embodiment, the source processor identifier may itself be incorporated into the tag, so a processor 102, 106 will only hit against its own entries in a cache lookup pursuant to a store of shared data. In this case, the snoop request cache 116 is simply a Content Addressable Memory (CAM) structure indicating a hit or miss, without a corresponding RAM element storing data. Note that when performing the snoop request cache 116 lookup pursuant to a load of shared data, the other processors' identifiers must be used.

In another embodiment, the source processor identifier may be omitted, and an identifier of each target processor - that is, each processor 102, 106 to which a snoop kill request has been sent - is stored in each snoop request cache 116 entry. The identification may comprise an identification flag for each processor in the system 100 that may share data. In this embodiment, upon writing to a shared data granule, a processor 102, 106 hitting in the snoop request cache 116 inspects the identification flags, and suppresses a snoop kill request to each processor whose identification flag is set. The processor 102, 106 sends a snoop kill request to each other processor whose identification flag is clear in the hitting entry, and then sets the target processors' flag(s). Upon reading a shared data granule, a processor 102, 106 hitting in the snoop request cache 116 clears its own identification flag in lieu of invalidating the entire entry - clearing the way for snoop kill requests to be directed to it, while kills remain blocked from being sent to other processors whose corresponding cache lines remain invalid.
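The target-flag variant can be sketched in the same illustrative style. Again this is only a hypothetical model: the per-processor bit mask, the sharer-mask argument, and the send_snoop_kill_to() hook are assumptions introduced here.

```c
#include <stdbool.h>
#include <stdint.h>

/* Entry for the target-flag embodiment: one bit per processor that may
 * share data; a set bit records a snoop kill already sent to that CPU.   */
typedef struct {
    bool     valid;
    uint32_t tag;                          /* upper bits of granule address   */
    uint32_t killed_mask;                  /* bit p set => CPU p's line dead  */
} flag_entry_t;

extern void send_snoop_kill_to(uint32_t addr, unsigned cpu);  /* assumed hook */
static uint32_t granule_tag(uint32_t addr) { return addr >> 5; }

/* Write of a shared granule by CPU 'me'; 'e' is the entry selected by the
 * lookup, and 'sharers' is a bit mask of CPUs that may share the page.     */
void on_shared_store_flags(unsigned me, uint32_t addr, uint32_t sharers,
                           flag_entry_t *e)
{
    uint32_t targets = sharers & ~(1u << me);
    if (!(e->valid && e->tag == granule_tag(addr))) {   /* miss: (re)allocate */
        e->valid = true;
        e->tag = granule_tag(addr);
        e->killed_mask = 0;
    }
    uint32_t need = targets & ~e->killed_mask;          /* flags still clear  */
    for (unsigned p = 0; need != 0; p++, need >>= 1)
        if (need & 1u)
            send_snoop_kill_to(addr, p);                /* kill only as needed */
    e->killed_mask |= targets;                          /* set targets' flags  */
}

/* Read of a shared granule by CPU 'me': clear only its own flag, leaving
 * kills to other, still-invalid CPUs suppressed.                            */
void on_shared_load_flags(unsigned me, uint32_t addr, flag_entry_t *e)
{
    if (e->valid && e->tag == granule_tag(addr))
        e->killed_mask &= ~(1u << me);
}
```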
Another embodiment is described with reference to Figure 2, depicting a computer system 200 including a processor P1 202 having an L1 cache 204, a processor P2 206 having an L1 cache 208, and a processor P3 210 having an L1 cache 212. Each L1 cache 204, 208, 212 connects across the system bus 213 to main memory 214. Note that, as evident in Figure 2, no embodiment herein requires or depends on the presence or absence of an L2 cache or any other aspect of the memory hierarchy. Associated with each processor 202, 206, 210 is a snoop request cache 216, 218, 220, 222, 224, 226 dedicated to each other processor 202, 206, 210 (having a data cache) in the system 200 that can access shared data. For example, associated with processor P1 is a snoop request cache 216 dedicated to processor P2 and a snoop request cache 218 dedicated to processor P3. Similarly, associated with the processor P2 are snoop request caches 220, 222 dedicated to processors P1 and P3, respectively. Finally, snoop request caches 224, 226, respectively dedicated to processors P1 and P2, are associated with processor P3. In one embodiment, the snoop request caches 216, 218, 220, 222, 224, 226 are CAM structures only, and do not include data lines.

The operation of the snoop request caches is depicted diagrammatically with a representative series of steps in Figure 2. At step 1, the processor P1 writes to a shared data granule. Data attributes force a write-through of P1's L1 cache 204 to memory 214. The processor P1 performs a lookup in both snoop request caches associated with it - that is, both the snoop request cache 216 dedicated to processor P2, and the snoop request cache 218 dedicated to processor P3 - at step 2. In this example, the P2 snoop request cache 216 hits, indicating that P1 previously sent a snoop kill request to P2 whose snoop request cache entry has not been invalidated or overwritten by a new allocation. This means the corresponding line in P2's L1 cache 208 was (and remains) invalidated, and the processor P1 suppresses a snoop kill request to processor P2, as indicated by a dashed line at step 3a.

[0032] In this example, the lookup of the snoop request cache 218 associated with P1 and dedicated to P3 misses. In response, the processor P1 allocates an entry for the granule in the P3 snoop request cache 218, and issues a snoop kill request to the processor P3, at step 3b. This snoop kill invalidates the corresponding line in P3's L1 cache, and forces P3 to go to main memory on its next read from the granule, to retrieve the latest data (as updated by P1's write).

Subsequently, as indicated at step 4, the processor P3 reads from the data granule. The read misses in its own L1 cache 212 (as that line has been invalidated by P1's snoop kill), and retrieves the granule from main memory 214. At step 5, the processor P3 performs a lookup in all snoop request caches dedicated to it - that is, in both P1's snoop request cache 218 dedicated to P3, and P2's snoop request cache 222, which is also dedicated to P3. If either (or both) of the caches 218, 222 hits, the processor P3 invalidates the hitting entry, to prevent the corresponding processor P1 or P2 from suppressing snoop kill requests to P3 if either processor P1 or P2 writes a new value to the shared data granule.
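As a rough software model of this dedicated-cache arrangement (the direct-mapped CAM array, processor count, granule size, and kill hook are again assumptions introduced only for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

#define NCPUS    3
#define NENTRIES 64

typedef struct { bool valid; uint32_t tag; } cam_entry_t;

/* dedicated[owner][target]: the CAM associated with 'owner' and dedicated
 * to 'target'. A valid, matching entry means 'owner' has already killed
 * that granule's line in 'target''s L1 cache.                             */
static cam_entry_t dedicated[NCPUS][NCPUS][NENTRIES];

extern void send_snoop_kill_to(uint32_t addr, unsigned cpu);  /* assumed hook */
static unsigned idx(uint32_t a) { return (a >> 5) % NENTRIES; }
static uint32_t tag(uint32_t a) { return a >> 5; }

void on_shared_store_dedicated(unsigned me, uint32_t addr)
{
    for (unsigned t = 0; t < NCPUS; t++) {
        if (t == me) continue;
        cam_entry_t *e = &dedicated[me][t][idx(addr)];
        if (e->valid && e->tag == tag(addr))
            continue;                         /* hit: suppress kill to t       */
        e->valid = true;                      /* miss: allocate an entry ...   */
        e->tag   = tag(addr);
        send_snoop_kill_to(addr, t);          /* ... and snoop-kill t's line   */
    }
}

void on_shared_load_dedicated(unsigned me, uint32_t addr)
{
    for (unsigned s = 0; s < NCPUS; s++) {    /* every cache dedicated to 'me' */
        if (s == me) continue;
        cam_entry_t *e = &dedicated[s][me][idx(addr)];
        if (e->valid && e->tag == tag(addr))
            e->valid = false;                 /* re-enable kills from s to me  */
    }
}
```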
Generalizing from this specific example, in an embodiment such as that depicted in Figure 2 - where associated with each processor is a separate snoop request cache dedicated to each other processor sharing data - a processor writing to a shared data granule performs a lookup in each snoop request cache associated with the writing processor. For each one that misses, the processor allocates an entry in the snoop request cache and sends a snoop kill request to the processor to which the missing snoop request cache is dedicated. The processor suppresses snoop kill requests to any processor whose dedicated cache hits. Upon reading a shared data granule, a processor performs a lookup in all snoop request caches dedicated to it (and associated with other processors), and invalidates any hitting entries. In this manner, the L1 caches 204, 208, 212 maintain coherency for data having a shared attribute.

[0035] While embodiments of the present invention are described herein with respect to processors, each having an L1 cache, other circuits or logical/functional entities within the computing system may participate in the cache coherency protocol. Figure 3 depicts an embodiment similar to that of Figure 2, with a non-processor snooping entity participating in the cache coherency protocol. The system 300 includes a processor P1 302 having an L1 cache 304, and a processor P2 306 having an L1 cache 308.

The system additionally includes a Direct Memory Access (DMA) controller 310. As is well known in the art, a DMA controller 310 is a circuit operative to move blocks of data from a source (memory or a peripheral) to a destination (memory or a peripheral) autonomously of a processor. In the system 300, the processors 302, 306 and DMA controller 310 access main memory 314 via the system bus 312. In addition, the DMA controller 310 may read and write data directly from a data port on a peripheral 316. If the DMA controller 310 is programmed by a processor to write to shared memory, it must participate in the cache coherency protocol to ensure coherency of the L1 data caches 304, 308.

Since the DMA controller 310 participates in the cache coherency protocol, it is a snooping entity. As used herein, the term "snooping entity" refers to any system entity that may issue snoop requests pursuant to a cache coherency protocol. In particular, a processor having a data cache is one type of snooping entity, but the term "snooping entity" encompasses system entities other than processors having data caches. Non-limiting examples of snooping entities other than the processors 302, 306 and DMA controller 310 include a math or graphics co-processor, a compression/decompression engine such as an MPEG encoder/decoder, or any other system bus master capable of accessing shared data in memory 314.

[0038] Associated with each snooping entity 302, 306, 310 is a snoop request cache dedicated to each processor (having a data cache) with which the snooping entity may share data. In particular, a snoop request cache 318 is associated with processor P1 and dedicated to processor P2. Similarly, a snoop request cache 320 is associated with processor P2 and dedicated to processor P1.
Associated with the DMA controller 310 are two snoop request caches: a snoop request cache 322 dedicated to processor P1 and a snoop request cache 324 dedicated to processor P2.

[0039] The cache coherency process is depicted diagrammatically in Figure 3. The DMA controller 310 writes to a shared data granule in main memory 314 (step 1). Since either or both processors P1 and P2 may contain the data granule in their L1 cache 304, 308, the DMA controller 310 would conventionally send a snoop kill request to each processor P1, P2. First, however, the DMA controller 310 performs a lookup in both of its associated snoop request caches (step 2) - that is, the cache 322 dedicated to processor P1 and the cache 324 dedicated to processor P2. In this example, the lookup in the cache 322 dedicated to processor P1 misses, and the lookup in the cache 324 dedicated to processor P2 hits. In response to the miss, the DMA controller 310 sends a snoop kill request to the processor P1 (step 3a) and allocates an entry for the data granule in the snoop request cache 322 dedicated to processor P1. In response to the hit, the DMA controller 310 suppresses a snoop kill request that would otherwise have been sent to the processor P2 (step 3b).

Subsequently, the processor P2 reads from the shared data granule in memory 314 (step 4). To enable snoop kill requests directed to itself from all snooping entities, the processor P2 performs a lookup in each cache 318, 324 associated with another snooping entity and dedicated to the processor P2 (i.e., itself). In particular, the processor P2 performs a cache lookup in the snoop request cache 318 associated with processor P1 and dedicated to processor P2, and invalidates any hitting entry in the event of a cache hit. Similarly, the processor P2 performs a cache lookup in the snoop request cache 324 associated with the DMA controller 310 and dedicated to processor P2, and invalidates any hitting entry in the event of a cache hit. In this embodiment, the snoop request caches 318, 320, 322, 324 are pure CAM structures, and do not require processor identification flags in the cache entries.

Note that no snooping entity 302, 306, 310 has associated with it any snoop request cache dedicated to the DMA controller 310. Since the DMA controller 310 does not have a data cache, there is no need for another snooping entity to direct a snoop kill request to the DMA controller 310 to invalidate a cache line. In addition, note that, while the DMA controller 310 participates in the cache coherency protocol by issuing snoop kill requests upon writing shared data to memory 314, upon reading from a shared data granule the DMA controller 310 does not perform any snoop request cache lookup for the purpose of invalidating a hitting entry. Again, this is because the DMA controller 310 lacks any cache for which it must enable another snooping entity to invalidate a cache line upon writing to shared data.

Yet another embodiment is described with reference to Figure 4, depicting a computer system 400 including two processors: P1 402 having L1 cache 404 and P2 406 having L1 cache 408. The processors P1 and P2 connect across a system bus 410 to main memory 412. A single snoop request cache 414 is associated with processor P1, and a separate snoop request cache 416 is associated with processor P2. Each entry in each snoop request cache 414, 416 includes a flag or field identifying a different processor to which the associated processor may direct a snoop request.
For example, entries in the snoop request cache 414 include identification flags for processor P2, as well as any other processors (not shown) in the system 400 with which P1 may share data.

Operation of this embodiment is depicted diagrammatically in Figure 4. Upon writing to a data granule having a shared attribute, the processor P1 misses in its L1 cache 404, and writes through to main memory 412 (step 1). The processor P1 performs a cache lookup in the snoop request cache 414 associated with it (step 2). In response to a hit, the processor P1 inspects the processor identification flags in the hitting entry. The processor P1 suppresses sending a snoop request to any processor with which it shares data and whose identification flag in the hitting entry is set (e.g., P2, as depicted by the dashed line at step 3). If a processor identification flag is clear and the processor P1 shares the data granule with the indicated processor, the processor P1 sends a snoop request to that processor, and sets the target processor's identification flag in the hitting snoop request cache 414 entry. If the snoop request cache 414 lookup misses, the processor P1 allocates an entry, and sets the identification flag for each processor to which it sends a snoop kill request.

[0044] When any other processor performs a load from a shared data granule, misses in its L1 cache, and retrieves the data from main memory, it performs cache lookups in the snoop request caches 414, 416 associated with each processor with which it shares the data granule. For example, processor P2 reads data from a granule it shares with P1 (step 4). P2 performs a lookup in the P1 snoop request cache 414 (step 5), and inspects any hitting entry. If P2's identification flag is set in the hitting entry, the processor P2 clears its own identification flag (but not the identification flag of any other processor), enabling processor P1 to send snoop kill requests to P2 if P1 subsequently writes to the shared data granule. A hitting entry in which P2's identification flag is clear is treated as a cache 414 miss (P2 takes no action).

[0045] In general, in the embodiment depicted in Figure 4 - where each processor has a single snoop request cache associated with it - each processor performs a lookup only in the snoop request cache associated with it upon writing shared data, allocates a cache entry if necessary, and sets the identification flag of every processor to which it sends a snoop request. Upon reading shared data, each processor performs a lookup in the snoop request cache associated with every other processor with which it shares data, and clears its own identification flag from any hitting entry.

[0046] Figure 5 depicts a method of issuing a data cache snoop request, according to one or more embodiments. One aspect of the method "begins" with a snooping entity writing to a data granule having a shared attribute at block 500. If the snooping entity is a processor, the attribute (e.g., shared and/or write-through) forces a write-through of the L1 cache to a lower level of the memory hierarchy. The snooping entity performs a lookup on the shared data granule in one or more snoop request caches associated with it at block 502. If the shared data granule hits in the snoop request cache at block 504 (and, in some embodiments, the identification flag for a processor with which it shares data is set in the hitting cache entry), the snooping entity suppresses a data cache snoop request for one or more processors and continues.
For the purposes of Figure 5, it may "continue" by subsequently writing another shared data granule at block 500, reading a shared data granule at block 510, or performing some other task not pertinent to the method. If the shared data granule misses in a snoop request cache (or, in some embodiments, it hits but a target processor identification flag is clear), the snooping entity allocates an entry for the granule in the snoop request cache at block 506 (or sets the target processor identification flag), sends a data cache snoop request to a processor sharing the data at block 508, and continues.

[0047] Another aspect of the method "begins" when a snooping entity reads from a data granule having a shared attribute. If the snooping entity is a processor, it misses in its L1 cache and retrieves the shared data granule from a lower level of the memory hierarchy at block 510. The processor performs a lookup on the granule in one or more snoop request caches dedicated to it (or whose entries include an identification flag for it) at block 512. If the lookup misses in a snoop request cache at block 514 (or, in some embodiments, the lookup hits but the processor's identification flag in the hitting entry is clear), the processor continues. If the lookup hits in a snoop request cache at block 514 (and, in some embodiments, the processor's identification flag in the hitting entry is set), the processor invalidates the hitting entry at block 516 (or, in some embodiments, clears its identification flag), and then continues.

If the snooping entity is not a processor with an L1 cache - for example, a DMA controller - there is no need to access the snoop request cache to check for and invalidate an entry (or clear its identification flag) upon reading from a data granule. Since the granule is not cached, there is no need to clear the way for another snooping entity to invalidate or otherwise change the cache state of a cache line when the other entity writes to the granule. In this case, the method continues after reading from the granule at block 510, as indicated by the dashed arrows in Figure 5. In other words, the method differs with respect to reading shared data, depending on whether or not the snooping entity performing the read is a processor having a data cache.

[0049] According to one or more embodiments described herein, performance in multi-processor computing systems is enhanced by avoiding the performance degradation associated with the execution of superfluous snoop requests, while maintaining L1 cache coherency for data having a shared attribute. Various embodiments achieve this enhanced performance at a dramatically reduced cost in silicon area, as compared with the duplicate tag approach known in the art. The snoop request cache is compatible with, and provides enhanced performance benefits to, embodiments utilizing other known snoop request suppression techniques, such as software-defined snooper groups and filtering at an L2 cache that is fully inclusive of the L1 caches it backs. The snoop request cache is compatible with store gathering, and in such an embodiment may be of a reduced size, due to the lower number of store operations performed by the processor.
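The split behavior of Figure 5 - snooping entities with and without a data cache - can be summarized in one more short sketch. The snooper_t record and the memory hooks are hypothetical; on_shared_store_dedicated() and on_shared_load_dedicated() refer to the dedicated-cache sketch given earlier.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    unsigned id;
    bool     has_data_cache;       /* false for, e.g., a DMA controller    */
} snooper_t;

/* Hooks from the earlier dedicated-cache sketch, plus an assumed memory API. */
extern void     on_shared_store_dedicated(unsigned me, uint32_t addr);
extern void     on_shared_load_dedicated(unsigned me, uint32_t addr);
extern void     write_through_to_memory(uint32_t addr);
extern uint32_t read_from_memory(uint32_t addr);

/* Blocks 500-508: write through, then filter the snoop kill requests. */
void write_shared_granule(const snooper_t *s, uint32_t addr)
{
    write_through_to_memory(addr);
    on_shared_store_dedicated(s->id, addr);
}

/* Blocks 510-516: only a cache-equipped snooper performs the read-side
 * lookup; a cache-less entity takes the dashed-arrow path of Figure 5.  */
void read_shared_granule(const snooper_t *s, uint32_t addr)
{
    (void)read_from_memory(addr);
    if (s->has_data_cache)
        on_shared_load_dedicated(s->id, addr);
}
```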
[0050] While the discussion above has been presented in terms of a write-through L1 cache and suppressing snoop kill requests, those of skill in the art will recognize that other cache writing algorithms and concomitant snooping protocols may advantageously utilize the inventive techniques, circuits, and methods described and claimed herein. For example, in a MESI (Modified, Exclusive, Shared, Invalid) cache protocol, a snoop request may direct a processor to change the cache state of a line from Exclusive to Shared.

The present invention may, of course, be carried out in other ways than those specifically set forth herein without departing from essential characteristics of the invention. The present embodiments are to be considered in all respects as illustrative and not restrictive, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein. |
A supporting structure is wafer-bonded to the upper face side of a partially or fully processed device wafer. The device wafer includes a transistor having a well region that extends into the substrate material of the device wafer. The source and drain regions of the transistor extend into the well region. After attachment of the supporting structure, the device wafer is thinned from the back side until the bottom of the well region is reached. To reduce source and drain junction capacitances, etching can continue until the source and drain regions are reached. In one embodiment, all of the well-to-substrate junction is removed in a subsequent etching step, thereby reducing or eliminating the well-to-substrate junction capacitance of the resulting transistor. Resistance between the well electrode and the transistor channel is reduced because the well contact is disposed on the back side of the device wafer directly under the gate of the transistor. |
1. A method, comprising:(a) attaching a supporting structure to a face side of a device wafer, a transistor being disposed on the face side of the device wafer, the transistor comprising a gate, a source region of a first conductivity type and a drain region of the first conductivity type, the source and drain regions extending into a region of the device wafer having a second conductivity type opposite the first conductivity type, the device wafer also having a substantially planar back side surface, a layer of the device wafer being disposed between the source and drain regions and the back side surface of the device wafer; (b) processing the back side surface of the device wafer and thereby removing the layer of the device wafer such that a portion of the source region is exposed and such that a portion of the drain region is exposed; and (c) removing an additional amount of the device wafer without removing the source and drain regions such that substantially all of the region of the device wafer of the second conductivity type that is in contact with the source region that remains is disposed between the source region and the drain region. 2. The method of claim 1, wherein a second transistor is disposed on the face side of the device wafer, the second transistor having a source region of the second conductivity type and a drain region of the second conductivity type, wherein after the removing of step (c) the drain region of the second conductivity type of the second transistor is in contact with the drain region of the first conductivity type.3. The method of claim 1, wherein substantially all of the region of the device wafer of the second conductivity type that is in contact with the drain region that remains after step (c) is disposed between the source region and the drain region.4. The method of claim 1, wherein the layer is removed in step (b) by chemical mechanical polishing (cmp).5. The method of claim 1, wherein after step (c) a key-shaped feature of the region of the device wafer of the second conductivity type remains, the key-shaped feature having a contact portion and a channel portion, the channel portion contacting the source region and contacting the drain region.6. The method of claim 5, wherein prior to step (a) the device wafer comprises a metal conductor, and wherein after step (c) the remaining key-shaped feature of the second conductivity type is electrically coupled to the metal conductor.7. The method of claim 5, wherein the key-shaped feature and the source region and the drain region together form an island of semiconductor material.8. The method of claim 1, wherein after step (c) the supporting structure remains attached to the face side of the device wafer such that the supporting structure and the device wafer together are a supporting structure/device wafer assembly, the method further comprising:(d) cutting the supporting structure/device wafer assembly into a plurality of integrated circuit dice. 9. The method of claim 1, wherein the region of the device wafer of the second conductivity type into which the source and drain regions extend is a well region, and wherein prior to step (a) the well region extends into a substrate region of the device wafer, the substrate region being of the first conductivity type.10. The method of claim 1, wherein after step (c) there remains no portion of the substrate region in contact with any portion of the well region. |
FIELD OF THE INVENTION

The present invention relates to bond and etchback semiconductor-on-insulator (BESOI) semiconductor processing technology and related structures.

BACKGROUND INFORMATION

FIG. 1 (Prior Art) is a cross-sectional diagram of a conventional complementary metal oxide semiconductor (CMOS) transistor structure 1 often used in contemporary ultra large scale integration. The diagram is simplified to better illustrate the related issues. Structure 1 includes a P-channel transistor 2 having a source region 3, a drain region 4 and a gate 5. A channel region exists between source region 3 and drain region 4. Source region 3 and drain region 4 extend into an N-type well region 6.

The structure also includes an N-channel transistor 7 having a source region 8, a drain region 9 and a gate 10. A channel region exists between source region 8 and drain region 9. Source region 8 and drain region 9 extend into a P-type well region 11. Well regions 6 and 11 are diffused into a bulk semiconductor substrate 12. Bulk substrate 12 in this case is monocrystalline silicon of a silicon wafer. In this example, well region 11 is reverse biased with respect to substrate 12. Each of the wells and the substrate is provided with a contact so that the wells and substrate can be maintained at the appropriate potentials. Above the upper surface 13 (sometimes called the "face side") of the semiconductor wafer are multiple interleaved layers of metallization and insulation (not shown). The metallization layers interconnect the various transistors to form a desired integrated circuit.

In MOS transistors such as the transistors of FIG. 1, switching speed is limited by the time required to charge and discharge the capacitances between device electrodes. If parasitic capacitances between the device electrodes can be reduced, then device speed can be increased. In each of the two transistors of FIG. 1, there exists a junction capacitance between the source region and the well region, a junction capacitance between the drain region and the well region, and a junction capacitance between the well region and the substrate. A process is desired that reduces these capacitances and therefore speeds transistor operation.

In addition to transistors 2 and 7 of FIG. 1 being slowed by the presence of parasitic junction capacitances, the performance of transistors 2 and 7 also suffers due to a resistance existing between the well contact and the channel of each transistor. Radiation such as alpha particles can penetrate into the semiconductor material of the transistors. Each alpha particle generates electron-hole pairs along its path as it passes into the semiconductor material of the device. If, for example, the electron-hole pairs are generated in a portion of the semiconductor material in which an electric field is present (for example, due to the reverse bias of a well-to-substrate junction), then the electrons and holes may be separated by the electric field. The resulting current is then typically drawn out of the well region through the well contact. One such alpha particle may, for example, generate one million such electron-hole pairs. If the current path of the resulting current passes under the channel on its way from the well-to-substrate junction to the well contact, then a momentary voltage drop will exist across the current path due to the resistance of the well under the channel. This momentary voltage may affect the threshold voltage of the transistor or otherwise affect transistor operation.
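For illustration only - these are standard device-physics relations, not part of the original disclosure - both effects can be quantified. The depletion capacitance of a reverse-biased junction of area $A$ scales with that area, and the disturbance seen under the channel is an ohmic drop:

$$C_j \;=\; \frac{A\,C_{j0}}{\left(1 + V_R/\phi_0\right)^{m}}, \qquad m \approx \tfrac{1}{2}\ \text{(abrupt junction)}, \qquad \Delta V \;\approx\; I_{\mathrm{collected}}\,R_{\mathrm{well}}.$$

Removing the bottom of the well-to-substrate junction shrinks $A$ (and hence $C_j$), while placing the well contact directly under the channel shrinks $R_{\mathrm{well}}$ - the two levers the processes described below act on.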
In addition to currents flowing past the channel region of a transistor due to alpha particles, the normal switching of the transistors can also cause undesirable currents to flow in the transistor structure of FIG. 1. A first junction capacitance exists between the well region of the N-channel transistor and the substrate. A second junction capacitance exists between the substrate and the well region of the P-channel transistor. These capacitances are oriented in series with one another. Consider the situation in which the drains of the N-channel and P-channel transistors are coupled together so that the transistors form an inverter. As the transistors switch, the voltages on the drains of the transistors change, thereby causing small local changes in the voltages in the well regions. The result is current flow in a current path formed by the series-coupled capacitances. This current through the well resistance, like the current due to alpha particles, may cause momentary voltage changes as the current flows through the resistance of the well region underneath a transistor channel. Such voltage fluctuations may adversely affect transistor operation.

These and other problems exist due to resistances and junction capacitances of the structure.

Using silicon-on-insulator (SOI) processing technology, transistors can be fabricated in a thin semiconductor layer that is supported by, and insulated from, an underlying supporting substrate. In one so-called "bond and etchback" SOI (BESOI) device architecture, an insulating layer is formed over a device wafer. Etch stops are formed into the surface of the device wafer. A supporting "handle" wafer is then bonded to the insulating layer of the device wafer, and the back side of the device wafer is thinned in a planar fashion using a thinning technique until the etch stops are reached. Chemical mechanical polishing (CMP) may be used to perform this thinning. The resulting structure is a very thin layer of the device wafer that is insulated from the underlying supporting substrate by the insulating layer. Transistors are then formed into this thin layer of the device wafer. Because the transistors do not have well regions that extend into the underlying supporting substrate, the transistors do not have the associated junction capacitances. Commonly acknowledged advantages of BESOI devices include: 1) less junction capacitance, so higher speed can be achieved; 2) reduced susceptibility to problems caused by radiation such as alpha particles; and 3) better isolation between transistors and increased freedom from latchup.

Although such BESOI techniques exist, the transistors nonetheless still suffer from an amount of junction capacitance. Moreover, the resistance of the well material in the area underneath the channel is still present. Current through this area can still cause voltages that have undesirable influences on transistor operation. Susceptibility of the transistors to single event upsets, although reduced, still remains. In addition to the well contacts involving a resistance, they also occupy an amount of area on the surface of the SOI wafer.

An improved processing technology is desired.

SUMMARY

A supporting structure such as a silicon wafer is wafer-bonded to the upper face side of a partially processed or fully processed device wafer. The device wafer includes a field effect MOS transistor.
The field effect transistor includes a well region that extends into the substrate material of the device wafer. The source region and drain region of the field effect transistor extend at least partly into the well region.

After attachment of the supporting structure, the device wafer is thinned from the back side of the device wafer until the bottom of the well region is exposed. A well contact region is then ion implanted into the exposed bottom surface of the well region, and a metal electrode is formed to make contact to the well region from the back side of the device wafer. The resulting transistor structure has a reduced amount of well-to-substrate parasitic junction capacitance because the well-to-substrate junction area that would otherwise have existed on the bottom of the well region has been removed. Resistance between the well contact and the channel region of the transistor is reduced because the well contact is disposed close to the channel region, directly under the gate of the transistor.

In another embodiment, the substrate region of a device wafer is thinned from the back side until the bottom of the well region is exposed. All the substrate material disposed underneath the well region is therefore removed. A subsequent etching step is then performed to etch away all remaining portions of the substrate region (for example, between transistors). The result is islands of well material. There is little or no well-to-substrate material interface because all or substantially all of the substrate material is removed. The associated parasitic well-to-substrate junction capacitance is therefore eliminated or reduced. In one embodiment, contact is made to the well regions by metal that extends down from the top of the source regions and across the bottoms of the well regions to well contact regions disposed directly underneath the gates of the transistors on the back side of the device wafer.

In another embodiment, the device wafer is not merely thinned from the back side to the point of exposing the well region of the transistor. Rather, the device wafer is thinned from the back side until the bottoms of the source and drain regions of the transistor are reached. Only a small amount of the well region remains. This amount of well region material is disposed principally between the source region and the drain region. Accordingly, the associated source-to-well and drain-to-well junction capacitances are reduced. Contact is made to the narrow amount of remaining well material between the source region and the drain region by leaving a relatively large contact portion of the well material in contact with the narrow portion of the well material. The remaining well material therefore has a key-shaped structure. The wide part of the key-shaped structure is the contact portion. The narrow portion of the key-shaped structure is the narrow channel portion.

In one embodiment, a metal well electrode in the interconnect portion of the device wafer makes electrical connection with the key-shaped well structure via the contact portion of the key-shaped well region. The contact portion of the key-shaped well region is therefore coupled in a vertical direction to the well electrode in the overlying interconnect portion of the device wafer.

In another embodiment, metal is deposited and patterned onto the back side of the device wafer to make a well electrode that contacts the remaining well region from the back side of the device wafer.
A bias voltage is placed onto this well electrode from a source disposed on the back side of the device wafer, as opposed to being supplied from a well electrode disposed in the interconnect portion of the device wafer. By placing the well electrode on the back side of the device wafer, space on the upper device side of the device wafer that would otherwise be used for the well contact is now usable for other purposes such as, for example, achieving closer component spacing.

In another embodiment, the narrow portion of the key-shaped well structure between the source region and the drain region is thinned from the back side so that it is thinner than the adjacent source and drain regions. After thinning, the source region, drain region and thinned well material are oxidized to form a thin thermal oxide. An area of the thin thermal oxide is then removed to form a second gate contact area. Metal is then deposited on the thin thermal oxide and is patterned to form a second gate electrode. Metal of the second gate electrode makes electrical contact with the electrode of the first gate through the second gate contact region. The resulting double gate transistor structure has substantially no substrate-to-well junction capacitance because all or substantially all of the substrate-to-well junction has been removed. The resulting double gate transistor has very little source-to-well or drain-to-well junction capacitance because the well has been thinned and patterned such that the only contact between the well material and the source and drain is in the narrow channel region between the source region and the drain region. The threshold voltage of each of the channels of the resulting double gate transistor can be adjusted to improve subthreshold leakage of the double gate transistor.

By eliminating the substrate material altogether, by reducing the size of the drain regions as compared to the source regions, and/or by placing the well electrodes and associated contacts on the back side of the device wafer, less device wafer surface area is required to fabricate the transistors of the present invention as compared to the transistors of the conventional structure of FIG. 1. Closer component spacing is therefore possible without reducing the minimum feature size achievable with the semiconductor fabrication process used and without any reduction in minimum lithography dimensions.

Other structures and methods are described in the detailed description below. This summary does not purport to define the invention. The invention is defined by the claims.

BRIEF DESCRIPTION OF THE FIGURES

FIG. 1 (Prior Art) is a simplified cross-sectional diagram of a conventional CMOS field effect transistor structure.

FIG. 2 is a simplified cross-sectional diagram of a device wafer having a polish stop in accordance with a step in a first method.

FIG. 3 is a simplified cross-sectional diagram of the device wafer of FIG. 2 after a supporting structure has been wafer-bonded to the face side surface of the device wafer in accordance with the first method.

FIG. 4 is a simplified cross-sectional diagram of a subsequent step wherein the back side of the device wafer is thinned to remove a layer of substrate material and to expose a bottom portion of well regions in accordance with the first method.

FIG. 5 is a simplified cross-sectional diagram of a device wafer in accordance with a step in a second method.

FIG. 6 is a simplified cross-sectional diagram of the device wafer of FIG. 5 after a supporting structure has been wafer-bonded to the face side surface of the device wafer in accordance with the second method.
FIG. 7 is a simplified cross-sectional diagram of a subsequent step wherein the back side of the device wafer is thinned to remove a layer of substrate material and to expose a bottom portion of well regions in accordance with the second method.

FIG. 8 is a simplified cross-sectional diagram of a subsequent step wherein all of the remaining substrate material is removed so that substantially no well-to-substrate junction capacitance remains in accordance with the second method.

FIG. 9 is a simplified cross-sectional diagram of a subsequent step wherein well electrodes are placed on the exposed bottom surfaces of the well regions in accordance with the second method.

FIG. 10 is a simplified diagram of the back side of the device wafer showing columns of strip-shaped well electrodes in accordance with the second method.

FIG. 11 is a simplified diagram of a structure wherein the well region of a P-channel transistor contacts the well region of an N-channel transistor in accordance with the second method.

FIG. 12 is a simplified cross-sectional diagram of a device wafer in accordance with a step in a third method.

FIG. 13 is a simplified cross-sectional diagram of the device wafer of FIG. 12 after a supporting structure has been wafer-bonded to the face side surface of the device wafer in accordance with the third method.

FIG. 14 is a simplified cross-sectional diagram of a subsequent step wherein the back side of the device wafer is thinned to remove a layer of substrate material and to expose a bottom portion of source and drain regions in accordance with the third method.

FIG. 15 is a diagram showing what areas of the transistor structure of FIG. 14 will be masked in a subsequent etching step in accordance with the third method.

FIG. 16 is a simplified cross-sectional diagram of a subsequent step wherein substantially all of the semiconductor material is etched away but for the source region, the drain region, and a key-shaped channel region in accordance with the third method.

FIG. 17 is a simplified top-down diagram of the back side of the device wafer showing the resulting transistor structure in accordance with the third method.

FIG. 18 is a simplified cross-sectional diagram taken along sectional line A-A of FIG. 17.

FIG. 19 is a simplified cross-sectional diagram taken along sectional line B-B of FIG. 17.

FIG. 20 is a simplified top-down diagram of the back side of the device wafer showing which area will be etched in order to thin the channel region in a step in accordance with a fourth method.

FIG. 21 is a simplified top-down diagram of the back side of the device wafer showing the resulting double gate transistor structure having a thinned channel region in accordance with the fourth method.

FIG. 22 is a simplified cross-sectional diagram taken along sectional line A-A of FIG. 21.

FIG. 23 is a simplified cross-sectional diagram taken along sectional line B-B of FIG. 22.

DETAILED DESCRIPTION

FIG. 2 is a simplified cross-sectional diagram of a device wafer 100 in an initial step of a first method. Device wafer 100 includes a semiconductor wafer portion 101 and an overlying interconnect portion 102. Device wafer 100 has an upper face side surface 103 and a back side surface 104.
Device wafer 100 is, in this example, a wafer of monocrystalline silicon.

A first transistor 105 and a second transistor 106 are formed on and into an upper surface 107 of the semiconductor wafer portion 101 of device wafer 100. First transistor 105 is a P-channel MOS field effect transistor having a P-type source region 108, a P-type drain region 109, and a gate 110. A channel region exists between source region 108 and drain region 109. A thin thermal oxide gate insulating layer 111 separates gate 110 from the underlying channel region. The source and drain regions 108 and 109 are regions of diffusion that extend down into an N-type well region 112. N-type well region 112 is made smaller in the lateral dimensions than the N-well region of the conventional structure of FIG. 1 because no well contact or well electrode is provided on the upper surface 107. The area on upper surface 107 that would otherwise have been consumed by a well contact and well electrode is usable for other purposes. N-type well region 112 extends down into a less heavily doped N-type substrate region 113 of the bulk semiconductor material of device wafer 100.

Second transistor 106 is an N-channel MOS field effect transistor of similar construction to P-channel transistor 105, except that the component regions of N-channel transistor 106 are of opposite conductivity types. The regions that are N-type in transistor 105 are P-type in transistor 106, and regions that are P-type in transistor 105 are N-type in transistor 106.

N-channel transistor 106 has an N-type source region 114, an N-type drain region 115, and a gate 116. A channel region exists between source region 114 and drain region 115. A thin thermal oxide gate insulating layer 117 separates gate 116 from the underlying channel region. The source and drain regions 114 and 115 are regions of diffusion that extend down into a P-type well region 118. P-type well region 118 in turn extends down into the substrate region 113 of the bulk semiconductor material of device wafer 100. The P-type well region 118 is made smaller than the P-well region of the conventional structure of FIG. 1 because no well contact or well electrode is provided on the upper surface 107. The surface area on upper surface 107 that would otherwise have been consumed by a well contact and well electrode is usable for other purposes such as, for example, placing the transistors 105 and 106 closer together.

A highly doped N+ substrate contact region 119 is provided on upper surface 107. The associated substrate contact electrode is omitted from the diagram. The substrate contact region 119 is used to reverse bias the well regions 112 and 118 with respect to substrate region 113. Substrate region 113 in this case is lightly doped N minus minus (denoted N--) with respect to the more heavily doped N minus (denoted N-) N-type well region 112. In the structure of FIG. 2, a layer 120 of substrate region 113 is disposed between the bottom extent of the well regions 112 and 118 and the back side surface 104 of device wafer 100. Although not illustrated in the diagram, source and drain electrodes are provided to make electrical contact with the source and drain regions of transistors 105 and 106 in conventional fashion.

A polish stop structure 121 (sometimes loosely termed an "etch stop" structure) extends down into device wafer 100 from surface 107 to a predetermined depth.
Etch stop 121 may, for example, be formed by reactive ion etching (RIE) a hole or trench of a predetermined depth and then filling the hole or trench with metal or an oxide. The depth of the polish stop is deeper than the bottom extent of the source and drain regions 108, 109, 114 and 115 of transistors 105 and 106 but is shallower than the bottom extent of well regions 112 and 118.
Device wafer 100 is a partially processed or fully processed wafer in that the many transistors of the wafer are interconnected in a desired manner by interleaved layers of metal and insulation (not shown). These metal and insulator layers are disposed in region 102 above upper surface 107. Surface 103 represents the upper surface of the partially processed or fully processed device wafer. Upper surface 103 may, for example, be the upper surface of a smooth planarized layer of deposited oxide. The deposited oxide may, for example, be TEOS (tetraethoxysilane) or BPSG (borophosphosilicate glass) that is deposited and then planarized by chemical mechanical polishing. Care is taken to ensure that the upper surface 103 of this planarized layer is parallel with respect to the upper surface 107 of semiconductor wafer portion 101.
FIG. 3 shows a subsequent step in accordance with the first method. A supporting structure 122 is attached to upper surface 103 of device wafer 100. Supporting structure 122 may, for example, be a silicon wafer (sometimes called a "handle" wafer) that is covalently bonded to device wafer 100 using conventional wafer-bonding techniques. Alternatively, a large number of small non-oxidized aluminum posts can be provided both on the upper surface 103 of device wafer 100 as well as on the bottom surface of supporting structure 122. Each of the aluminum posts on device wafer 100 contacts a corresponding one of the aluminum posts on supporting structure 122 when the device wafer and the supporting structure are brought together, so that the posts of each pair of contacting posts cold weld together and thereby bond supporting structure 122 to device wafer 100. For additional details on a technique for bonding a supporting structure to a device wafer using aluminum posts, see: U.S. patent application Ser. No. 10/405,789, entitled "Stacked Die Bonded To Aluminum Posts", by Robert O. Conn, filed Apr. 1, 2003, the subject matter of which is incorporated herein by reference. Other suitable wafer bonding techniques can also be used to attach supporting structure 122 to device wafer 100.
Next, device wafer 100 is thinned from its back side surface 104 so that layer 120 of the substrate semiconductor material is removed. Removing layer 120 results in a portion of well region 112 and a portion of well region 118 being exposed. In the present embodiment, layer 120 is removed by chemical mechanical polishing (CMP) of the back side of the device wafer 100 until etch stop 121 is reached. An optional plasma etch is then performed to further smooth the ground-down back side surface. The resulting thinned device wafer 100 may, for example, be approximately 20 microns thick. Well regions 112 and 118 appear as islands surrounded by N-type substrate material 113 when the thinned device wafer 100 is viewed from back side surface 104.
Next, well contact diffusion regions 123 and 124 are ion implanted into well regions 112 and 118, respectively, from the back side of device wafer 100. The dopants in the well contact regions are activated.
Metal is deposited onto the back side surface of device wafer 100 and is patterned to form metal well contact electrodes 125 and 126.
In one embodiment, the distance between the source and drain regions is approximately 0.1 microns, the depth of the source region and the drain region is approximately 0.5 microns, the depth of the well regions after thinning is approximately one micron, and the distance in the vertical dimension between the top of the well contact region and the bottom of the source and drain regions is slightly less than one micron.
Parasitic junction capacitances between well region 112 and substrate region 113 and between well region 118 and substrate region 113 are reduced in comparison to the structure of FIG. 1 because the bottoms of the well regions are no longer in contact with substrate material. The attendant semiconductor junction is therefore no longer present.
After the well contact regions 123 and 124 and well electrodes 125 and 126 are fabricated, the bonded supporting structure and device wafer assembly is diced into individual integrated circuit dice. The supporting structure portion of each integrated circuit die supports its associated 20 micron thin portion of the device wafer.
A plurality of P-channel transistors can be disposed in a row in a single island of well region 112 such that a single strip-shaped well electrode runs underneath the channel regions of all the transistors of the row and makes contact to the well region directly underneath the channel of each P-channel transistor in the row. Similarly, a plurality of N-channel transistors can be disposed in a row in a single island of well region 118 such that a single strip-shaped well electrode runs underneath the channel regions of all the transistors of the row.
In each of transistors 105 and 106, the well contact and well electrode are located directly underneath the channel region. Accordingly, the resistance between the channel region and the well electrode is reduced in comparison to the conventional structure of FIG. 1. Problems encountered in the conventional structure of FIG. 1 due to the well contacts being located farther away from the channel regions are therefore reduced or eliminated. In one embodiment, the resulting integrated circuit die is mounted in an integrated circuit package face down, flip-chip style. The back side of the device wafer 100 is left exposed to air in the cavity of the integrated circuit package or is covered with a layer of passivation.
FIG. 5 is a simplified cross-sectional diagram of a device wafer 200 in an initial step of a second method. As in the case of device wafer 100 of FIG. 2, device wafer 200 of FIG. 5 includes a semiconductor wafer portion 201 and an overlying interconnect portion 202. A P-channel field effect transistor 203 including a P-type source region 204, a P-type drain region 205, and a gate 206 is disposed in an N-type well region 207. Similarly, an N-channel field effect transistor 208 including an N-type source region 209, an N-type drain region 210, and a gate 211 is disposed in a P-type well region 212. The well regions 207 and 212 extend into the bulk lightly doped N-type (N minus minus) semiconductor substrate material 213 of semiconductor wafer portion 201.
Metal polish stop structures 214 and 215 extend down into device wafer 200 from the upper surface 216 of the semiconductor wafer portion 201.
The polish stop structures extend past the depth of the source and drain regions of transistors 203 and 208 but do not extend to the depth of well regions 207 and 212. Polish stop structure 215 is coupled to source region 204 of transistor 203 by a portion of metal 217. Polish stop structure 214 is coupled to source region 209 of transistor 208 by a portion of metal 218.
The semiconductor design rules applicable to the particular CMOS process used to fabricate transistors 203 and 208 may require that a certain distance be provided between the edge of a contact to a diffusion region and any adjacent diffusion-to-diffusion boundary. This design rule would typically require that source region 204 be wide enough to accommodate the source electrode contact as well as the extra lateral space required by the design rule. In the embodiment of FIG. 5, however, the extra lateral space between the source region contact edge and the leftmost edge of the source region is not provided, because metal 217 extends over and contacts the upper surface of well region 207. Accordingly, the transistor structure is made smaller in the lateral dimension.
Similarly, polish stop structure 214 in the N-channel transistor 208 is coupled to source region 209 by metal 218. Transistor 208 is made smaller in the lateral dimension because the extra space typically required by design rules between the rightmost edge of the source contact and the source-to-well boundary to the right is not provided. Rather, metal 218 extends over and contacts the upper surface of well region 212 in this area.
FIG. 6 illustrates a subsequent step in the second method. Upper surface 219 of device wafer 200 is planarized and smoothed so that surface 219 is parallel with upper surface 216 of semiconductor wafer portion 201. A supporting structure 220 such as, for example, a silicon wafer is attached to upper surface 219 of device wafer 200. Supporting structure 220 may, for example, be covalently wafer-bonded to upper surface 219 of device wafer 200. A layer 221 of substrate region 213 is disposed between the bottom extent of well regions 207 and 212 and the back side surface 222 of device wafer 200.
FIG. 7 illustrates a subsequent step wherein device wafer 200, with the supporting structure 220 attached, is thinned from its back side surface 222 until layer 221 is removed and until the polish stops 215 and 214 are reached. The result is that bottom portions of well regions 207 and 212 are exposed. Well regions 207 and 212 then appear as islands surrounded by substrate material 213 when device wafer 200 is viewed from the back side.
FIG. 8 illustrates a subsequent step wherein the remaining portion of substrate region 213 is removed. To do this, the back side of device wafer 200 may be patterned with photoresist and etched such that substantially no substrate material of substrate region 213 remains in contact with either well region 207 or well region 212. Accordingly, there is substantially no well-region-to-substrate capacitance in either of the transistors 203 or 208 because there remains no portion of the substrate material 213 in contact with a well region.
FIG. 9 illustrates a subsequent step wherein an N-type well ohmic contact region 223 is formed into the bottom exposed surface of well region 207 directly underneath the channel region of transistor 203. Well contact region 223 may, for example, be formed by ion implanting N-type dopants into the bottom exposed surface of well region 207.
Similarly, a P-type well ohmic contact region 224 is formed on the bottom exposed surface of well region 212 directly underneath the channel region of transistor 208. Metal is then deposited over the back side surface of device wafer 200 and is patterned to form well electrodes 225 and 226. Well electrode 225 extends laterally from the bottom extent of polish stop 215 to make contact with the well contact region 223 directly underneath the channel region of transistor 203. Well electrode 226 extends laterally from the bottom extent of polish stop 214 to make contact with the well contact region 224 directly underneath the channel region of transistor 208.
FIG. 10 is a view of device wafer 200 when viewed from the back side. Alternating columns of P-channel transistors and N-channel transistors extend from left to right across the wafer. Current flow through each transistor extends from left to right. N-type well region 207 is a vertically oriented strip-like island. P-type well region 212 is a vertically oriented strip-like island. The gap 227 between the well regions 207 and 212 contains no semiconductor material. Gap 227 may, for example, be filled with air or an insulator such as silicon oxide or silicon nitride. By providing an air gap, source-to-drain punch-through immunity is improved.
The drains of the various P-channel and N-channel transistors can be coupled together by metal (not shown) to form logic elements or other circuit components. Because there remains no well-to-substrate junction capacitance in the transistors, the transistors do not suffer any loss in switching speed due to such a capacitance. Susceptibility to problems due to alpha particles is reduced because the reverse biased well-to-substrate junction is not present to separate electron and hole pairs.
Although the diagram of FIG. 9 shows there being a region 228 of well material disposed between drain region 205 and gap 227, and although the diagram of FIG. 9 shows there being a region 229 of well material disposed between drain region 210 and gap 227, these regions 228 and 229 need not be left remaining in the finished transistor structure. In one embodiment, gap 227 is made wider so that these regions 228 and 229 are etched away. The result is that the drain regions of transistors 203 and 208 have a reduced amount of drain-to-well junction area. The parasitic drain-to-well capacitance of transistors 203 and 208 is therefore reduced.
FIG. 11 shows an alternative structure wherein N-type well 207 is made to contact P-type well 212 in the initial step of FIG. 5. The resulting transistor structure therefore has no gap 227 between the N-type well region 207 and the P-type well region 212; rather, the two well regions contact one another as illustrated in FIG. 11. Where the drain regions of transistors 203 and 208 are coupled together in the resulting integrated circuit, the drain regions 205 and 210 are coupled together with metal 230. Because the metal extends from drain region 205 across the surface 216 of the semiconductor material to drain region 210, the space reserved between the metal-to-diffusion contact and the edge of the diffusion need not be provided in the area between the two drain regions 205 and 210. Drain regions 205 and 210 can therefore be placed closer together than design rules would otherwise permit. Closer component spacing is therefore possible without changing the critical dimension that can be achieved with the photolithographic process used.
FIG. 12 is a simplified cross-sectional diagram of a device wafer 300 in an initial step of a third method. In the structure of FIG. 12, the N-type well region 301 of P-channel transistor 302 does not extend beyond the lateral boundary of the source and drain regions 303 and 304 in the dimension shown in FIG. 12. Similarly, the P-type well region 305 of N-channel transistor 306 does not extend beyond the lateral boundary of the source and drain regions 307 and 308 in the dimension shown in FIG. 12. The drain regions 304 and 308 are made smaller in the illustrated lateral dimension because space need not be reserved between the edge of the metal-to-drain-region contact and the edge of the drain diffusion between the two transistors. As can be seen from the diagram of FIG. 12, the drain region 304 is smaller in the illustrated lateral dimension than is source region 303, and drain region 308 is smaller in the illustrated lateral dimension than source region 307. Metal 309 couples drain regions 304 and 308 together. A polish stop 310 is optionally provided. Polish stop 310 is slightly shallower than the depth of source and drain regions 303, 304, 307 and 308.
FIG. 13 illustrates a subsequent step in the third method in which a supporting structure 311 is attached to the upper face side surface 312 of device wafer 300. A layer 313 of substrate semiconductor material exists between the bottom extent of the source and drain regions 303, 304, 307 and 308 and the back side surface 314 of device wafer 300.
FIG. 14 illustrates a subsequent step wherein device wafer 300 is thinned to remove layer 313, thereby exposing the bottom portions of each of the sources and drains 303, 304, 307 and 308. A chemical mechanical polishing (CMP) thinning process may be employed to thin device wafer 300 from the back side until polish stop 310 is encountered. A light plasma etch may then be used to smooth the ground-down back side surface after the CMP grinding step. FIG. 14 is a cross-sectional view of the resulting structure. All portions of well regions 301 and 305 that were disposed below the bottom extent of the source and drain regions of transistors 302 and 306 have been removed.
FIG. 15 is a view of the back side of device wafer 300 that illustrates how a subsequent etching step is performed. Before the etching step, the source and drain regions of transistors 302 and 306 appear as rectangles when device wafer 300 is viewed from the back side. Well regions 301 and 305 appear as vertically oriented strips of semiconductor diffusion material. Dashed line 315 represents the outside boundary of a mask that covers the source and drain regions 303, 304, 307 and 308 as well as a contact portion 318 of N-well diffusion material and a contact portion 320 of P-well diffusion material.
After the masking step, an etching step is performed such that all semiconductor material in the darkened area located outside the mask boundary 315 is removed. The bottom of the interconnect portion 323 at surface 322 is exposed in the unmasked area. After this etching, the remaining portion of the strip 301 of N-well diffusion material has a key shape 316. This key-shaped portion of N-well diffusion material includes a channel portion 319 and the contact region 318. Similarly, the remaining portion of the strip 305 of P-well diffusion material has a key shape 317. This key-shaped portion of P-well diffusion material includes a channel portion 321 and the contact region 320.
FIG. 16 shows the resulting structure taken along sectional line A-A of FIG. 15.
FIG. 17 is a more detailed top-down diagram of the structure of FIG. 16 when viewed from the back side of device wafer 300. FIG. 18 is a simplified cross-sectional diagram taken along sectional line A-A of FIG. 17. FIG. 19 is a simplified cross-sectional diagram taken along sectional line B-B of FIG. 17. In the diagrams of FIGS. 17-19, a square symbol with a cross drawn through it represents a contact area between a diffusion region and a layer above it. Layer 324 in FIGS. 18 and 19 is a thin layer of thermal oxide. The N-well of P-channel transistor 302 is biased by driving an appropriate voltage onto metal N-well electrode 325 (see FIG. 18). N-well electrode 325 makes ohmic contact with the key-shaped portion of N-well material 316 through contact 326. The same electrode and contact structure is provided for biasing the P-well of transistor 306. A source electrode 327 is shown in FIG. 19 making contact to source region 303 through a contact, and a source electrode 328 is shown making contact to source region 307 through a contact. A patch of metal 329 forms the drain electrodes for P-channel transistor 302 and for N-channel transistor 306.
The area of contact between well regions and the source and drain regions of transistors 302 and 306 is reduced in comparison with the area of contact in the transistor structure of FIGS. 9 and 11. Accordingly, the parasitic well-to-source and well-to-drain junction capacitance is reduced. Substantially the only well material in contact with a source or drain region is well material located in the channel region beneath the gate.
By eliminating the substrate material altogether, by reducing the size of the drain regions 304 and 308 as compared to the source regions, and/or by placing the well electrodes and associated contacts on the back side of the device wafer, less device wafer surface area is required to fabricate the transistors of the present invention as compared to the transistors of the conventional structure of FIG. 1. Closer component spacing is therefore achievable using the method of the present invention, and this is possible without reducing the minimum feature size achievable with the semiconductor fabrication process used and without any advance in photolithographic techniques.
In the embodiment of FIG. 11, contact was made to the well regions by placing a metal electrode and contact on the back side of the device wafer directly underneath the gate. By placement of the contact on the back side of the well, where there exists a relatively large exposed surface area of well material, contact is made close to the very narrow channel region directly underneath the gate. In the embodiment of FIGS. 17-19, where the well regions are ground away from the back side until the bottoms of the source and drain regions are reached, the exposed surface of well material remaining in the channel area is so narrow that making contact to the well material in that narrow area is difficult or impossible. The extending contact portions 318 and 320 of the key-shaped well regions are therefore provided in the embodiment of FIGS. 17-19 to make electrical contact with the relatively narrow channel portions of the well regions.
In one example of the structure of FIGS. 17-19, the gate has a length of approximately 0.2 microns (drawn length), the effective length of the gate is approximately 0.13 microns, the distance between the edge of the contact and the nearest diffusion boundary is approximately 0.15 microns, the width of a contact is 0.15 microns, the width of the metal (for example, the well electrode) in the area of the contact where the metal passes over the contact area is 0.25 microns, and the distance between the edge of the source or drain electrode and the closest edge of the gate electrode is approximately 0.2 microns. The gate is a polysilicon gate structure, whereas the source and drain electrodes are 0.25 micron wide traces of metal.
FIG. 20 is a view of a transistor structure partway through the process of fabricating a double gate transistor in accordance with a fourth method. FIG. 20 is a top-down view of the back side of a device wafer 408 where all the semiconductor material has been removed but for source and drain P-type regions 400 and 401 and an intervening key-shaped portion of N-type well material 402. The process used to reach the process stage in FIG. 20 is the same as described above in connection with the embodiment of FIGS. 17-19, except that a single isolated P-channel transistor structure is illustrated in the example of FIG. 20. The P-channel transistor of FIG. 20 appears in cross-section similar to the transistors of FIGS. 17-19. In FIG. 20, the darkened area represents a mask 403 used to mask the entire transistor structure but for the rectangular channel region 404 of N-type well material between the source and drain regions. In this case the contact portion of key-shaped portion 402 is masked.
Channel region 404 is then etched from the back side of device wafer 408, thereby thinning the narrow rectangular channel region 404 of N-type well material between the source and drain regions.
After the thinning of channel region 404, a thin oxide layer is formed on the bottom exposed surface of source region 400, drain region 401 and the key-shaped well region 402. This thin oxide layer serves as a gate insulator for a second gate. A contact area 410 for a second gate is formed by removing a portion of the thin oxide layer underneath gate electrode 411. A metal layer is then deposited on the back side surface of the structure and is patterned to form a second gate electrode 405.
FIG. 21 is a view of the resulting transistor structure viewed from the back side of device wafer 408. FIG. 22 is a cross-sectional diagram of the structure taken along sectional line A-A of FIG. 21. FIG. 23 is a cross-sectional diagram of the structure taken along sectional line B-B of FIG. 21.
As seen in FIGS. 21 and 22, a conductor 406 within the interconnect portion 407 of device wafer 408 extends across the gate insulator 409 and becomes the first gate electrode 411 of the upper (first) transistor. The metal of the second gate electrode 405 establishes electrical contact with metal conductor 406 up through contact area 410. As indicated by the darkened area in FIG. 21, the metal of second gate 405 has a key shape when viewed from the back side of device wafer 408. The metal of the upper (first) gate electrode and the metal of the second gate electrode therefore sandwich the intervening narrow channel portion of N-type well material 402. As shown in FIG. 21, the N-type well material has a key shape when viewed from the back side of the device wafer 408.
In the diagram of FIG. 21, the narrow channel portion of the key-shaped N-type well region 402 points upward, whereas the narrow portion of the key-shaped second metal gate electrode 405 extends downward. After fabrication of the double gate transistor structure, the composite wafer structure involving the supporting structure 412 and the thinned device wafer 408 is diced into individual integrated circuit dice.
In operation, when the threshold voltage of the transistor of FIGS. 21-23 is reached, the upper first gate electrode induces a first conductive channel to form on the upper surface of N-type well material 402 between source region 400 and drain region 401. Second gate electrode 405 also induces a second conductive channel to form on the lower surface of N-type well material 402 between source region 400 and drain region 401. Second gate electrode 405 may also, if the channel region is sufficiently thin, enhance conductivity of the first conductive channel.
Providing second gate 405 in the transistor structure of FIGS. 21-23 has certain advantages. The threshold voltage of the double gate structure can be adjusted such that less leakage current flows through one of the channels between the source and the drain when a zero gate-to-source voltage is present than would be the case in a single gate structure of similar construction. A side effect of this threshold voltage adjustment, however, is a reduction in the amount of current flow between the source and drain when the transistor is on. In the double gate structure of FIGS. 21-23, the second gate causes a second conductive channel to form along with the first channel when the transistor is turned on. The second gate also enhances conductivity of the first channel. The current that flows through the second channel compensates at least to some degree for the reduction in the source-drain current in the first channel caused by the threshold voltage adjustment. Accordingly, the threshold voltage of the double gate transistor can be adjusted to reduce leakage when the transistor is off at a particular subthreshold voltage, without the transistor having a reduced source-to-drain current flow for a particular gate-to-source voltage when the transistor is turned on and operating above the threshold voltage.
Although certain specific exemplary embodiments are described above in order to illustrate the invention, the invention is not limited to the specific embodiments. Certain specific transistor structures are set forth above as illustrations of transistor structures that can be made in accordance with the new bond and back side etchback process described above. These transistor structures are, however, not the only transistor structures that can be realized using the new process. Numerous configurations of transistors and contacts and electrodes are possible. In addition to field effect transistors, bipolar transistors can be fabricated using the bond and back side etchback process. Many device structures made using an epitaxial silicon processing technology can be made using the above-described bond and back side etchback process.
Rather than being made out of epitaxial silicon, however, these structures are made out of higher quality bulk silicon substrate material by bonding a supporting structure to the face side of a processed device wafer, then grinding away the back side of the device wafer, and then processing the thinned device wafer from the back side.
In one embodiment, islands of semiconductor material on the back side of the device wafer are cooled by directing a flow of air or other heat removing gas or fluid across the islands and/or directly onto the source and drain regions of the transistors. The supporting structure bonded to the device wafer may be a structure other than a handle wafer of silicon. In one example, the supporting structure is a piece of metal (for example, copper) that has a tunnel running through it in the lateral dimension parallel with the upper surface of the device wafer. Cooling fluid can be circulated through the tunnel in the metal so that the fluid withdraws heat from the supporting structure. Heat is withdrawn from the device wafer via the cooled metal supporting structure. In one embodiment, a layer of an insulating and passivating material is deposited over the entire back side surface of the device wafer after the transistor structures described above have been fabricated. Accordingly, various modifications, adaptations, and combinations of various features of the described embodiments can be practiced without departing from the scope of the invention as set forth in the following claims. |
Resist coated wafers are rapidly and uniformly cooled by a fluid that has been cooled through the Joule-Thomson effect. Fluid from a high pressure reservoir is vented into a chamber that contains the substrates. By varying the pressure difference between the reservoir and the chamber, the temperature of the cooling fluid entering the chamber can be controlled. By also controlling the flow rate through the chamber, the average temperature difference between the fluid in the chamber and the substrates may be limited, whereby more uniform cooling is obtained. While the chamber pressure is lower than that in the high pressure reservoir, the chamber pressure may still be substantially greater than atmospheric. An elevated chamber pressure raises the specific heat and residence time of the fluid in the chamber, which also promotes uniform cooling. |
1. A system for cooling coated semiconductor substrates, said system comprising:
a chamber for receiving at least one coated semiconductor substrate;
a high pressure fluid reservoir that vents cooling fluid into the chamber;
a coupling coupled to the chamber and the high pressure fluid reservoir for placing the chamber in fluid communication with the high pressure fluid reservoir, the coupling comprising a filter to exclude contaminant particles from the fluid;
an inlet valve attached to the coupling for controlling a flow of cooling fluid between the high pressure fluid reservoir and the chamber, wherein the pressure drop across the inlet valve affects the cooling fluid temperature and is at least about 10 bar depending on independent adjustments made to the inlet valve and an outlet valve;
a controller coupled to the inlet valve that selectively controls the inlet valve to optimize the pressure drop across the inlet valve separately and apart from the outlet valve by making adjustments to each independently of the other based on calculated temperature readings of the respective valves.
2. The system of claim 1 wherein the pressure drop across the inlet valve is at least about 100 bar.
3. The system of claim 1 wherein the controller controls the temperature of the cooling fluid at a point within the chamber.
4. The system of claim 1 further comprising an outlet valve that releases at least a portion of the cooling fluid from the chamber as determined by the controller, wherein the controller incrementally opens or closes at least one of the inlet valve and the outlet valve to facilitate optimizing increased uniform cooling of the substrate.
5. The system of claim 1 wherein the controller controls the rate of cooling fluid flow through the chamber and the temperature of the cooling fluid as it enters the chamber to obtain an optimal pressure drop as the fluid enters the chamber.
6. The system of claim 1 wherein cooling fluid entering the chamber from the reservoir substantially mixes with fluid already in the chamber before contacting the at least one semiconductor substrate.
7. The system of claim 6 further comprising a baffle that is positioned with respect to the cooling fluid flow, wherein the cooling fluid flowing into the chamber is directed against the baffle before making contact with the substrate.
8. A cooling system for coated semiconductor substrates comprising:
means for receiving at least one coated semiconductor substrate;
means for venting cooling fluid into the chamber;
means for coupling the chamber and the high pressure fluid reservoir to place the chamber in fluid communication with the high pressure fluid reservoir, the means for coupling comprising a means for excluding contaminant particles from the fluid;
in-flow means for selectively controlling an in-flow of cooling fluid between the high pressure fluid reservoir and the chamber, wherein the pressure drop across the in-flow means affects the cooling fluid temperature and is at least about 10 bar depending on independent adjustments made to the in-flow means and an out-flow means;
controlling means that controls the in-flow means to optimize the pressure drop across the in-flow means separately and apart from the out-flow means by making adjustments to each independently of the other based on calculated temperature readings of the respective valves.
9. The system of claim 8, wherein the out-flow means releases at least a portion of the cooling fluid from the chamber as determined by the controlling means, wherein the controlling means incrementally opens or closes at least one of the in-flow means and the out-flow means to facilitate optimizing increased uniform cooling of the substrate.
10. The system of claim 8 wherein the controlling means controls the rate of cooling fluid flow through the chamber and the temperature of the cooling fluid as it enters the chamber to obtain an optimal pressure drop as the fluid enters the chamber. |
CROSS REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 60/242,626, filed Oct. 23, 2000, entitled SYSTEM FOR RAPIDLY AND UNIFORMLY COOLING RESIST.
TECHNICAL FIELD
The present invention relates to semiconductor processing and in particular to a system for uniformly and rapidly cooling a resist.
BACKGROUND OF THE INVENTION
In the semiconductor industry, there is a continuing trend toward higher device densities. To achieve these high densities there have been, and continue to be, efforts toward scaling down the device dimensions (e.g., at submicron levels) on semiconductor wafers. In order to accomplish such high device packing density, smaller and smaller feature sizes are required. These may include the width and spacing of interconnecting lines, the spacing and diameter of contact holes, and the surface geometry, such as corners and edges, of various features.
The requirement of small features with close spacing between adjacent features requires high resolution lithographic processes. In general, lithography refers to processes for pattern transfer between various media. It is a technique used for integrated circuit fabrication in which a silicon slice, the wafer, is coated uniformly with a radiation-sensitive film, the resist, and the film is exposed with a radiation source (such as optical light, x-rays, or an electron beam) that illuminates selected areas of the surface through an intervening master template, the mask, forming a particular pattern. The lithographic coating is generally a radiation-sensitive coating suitable for receiving a projected image of the subject pattern. Once the image is projected, it is indelibly formed in the coating. The projected image may be either a negative or a positive image of the subject pattern. Exposure of the coating through a photomask causes the image area to become either more or less soluble (depending on the coating) in a particular solvent developer. A positive-tone resist is one that becomes more soluble in the developer after exposure to actinic radiation. A negative-tone resist becomes less soluble in the developer after exposure. The more soluble areas are removed in the developing process to leave the pattern image in the coating.
A resist coating is typically prepared by dripping or spraying a resist solution onto a spinning substrate. This forms a relatively uniform coating of the resist solution, which is then "soft-baked." Soft-baking drives off solvent, improves adhesion of the resist to the substrate, and anneals stresses caused by shear forces encountered in the spinning process. Typically, the solvent level is reduced from the 20% to 30% range to about the 4% to 7% range.
The time and temperature of the soft-bake determine a number of parameters that affect subsequent processing steps. The degree of soft-baking affects the residual solvent content of the resist, which in turn affects the rate of attack of the resist by the developer. Under-baked resists may show inadequate differentiation between the dissolution rates of exposed and unexposed regions. On the other hand, over-baking reduces the photosensitivity of the resist, which also reduces the ability to create sharp contrast between exposed and unexposed regions. Consequently, the soft-bake must be carefully optimized and controlled.
Particularly where extremely fine patterns are sought, the pre-bake process must not only be controlled from substrate to substrate, but also across each individual substrate.
Both the overall temperature history and variations in the temperature across the photoresist must be controlled. Variation in the temperature history across the substrate during pre-bake can lead, after exposure of the resist, to unintended lengthwise variations in the width of features such as lines and gaps. Chemically amplified photoresists are particularly susceptible to such variations. The feature sizes of chemically amplified photoresists can be drastically affected by only a few degrees difference in temperature. Line size deviations often occur unless temperature is maintained within a 0.5[deg.] C. tolerance across the substrate. Temperature control within ±0.2[deg.] C. may be required.
Much attention has been given to systems for uniformly heating photoresist coated substrates. While convection ovens have been used, they have limitations. The temperature uniformity of convection ovens is not particularly good, and particles may enter the ovens and become embedded in the heated resist. Infrared ovens have been widely utilized. These ovens have much shorter heating times than convection ovens (3-4 minutes versus approximately 30 minutes). Hot-plates also permit rapid heating.
Less attention has been given to cooling systems, although several have been suggested. Natural convection cooling under ambient conditions has been used, but this is relatively slow and results in substantial non-uniformities. Cold-plates are somewhat better. These can be cooled by cooling fluids or Peltier elements. However, substrate temperature gradients form when using cold plates, since heat must travel from the substrates and the surroundings to the cold plates. It has been proposed to submerge the substrates in a liquid such as water. Cooling in this case may be too rapid and cause mechanical damage to the substrate. Submerging also has the disadvantage of requiring a drying step. Use of a cooling gas has been suggested, but a cooling gas does not appear to have been successfully used to achieve uniform cooling.
Therefore, there remains an unsatisfied need for a system and method of rapidly and uniformly cooling resist coated substrates.
SUMMARY OF THE INVENTION
According to the invention, resist coated wafers are rapidly and uniformly cooled by a fluid that has been cooled through the Joule-Thomson effect. Fluid from a high pressure reservoir is vented into a chamber that contains the substrates. By varying the pressure difference between the reservoir and the chamber, the temperature of the cooling fluid entering the chamber can be controlled. By also controlling the flow rate through the chamber, the average temperature difference between the fluid in the chamber and the substrates may be limited, whereby more uniform cooling is obtained. While the chamber pressure is lower than that in the high pressure reservoir, the chamber pressure may still be substantially greater than atmospheric.
An elevated chamber pressure raises the specific heat and residence time of the fluid in the chamber, which also promotes uniform cooling.
In one aspect, the invention provides a system including a chamber adapted to receive one or more coated semiconductor substrates, a coupling for placing the chamber in fluid communication with a fluid reservoir, an inlet valve controlling the flow of fluid between the fluid reservoir and the chamber, and a controller that controls the inlet valve.
In another aspect, the invention provides a system for cooling coated semiconductor substrates including means for cooling a fluid by at least about 10[deg.] C. through the Joule-Thomson effect and means for contacting the cooled fluid with the substrates.
In a further aspect, the invention provides a method of cooling coated semiconductor substrates including the steps of cooling a fluid by at least about 10[deg.] C. through the Joule-Thomson effect and contacting the substrates with the cooled fluid.
In a further aspect, the invention provides a method of cooling coated semiconductor substrates including the steps of heating a fluid to a temperature above ambient, subsequently flowing the fluid into a chamber containing the substrates, and cooling the substrates by contacting them with the fluid that has been heated.
In a further aspect, the invention provides a system for cooling coated semiconductor substrates including a first sub-system for cooling a fluid using the Joule-Thomson effect and a second sub-system for contacting the coated semiconductor substrates with the cooled fluid.
The invention extends to features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative examples of the invention. These examples are indicative, however, of but a few of the various ways in which the principles of the invention may be employed. Other objects, advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1a is a general schematic of a system according to the present invention for cooling coated semiconductor substrates.
FIG. 1b is another schematic of a system according to the present invention for cooling a coated semiconductor substrate.
FIG. 2 is a flow diagram of a control strategy for use with a process of the present invention.
FIG. 3 is a flow diagram of another control strategy for use with a process of the present invention.
FIG. 4 is a flow diagram of still another control strategy for use with a process of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention will now be described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout.
FIG. 1a is a general schematic of a cooling system 10 in accordance with the present invention. Cooling system 10 includes a first sub-system 20 in which a fluid is cooled through the Joule-Thomson effect and a second sub-system 30 in which the cooled fluid is contacted with one or more coated semiconductor substrates 40.
FIG. 1b is a schematic of a cooling system 100 in accordance with the present invention. System 100 includes high pressure reservoir 110, inlet valve 140, chamber 180, outlet valve 220, and controller 150.
In accordance with a method of the present invention, one or more coated substrates 190 are cooled by venting fluid from high pressure reservoir 110 into chamber 180, which contains substrates 190. Fluid is vented into chamber 180 through inlet valve 140 and released through outlet valve 220 to exhaust 240, whereby a continuous flow of cooling fluid and a constant pressure may be maintained within chamber 180. Controller 150 monitors the cooling process through substrate temperature sensor 250 and controls the process by manipulating inlet valve 140 and outlet valve 220. Additional information may be provided to controller 150 by inlet valve 140, outlet valve 220, flow meter 230, reservoir fluid temperature sensor 260, chamber inlet temperature sensor 270, and chamber exhaust temperature sensor 280. This information may be used by controller 150 to maximize the cooling rate while limiting temperature variations within and among substrates 190.
The substrates are typically semi-conducting materials, such as silicon. In addition to a semiconducting material, the substrates may include various elements and/or layers, including metal layers, barrier layers, dielectric layers, device structures, active elements and passive elements including silicon gates, word lines, source regions, drain regions, bit lines, bases, emitters, collectors, conductive lines, conductive plugs, etc. Sudden temperature changes may result in delamination or detachment of the substrate coating, substrate layers, or substrate devices.
The substrate coating may be of any type. It may be liquid or solid. Generally, the coating will have properties that are sensitive to temperature. The invention is particularly useful when the coating is a resist. The resist may be organic or inorganic. It may be a photoresist responsive to visible light, ultraviolet light, or x-rays, or it may be an electron beam resist or an ion beam resist. The resist may be positive or negative tone. The resist may be chemically amplified, whereby its sensitivity to actinic radiation is enhanced.
Substrates 190 are contained in chamber 180. This chamber may be the same chamber that was used to heat the coated substrates. On the other hand, it may be a separate chamber. Using the same chamber to heat and cool the substrates has the advantages of not having to move the substrates and avoiding exposure of the substrates to uncontrolled temperature changes during the transportation process. On the other hand, using one chamber for heating and cooling increases the cooling requirement. For example, system 100 includes hot plate 210. Hot plate 210 increases the cooling requirement. Substrates may be moved from a heating chamber to a cooling chamber quickly and automatically, by a robotic arm for example.
Substrates 190 are supported on pins 200 over hot plate 210. The cooling process is facilitated by ensuring that hot plate 210, or whatever structure supports the substrates, has a low thermal inertia. Pins 200 reduce the risk of contamination of substrates 190 by the structure on which the substrates are supported.
Substrates 190 are cooled by venting fluid from high pressure reservoir 110 into chamber 180 through inlet valve 140. In one aspect of the invention, the pressure drop across valve 140 is at least about 1 bar. In another aspect, the pressure drop is at least about 10 bar. In a further aspect, the pressure drop is at least about 100 bar. Reservoir 110 is usually a high pressure gas cylinder, although it could be the outlet of a pump.
The fluid generally cools as it is vented into chamber 180. In one aspect of the invention, it cools by at least about 10[deg.] C. In another aspect, it cools by at least about 25[deg.] C. In a further aspect, it cools by at least about 50[deg.] C.
Reservoir 110 is in fluid communication with chamber 180 through coupling 120. Coupling 120, or another part of the system, may include a means of excluding particles from the fluid stream. The means may be a filter in coupling 120 to remove particles from the fluid, or may involve providing a fluid source that is relatively free of particles. Particles in the cooling fluid may contaminate the substrates, if not controlled.
The fluid can be any fluid that exhibits the Joule-Thomson effect in the temperature range of interest and is chemically inert with respect to coated substrates 190. The Joule-Thomson effect is the cooling of a fluid upon adiabatic expansion. When a fluid expands freely from a high pressure reservoir into a lower pressure chamber, as in expansion through a valve, the process is generally, to a good approximation, adiabatic. The change of temperature can be determined from the pressure change and the formula:
dT/dP = (T/Cp)(dV/dT)P - V/Cp
where the derivative of V with respect to T is taken at constant pressure. If the expression on the right hand side is positive for a particular gas at a particular temperature and pressure, the gas will exhibit the Joule-Thomson effect under those conditions. Nitrogen will exhibit the Joule-Thomson effect between the temperatures of -156 and 277[deg.] C. Nitrogen can therefore be used in the invention. Carbon dioxide, and in some cases air, can also be used.
While the fluid may condense as it cools, this may or may not be an advantage depending on the physical configuration of the system. Cooling through condensation is very rapid and provides a comparatively constant temperature in the cooling medium. However, cooling by condensation may be too rapid and result in excessive temperature gradients within the substrates. Therefore, it is advantageous to use a fluid that has a relatively high thermal inertia but does not liquefy or form a two phase system upon venting into the chamber. Supercritical carbon dioxide can be used to achieve rapid cooling without phase changes.
Fluid from reservoir 110 vents into chamber 180 through inlet valve 140. Inlet valve 140 may be any type of valve that allows a reasonably controlled rate of flow over a range of settings. For example, it may be a ball valve, a globe valve, or a needle valve. A needle valve can be used to achieve precise flow control.
Venting fluid into chamber 180 and exhausting it through valve 220 causes convection within chamber 180, but it may be beneficial to increase convection within chamber 180, using a fan 160 for example. Increasing convection within chamber 180 increases heat transfer between the cooling fluid and substrates 190. Thereby, the rate of cooling is increased. If convection within chamber 180 is increased without increasing the rate of flow through chamber 180, uniformity of temperature within the cooling fluid increases, making the cooling process more uniform as well.
Fluid is released from chamber 180 through exhaust valve 220. Like inlet valve 140, exhaust valve 220 may be any type of valve that allows a reasonably controlled rate of flow over a range of settings. For example, it may be a ball valve, a globe valve, or a needle valve.
Exhaust valve 220 and inlet valve 140 may provide controller 150 with an indication of their position, e.g., whether and to what extent they are open.
Inlet valve 140 and outlet valve 220 can be adjusted independently to separately control the flow rate of cooling fluid through chamber 180 and the pressure drop across inlet valve 140. The pressure drop across inlet valve 140 affects the temperature of the cooling fluid as it enters chamber 180. Therefore, inlet valve 140 and outlet valve 220 can be used to independently control two parameters, such as the temperature of the cooling fluid as it enters chamber 180 and the flow rate of cooling fluid through chamber 180.
Temperature sensors 250, 260, 270, and 280 may be of any suitable type for the temperatures and media (fluid or solid) that are being measured. For example, they may be thermocouples, thermistors, resistance temperature detectors, or radiation thermometers. Preferably, temperature sensor 250 senses the temperature of substrates 190 without touching or contaminating them. A sensor based on reflected radiation may be used. For example, temperature sensor 250 may be an interferometer detecting thermal expansion or a spectrophotometer detecting changes in fluorescence or color. Temperature sensor 250 samples the substrate temperature at one point. However, multiple sensors giving an average temperature can also be used. Alternatively, sensor 250 may measure the temperature of an object that has a temperature approximating that of the substrate. For example, temperature sensor 250 may sense the temperature of hot plate 210.
Pressure sensors 130 and 170 and flow meter 230 may be of conventional types. Flow meter 230 may be, for example, a thermal dispersion mass flow meter, a differential pressure flow meter, a positive displacement flow meter, or a Coriolis mass flow meter.
In a method of the invention, fluid from reservoir 110 is vented into chamber 180 at a controlled rate and temperature. The rate and temperature may be set by controller 150 through manipulation of inlet valve 140 and outlet valve 220.
The rate at which the substrates are cooled, qS, may be represented by the following equation:
qS = HS(TS - TC)
where HS is an overall heat transfer coefficient, TS is the substrate temperature, and TC is the average temperature of the fluid in the chamber. The heat released by the substrates must equal the heat taken up by the flowing fluid. Assuming that the fluid leaving the chamber is at the average temperature for fluid in the chamber:
qS = F CV(TC - Ti)
where F is the volumetric flow rate of cooling fluid, CV is the fluid's heat capacity on a unit volume basis, and Ti is the temperature of the fluid entering the chamber. Solving for the average chamber temperature:
TC = (HS TS + F CV Ti)/(HS + F CV)
When the heat transfer coefficient is very high in comparison with the flow rate, the average temperature of the fluid in the chamber approaches the substrate temperature. When the flow rate is high compared to the heat transfer coefficient, the average temperature of the fluid in the chamber approaches the temperature of the fluid entering the chamber.
Uniform cooling of the substrate may be facilitated by keeping the average temperature of fluid in the chamber comparatively close to the substrate temperature. This slows the cooling rate, allowing time for heat to disperse evenly. Reducing the temperature difference between the fluid in the chamber and the substrates may also reduce the size of temperature differences within the cooling fluid near the substrates.
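As an illustration of these relationships, the following sketch evaluates the heat balance above, together with a rough Joule-Thomson estimate of the inlet temperature. It is a minimal sketch only: the function names are invented for illustration, and all numeric inputs, including the Joule-Thomson coefficient assumed for nitrogen, are assumptions rather than values taken from this description.

```python
# Sketch of the chamber heat balance described above.  All numbers are
# illustrative assumptions, not values from this description.

def inlet_temperature(t_reservoir, pressure_drop_bar, mu_jt=0.2):
    """Rough Joule-Thomson estimate of the inlet fluid temperature.

    Assumes an approximately constant Joule-Thomson coefficient mu_jt in
    deg C per bar; roughly 0.2 is a representative magnitude for nitrogen
    near room temperature (an assumed figure).
    """
    return t_reservoir - mu_jt * pressure_drop_bar

def chamber_temperature(h_s, t_s, f, c_v, t_i):
    """Average chamber fluid temperature TC from the balance
    HS(TS - TC) = F CV(TC - Ti), i.e. TC = (HS TS + F CV Ti)/(HS + F CV)."""
    return (h_s * t_s + f * c_v * t_i) / (h_s + f * c_v)

t_i = inlet_temperature(t_reservoir=25.0, pressure_drop_bar=100.0)   # ~5 deg C
t_c = chamber_temperature(h_s=5.0, t_s=90.0, f=2.0, c_v=1.5, t_i=t_i)
print(f"inlet {t_i:.1f} C, average chamber fluid {t_c:.1f} C")

# Limiting behavior noted in the text:
#   HS >> F CV  ->  TC approaches the substrate temperature TS
#   F CV >> HS  ->  TC approaches the inlet temperature Ti
```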
To realize this latter benefit (reducing temperature differences within the cooling fluid near the substrates), it is preferable that cooling fluid entering the chamber does not contact the substrates immediately. Rather, it is advantageous if the entering fluid flow is directed against a wall or a baffle 290, whereby the cooling fluid entering the chamber substantially mixes with the fluid already in the chamber before contacting the substrates.
Uniform cooling may also be facilitated by having the degree of re-circulation or mixing within the chamber high relative to the flow rate through the chamber. The ratio of re-circulation to flow-through may be increased by reducing the flow rate, particularly when a fan or other device forces convection within the chamber.
Rapid cooling may be facilitated by increasing the temperature difference between the chamber fluid and the substrates. The temperature difference may be increased by increasing the flow rate of cooling fluid and/or reducing the temperature of the fluid entering the chamber.
A balance between cooling rate and cooling uniformity may need to be struck. The location of that balance and the control strategy used to obtain it depend on the demands of the particular application and the physical configuration of the chamber and the substrates. The control strategy may be implemented by controller 150. Controller 150 is a logic circuit, such as a programmable logic circuit. Typically, controller 150 includes a microprocessor and a memory containing suitable software instructions.
A control strategy 300, which may be effective in obtaining uniform cooling, is illustrated in FIG. 2. Strategy 300 seeks a constant cooling fluid temperature at the inlet and a fixed temperature difference between the substrate and the average cooling fluid temperature in chamber 180. An advantage of this strategy is that it conserves the use of cooling fluid from high pressure reservoir 110. In step 310, the inlet fluid temperature is measured by sensor 270. In step 320, controller 150 compares the inlet fluid temperature to the target value. If the inlet fluid temperature is greater than the target value, inlet valve 140 is incrementally closed in step 340. Incrementally closing inlet valve 140 tends to decrease the chamber pressure, increase the pressure drop across inlet valve 140, and decrease the inlet gas temperature. If the inlet fluid temperature is less than the target value, inlet valve 140 is incrementally opened in step 330. Incrementally opening inlet valve 140 tends to increase the chamber pressure, decrease the pressure drop across inlet valve 140, and increase the inlet gas temperature.
In step 350, the substrate temperature is measured by sensor 250 and the average chamber fluid temperature is measured (approximately) by sensor 280. Controller 150 compares the difference between these two temperatures to a target difference in step 360. If the temperature difference is greater than the target, outlet valve 220 is incrementally closed in step 380. Incrementally closing outlet valve 220 decreases the flow rate through the chamber, which tends to decrease the temperature difference between the fluid in the chamber and the substrate. If the temperature difference is less than the target, outlet valve 220 is incrementally opened in step 370. Incrementally opening outlet valve 220 increases the flow rate through the chamber, which tends to increase the temperature difference between the fluid in the chamber and the substrate.
The steps, beginning again with step 310, are then repeated.

Strategy 300 may be improved by simultaneously adjusting both the inlet and outlet valves, taking into account the results of both comparisons. While strategy 300 uses inlet valve 140 to adjust the inlet fluid temperature and outlet valve 220 to adjust the temperature difference, adjustments to inlet valve 140 also affect the temperature difference, and adjustments to outlet valve 220 also affect the inlet fluid temperature. Using a mathematical model, these cross-correlations could be taken into account and both valves adjusted simultaneously. These and other possible improvements in this and other control strategies discussed herein will be readily apparent to one of ordinary skill in the art.

Another control strategy 400 that may be effective in obtaining a uniform rate of cooling is illustrated in FIG. 3. Strategy 400 uses a constant flow rate throughout the cooling process and also maintains a fixed temperature difference between the substrate and the cooling fluid in chamber 180. Keeping the flow rate low in comparison to the re-circulation or mixing rate within chamber 180 is particularly effective in obtaining uniform cooling rates.

In step 410, the flow rate is measured by flow meter 230. In step 420, controller 150 compares the flow rate to the target value. If the flow rate is greater than the target value, outlet valve 220 is incrementally closed in step 440. If the flow rate is less than the target value, outlet valve 220 is incrementally opened in step 430.

In step 450, the substrate temperature is measured by sensor 250 and the average chamber fluid temperature is measured (approximately) by sensor 280. Controller 150 compares the difference between these two temperatures to a target difference in step 460. If the temperature difference is greater than the target, inlet valve 140 is incrementally opened in step 480. Incrementally opening inlet valve 140 while keeping the flow rate constant, through adjustments to outlet valve 220, increases the pressure in the chamber, decreases the pressure drop across inlet valve 140, and increases the temperature of the cooling fluid. If the temperature difference is less than the target, inlet valve 140 is incrementally closed in step 470. Incrementally closing inlet valve 140 while keeping the flow rate constant, through adjustments to outlet valve 220, decreases the pressure in the chamber, increases the pressure drop across inlet valve 140, and decreases the temperature of the cooling fluid.

Control strategy 400 demonstrates that it may be desirable to heat the fluid in reservoir 110 while it is in the reservoir or as it flows from reservoir 110 to chamber 180. Ordinarily, adjusting the pressure drop across inlet valve 140 permits the inlet fluid temperature to be adjusted only in the range at or below the reservoir temperature. In a constant flow rate process, heating the cooling fluid may be desirable to reduce the temperature difference between the cooling fluid and the substrate to a target level, particularly during the early stages of the cooling process when the substrate may be comparatively hot.

Control strategies 300 and 400 are oversimplified in that they show the valves being incrementally opened or closed at every step whenever there is a difference between a measured value and its target. Control strategies are generally more complex, involving, for example, proportional, integral, and differential control.
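Strategy 400 can be sketched in the same hypothetical interface; the fixed increments below correspond to the simplified form described above, and could be replaced by proportional, integral, and differential terms:

```python
# A minimal sketch of control strategy 400, under the same hypothetical
# sensor/valve interface assumed for strategy 300.

def strategy_400_step(sensors, flow_meter, inlet_valve, outlet_valve,
                      target_flow, target_delta):
    # Steps 410-440: hold the flow rate constant with the outlet valve.
    flow = flow_meter.read()                         # flow meter 230
    if flow > target_flow:
        outlet_valve.close_increment()               # step 440
    elif flow < target_flow:
        outlet_valve.open_increment()                # step 430

    # Steps 450-480: set the fluid temperature with the inlet valve; at
    # constant flow, opening the inlet raises chamber pressure and inlet
    # fluid temperature, shrinking the temperature difference.
    delta = sensors.substrate_temp() - sensors.chamber_fluid_temp()
    if delta > target_delta:
        inlet_valve.open_increment()                 # step 480
    elif delta < target_delta:
        inlet_valve.close_increment()                # step 470
```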
The strategy used depends on the dynamics of the system being controlled, but is selected to keep response times short and limit problems such as oscillation and overshoot.

While control strategies 300 and 400 both maintain an approximately constant overall cooling rate, it may be desirable to vary the cooling rate. During the early stages of the cooling process, when the substrates are hotter, temperature variations in the substrates have greater effects. During the later stages of the cooling process, temperature differences may be less important and more rapid cooling may be permissible. Therefore, it may be desirable to increase the target temperature difference between the substrate and the fluid in the chamber as the substrate temperature decreases.

In some situations the effect of temperature variations in the substrates may be mitigated by rapid overall cooling. For example, if the only effect of a temperature variation on a particular substrate is a difference in solvent evaporation rate, the effect will be mitigated if the entire substrate cools before significant evaporation takes place. In such circumstances, the control objective may be to maintain a low cooling fluid temperature in the chamber. This may be accomplished by opening outlet valve 220 to a large extent so that the chamber pressure is nearly atmospheric and the flow rate of cooling fluid through the chamber is high.

The foregoing discussion of control strategies has been premised, to some extent, on the assumption that the overall heat transfer coefficient between the chamber gas and the substrates, and the uniformity of heat transfer between the chamber gas and the substrates, are not substantially affected by the pressure in the chamber or the flow rate of cooling fluid through the chamber. While these assumptions are valid in many circumstances, there are other circumstances where these dependencies become significant. For example, when re-circulation within the chamber is low in comparison with the flow rate through the chamber, there may be significant variations in the cooling fluid temperature, which may result in nonuniform cooling of the substrates.

A control strategy aimed at maintaining an elevated pressure within the chamber may increase uniformity of cooling. Increasing the chamber pressure at a constant mass flow rate through the chamber increases the residence time and thermal inertia of the fluid within the chamber. Higher thermal inertia and higher residence time will generally result in more uniform cooling. In one embodiment of the invention, the pressure in the chamber is maintained at or above about 2 bar. In another embodiment, the pressure in the chamber is maintained at or above about 10 bar. In a further embodiment, the pressure in the chamber is maintained at or above about 20 bar.

FIG. 4 illustrates a control strategy 500 for maintaining a high flow rate through the chamber and a constant pressure within the chamber. The target pressure may be, for example, the pressure necessary to maintain CO2 in a supercritical state. In step 510, the chamber pressure is measured by pressure sensor 170. In step 520, controller 150 compares the measured chamber pressure to the target chamber pressure. If the measured chamber pressure is too high, controller 150 checks, in step 530, whether outlet valve 220 is fully open. If it is fully open, inlet valve 140 is closed in step 550 by a proportionality factor, b, times the pressure difference.
If outlet valve 220 is not fully open, outlet valve 220 is opened in step 540 by a proportionality factor, a, times the pressure difference. Proportionality factors a and b are proportional control factors, which are selected by the user based on experience with the valves and the system dynamics.

If in step 520 the measured chamber pressure is less than or equal to the target pressure, control proceeds to step 560, wherein controller 150 checks whether inlet valve 140 is fully open. If it is fully open, outlet valve 220 is closed in step 540 by a proportionality factor, a, times the pressure difference. If inlet valve 140 is not fully open, inlet valve 140 is opened in step 550 by a proportionality factor, b, times the pressure difference. Steps 540 and 550 return control to step 510, so the process of measurement, comparison, and adjustment repeats. Differential and integral control can be added to strategy 500 (and strategies 300 and 400) if needed, to improve the control system's stability and responsiveness.

The methods of the invention may be used to limit the variation in temperature within and among substrates during cooling. In one embodiment, the temperature never varies by more than about 5° C. among and within the substrates 190. In another embodiment, the temperature never varies by more than about 2° C. among and within the substrates 190. In a further embodiment, the temperature never varies by more than about 0.5° C. among and within the substrates 190.

What has been described above is the present invention and several of its specific aspects. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art will recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. |
Embodiments of an eNodeB and method for Machine Type Communication in a Wireless Network are generally described herein. In some embodiments, a method performed by circuitry of an evolved Node B (eNodeB) can include receiving, by the eNodeB, a notification that a User Equipment (UE) is configured to be used for Machine Type Communication (MTC). The method can include determining whether the UE is in a Radio Resource Control Connected (RRC_Connected) state and determining whether the UE can enter a power saving mode. The method can include configuring the UE to change to an RRC Deep Idle mode, in response to determining that the UE is in the RRC_Connected state and the UE can enter the power saving mode. |
CLAIMS What is claimed is: 1. An evolved Node B (eNodeB) comprising: a processor arranged to: receive a notification that a User Equipment (UE) is configured to be used for Machine Type Communication (MTC); determine whether the UE is in a connected state; determine whether the UE can enter a power saving mode; and configure the UE to change to a power saving mode, in response to determining that the UE is in the connected state and the UE can enter the power saving mode. 2. The eNodeB of claim 1, wherein the connected state comprises a Radio Resource Control Connected (RRC_Connected) state and the power saving mode comprises a Radio Resource Control (RRC) Deep Idle mode. 3. The eNodeB of claim 2, wherein operations to determine whether the UE can enter the power saving mode include, upon expiration of a specified amount of time, operations to configure the UE to change to a Radio Resource Control Idle (RRC_Idle) state. 4. The eNodeB of claim 2, wherein the RRC Deep Idle mode is a configuration in a Radio Resource Control Idle (RRC_Idle) state. 5. The eNodeB of claim 2, wherein the RRC Deep Idle mode is an RRC Deep Idle state and is separately configured from a Radio Resource Control Idle (RRC_Idle) state. 6. The eNodeB of claim 2, wherein operations to configure the UE to change to an RRC Deep Idle mode include operations to send a network indication to the UE. 7. The eNodeB of claim 6, wherein the network indication includes a new NAS signaling message. 8. The eNodeB of claim 6, wherein the network indication includes a Radio Resource Control Connection Release (RRCConnectionRelease) message. 9. The eNodeB of claim 6, wherein the network indication includes a new Radio Resource Control (RRC) Power Saving Release message. 10. The eNodeB of claim 2, further comprising operations to configure the UE to change to a Radio Resource Control Idle (RRC_Idle) state. 11. The eNodeB of claim 10, wherein operations to configure the UE to change to the RRC_Idle state occur only if data activity is expected. 12. The eNodeB of claim 1, further comprising operations to use a timer to determine when to configure the UE to change to a power saving mode. 13. The eNodeB of any one of claims 1-12, wherein the operations to receive a notification occur over a wireless communications network comprising a 3rd Generation Partnership Project (3GPP) long term evolution (LTE) network. 14. The eNodeB of any one of claims 1-12, further comprising operations to configure the UE to leave the power saving mode upon expiration of a specified amount of time. 15. A method performed by circuitry of an evolved Node B (eNodeB) comprising: receiving, by the eNodeB, a notification that a User Equipment (UE) is configured to be used for Machine Type Communication (MTC); determining whether the UE is in a connected state; determining whether the UE can enter a power saving mode; and configuring the UE to change to a power saving mode, in response to determining that the UE is in the connected state and the UE can enter the power saving mode. 16. The method of claim 15, further comprising using a timer to determine when to configure the UE to change to an RRC Deep Idle mode. 17. The method of claim 15, wherein determining whether the UE can enter a power saving mode includes, upon expiration of a specified amount of time, configuring the UE to change to a Radio Resource Control Idle (RRC_Idle) state. 18. 
The method of any one of claims 15-17, wherein configuring the UE to change to an RRC Deep Idle mode includes sending a network indication to the UE. 19. At least one machine-readable medium comprising instructions for operation of a computing system, which when executed by a machine, cause the machine to perform operations that: determine, by the UE, that the UE is configured to be used for Machine Type Communication (MTC); determine whether the UE is in a Radio Resource Control Connected (RRC_Connected) state; determine whether the UE can enter a power saving mode; and configure the UE to change to an RRC Deep Idle mode, in response to determining that the UE is in the RRC_Connected state and the UE can enter the power saving mode. 20. The machine-readable medium of claim 19, further comprising operations to use a timer to determine when to configure the UE to change to an RRC Deep Idle mode. 21. The machine-readable medium of claim 19, wherein operations to determine whether the UE can enter a power saving mode include, upon expiration of a specified amount of time, operations to configure the UE to change to a Radio Resource Control Idle (RRC_Idle) state. 22. The machine-readable medium of any one of claims 19-21, wherein operations to configure the UE to change to an RRC Deep Idle mode include operations to send a network indication to the UE. 23. User Equipment (UE) comprising: a transceiver configured to be used for Machine Type Communication (MTC); and a processor, coupled to the transceiver, arranged to: determine whether the UE is in a Radio Resource Control Connected (RRC_Connected) state; determine whether the UE can enter a power saving mode; and configure the UE to change to an RRC Deep Idle mode, in response to determining that the UE is in the RRC_Connected state and the UE can enter the power saving mode. 24. The UE of claim 23, wherein determining whether the UE can enter a power saving mode includes, upon expiration of a specified amount of time, configuring the UE to change to a Radio Resource Control Idle (RRC_Idle) state. 25. The UE of claim 24, wherein configuring the UE to change to an RRC Deep Idle mode includes sending a network indication to the UE. |
POWER SAVING MODE OPTIMIZATIONS AND RELATED PROCEDURES

CLAIM OF PRIORITY

[0001] This patent application claims the benefit of priority to U.S. Application Serial No. 14/318,085, filed June 27, 2014, which claims priority to U.S. Provisional Patent Application Serial Number 61/863,902, filed on August 8, 2013, both of which are hereby incorporated by reference herein in their entirety.

BACKGROUND

[0002] User Equipment (UE) that is used for Machine Type Communication (MTC) or MTC applications, such as a smart meter, has certain characteristics such as being nomadic, having low mobility, having low priority data transmissions, or sending small amounts of MO (Mobile Originated) or MT (Mobile Terminated) data very infrequently or according to a schedule. Given the wide array of possibilities of MTC applications and devices, it is expected that there will be trillions of Machine to Machine (M2M) communications. Accordingly, the various data generated by the M2M communications is intended to be transferred efficiently and use minimum power consumption from the UE in order to increase the life of the UE.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] FIG. 1 illustrates generally an example of a diagram showing User Equipment (UE) states and transitions including an RRC Deep Idle state in accordance with some embodiments.

[0004] FIG. 2 illustrates generally an example of a diagram showing UE states and transitions including a power saving state in accordance with some embodiments.

[0005] FIG. 3 illustrates generally an example of a diagram showing signaling messages in accordance with some embodiments.

[0006] FIG. 4 illustrates generally an example of a flowchart showing a cell selection transition in accordance with some embodiments.

[0007] FIG. 5 illustrates generally an example of a diagram showing UE states and transitions including a deep idle sub-state in accordance with some embodiments.

[0008] FIG. 6 illustrates generally an example of a diagram showing UE state transitions when leaving connected mode in accordance with some embodiments.

[0009] FIG. 7 illustrates generally examples of waveforms illustrating UE state transitions in accordance with some embodiments.

[0010] FIG. 8 illustrates generally a technique, such as a method, that can include configuring a UE to change to a Radio Resource Control (RRC) Deep Idle mode in accordance with some embodiments.

[0011] FIG. 9 illustrates generally an example of a block diagram of a machine upon which one or more embodiments can be implemented in accordance with some embodiments.

[0012] In the drawings, which are not necessarily drawn to scale, like numerals can describe similar components in different views. Like numerals having different letter suffixes can represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.

DETAILED DESCRIPTION

[0013] Techniques to minimize power consumption in User Equipment (UE) used for Machine Type Communication (MTC) are desired. One technique to convey power saving related information can be through a Radio Resource Control (RRC) Connection Release or an equivalent message in RRC. The new RRC power saving mode is referred to herein as a Radio Resource Control Deep Idle (RRC Deep Idle) mode or an RRC Deep Idle state, and can also be understood to be a sub-state within an RRC_Idle state, or a power saving mode that a UE could apply when in an RRC_Idle state.
The power saving mode can support an efficient algorithm to transfer or check for data without incurring the signaling overhead, and with maximum power saving by minimizing the UE connected time. The new power saving state, mode, or sub-state can indicate when a UE is still registered to a network but can have its Access Stratum (AS) turned OFF. The UE can have no pending idle mode related activities, such as checking for paging, taking measurements, or performing a cell reselection procedure. The new power saving sub-state within RRC_Idle is referred to herein as an RRC Deep Idle mode or state.

[0014] In an example, the UE can transition between the RRC Deep Idle state and legacy states using an efficient technique that allows the UE to send, receive, or check for data while reducing connected time, minimizing UE power consumption. The technique can also reduce the signaling overhead.

[0015] Another method can include an enhancement to the Core Network (CN) procedures, to prevent a download of the UE context to an evolved Node B (eNodeB) if no data activity is expected in uplink (UL) or downlink (DL). Stated another way, the technique can include downloading the UE context to an eNodeB only if data activity is expected in UL or DL. Before the UE moves to connected mode from the new power saving state, the eNodeB can request a Mobility Management Entity (MME) to transfer the UE context.

[0016] In an example, the transmission of UE context can be minimized if no data activity is expected for the UE. In an example, the UE can go back to the new RRC power saving mode if data activity is not expected in UL or DL. For example, the UE can indicate to the eNodeB that the UE is establishing a connection without a need to send any UL data or without UL data to send. The eNodeB can request the UE context from the MME if there is DL data waiting to be sent, or the eNodeB can skip requesting the UE context from the MME if there is no DL data waiting to be sent. In the example where the eNodeB requests the UE context from the MME, the MME can send the UE context. In another case of the example above, the MME can enable a flag saying that the UE is reachable but that the UE context will not be conveyed to the eNodeB unless DL data is received. In an example, the MME can send some simple communication to the eNodeB or coordinate with the eNodeB to keep the MME and the eNodeB in sync about the current UE state. The MME can also reject a request from the eNodeB for the UE context, such as by indicating that no DL data is waiting to be sent to the UE.

[0017] The MME can determine that the UE came from the new RRC power saving mode for a periodic TAU. The MME can also check whether any MT (DL) data might be pending to be sent to the UE. For example, the technique for determining that the UE came from the new RRC power saving mode can use the current RRC-Establishment-Cause or NAS-PDU message, or the technique can use a new IE (e.g., MT-check) to indicate that the UE came back to connected mode although no MO data activity is expected. In an example, the S-GW or P-GW can be used to determine if there is DL data to be sent.

[0018] In an example, the techniques to improve power consumption in the UE can include conveying power saving related information through an RRC Connection Release message or an equivalent message. Another example can include additional details in relation to the new RRC power saving mode, such as in relation to transferring or checking for data without incurring the signaling overhead; the context-request decision is sketched below.
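A minimal sketch of that context-request decision, assuming hypothetical MME and eNodeB interfaces that are not defined in this description:

```python
# A minimal sketch of the context-download decision described above; all
# method names are hypothetical stand-ins for MME/eNodeB behavior.

def on_ue_return_from_power_saving(mme, enodeb, ue_id, has_ul_data):
    """Handle a UE that left the power saving mode (e.g., for a periodic TAU)."""
    if has_ul_data:
        enodeb.request_ue_context(mme, ue_id)   # MO (UL) data: context needed
    elif mme.dl_data_pending(ue_id):            # e.g., determined via S-GW/P-GW
        enodeb.request_ue_context(mme, ue_id)   # MT (DL) data is waiting
    else:
        mme.mark_reachable(ue_id)               # update the reachability flag only
        enodeb.skip_context_request(ue_id)      # no context transfer needed
```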
The additional details can also include saving power by minimizing the UE connected time.

[0019] In an example, the RRC Connection Release message, another existing RRC message, or a new RRC message can trigger power saving techniques. The RRC Connection Release message or an equivalent message can also indicate or convey information to the UE to save extra power, such as an extended Discontinuous Reception (DRX) Cycle value, support, activation, or related parameters (e.g., other timers in relation to how often periodic TAU needs to be done based on an extended DRX Cycle). The RRC Connection Release message or an equivalent message can include a release Cause Indicator to have the UE transition to the new RRC power saving mode (e.g., the RRC Deep Idle state, or the RRC Deep Idle mode sub-state within RRC_Idle). Additionally, the RRC Connection Release message or an equivalent message can include timers related to the new RRC power saving mode.

[0020] In an example, a technique to identify support of the new RRC power saving mode can include determining whether the UE can support extreme delays, such as when a first packet is sent while the UE is in the new RRC power saving mode. In another example, identifying support of the new RRC power saving mode can include sending or receiving an indication that a UE supports the new functionality of the RRC power saving mode, which can include an indication from a UE, from an eNodeB, to a UE, to an eNodeB, or any combination of these indications.

[0021] UE radio capabilities can be used to indicate the support of the new RRC power saving mode in the UE, such as by using a new parameter. In an example, the new parameter can be expressed by a new field, such as 4.3.8.10 extremeDelayTolerant, for example, as can be added to a technical specification similar to 3GPP Technical Specification 36.306 "Evolved Universal Terrestrial Radio Access (E-UTRA); User Equipment (UE) radio access capabilities" (e.g., release version 12.0.0 or later). The new parameter can define whether the device (such as a UE) can delay its data transmission or reception on the order of the extended DRX cycle for all its applications, such as an extremeDelayTolerant, powerSavingSupport, deepIdleSupport, or unreachableSupport parameter, where a value of 1 can indicate that the device tolerates long delays (e.g., on the order of the extended DRX cycle).

[0022] The new RRC power saving state can be a dynamic setting that can be enabled or disabled depending on UE specific requirements. The dynamic setting can include an indication, such as through a Non-Access Stratum (NAS) protocol data unit (PDU) or an RRC message. In an example, the indication can include a NAS PDU sent by the UE to the Mobility Management Entity (MME) having Power Saving related information, such as an Attach or Tracking Area Update (TAU) request. The MME can also convey the information to the eNodeB, such as through the context transfer. In an example, the indication can include an RRC message sent by the UE, including sending Power Saving related information through the uplink RRC messages to the eNodeB, such as by RRC Connection Reconfiguration Complete or RRC Connection Setup Complete indications.
The Power Saving related information can include the deactivation or activation of the new RRC power saving mode through a new information element, such as a Boolean or enumerator indicator. The Power Saving related information can also include new timer values related to the new RRC power saving mode, such as a timer to indicate when the UE should enter the new RRC power saving mode, or a timer to indicate how long the UE should stay in the new RRC power saving state before coming back. In an example, the new timer values can be included in an RRC message sent to the eNodeB, or in the NAS PDU information sent to the MME by the UE through TAU or Attach.

[0023] In an example, a network can indicate its support of, or parameters associated with, the new RRC power saving mode through broadcast or dedicated signaling, such as using a System Information Block (SIB) message, the Mac-MainConfig IE, the Other-Config IE, existing RRC or NAS messages (e.g., RRC Connection Release), or a new RRC or NAS message. The support or parameters associated with the new RRC power saving mode can include the deactivation or activation of the new RRC power saving mode through a new information element, such as a Boolean or enumerator indicator. The support or parameters can also include new timer values related to the new RRC power saving mode, such as a timer to indicate when the UE should enter the new RRC power saving mode, or a timer to indicate how long the UE should stay in the new RRC power saving mode before returning to the other state.
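A minimal sketch of how such power saving related information (the activation indicator and the two timers described above) might be grouped, with illustrative field names that are not defined by 3GPP:

```python
# A minimal sketch of the power-saving information described above, as a
# plain data structure; field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PowerSavingInfo:
    enabled: bool         # activation/deactivation indicator (Boolean IE)
    enter_timer_s: int    # when the UE should enter the power saving mode
    stay_timer_s: int     # how long to stay before coming back (TAU-like)

# Carried, for example, in an uplink RRC message to the eNodeB or in the
# NAS PDU sent to the MME during TAU or Attach.
cfg = PowerSavingInfo(enabled=True, enter_timer_s=30, stay_timer_s=3600)
```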
[0024] FIG. 1 illustrates generally an example of a diagram 100 showing UE states and transitions including an RRC Deep Idle state in accordance with some embodiments. In an example, a new RRC power saving state can include an RRC Deep Idle state 102, and the RRC Deep Idle state 102 can be called an RRC power saving state or a sub-state or mode of an RRC Idle state 104. The RRC Deep Idle state 102 can be reached after the UE stays in an RRC Idle state 104 for a certain time, such as until an active timer expires, using Idle to Deep Idle transition 108. The UE can stay in the RRC Deep Idle state 102 until new Uplink (UL) data is pending, or until an internal timer expires, such as TAU or a timer to check whether there is Downlink (DL) data waiting to be sent to the UE. The UE can leave the RRC Deep Idle state 102 for the RRC Idle state 104, including by using Deep Idle to Idle transition 110.

[0025] In an example, the RRC Connected state 106 of the UE can be released and the UE can directly transfer to the RRC Deep Idle state 102, including by using Connected to Deep Idle transition 112. In another example, the UE RRC connection can be retained, such as for cases in which the UE comes out of a power saving state and there is no data activity waiting to be sent or no indication that any server or application has tried to reach the UE.

[0026] In an example, a UE can transition to a power saving state directly from a connected state, such as by using the Connected to Deep Idle transition 112 to go from RRC Connected state 106 to RRC Deep Idle state 102. The transition can include using the Connected to Deep Idle transition 112 when a network sends an indication, such as a new message (e.g., creating a new RRC Power Saving Release message) or a new Information Element (IE) in any of the existing messages (e.g., using an RRC Release message with a new powerSavingIndication IE). In another example, the transition can include using the Connected to Deep Idle transition 112 after a certain time has elapsed, such as by expiration of a timer predefined by the network or the UE, or negotiated, or defined by the technical specification (e.g., a connected to power saving timer).

[0027] In an example, a UE can transition to the RRC Idle state 104 automatically from the RRC Connected state 106, using Connected to Idle transition using timer 120, after a certain time has elapsed, such as by expiration of a timer predefined by the network or the UE, or negotiated, or defined by the technical specification (e.g., a connected to idle timer or a connected timer). In this technique, using the RRC release message can be avoided.

[0028] The techniques described above can also be applied in cases where the UE sends or receives data.

[0029] In an example, the UE RRC Connection can be released so that the UE transitions from the RRC Connected state 106 to the RRC Idle state 104 by using Connected to Idle transition using indication 118, such as by a network indication through RRC Release Timer. The UE can transition from the RRC Idle state 104 to the RRC Connected state 106 by using Idle to Connected transition 116, including by establishing an RRC Connection, such as by a network indication through a page, a UE decision due to UL data, or the expiration of periodic timers (e.g., TAU periodic timer or T3412). The UE can transition from the RRC Idle state 104 to the RRC Deep Idle state 102 by using the Idle to Deep Idle transition 108, such as by a UE decision due to the expiration of a timer (e.g., an active timer, reachable timer, or idle timer). The UE can transition from the RRC Deep Idle state 102 to the RRC Idle state 104 by using the Deep Idle to Idle transition 110, such as by a UE decision due to Mobile-Originated (MO) or DL data or the expiration of a timer (e.g., a TAU timer, unreachable timer, or power saving timer). The UE can thus transition from the RRC Deep Idle state 102 to the RRC Connected state 106 via the RRC Idle state 104 by using Deep Idle to Idle transition 110 and Idle to Connected transition from Deep Idle 114, and the UE can establish an RRC Connection.

[0030] In an example, the RRC Connected state 106 can include the UE being reachable by the eNodeB. In an example, the RRC Idle state 104 can include the UE being reachable by the eNodeB. In an example, the RRC Deep Idle state 102 can include the UE being unreachable by the eNodeB. The RRC Deep Idle state 102 can include deactivating the AS.

[0031] FIG. 2 illustrates generally an example of a diagram 200 showing UE states and transitions including a power saving state in accordance with some embodiments. In an example, the new RRC power saving mode can include an RRC_Power_Saving or RRC_Dormant state 206. Transitions between the RRC_Connected state 202 and the RRC_Idle state 204 were previously discussed in relation to the RRC Connected state 106 and the RRC Idle state 104 of FIG. 1. Similarly, transitions between the RRC_Idle state 204 and the RRC_Power_Saving or RRC_Dormant state 206 were previously discussed in relation to the RRC Idle state 104 and the RRC Deep Idle state 102 of FIG. 1. The RRC_Power_Saving or RRC_Dormant state 206 can include a related Evolved Packet System (EPS) Mobility Management (EMM) state where the UE is unreachable, such as EMM-REGISTERED.DEEP-IDLE, EMM-REGISTERED.UNREACHABLE, EMM-REGISTERED.POWER-SAVING, or EMM-REGISTERED.DORMANT.
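A minimal sketch of the FIG. 1 transitions as a state machine, with hypothetical event names standing in for the triggers described above:

```python
# A minimal sketch of the FIG. 1 states and transitions; event names are
# illustrative stand-ins for the triggers described in the text.

from enum import Enum, auto

class RRCState(Enum):
    CONNECTED = auto()   # UE reachable by the eNodeB
    IDLE = auto()        # UE reachable by the eNodeB
    DEEP_IDLE = auto()   # UE unreachable, AS deactivated

def next_state(state, event):
    transitions = {
        (RRCState.CONNECTED, "release_indication"): RRCState.IDLE,        # 118
        (RRCState.CONNECTED, "connected_timer_expired"): RRCState.IDLE,   # 120
        (RRCState.CONNECTED, "power_saving_release"): RRCState.DEEP_IDLE, # 112
        (RRCState.IDLE, "page_or_ul_data_or_tau"): RRCState.CONNECTED,    # 116
        (RRCState.IDLE, "active_timer_expired"): RRCState.DEEP_IDLE,      # 108
        (RRCState.DEEP_IDLE, "mo_data_or_timer_expired"): RRCState.IDLE,  # 110
    }
    return transitions.get((state, event), state)  # otherwise stay put
```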
For example, the RRC ConnectionRelease to Power Saving transition 208 can include entering the UE unreachable EMM state, such as where the UE is registered, the AS activity is deactivated, and the device is considered unreachable.

[0032] FIG. 3 illustrates generally an example of a diagram showing signaling messages in accordance with some embodiments. In an example, the UE 302 can be in an RRC Power Saving state 306. The UE can transition from the RRC Power Saving state 306 to an RRC Idle state 308. The UE in the RRC Power Saving state 306 can refrain from performing AS selection (cell/RAT/PLMN, where RAT stands for Radio Access Technologies and PLMN stands for public land mobile network) or NAS (MM) procedures, although periodic registration (RAU/TAU) procedures can continue. In the RRC Power Saving state 306, mobility management activities can be disabled or not executed. Additionally, for low mobility or stationary devices, such as devices enabled for MTC, cell re-selection can be optional, as UE locations can remain unchanged due to low nomadic mobility.

[0033] In an example, the UE 302 can be in the RRC Power Saving state 306 until the expiration of a timer, such as a new specific timer defined for the power saving state (e.g., an "unreachable timer," "power saving timer," or "deep idle timer"). The timer can also be the TAU timer, which can include the same procedure as for periodic TAU. The timer can also be set for each UE independently of the TAU timer and procedure. However, if a new power saving timer expires, the UE is not limited to one technique to move to a connected state, and can move to a connected state using a different establishment cause and NAS-PDU, such as defined and explained further below.

[0034] In an example, when the power saving timer expires, the UE 302 can select the cell and become connected, such as using an initial cell search or stored information cell selection. The UE 302 can be, for example, not camped in any cell with a selected PLMN. The UE 302 can transition to an RRC Connected state using the tracking area update or service request. The UE 302 can send PRACH Preamble 310, receive a Random Access Response (RAR) 312, send RRC Connection Request (mo-Signaling or mt-Access or mo-Data) 314, receive RRC Connection Setup 316, and send RRC Connection Setup Complete 318. The RRC Connection Setup Complete 318 can include a NAS PDU set as TAU Service Request. The RRC Connection Setup Complete 318 can be modified to include a new NAS initial message. The UE 302 transition to an RRC Connected state can include a modified technique so the eNodeB and MME can adjust their responses differently for a UE that comes out of a power saving state to perform a MO (UL) data transfer or a TAU update, or to check if there is any MT (DL) data that the network wants to transmit to it.
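A minimal sketch of this FIG. 3 connection-establishment sequence, assuming hypothetical send and receive helpers on the UE (the messages are from the description; the API is not):

```python
# A minimal sketch of the FIG. 3 signaling sequence; ue.send/ue.receive are
# hypothetical helpers, not a defined radio API.

def reconnect_from_power_saving(ue):
    ue.send("PRACH Preamble")                         # 310
    ue.receive("Random Access Response (RAR)")        # 312
    ue.send("RRCConnectionRequest",                   # 314
            establishment_cause="mo-Signaling")       # or mt-Access / mo-Data
    ue.receive("RRCConnectionSetup")                  # 316
    ue.send("RRCConnectionSetupComplete",             # 318
            nas_pdu="TAU Service Request")            # or a new NAS initial message
```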
[0035] FIG. 4 illustrates generally an example of a flowchart showing a cell selection transition in accordance with some embodiments. The UE in RRC power saving state 402 can transition to RRC Idle and then to RRC Connected using a cell selection transition. If the UE in RRC power saving state 402 is not camped and a selected PLMN is available (UE Movement 404 is yes), the UE can perform cell selection, such as searching for a suitable cell on which to camp. If the UE is stationary, or the UE is a low mobility UE under good signal conditions (UE Movement 404 is no), the UE can assume it is still camped on the same cell and can use the stored information cell selection 406 without checking the stored information for suitability. In either case, the UE can then proceed to be camped normally 408, and the UE can acquire the system information and enter a connected mode 410.

[0036] Table 1 illustrates generally a table of signaling messages. In an example, RRC Establishment Cause and NAS PDU signaling messages can be mapped and described. Table D.1.1, as referred to in Table 1, is from 3GPP Technical Specification 24.301 "Non-Access-Stratum (NAS) protocol for Evolved Packet System (EPS); Stage 3", release version 12.4.0 (March 17, 2014), as can be amended.

Table 1. Mapping and description of RRC Establishment Cause and NAS PDU signaling messages

[0037] In an example, the MME can use a UE characteristic to avoid downloading UE context and capabilities information to the eNodeB (as can be done in the periodic TAU), such as when there is no MT (DL) data expected. The MME can update reachability information or a flag when the UE returns from the RRC Power Saving state, even if the UE context did not get downloaded. The MME can communicate some lightweight information to the eNodeB, and the eNodeB can be aware of the UE situation using the lightweight information. In an example, after sending the message to the MME, the eNodeB can react in a number of ways depending on the MME response. For example, the eNodeB can release the connection similarly to the periodic TAU updates, such as by passing the UE into RRC Idle. The eNodeB can establish a dedicated bearer for MT (DL) data. The eNodeB also can send the UE back into a Power Saving state without going through Idle, such as by using an explicit new message (e.g., an RRC Power Saving Release message) or by using a pre-defined or pre-negotiated timer.

[0038] FIG. 5 illustrates generally an example of a diagram showing UE states and transitions including a deep idle sub-state in accordance with some embodiments. In an example, a Power Saving or Deep Idle sub-state 506 is a UE mode that is a sub-state of an RRC_Idle state 504 but separate from an RRC_Connected state 502. The Power Saving or Deep Idle sub-state 506 can differentiate on the UE when or how idle activities can save UE power consumption. Differences between the Power Saving or Deep Idle sub-state 506 and the RRC_Idle state 504 can include reduced idle activity, such as AS activity related to inter- or intra-cell search and measurements. The UE can include MTC devices with low mobility. Similar techniques and apparatuses as described above for the new RRC power saving mode can apply to the new Power Saving or Deep Idle sub-state 506 and transitions between it and the RRC_Idle state 504, as well as transitions from the Power Saving or Deep Idle sub-state 506 ultimately to the RRC_Connected state 502, directly or indirectly through the RRC_Idle state 504.

[0039] FIG. 6 illustrates generally an example of a diagram showing UE state transitions when leaving connected mode in accordance with some embodiments. In an example, the UE can be maintained in a number of different statuses, such as camped normally, camped on any cell, or any cell selection. The UE can further be maintained in different AS modes, such as Sleep, OFF, or disabled modes, or active or ON modes. The UE can include a simplified state, such as camped with AS OFF or camped AS deactivated 608; these camping transitions are sketched below and detailed next.
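A minimal sketch of that camping decision, with hypothetical helper names:

```python
# A minimal sketch of the FIG. 6 camping modes on leaving connected mode;
# helper names are hypothetical, not defined in this description.

def camp_after_leaving_connected(ue):
    cell = ue.cell_selection()           # 602: find a suitable cell
    if ue.as_deactivated():
        ue.camp_as_deactivated(cell)     # 608: reselection triggers disabled
    else:
        ue.camp_normally(cell)           # 606: normal idle-mode activities
```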
The UE can use cell selection 602 when leaving connected mode. In the camped AS deactivated 608 state, the UE can avoid performing cell reselection, or the triggers to perform cell reselection can be disabled. The AS Sleep or OFF or disabled mode can be defined in an RRC Idle sub-state, such as an RRC Deep Idle mode, within the existing RRC Idle state or in a separate RRC Deep Idle state. In an example, the UE can be moved into RRC Idle by the network after a timer has expired, where the timer can include a pre-negotiated timer, or by direct indication of the network, such as through a new IE in RRC Connection Release or a new RRC message, and the UE can turn the AS off or go into the RRC Deep Idle mode or RRC Idle sub-state. This transition can occur even if the UE is mobile. The UE can wake up in time to listen to a paging channel. The UE can optionally perform cell synchronization or cell reselection. The UE can save power during the sleep periods, the sleep periods including time outside of paging occasions, if DRX paging cycles are extended beyond specified values, such as by entering the RRC Deep Idle mode. The RRCConnectionRelease can be extended to transition a UE to an RRC Idle state with AS Sleep mode. This RRCConnectionRelease can allow the UE to avoid maintaining a timer to turn the AS off. The RRC Idle state with AS Sleep mode can include an extended DRX cycle value and a corresponding timer.

[0040] In an example, when leaving connected mode, the UE can use cell selection 602 to find a suitable cell. If the AS is deactivated, the UE can enter a camped AS deactivated mode 608 using a suitable cell found, AS deactivated transition 610. If the AS is active, the UE can enter a camped normally mode 606 using a suitable cell found, AS active transition 612. The UE can leave the camped normally mode 606 and enter the camped AS deactivated mode 608 using a deactivate AS transition 620. Similarly, the UE can leave the camped AS deactivated mode 608 and enter the camped normally mode 606 using an activate AS transition 622. The UE can use a leave idle mode transition 618 to leave the camped normally mode and enter a connected mode 604. From the connected mode, the UE can return to Power Saving mode 614 or return to Idle mode 616.

[0041] FIG. 7 illustrates generally examples of waveforms illustrating UE state transitions in accordance with some embodiments. Examples of three different waveforms 704, 710, and 720 are shown in graphs 700A, 700B, and 700C, respectively. In all three examples, when the waveforms are at 2, the UE is in a connected mode that allows data TX or RX. When the waveforms are at 1, the UE is active with no data pending (e.g., checking the PDCCH). When the waveforms are at 0, the UE is in an Idle or Sleep mode. When the waveforms are at -1, the UE is in an Idle or Sleep mode in a Power Saving sub-state, or the UE is in a Deep Idle mode. When the waveforms transition from 2 to 1 in all three examples, this can represent the UE transitioning from active data TX or RX to a state in which there is no data to TX or RX.

[0042] The UE can transition from active with no data to an Idle mode, such as is represented by an RRC Connection Release 702, where the waveform 704 moves from 1 to 0.
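The waveform levels used in FIG. 7 can be summarized as follows (an illustrative mapping drawn from the description above, not part of any specification):

```python
# The FIG. 7 waveform levels, as described in the text above.
UE_ACTIVITY_LEVEL = {
    2: "connected, data TX/RX",
    1: "active, no data pending (e.g., checking the PDCCH)",
    0: "idle / sleep",
    -1: "power saving sub-state / deep idle",
}
```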
In graph 700A, the UE can be in an Idle mode, such as where the waveform 704 is 0, and can periodically check for data by entering a connected mode, such as where the waveform 704 is 1.

[0043] The UE can transition from active with no data to an Idle mode, such as is represented by an RRC Connection Release 706, where the waveform 710 moves from 1 to 0. In graph 700B, the UE can be in a Deep Idle state, such as where the waveform 710 is -1. The UE can transition to the power saving sub-state or Deep Idle state immediately from the connected state, or it can transition after the next paging occasion (PO) as shown in graph 700B in the first transition zone 708, such as by first entering an Idle mode (waveform 710 is 0) and then entering the Deep Idle state after the next PO (where the waveform 710 transitions from 0 to 1 to -1).

[0044] The UE can transition from active with no data to an Idle mode, such as is represented by an RRC Connection Release 712, where the waveform 720 moves from 1 to 0. In graph 700C, the UE can be in a Deep Idle state, such as where the waveform 720 is -1. The UE can transition to the power saving sub-state or Deep Idle state after the expiration of a timer, such as a Power Saving Timer 714. The Power Saving Timer 714 can start immediately after the transition to an Idle state, or it can start after the next PO as shown in graph 700C in the timer start zone 716. After the timer expires, the UE can transition to the power saving sub-state or Deep Idle state immediately from the connected state or from the Idle state, or it can transition after the next PO as shown in graph 700C in the second transition zone 718 (where the waveform 720 transitions from 1 to 0 to -1).

[0045] In an example, the UE can transition between the connected state and the power saving sub-state immediately or after a short delay, such as to wake up before a PO to synchronize and compensate for clock drift.

[0046] FIG. 8 illustrates generally a technique, such as a method, that can include receiving, by an eNodeB, a notification that a UE is configured to be used for MTC 802, determining whether the UE is in an RRC_Connected state 804, determining whether the UE can enter a power saving mode 806, and configuring the UE to change to an RRC Deep Idle mode, in response to determining that the UE is in the RRC_Connected state and the UE can enter the power saving mode 808, in accordance with some embodiments.

[0047] FIG. 9 illustrates generally an example of a block diagram of a machine 900 upon which any one or more of the techniques (e.g., methodologies) discussed herein can perform in accordance with some embodiments. In alternative embodiments, the machine 900 can operate as a standalone device or can be connected (e.g., networked) to other machines. In a networked deployment, the machine 900 can operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine 900 can act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine 900 can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.
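A minimal sketch of the FIG. 8 flow, with hypothetical helper names; the prose above, not this code, defines the technique:

```python
# A minimal sketch of the FIG. 8 technique performed by eNodeB circuitry;
# helper names are illustrative, not a defined API.

def handle_mtc_notification(enodeb, ue):
    enodeb.receive_mtc_notification(ue)           # 802: UE is used for MTC
    if ue.rrc_state() == "RRC_CONNECTED":         # 804
        if ue.can_enter_power_saving():           # 806
            enodeb.configure_rrc_deep_idle(ue)    # 808
```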
Returning to the machine 900 of FIG. 9: while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.

[0048] Examples, as described herein, can include, or can operate on, logic or a number of components, modules, or mechanisms. Modules are tangible entities (e.g., hardware) capable of performing specified operations when operating. A module includes hardware. In an example, the hardware can be specifically configured to carry out a specific operation (e.g., hardwired). In an example, the hardware can include configurable execution units (e.g., transistors, circuits, etc.) and a computer readable medium containing instructions, where the instructions configure the execution units to carry out a specific operation when in operation. The configuring can occur under the direction of the execution units or a loading mechanism. Accordingly, the execution units are communicatively coupled to the computer readable medium when the device is operating. In this example, the execution units can be a member of more than one module. For example, under operation, the execution units can be configured by a first set of instructions to implement a first module at one point in time and reconfigured by a second set of instructions to implement a second module.

[0049] Machine (e.g., computer system) 900 can include a hardware processor 902 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 904, and a static memory 906, some or all of which can communicate with each other via an interlink (e.g., bus) 908. The machine 900 can further include a display unit 910, an alphanumeric input device 912 (e.g., a keyboard), and a user interface (UI) navigation device 914 (e.g., a mouse). In an example, the display unit 910, alphanumeric input device 912, and UI navigation device 914 can be a touch screen display. The machine 900 can additionally include a storage device (e.g., drive unit) 916, a signal generation device 918 (e.g., a speaker), a network interface device 920, and one or more sensors 921, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine 900 can include an output controller 928, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).

[0050] The storage device 916 can include a machine readable medium 922, which is non-transitory, on which is stored one or more sets of data structures or instructions 924 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 924 can also reside, completely or at least partially, within the main memory 904, within static memory 906, or within the hardware processor 902 during execution thereof by the machine 900.
In an example, one or any combination of the hardware processor 902, the main memory 904, the static memory 906, or the storage device 916 can constitute machine readable media.

[0051] While the machine readable medium 922 is illustrated as a single medium, the term "machine readable medium" can include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 924.

[0052] The term "machine readable medium" can include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 900 and that cause the machine 900 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions. Non-limiting machine readable medium examples can include solid-state memories, and optical and magnetic media. In an example, a massed machine readable medium comprises a machine readable medium with a plurality of particles having invariant (e.g., rest) mass. Accordingly, massed machine-readable media are not transitory propagating signals. Specific examples of massed machine readable media can include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.

[0053] The instructions 924 can further be transmitted or received over a communications network 926 using a transmission medium via the network interface device 920 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks can include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, the IEEE 802.16 family of standards known as WiMax®, the IEEE 802.15.4 family of standards, and peer-to-peer (P2P) networks, among others). In an example, the network interface device 920 can include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 926. In an example, the network interface device 920 can include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine 900, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.

Various Notes & Examples

[0054] Additional examples of the presently described method, system, and device embodiments are suggested according to the structures and techniques described herein.
Other non-limiting examples can be configured to operate separately, or can be combined in any permutation or combination with any one or more of the other examples provided above or throughout the present disclosure.

[0055] Example 1 includes the subject matter embodied by an evolved Node B (eNodeB) comprising: a processor arranged to: receive a notification that a User Equipment (UE) is configured to be used for Machine Type Communication (MTC), determine whether the UE is in a connected state, determine whether the UE can enter a power saving mode, and configure the UE to change to a power saving mode, in response to determining that the UE is in the connected state and the UE can enter the power saving mode.

[0056] In Example 2, the subject matter of Example 1 can optionally include wherein the connected state comprises a Radio Resource Control Connected (RRC_Connected) state and the power saving mode comprises a Radio Resource Control (RRC) Deep Idle mode.

[0057] In Example 3, the subject matter of one or any combination of Examples 1-2 can optionally include wherein operations to determine whether the UE can enter the power saving mode include, upon expiration of a specified amount of time, operations to configure the UE to change to an RRC Idle state.

[0058] In Example 4, the subject matter of one or any combination of Examples 1-3 can optionally include wherein the RRC Deep Idle mode is a configuration in an RRC Idle state.

[0059] In Example 5, the subject matter of one or any combination of Examples 1-4 can optionally include wherein the RRC Deep Idle mode is an RRC Deep Idle state and is separately configured from an RRC Idle state.

[0060] In Example 6, the subject matter of one or any combination of Examples 1-5 can optionally include wherein operations to configure the UE to change to an RRC Deep Idle mode include operations to send a network indication to the UE.

[0061] In Example 7, the subject matter of one or any combination of Examples 1-6 can optionally include wherein the network indication includes a new NAS signaling message.

[0062] In Example 8, the subject matter of one or any combination of Examples 1-7 can optionally include wherein the network indication includes a Radio Resource Control Connection Release (RRCConnectionRelease) message.

[0063] In Example 9, the subject matter of one or any combination of Examples 1-8 can optionally include wherein the network indication includes a new RRC Power Saving Release message.

[0064] In Example 10, the subject matter of one or any combination of Examples 1-9 can optionally include operations to configure the UE to change to an RRC Idle state.
[0065] In Example 11, the subject matter of one or any combination of Examples 1-10 can optionally include wherein operations to configure the UE to change to an RRC Idle state occur only if data activity is expected.

[0066] In Example 12, the subject matter of one or any combination of Examples 1-11 can optionally include operations to use a timer to determine when to configure the UE to change to a power saving mode.

[0067] In Example 13, the subject matter of one or any combination of Examples 1-12 can optionally include wherein the operations to receive a notification occur over a wireless communications network comprising a 3rd Generation Partnership Project (3GPP) long term evolution (LTE) network.

[0068] In Example 14, the subject matter of one or any combination of Examples 1-13 can optionally include operations to configure the UE to leave the power saving mode upon expiration of a specified amount of time.

[0069] Example 15 can include, or can optionally be combined with all or portions of the subject matter of one or any combination of Examples 1-14 to include, the subject matter embodied by a method performed by circuitry of an evolved Node B (eNodeB) including: receiving, by the eNodeB, a notification that a User Equipment (UE) is configured to be used for Machine Type Communication (MTC), determining whether the UE is in a connected state, determining whether the UE can enter a power saving mode, and configuring the UE to change to a power saving mode, in response to determining that the UE is in the connected state and the UE can enter the power saving mode.

[0070] In Example 16, the subject matter of Example 15 can optionally include using a timer to determine when to configure the UE to change to an RRC Deep Idle mode.

[0071] In Example 17, the subject matter of one or any combination of Examples 15-16 can optionally include wherein determining whether the UE can enter a power saving mode includes, upon expiration of a specified amount of time, configuring the UE to change to an RRC Idle state.

[0072] In Example 18, the subject matter of one or any combination of Examples 15-17 can optionally include wherein configuring the UE to change to an RRC Deep Idle mode includes sending a network indication to the UE.
[0073] Example 19 can include, or can optionally be combined with all or portions of the subject matter of one or any combination of Examples 1-18 to include the subject matter embodied by at least one machine-readable medium including instructions for operation of a computing system, which when executed by a machine, cause the machine to perform operations including: determine, by the UE, that the UE is configured to be used for Machine Type Communication (MTC), determine whether the UE is in a Radio Resource Control Connected (RRC_Connected) state, determine whether the UE can enter a power saving mode, and configure the UE to change to an RRC Deep Idle mode, in response to determining that the UE is in the RRC_Connected state and the UE can enter the power saving mode.

[0074] In Example 20, the subject matter of Example 19 can optionally include operations to use a timer to determine when to configure the UE to change to an RRC Deep Idle mode.

[0075] In Example 21, the subject matter of one or any combination of Examples 19-20 can optionally include wherein operations to determine whether the UE can enter a power saving mode include, upon expiration of a specified amount of time, operations to configure the UE to change to an RRC Idle state.

[0076] In Example 22, the subject matter of one or any combination of Examples 19-21 can optionally include wherein operations to configure the UE to change to an RRC Deep Idle mode include operations to send a network indication to the UE.

[0077] Example 23 can include, or can optionally be combined with all or portions of the subject matter of one or any combination of Examples 1-22 to include the subject matter embodied by User Equipment (UE) including: a transceiver configured to be used for Machine Type Communication (MTC), and a processor, coupled to the transceiver, arranged to: determine whether the UE is in a Radio Resource Control Connected (RRC_Connected) state, determine whether the UE can enter a power saving mode, and configure the UE to change to an RRC Deep Idle mode, in response to determining that the UE is in the RRC_Connected state and the UE can enter the power saving mode.

[0078] In Example 24, the subject matter of Example 23 can optionally include wherein determine whether the UE can enter a power saving mode includes, upon expiration of a specified amount of time, configure the UE to change to an RRC Idle state.

[0079] In Example 25, the subject matter of one or any combination of Examples 23-24 can optionally include wherein configure the UE to change to an RRC Deep Idle mode includes send a network indication to the UE.

[0080] Each of these non-limiting examples can stand on its own, or can be combined in various permutations or combinations with one or more of the other examples.
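For readers who prefer pseudocode, the following Python sketch restates the eNodeB control flow of Examples 1, 12, and 14 above. It is illustrative only: the type names, the enodeb_power_save_check function, and the timeout value are assumptions of this sketch, not part of the disclosure or of any 3GPP specification.

from dataclasses import dataclass
from enum import Enum, auto

class RrcState(Enum):
    CONNECTED = auto()   # RRC_Connected
    IDLE = auto()        # RRC Idle
    DEEP_IDLE = auto()   # RRC Deep Idle power saving mode

@dataclass
class Ue:
    is_mtc: bool                 # configured for Machine Type Communication
    rrc_state: RrcState
    can_power_save: bool
    inactivity_started: float = 0.0

INACTIVITY_TIMEOUT_S = 10.0      # illustrative value only

def enodeb_power_save_check(ue: Ue, now: float) -> None:
    """Sketch of Examples 1, 12, and 14: a UE reported as MTC-capable that
    is connected and able to power save is moved to the Deep Idle mode once
    a timer expires, and later leaves the power saving mode."""
    if not ue.is_mtc:
        return
    if ue.rrc_state is RrcState.CONNECTED and ue.can_power_save:
        # Example 12: a timer decides when to enter the power saving mode.
        if now - ue.inactivity_started >= INACTIVITY_TIMEOUT_S:
            ue.rrc_state = RrcState.DEEP_IDLE   # e.g. via a release message
    elif ue.rrc_state is RrcState.DEEP_IDLE:
        # Example 14: leave the power saving mode after a specified time,
        # e.g. when data activity is expected (Example 11).
        ue.rrc_state = RrcState.IDLE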
[0081] The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments that can be practiced. These embodiments are also referred to herein as "examples." Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.

[0082] In the event of inconsistent usages between this document and any documents so incorporated by reference, the usage in this document controls.

[0083] In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B," unless otherwise indicated. In this document, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Also, in the following claims, the terms "including" and "comprising" are open-ended, that is, a system, device, article, composition, formulation, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.

[0084] Method examples described herein can be machine or computer-implemented at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer readable instructions for performing various methods. The code can form portions of computer program products. Further, in an example, the code can be tangibly stored on one or more volatile, non-transitory, or non-volatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media can include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like.

[0085] The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) can be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to comply with 37 C.F.R. § 1.72(b), to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features can be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter can lie in less than all features of a particular disclosed embodiment.
Thus, the following claims are hereby incorporated into the Detailed Description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations. The scope of the embodiments should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

[0086] The following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment. |
A system and method for providing a sort in a computer system is disclosed. The sort orders a plurality of items based on a plurality of values of a key. Each of the plurality of items has an associated value of the plurality of values. The method and system include providing a new item of the plurality of items to a plurality of sort cells. The new item includes a new value of the plurality of values. The plurality of sort cells is for sorting the plurality of items. Each sort cell is for sorting a corresponding item of the plurality of items. The corresponding item has a corresponding value of the plurality of values. The method and system further include comparing the new value to the corresponding value for each of the plurality of sort cells to determine whether to retain the corresponding item. Each of the plurality of sort cells retains the corresponding item if the corresponding item is to be retained. For each of the plurality of sort cells, the method and system determine whether to accept the new item or an item corresponding to the previous sort cell if the corresponding item is not to be retained. If the corresponding item is not to be retained, the method and system allow a sort cell to accept the new item or the item corresponding to the previous sort cell. |
What is claimed is:

1. A system for sorting a plurality of items in a computer system, the sort being based on a plurality of values of a key, each item of the plurality of items having a value of the plurality of values, the system comprising: a plurality of sort cells for sorting at least a portion of the plurality of items, the plurality of sort cells including a first sort cell, each sort cell of the plurality of sort cells being a hardware sort cell, each sort cell for sorting a corresponding item, the corresponding item having a corresponding value, each sort cell further having an associated key storage for storing the corresponding value, a comparator for comparing the corresponding value to a new value associated with a new item and an output, wherein each of the plurality of sort cells except the first sort cell further includes a first input, the first input of each of the plurality of sort cells except the first sort cell being coupled with the output of a previous sort cell; and a second input coupled with each of the plurality of sort cells, the second input for providing the new item to the plurality of sort cells; such that each of the plurality of sort cells compares the new value to the corresponding value to determine whether to retain the corresponding item and retains the corresponding item if it is determined that the corresponding item is to be retained; and such that each of the plurality of sort cells accepts either the new item or an item corresponding to the previous sort cell from the output of the previous sort cell if the corresponding item is not to be retained and provides the corresponding item over the output if the corresponding item is not retained.

2. The system of claim 1 wherein the comparator further provides a resultant, and wherein each of the plurality of sort cells further includes a controller coupled with the comparator and the storage, the controller for determining whether the corresponding item is to be retained based on the resultant and for allowing each sort cell to further accept the new item or the item corresponding to the previous sort cell if the corresponding item is not to be retained.

3. The system of claim 2 wherein the output of each of the plurality of sort cells provides a signal to the next cell indicating that the corresponding item is to be accepted by the next cell if the corresponding item is not to be retained.

4. The system of claim 3 wherein the controller further determines that the corresponding item is to be retained when the comparator indicates the corresponding value is not greater than the new value.

5. The system of claim 4 wherein the controller determines that the new item is to be accepted when the corresponding value is greater than the new value and the previous cell has not provided a signal indicating that the item corresponding to the previous cell is to be accepted by the sort cell.

6. The system of claim 5 wherein the controller determines that the item corresponding to the previous cell is to be accepted when the corresponding value is greater than the new value and the previous cell has provided a signal indicating that the item corresponding to the previous cell is to be accepted by the sort cell.

7. The system of claim 6 wherein each of the plurality of items further has an identification associated with it and wherein the controller further allows the plurality of sort cells to sort the portion of the plurality of items having the same identification.
8. The system of claim 7 wherein each of the plurality of items has data associated with it and wherein each of the plurality of sort cells further has an associated data storage for storing the data associated with the corresponding item, the sort cell providing the data associated with the corresponding item over the output if the corresponding item is not to be retained, and the sort cell storing data associated with the new item or data associated with the item corresponding to the previous cell in the data storage if the corresponding item is not to be retained.

9. The system of claim 8 wherein the computer system is a computer graphics system including a display, wherein each of the plurality of items is a fragment, the key is a z value, and the data associated with the corresponding item includes color and blending data.

10. The system of claim 7 wherein each of the plurality of items has data associated with it, and wherein the system further comprises: data storage for storing the data associated with the corresponding item for each of the plurality of sort cells.

11. The system of claim 10 wherein the computer system is a computer graphics system including a display, wherein each of the plurality of items is a fragment, the key is a z value, and the data associated with the corresponding item includes color and blending data.

12. The system of claim 7 wherein the key storage is located remote from each sort cell.

13. The system of claim 1 wherein the comparator further provides a resultant, and wherein the system further includes: a controller coupled with the plurality of sort cells, the controller for determining whether the corresponding item for each of the plurality of sort cells is to be retained based on the resultant of each of the plurality of sort cells, the controller further instructing each sort cell to accept the new item or the item corresponding to the previous sort cell if the corresponding item is not retained.

14. The system of claim 13 wherein the controller further provides a signal to each of the plurality of sort cells indicating that the item corresponding to the previous sort cell is to be accepted if the corresponding item is not to be retained.

15. The system of claim 14 wherein the controller determines whether the corresponding item is to be retained when the comparator indicates the corresponding value is not greater than the new value.

16. The system of claim 15 wherein the controller determines that the new item is to be accepted when the corresponding value is greater than the new value and the previous cell has not provided a signal indicating that the item corresponding to the previous cell is to be accepted by the sort cell.

17. The system of claim 16 wherein the controller determines that the item corresponding to the previous cell is to be accepted when the corresponding value is greater than the new value and the previous cell has provided a signal indicating that the item corresponding to the previous cell is to be accepted by the sort cell.

18. The system of claim 17 wherein each of the plurality of items further has an identification associated with it and wherein the controller further allows the plurality of sort cells to sort the portion of the plurality of items having the same identification.
19. The system of claim 18 wherein each of the plurality of items further has an identification associated with it and wherein the controller further allows the plurality of sort cells to sort the portion of the plurality of items having the same identification.

20. The system of claim 19 wherein each of the plurality of items has data associated with it and wherein each of the plurality of sort cells further includes: data storage for storing the data associated with the corresponding item, the sort cell providing the data associated with the corresponding item over the output if the corresponding item is not to be retained, and the sort cell storing data associated with the new item or data associated with the item corresponding to the previous cell in the data storage if the corresponding item is not to be retained.

21. The system of claim 20 wherein the computer system is a computer graphics system including a display, wherein each of the plurality of items is a fragment, the key is a z value, and the data associated with the corresponding item includes color and blending data.

22. The system of claim 19 wherein each of the plurality of items has data associated with it and wherein the system further comprises: data storage for storing the data associated with the corresponding item for each of the plurality of sort cells.

23. The system of claim 22 wherein the computer system is a computer graphics system including a display, wherein each of the plurality of items is a fragment, the key is a z value, and the data associated with the corresponding item includes color and blending data.

24. The system of claim 23 wherein the data storage is located remote from each sort cell.

25. A method of sorting a plurality of items in a computer system, the sort being based on a plurality of values of a key, each of the plurality of items having an associated value of the plurality of values, the method comprising the steps of: (a) providing at least one new item of the plurality of items to a plurality of sort cells, each sort cell of the plurality of sort cells being a hardware sort cell, the at least one new item including at least one new value of the plurality of values, the plurality of sort cells for sorting the plurality of items, each sort cell for sorting at least one corresponding item of the plurality of items, the at least one corresponding item having at least one corresponding value of the plurality of values; (b) comparing the at least one new value to the at least one corresponding value in each of the plurality of sort cells to determine whether to retain the at least one corresponding item in each of the plurality of sort cells; (c) for each of the plurality of sort cells, retaining the at least one corresponding item if the at least one corresponding item is to be retained; (d) for each of the plurality of sort cells, determining whether to accept the at least one new item or at least one item corresponding to the previous sort cell if the at least one corresponding item is not retained; (e) if the at least one corresponding item is not to be retained, accepting the at least one new item or the item corresponding to the previous sort cell.

26. The method of claim 25 wherein the steps (a) through (e) are performed in one clock cycle.
27. The method of claim 26 wherein the at least one corresponding item further includes data associated with it, and wherein the method further comprises the step of: (f) storing the data associated with the at least one corresponding item in a data storage, the data being retained if the at least one corresponding item is to be retained and being provided over the output if the at least one corresponding item is not to be retained.

28. The method of claim 27 wherein the computer system is a computer graphics system and the data associated with the at least one corresponding item further includes color and blending data for a particular fragment for a particular pixel on the display.

29. The method of claim 28 further comprising the steps of: (g) providing an identification for each of the plurality of items; and (h) sorting the portion of the plurality of items having the same identification.

30. The system of claim 1 wherein the at least one new item is provided to the plurality of sort cells substantially in parallel.

31. The system of claim 1 wherein the plurality of sort cells compare the new value to the corresponding value substantially in parallel.

32. The system of claim 1 wherein the plurality of sort cells retain the corresponding item if it is determined that the corresponding item is to be retained substantially in parallel.

33. The system of claim 1 wherein the plurality of sort cells accept either the new item or the item corresponding to the previous sort cell if the corresponding item is not to be retained and provide the corresponding item over the output if the corresponding item is not retained substantially in parallel.

34. The method of claim 25 wherein the providing step (a) includes the step of: (a1) providing the at least one new item to the plurality of sort cells substantially in parallel.

35. The method of claim 25 wherein the plurality of sort cells perform the comparing step (b) substantially in parallel.

36. The method of claim 25 wherein the plurality of sort cells perform the retaining step (c) substantially in parallel.

37. The method of claim 25 wherein the plurality of sort cells perform the determining step (d) substantially in parallel.

38. The method of claim 25 wherein the plurality of sort cells perform the accepting step (e) substantially in parallel. |
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of Ser. No. 09/062,872, filed Apr. 20, 1998, now abandoned. The present application is related to co-pending U.S. patent application Ser. No. 08/624,261 entitled "Method and Apparatus for Identifying and Eliminating Three-Dimensional Objects Visually Obstructed from a Planar Surface" filed on Mar. 29, 1996, now U.S. Pat. No. 5,926,181. The present application is also related to U.S. patent application Ser. No. 08/624,260 entitled "Graphics Processors, System and Method for Generating Screen Pixels in Raster Order Utilizing a Single Interpolator" filed on Mar. 29, 1996, now U.S. Pat. No. 5,963,210.

FIELD OF THE INVENTION

The present invention relates to computer systems, and more particularly to a method and system for providing a hardware sort which is simple to design, simple to use, fast, and applicable to computer graphics systems.

BACKGROUND OF THE INVENTION

Many computer systems must sort items based on the value of a key in order to achieve certain functions. Such computer systems conventionally employ a software sort. For example, computer graphics systems may utilize a software sort in order to render an image. In current computer graphics systems, images of three-dimensional objects can be depicted on a two-dimensional display. In order to give the illusion of depth, computer graphics systems use each object's "z value," the distance of each object to the viewing plane. In particular, the objects are ordered based on each object's z value. Thus, the key for such a sort is the z value. Once the objects are sorted according to their z values, the computer graphics system can correctly blend the colors of translucent objects and opaque objects that can be seen through the translucent objects to achieve the proper color to be displayed for each pixel.

In a conventional computer graphics system, the software sort occurs when a display list is generated through an application. The display list orders three-dimensional objects based on a key, typically the z value. The display list typically orders translucent objects from back to front. Thus, the display list sorts translucent objects. Although they may appear on the display list, opaque objects are typically sorted using a conventional Z buffer.

Placing the objects in the order prescribed by the display list allows the computer system to properly depict the images of the three-dimensional objects on the display. Hardware in the computer graphics system utilizes the display list, a frame buffer, and a z buffer to render the three-dimensional objects in the order dictated by the display list. The frame buffer and z buffer describe a portion of a three-dimensional object that is to be rendered. The frame buffer includes data such as color and alpha values for the portion of the object, while the z buffer includes the corresponding z values. The conventional computer graphics system provides the objects described in the frame and z buffers to the display screen in the order prescribed by the display list. Consequently, solid objects are rendered first, then translucent objects are rendered from back to front. Thus, the display list generated by software is used to render the three-dimensional objects.

Although conventional computer graphics systems are capable of depicting three-dimensional objects, the software sort required to provide the display list can be relatively slow. If the software sort is optimized, the sort time can be reduced to a limited extent.
However, development time for the software sort is significantly increased. Moreover, changes to the display list and the software sort creating the display list may be difficult to implement. Finally, since the hardware requires a display list in order to properly render the objects, the computer system is limited to using those applications which provide a sorted display list. Without the display list and the attendant software sort, the computer system may not be able to properly depict three-dimensional objects.

Accordingly, what is needed is a system and method for sorting items which does not require a sort performed by software. It would also be beneficial if the system and method could be implemented in a computer graphics system for providing a two-dimensional image of three-dimensional objects. The present invention addresses such a need.

SUMMARY OF THE INVENTION

The present invention provides a method and system for providing a sort in a computer system. The sort is based on a plurality of values of a key. Each of the plurality of items has an associated value of the plurality of values. The method and system comprise providing a new item of the plurality of items to a plurality of sort cells. The new item includes a new value of the plurality of values. The plurality of sort cells is for sorting the plurality of items. Each sort cell is for sorting a corresponding item of the plurality of items. The corresponding item has a corresponding value of the plurality of values. The method and system further comprise comparing the new value to the corresponding value for each of the plurality of sort cells to determine whether to retain the corresponding item. Each of the plurality of sort cells retains the corresponding item if the corresponding item is to be retained. For each of the plurality of sort cells, the method and system determine whether to accept the new item or an item corresponding to the previous sort cell if the corresponding item is not to be retained. If the corresponding item is not to be retained, the method and system allow a sort cell to accept the new item or the item corresponding to the previous sort cell.

According to the system and method disclosed herein, the present invention provides a hardware sort which is simple to implement and modify, fast, and applicable to computer graphics systems.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a conventional system for depicting three-dimensional objects on a two-dimensional display.

FIG. 2 is a block diagram depicting a computer graphics system utilizing the system and method in accordance with the present invention.

FIG. 3 is a block diagram of one embodiment of a system in accordance with the present invention.

FIG. 4 is a flow chart of a method for providing a sort in accordance with the present invention.

FIG. 5 is a block diagram depicting one embodiment of a sort cell in accordance with the present invention.

FIG. 6 is a detailed flow chart of a method for providing a sort in accordance with the present invention.

FIG. 7 is a block diagram of a preferred embodiment of a system in accordance with the present invention.

FIG. 8 is a block diagram depicting a preferred embodiment of a sort cell in accordance with the present invention.

DETAILED DESCRIPTION OF THE INVENTION

The present invention relates to an improvement in computer systems which sort items based on a key.
The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Various modifications to the preferred embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. Thus, the present invention is not intended to be limited to the embodiment shown but is to be accorded the widest scope consistent with the principles and features described herein.

Many conventional systems achieve a particular desired result by sorting items based on the value of a key associated with the items. For example, computer graphics systems which depict images of three-dimensional objects on a two-dimensional display use a software sort to create the illusion of depth. The key that is used for this software sort is typically an object's "z value", or distance from the viewing plane at a particular pixel.

FIG. 1 depicts a block diagram of a conventional system 10 for providing a two-dimensional display of three-dimensional objects. Typically, the three-dimensional objects are broken into polygons, such as triangles, for display. Consequently, rendering a three-dimensional object will be discussed in the context of rendering polygons. A software application 12 is used to display the polygons. The application 12 calls drivers 14 which create a display list.

The display list contains the x, y, and z coordinates, alpha or blending value, color, and other information for each polygon. The display list also lists the information relating to each polygon in the order in which the polygons will be rendered to the display 22. The display list typically places polygons which are part of opaque or solid objects first on the display list. These opaque objects are typically sorted by a z-buffer 17, described below. The polygons for translucent objects are generally next on the display list. The translucent polygons are ordered from back to front, or highest to lowest z value, on the display list. The ordering of the polygons allows the computer system to properly depict the images of the three-dimensional objects on the display. The drivers 14 sort the translucent polygons based on the z value of each polygon in order to create the display list. Note that in general, a polygon may contain a range of z values. Consequently, the sort provided by the display list may actually be more complex to take into account the range of z values. Thus, in the conventional computer graphics system 10, the translucent polygons are sorted by software to determine the order in which the objects will be rendered to the display.

Once the polygons are properly ordered in the display list, a hardware renderer 16 can begin the process of rendering the polygons to the display 22. The display list is provided to the hardware renderer 16, which prepares data relating to the polygons for display. The hardware renderer 16 creates a z buffer 17 and a frame buffer 18 to store data relating to each of the polygons. The z buffer 17 includes the z values for each pixel in the polygon. The frame buffer 18 includes the colors for each pixel in the polygon.

For each polygon on the display list, data from the z buffer 17 and frame buffer 18 are then provided to the display controller 20. The display controller 20 then outputs the data on the display 22. Thus, each polygon is rendered in the order dictated by the display list. Because the polygons are rendered in the order prescribed by the display list, the conventional system 10 can correctly blend the colors of translucent objects and opaque objects that can be seen through the translucent objects to achieve the proper color for each pixel to be displayed.
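As an illustration of the conventional back-to-front blending just described, the following Python sketch sorts the translucent fragments covering one pixel by z value and composites them over an opaque background. The function name, the tuple layout, and the choice of the standard "over" blend equation are assumptions made for clarity; they are not taken from the conventional system 10.

# Illustrative sketch only: translucent fragments for one pixel are
# sorted back to front by z (the software sort a display list performs),
# then alpha-blended over the opaque background color.

def blend_pixel(background, fragments):
    """background: (r, g, b); fragments: list of (z, (r, g, b), alpha)."""
    color = background
    # Back to front: highest z (farthest) first, as a display list orders them.
    for _, frag_color, alpha in sorted(fragments, key=lambda f: f[0], reverse=True):
        color = tuple(alpha * fc + (1.0 - alpha) * c
                      for fc, c in zip(frag_color, color))
    return color

# Example: a red translucent fragment in front of a blue one, over black.
print(blend_pixel((0.0, 0.0, 0.0),
                  [(2.0, (0.0, 0.0, 1.0), 0.5),    # farther, blue
                   (1.0, (1.0, 0.0, 0.0), 0.5)]))  # nearer, red

Blending in the wrong order would give a visibly different pixel color, which is why the conventional system depends on the software sort.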
Although the conventional system 10 shown in FIG. 1 is capable of displaying three-dimensional objects on a two-dimensional display, those with ordinary skill in the art will realize that the software sort required to provide the display list has several disadvantages. The software sort can be relatively slow if not optimized. If the software sort is optimized, the time required to perform the software sort can be reduced to a certain extent. However, development time for the software sort is significantly increased. Moreover, changes to the display list and the software sort creating the display list may be difficult or time consuming to implement. Finally, the fact that the hardware renderer 16 requires the data to be provided in the order of the display list limits the computer system 10 to using applications 12 which provide the translucent polygons sorted on the display list and, therefore, perform a software sort. Without having the translucent polygons properly sorted in the display list, the computer system 10 may not be able to properly depict three-dimensional objects.

The present invention provides a method and system for providing a sort in a computer system. The sort is based on a plurality of values of a key. Each of the plurality of items has an associated value of the plurality of values. The method and system comprise providing a new item of the plurality of items to a plurality of sort cells. The new item includes a new value of the plurality of values. The plurality of sort cells is for sorting the plurality of items. Each sort cell is for sorting a corresponding item of the plurality of items. The corresponding item has a corresponding value of the plurality of values. The method and system further comprise comparing the new value to the corresponding value for each of the plurality of sort cells to determine whether to retain the corresponding item. Each of the plurality of sort cells retains the corresponding item if the corresponding item is to be retained. For each of the plurality of sort cells, the method and system determine whether to accept the new item or an item corresponding to the previous sort cell if the corresponding item is not to be retained. If the corresponding item is not to be retained, the method and system allow a sort cell to accept the new item or the item corresponding to the previous sort cell.

The present invention will be described in terms of a computer graphics system which sorts fragments using a specified number of sort cells. However, one of ordinary skill in the art will readily recognize that this method and system will operate effectively for other computer systems which sort other items and for another number of sort cells.

To more particularly illustrate the method and system in accordance with the present invention, refer now to FIG. 2 depicting a computer graphics system 50 in which the present invention could be implemented. The computer system 50 is described more completely in U.S. Pat. No. 5,926,181 entitled "Method and Apparatus for Identifying and Eliminating Three-Dimensional Objects Visually Obstructed from a Planar Surface".
The computer graphics system 50 includes a central processing unit (CPU) 52, a display 54, a user interface 56 such as a keyboard or mouse or other communicating device, a memory 55, and an image generating unit 60 coupled with a bus 58. Note, however, that nothing prevents the method and system from being implemented in a different computer system having other components.

The image generating unit includes an interface 61 connected to the bus 58. The interface 61 transmits data to a data processing unit 62. A processor block 65 is coupled with the data processing unit 62. The processor block 65 identifies data describing polygons ("intersecting polygons") which intersect a z axis extending from a selected pixel in an x-y plane corresponding to a screen of the display 54. In a preferred embodiment, the processor block 65 includes a processor for each polygon that may intersect the z axis extending from the selected pixel. The data associated with an intersecting polygon is termed a fragment. Thus, data relating to each selected pixel includes fragments for each of the intersecting polygons.

A quick z unit 66 receives the fragments from each intersecting polygon associated with the selected pixel and removes fragments for certain polygons which are obstructed without determining the precise z value of the polygon. The quick z unit 66 is described more completely in U.S. Pat. No. 5,926,181 entitled "Method and Apparatus for Identifying and Eliminating Three-Dimensional Objects Visually Obstructed from a Planar Surface". The interpolator 68 receives the fragments for the polygons intersecting the particular pixel and interpolates the data, including interpolating texture, color, and alpha values for the fragment. The interpolator 68 provides the fragments for each of the intersecting polygons to a hardware sorter 100 constructed in accordance with the present invention. The hardware sorter 100 sorts the fragments for the intersecting polygons based on the value of a key associated with the fragment. In a preferred embodiment, the key is the z value for the fragment at the selected pixel.

To more particularly illustrate the hardware sorter 100 in accordance with the present invention, refer now to FIG. 3 depicting one embodiment of the hardware sorter 100. Note that although the hardware sorter 100 is described in conjunction with the computer graphics system 50, nothing prevents the use of the hardware sorter 100 in another computer system. Thus, the hardware sorter 100 will be described as sorting based on a particular key (the z value) associated with a particular item (the fragment for an intersecting polygon for a selected pixel). However, nothing prevents the hardware sorter 100 from sorting based on another key or accepting other types of data. Thus, the hardware sorter 100 is applicable to other systems requiring a sort, such as a router in a network.

The hardware sorter 100 includes a plurality of sort cells 110. Note that although only four sort cells 110 are depicted, nothing prevents the hardware sorter 100 from having another number of sort cells 110. In a preferred embodiment, the number of sort cells 110 is at least equal to the number of items to be sorted. Thus, in a preferred embodiment, the number of sort cells 110 is the same as the number of processors in the processor block 65. Also in a preferred embodiment, the number of sort cells is sixteen. However, nothing prevents the use of another number of sort cells 110.
Similarly, nothing prevents the number of sort cells 110 from being different from the number of processors in the processor block 65.

Alternate embodiments of the present invention can also be used where overflow may occur. Overflow occurs where there are more items to sort than there are sort cells. The alternate embodiments used for overflow cases may depend on the application for which the hardware sorter 100 is used. For example, where the embodiment is used in a computer graphics system subject to overflow, fragments which are determined to be "solid" may be passed through the hardware sorter 100. In such a case, the hardware sorter 100 will only sort non-solid fragments. The solid fragments may be sorted in a different portion of the image generating unit 60. Note that in such an embodiment, "non-solid" may apply to partial fragments such as those generated by anti-aliasing. Moreover, whether a fragment is solid or non-solid need not be based solely on a fragment's alpha value.

The hardware sorter 100 further includes a new input line 102 for providing a new fragment in parallel to each of the sort cells 110 via new input 120. Each sort cell 110 also includes an output 130. The output 130 of a sort cell 110 is coupled to an input of a next sort cell 110. The output 130 of the last sort cell 110 is not coupled to another sort cell 110. Instead, the output 130 of the last sort cell 110 provides the output of the hardware sorter 100. Although not depicted in FIG. 3, in an alternate embodiment, each sort cell could have another number of new input lines 102, another number of outputs 130, and/or another number of inputs. In such a system each sort cell 110 could sort another number of fragments.

Refer now to FIG. 4 depicting a method 150 in accordance with the present invention which uses the hardware sorter 100. Each sort cell 110 may have a fragment which corresponds to it ("corresponding fragment"). Each corresponding fragment includes a corresponding z value, which is used to sort the fragment, and corresponding data, such as color and alpha values for the corresponding fragment. A new fragment, including the new z value, is broadcast to each of the plurality of sort cells 110, via step 152. Note that in a preferred embodiment, if the new fragment is the first fragment for a pixel, the first fragment is also placed in the first sort cell 110. Where the new fragment is a first fragment for a pixel when the hardware sorter 100 is empty, the first fragment is placed in the first sort cell 110. Preferably, this is accomplished by indicating that data in other sort cells 110 is invalid. However, nothing prevents the present invention from using another method for placing the first fragment in the first sort cell 110 when the hardware sorter 100 is empty.

Steps 154 through 164 are performed for each sort cell 110 in the hardware sorter 100 that takes part in the sort. The new z value for the new fragment is compared to the corresponding z value via step 154. Based on this comparison, each sort cell 110 retains the corresponding fragment, accepts the new fragment, or accepts the fragment corresponding to a previous sort cell 110. Thus, it is determined whether the sort cell 110 will retain the corresponding fragment, including the corresponding z value, via step 156. If the corresponding fragment is to be retained, then the sort cell 110 keeps the corresponding fragment via step 158.
If the corresponding fragment is not to be retained, then in step 160 it is determined whether the sort cell 110 is to take the fragment corresponding to a previous sort cell 110. If the sort cell 110 is to accept this fragment, the sort cell 110 takes the fragment corresponding to the previous cell and passes its corresponding fragment to be accepted by the next sort cell 110 via step 162. If the fragment is not to be taken by the sort cell 110, the sort cell 110 takes the new fragment and passes its corresponding fragment to be accepted by the next cell in step 164. As a result, the new fragment is inserted into the hardware sorter 100 in the appropriate sort cell 110. This process continues to sort all of the fragments provided to the hardware sorter.
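The following Python sketch models one broadcast of the method 150 in software, under the simplifying assumptions that all fragments belong to a single pixel and that the sorter has at least as many cells as fragments. The list cells stands in for the chain of sort cells 110, and all names are illustrative; this is a behavioral sketch, not the hardware itself.

def insert_fragment(cells, new_frag):
    """cells: list of (z, data) tuples or None, ordered first cell to last.
    Models one broadcast cycle; returns a fragment that falls out of the
    last cell when the sorter is full, else None."""
    shifted = None       # fragment handed down the take-data chain
    take_data = False
    for i, frag in enumerate(cells):
        if frag is None:                       # an empty cell ends the chain
            cells[i] = shifted if take_data else new_frag
            return None
        if not take_data and frag[0] <= new_frag[0]:
            continue                           # step 158: retain corresponding fragment
        # Either this is the insertion point (corresponding z > new z,
        # step 164) or the take-data chain is already running (step 162).
        cells[i], shifted = (shifted if take_data else new_frag), frag
        take_data = True
    return shifted if take_data else new_frag  # overflow out of the last cell

cells = [None] * 4
for frag in [(5.0, "far"), (1.0, "near"), (3.0, "mid")]:
    insert_fragment(cells, frag)
print(cells)  # [(1.0, 'near'), (3.0, 'mid'), (5.0, 'far'), None]

Each loop iteration corresponds to logic that every sort cell evaluates simultaneously in hardware; the sequential loop is only a convenience of the software model.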
Refer now to FIG. 5 depicting one embodiment of a sort cell 110 in accordance with the present invention. The sort cell 110 is coupled to a new input 120. The new input 120 includes a new data input 122, a new key input 124, and a new identification input 126. The sort cell 110 includes an input 140 and an output 130. The input 140 includes a control signal input 141, a previous key input 142, and a previous data input 143. The output 130 includes a control signal output 132, a key output 134, and a data output 136. The sort cell 110 also includes an OR gate 113 which combines the control signal input with a control signal provided by the controller 115 to be output on the control signal output 132. In a preferred embodiment, the control signal output 132 is coupled to the control signal input 141 of a next sort cell. The key output 134 is coupled to the key input 142 of the next sort cell. Also in a preferred embodiment, the data output 136 is coupled to the data input 143 of the next sort cell. Consequently, to pass a corresponding fragment to the next sort cell 110, the corresponding key and data are provided over the key output 134 and the data output 136.

The sort cell 110 further includes a comparator 114 and a controller 115 coupled with the comparator 114. In addition, the sort cell 110 contains a key selector 116, a key storage 117, a data selector 118 and a data storage 119. The corresponding key is stored in the key storage 117. The corresponding data for the corresponding fragment is stored in the data storage 119. Note that although the sort cell 110 is depicted as having key storage 117 and data storage 119 internal to the sort cell 110, nothing prevents the method and system from storing the key and/or the data in a location remote from the sort cell 110, such as in a random access memory. The key selector 116 and data selector 118 are controlled by the controller 115. The controller 115 can also provide a "take data" signal which is ORed with a previous cell's "take data" signal by the OR gate 113. This combination signal is provided to the control signal output 132. Thus, when either a previous sort cell 110 or the current sort cell 110 provides a "take data" signal, the current sort cell 110 also asserts a "take data" signal over the control signal output 132. When a "take data" signal is asserted over control signal output 132, the next sort cell 110 will accept the corresponding fragment previously stored in the sort cell 110.

In one embodiment, the key selector 116 and data selector 118 are multiplexers. The key selector 116 selects among the corresponding key stored in the key storage 117, the key corresponding to a previous cell provided through the previous key input 142, and the new key provided through the new key input 124. Similarly, the data selector 118 selects among the corresponding data stored in the data storage 119, the data corresponding to a previous sort cell 110 provided through the previous data input 143, and the new data provided by the new data input 122.

In a preferred embodiment, fragments for pixels in a scan line are provided one after another to the hardware sorter 100. As a result, the hardware sorter 100 should be capable of sorting the fragments for one pixel without affecting the order of the fragments for another pixel. This capability can be provided using the controller 115 and a pixel identification. Consequently, in a preferred embodiment, each fragment for a pixel includes a pixel identification. The new identification is provided to the sort cell 110 through the new identification input 126.

In a preferred embodiment, the pixel identification is provided by a counter, not shown. Also in a preferred embodiment, each fragment has a fragment type associated with it. Preferably, there are four fragment types, N, L, E and EOL. An N fragment is simply an nth fragment in the pixel. This could be any fragment for the pixel except for the last fragment. An L fragment is a last fragment for a particular pixel. An E (empty) fragment indicates that there are no intersecting polygons for the pixel. An EOL fragment indicates that the hardware sorter 100 should be flushed, for example due to an end of line or end of frame.

In the preferred embodiment the fragment types are transformed to an identification by the counter, not shown. The counter provides a unique number for a set of fragments corresponding to a particular pixel. The counter does so by incrementing after every L or E fragment. Consequently, all fragments for the first pixel have an identification of one. All fragments for the second pixel have an identification of two. The identification for each pixel may be incremented until an EOL fragment which flushes the hardware sorter 100 is reached. This identification is provided to each of the sort cells 110 along with the fragment. Consequently, in a preferred embodiment, a corresponding fragment for a particular sort cell 110 includes a corresponding identification in addition to the corresponding z value and the corresponding data.

Note, however, the identification merely allows the hardware sorter 100 to distinguish between different pixels having fragments in the hardware sorter 100 at the same time. Because there are N sort cells 110 in the hardware sorter 100, there can be at most fragments for N different pixels in the hardware sorter 100. Consequently, the identification need only represent up to N different values to ensure that each fragment in the hardware sorter 100 can be associated with a unique pixel. For example, for a hardware sorter 100 having four sort cells 110, an identification having only two bits (four different values) can be used. The counter can run, reusing values for pixels which cannot both have fragments in the hardware sorter 100 at the same time.

To more particularly describe the operation of the hardware sorter 100, refer to FIGS. 3, 5, and 6. FIG. 6 depicts a method 200 for providing a hardware sort using the identification in accordance with the present invention. In step 202, a new fragment is provided to all of the sort cells 110 through the new input line 102 and new input 120. The new fragment includes the new z value, new data and new identification. In a preferred embodiment, the new z value for the fragment is provided to each sort cell 110 via the new key input 124.
Similarly, the new data is provided via the new data input 122. The new identification is provided through the new identification input 126. If the new fragment is the first fragment for a pixel, then the new fragment is automatically input to the first sort cell 110.

The steps 204 through 224 of the method 200 are performed by each sort cell 110 that is taking part in the sort. In step 204, it is determined if the new identification is the same as the corresponding identification. In a preferred embodiment, step 204 is performed by the controller 115. Depending on whether the new identification is the same as the corresponding identification, the sort cell 110 behaves differently.

If the identifications match, the comparator 114 is used to compare the new z value to the corresponding z value, via step 214. In one embodiment, this comparison determines whether the corresponding z value is greater than the new z value. In the embodiment depicted in FIG. 5, the results of the comparison are provided to the controller 115. However, nothing prevents the resultant of the comparator 114 from being used directly to control the key selector 116, the data selector 118, or generate a signal over the control signal output 132.

If the corresponding z value is not greater than the new z value, then the new fragment will not displace the corresponding fragment or the fragment corresponding to any sort cell 110 located prior to the sort cell 110. Thus, if the corresponding z value is not greater than the new z value, the sort cell 110 retains the corresponding fragment via step 216. In a preferred embodiment, the corresponding z value and data are retained in step 216 because the controller 115 signals the selectors 116 and 118 to choose the input from the key storage 117 and data storage 119, respectively.

When the resultant of the comparator 114 indicates that the corresponding z value is greater than the new z value, the sort cell 110 will not retain its state. Consequently, the "take data" signal is asserted via step 218. In a preferred embodiment, the "take data" signal is simply the resultant of the comparator 114 indicating that the corresponding z value is greater than the new z value. To determine what the sort cell 110 will do, the controller determines whether the "take data" signal from a previous sort cell has been asserted via step 220.

When the corresponding fragment (z value and data) is not to be retained, the sort cell 110 must determine whether to accept the new key and data provided over lines 124 and 122, respectively, or to accept the z value and data from a previous cell provided via the previous key input 142 and previous data input 143. Consequently, the controller 115 determines whether the "take data" signal from a previous cell has been asserted over the control signal input 141 via step 220. If the "take data" signal is not asserted, then the sort cell 110 accepts the new fragment via step 222. Consequently, when the resultant of the comparator 114 indicates that the corresponding z value is greater than the new z value and the "take data" signal from the previous sort cell is not provided via the control signal input 141, the controller 115 allows the sort cell 110 to accept the new data. To do so, the controller provides the selectors 116 and 118 with a signal indicating that the values from the new key input 124 and the new data input 122 are to be chosen.
The new z value and new data are then stored in the key storage 117 and data storage 119, respectively.

If the "take data" signal is asserted over the control signal input 141, then via step 224 the sort cell 110 accepts the fragment corresponding to the previous cell. Consequently, when the resultant of the comparator 114 indicates that the corresponding z value is greater than the new z value and the "take data" signal from the previous sort cell 110 is provided via the control signal input 141, the controller 115 allows the sort cell 110 to accept the z value and data corresponding to a previous cell. To do so, the controller provides the selectors 116 and 118 with a signal indicating that the values from the previous key input 142 and the previous data input 143 are to be selected. Thus, the z value and data corresponding to the previous cell are stored in the key storage 117 and data storage 119, respectively.

If the identifications do not match, then via step 206, it is ensured that the new fragment is not sorted by the sort cell 110. In a preferred embodiment, step 206 is performed by turning off the comparator 114 for the sort cell 110. Turning the comparator off assures that the sort cell 110 will not accept the new data, thereby preventing the new fragment from being sorted with fragments for another pixel. The controller 115 then determines if the "take data" signal from the previous cell has been provided to the sort cell 110 via step 208. If the "take data" signal has not been asserted, then the sort cell 110 retains the corresponding fragment via step 210. In a preferred embodiment, the controller accomplishes this by controlling the selectors 116 and 118 to choose the input from the key storage 117 and data storage 119, respectively. If the "take data" signal has been asserted, then via step 212, the sort cell 110 accepts the fragment from the previous sort cell 110 and, due to the OR gate 113, asserts the "take data" signal over the control signal output 132.

Note that for the first sort cell 110 in the hardware sorter 100, a "take data" signal is never asserted over the control signal input 141. As a result, the first sort cell 110 will only transfer its corresponding fragment to the second sort cell when its corresponding z value is greater than the new z value, a fragment for a new pixel arrives, or the hardware sorter 100 is flushed. Thus, the z value for the fragment stored in the first sort cell 110 is the smallest for a particular pixel.

In a preferred embodiment, the pixels input to the hardware sorter 100 will be output in order. The OR gate 113 in each sort cell 110 allows the hardware sorter 100 to accomplish this. A first fragment for a current pixel is input to the first sort cell 110. As subsequent fragments for the current pixel are sorted by the hardware sorter 100, the "take data" signal may be asserted by a sort cell 110 near the top of the hardware sorter 100. Because of the OR gate 113, each subsequent sort cell 110 will assert the "take data" signal over the control signal output 132. As a result, fragments for prior pixels and fragments for the current pixel which have a higher z value are passed down the hardware sorter 100 in order. Eventually, the fragments for these pixels are output by the last sort cell 110 in the hardware sorter. As a result, the pixels pass through the hardware sorter 100 in order and the fragments for each pixel are properly sorted.
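The following Python sketch extends the earlier one to model the method 200, where each fragment carries a pixel identification and the "take data" signal chains displaced fragments toward the output. One point is an assumption of the sketch rather than a teaching of the disclosure: a cell holding a fragment of an earlier pixel is modeled as yielding its position to the broadcast fragment, which keeps each pixel's fragments contiguous and lets pixels emerge in arrival order (the disclosure instead handles the first fragment of a pixel as a special case). All names remain illustrative.

def insert_with_id(cells, new_frag):
    """cells: list of (pixel_id, z, data) tuples or None, first cell to
    last; empty cells, if any, are assumed to trail the occupied ones.
    Returns a fragment that emerges from the last cell, else None."""
    shifted = None
    take_data = False
    for i, frag in enumerate(cells):
        if take_data:                          # take-data chain (step 212):
            cells[i], shifted = shifted, frag  # accept the previous cell's fragment
            continue
        if frag is None:
            cells[i] = new_frag                # empty cell accepts the new fragment
            return None
        if frag[0] == new_frag[0] and frag[1] <= new_frag[1]:
            continue                           # same pixel, smaller key: retain (step 216)
        # Same pixel with a larger key (steps 218/222), or an earlier
        # pixel's fragment (assumed to yield): insert the new fragment
        # here and start the take-data chain.
        cells[i], shifted = new_frag, frag
        take_data = True
    return shifted if take_data else new_frag  # emerges from the last cell

cells = [None] * 3
insert_with_id(cells, (1, 5.0, "a"))
insert_with_id(cells, (1, 2.0, "b"))
insert_with_id(cells, (2, 4.0, "c"))   # new pixel: pixel 1 shifts toward the output
print(cells)  # [(2, 4.0, 'c'), (1, 2.0, 'b'), (1, 5.0, 'a')]

In the final state, the fragments for pixel 1 sit nearest the output in ascending z order, with the fragment for pixel 2 behind them, mirroring the streaming behavior described above.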
In a preferred embodiment, at the end of a line, an EOL fragment causes the "take data" signal to be asserted by all of the sort cells 110. As a result, the hardware sorter 100 is flushed. Consequently, the system 50 is synchronized, ensuring that pixels are written at a particular time.

In a preferred embodiment, the new fragment is broadcast in parallel to the sort cells 110 in a first clock cycle. In the same clock cycle, the sort cells perform a comparison, insert the new fragment in the appropriate place, and, if necessary, move fragments which are displaced by the new fragment farther down the hardware sorter 100. Consequently, each new fragment is sorted in a single cycle in a preferred embodiment. The number of sort cells is based on the order of the number of fragments to be sorted. As a result, the number of cycles between providing a pixel to the first cell of the hardware sorter 100 and providing the sorted pixel to the output of the last sort cell 110 is substantially the same as the number of sort cells 110. In general, this time is at least as good as the time achieved by an optimized software sort. Although there may be a delay between the first pixel being input to the hardware sorter 100 and the first pixel being output by the hardware sorter 100 ("latency"), the sort for each pixel is performed in a single clock cycle. Therefore, a sort performed in accordance with the present invention is typically faster than an optimized software sort.

Although FIG. 5 depicts each sort cell 110 as having an internal controller 115, in a preferred embodiment, the controllers 115 for each sort cell are combined into a single controller outside of the sort cells. FIG. 7 depicts a preferred embodiment of a hardware sorter 300 in accordance with the present invention. Note that although the hardware sorter 300 will be described in conjunction with the computer graphics system 50, nothing prevents the use of the hardware sorter 300 in another computer system. Thus, although the hardware sorter 300 will be described as sorting based on a particular key (the z value) associated with particular data (the fragments for a pixel including color and alpha values), nothing prevents the hardware sorter 300 from sorting based on another key or accepting other types of data. Thus, the hardware sorter 300 is applicable to other systems requiring a sort, such as a router in a network.

The hardware sorter 300 includes a plurality of sort cells 310. Note that although only four sort cells 310 are depicted, nothing prevents the hardware sorter 300 from having another number of sort cells 310. In a preferred embodiment, the number of sort cells 310 is at least equal to the number of items to be sorted. Thus, in a preferred embodiment, the number of sort cells 310 is the same as the number of processors in the processor block 65. Also in a preferred embodiment, the number of sort cells 310 is sixteen.

The hardware sorter 300 further includes a new input line 320 for providing a new fragment in parallel to each of the sort cells 310 via new input 320. Each sort cell 310 also includes an output 330. The output 330 of a sort cell 310 is coupled to an input of a next sort cell 310. The output 330 of the last sort cell 310 is not coupled to another sort cell 310. Instead, the output 330 of the last sort cell 310 provides the output of the hardware sorter 300.
The hardware sorter 300 also includes a controller 350 which controls the sort cells 310 through control lines 360.
To more particularly describe the hardware sorter 300, refer to FIG. 8, depicting a sort cell 310 in accordance with the present invention. The sort cell 310 is coupled to a new input 320. The sort cell 310 includes a control line 360, an output 330, and an input 340. The new input 320 includes new data input 322 and new key input 324. The input 340 includes previous key input 312 and previous data input 313. The output 330 includes corresponding key output 334 and corresponding data output 336. The control line 360 includes control signal output 362 and control signal input 364. The sort cell 310 also includes a key selector 316, key storage 317, data selector 318, and data storage 319. In one embodiment, the key storage 317 and data storage 319 exist outside of the sort cell 310.
The sort cell 310 and hardware sorter 300 function similarly to the sort cell 110 and hardware sorter 100, respectively. The primary difference is that the hardware sorter 300 includes a single controller 350 separate from the sort cells 310. In the hardware sorter 300, the controller 350 takes on the functions of the controllers 115 for all of the sort cells 110 in the hardware sorter 100. In addition, each sort cell 310 includes an input 364 for the signals used to control the key selector 316 and the data selector 318. The sort cell 310 also includes an output 362 to provide the resultant from the comparator 314 to the controller 350. Because the controller 350 controls all of the sort cells 310, in a preferred embodiment no control signal is passed between the sort cells 310. However, the controller 350 controls the sort cells 310 to provide the same functions as provided by the hardware sorter 100.
As discussed above, in a preferred embodiment, the hardware sorter 100 and the hardware sorter 300 include a number of sort cells which is at least the same as the number of items to be sorted. However, nothing prevents fewer sort cells from being used. This is particularly true if an appropriate sort cell 110 or 310 were provided with a heuristic in order to choose which item to discard in the event that the number of items to be sorted is greater than the number of sort cells 110 or 310.
For example, in the computer graphics system 50, the corresponding fragment for the last sort cell 110 or 310 is the fragment farthest from the viewer. Suppose that the new fragment has a z value near that of the fragment corresponding to the last sort cell 110 or 310. A heuristic included in the last sort cell 110 or 310 could take the z value and other information into account to determine how to respond to the new fragment, for example by discarding one of the fragments. Consequently, although a preferred embodiment includes at least as many sort cells as items expected to be sorted, nothing prevents the use of fewer sort cells.
A method and system have been disclosed for providing a hardware sort. Because the new item (fragment) to be sorted can be broadcast to all sort cells in parallel and because each sort cell can simultaneously perform functions, the system and method provide an efficient sort. In addition, each sort cell can be the same as other sort cells. When the number of items sorted or the size of the hardware sorter is desired to be changed, sort cells can be easily added to or removed from the hardware sorter. Consequently, the time required to develop the method and system is reduced.
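As a hedged illustration of the fewer-cells variant discussed above, the following sketch extends the hypothetical HardwareSorterModel from the earlier example. The specific rule shown — discarding whichever fragment would be farthest from the viewer — is only one possible heuristic; the text deliberately leaves the choice open.

```python
class LossySorterModel(HardwareSorterModel):
    """Variant with fewer cells than fragments: overflow is discarded.

    The heuristic here keeps the fragments nearest the viewer. When the
    sorter is full, either the incoming fragment (if it is at least as
    far away as the current farthest one) or the fragment shifted out
    of the last cell is dropped.
    """

    def insert(self, new_key, new_data):
        last = self.cells[-1]
        if last.key != INFINITY and new_key >= last.key:
            # Sorter full and the new fragment is the farthest: drop it.
            return new_key, new_data
        # Otherwise insert normally; the pair shifted out of the last
        # cell (the previous farthest fragment) is the one discarded.
        return super().insert(new_key, new_data)
```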
In addition, because each sort cell can be made the same, the hardware sorter is a regular array. Consequently, the hardware sorter is relatively simple to lay out once the layout of a single cell is determined.
Although the present invention has been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations to the embodiments, and those variations would be within the spirit and scope of the present invention. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims. |
To accommodate the operational and structural requirements of high performance integrated circuits, an integrated circuit package includes conductive trenches that are formed within a substrate. The trenches provide increased current carrying capacity, lower inductance, higher capacitance, and single and/or dual reference planes for signal conductors. Trench structures can be provided at various locations within the substrate, such as adjacent to signal conductors and embedded capacitors, as well as on the substrate periphery to couple the package to a socket. Trenches can be formed by routing, drilling, imprinting, and/or microperforation. Methods of fabrication, as well as application of the package to an electronic assembly and to an electronic system, are also described. |
What is claimed is: 1. A substrate for mounting an integrated circuit comprising:a plurality of layers, at least some of the layers comprising a plurality of traces, vias, and trenches; a first plurality of lands on a first surface thereof and coupled to a first group of traces, vias, and trenches; and a second plurality of lands on a second surface thereof and coupled to a second group of traces, vias, and trenches. 2. The substrate recited in claim 1, wherein the substrate comprises a core region, and wherein the trenches are in the core region. 3. The substrate recited in claim 1, wherein at least one trench is formed with an exposed surface thereof along the periphery of the substrate. 4. The substrate recited in claim 1, wherein at least one of the first plurality of lands and one of the second plurality of lands are to couple to a first potential, wherein at least another one of the first plurality of lands and another one of the second plurality of lands are to couple to a second potential, wherein a first trench is to conduct the first potential, and wherein a second trench is to conduct the second potential. 5. The substrate recited in claim 4, wherein a first group of vias are to conduct electrical signals, and wherein the first group of vias is adjacent to the first trench. 6. The substrate recited in claim 5, wherein the first trench provides a reference plane for electrical signals conducted by the first group of vias. 7. The substrate recited in claim 5, wherein the first group of vias is also adjacent to the second trench. 8. The substrate recited in claim 7, wherein the second trench provides a reference plane for electrical signals conducted by the first group of vias. 9. The substrate recited in claim 4 and further comprising at least one embedded capacitor adjacent to the first trench and to the second trench. 10. The substrate recited in claim 4 and further comprising a first group of trenches to conduct the first potential, and a second group of trenches to conduct the second potential, and wherein at least one embedded capacitor is adjacent to the first and second groups of trenches. 11. The substrate recited in claim 10, wherein the at least one embedded capacitor is adjacent to alternating ones of the first and second groups of trenches. 12. The substrate recited in claim 11, wherein a plurality of embedded capacitors are adjacent to alternating ones of the first and second groups of trenches. 13. The substrate recited in claim 12, wherein a portion of the plurality of embedded capacitors are adjacent to one another, and wherein adjacent trenches between adjacent capacitors are from different ones of the first and second groups of trenches. 14. The substrate recited in claim 4, wherein the first and second trenches are substantially parallel to one another. 15. The substrate recited in claim 14, wherein the first and second trenches are adjacent to one another. 16. A substrate for mounting an integrated circuit comprising:a plurality of non-conductive layers, at least some of the layers comprising a plurality of traces, vias, and trenches; and a plurality of lands on a surface thereof and coupled to a group of the traces, vias, and trenches; wherein at least one of the plurality of lands is to couple to a first potential, wherein at least another one of the first plurality of lands is to couple to a second potential, wherein a first group of trenches is to conduct the first potential; and wherein a second group of trenches is to conduct the second potential. 17. 
The substrate recited in claim 16, wherein trenches from the first group of trenches are adjacent one another. 18. The substrate recited in claim 16, wherein the first and second groups of trenches provide electromagnetic reference planes. 19. The substrate recited in claim 16, wherein the substrate comprises a core region, and wherein the trenches are in the core region. 20. The substrate recited in claim 19, wherein the first and second groups of trenches provide direct current paths through the core region. 21. The substrate recited in claim 16, wherein the first and second groups of trenches are distributed substantially throughout the substrate, with a portion of the trenches being substantially parallel to a first edge of the substrate, and with a further portion of the trenches being substantially parallel to a second edge of the substrate that is orthogonal to the first edge. 22. The substrate recited in claim 16, wherein trenches alternately from the first and second groups are substantially aligned end-to-end. 23. The substrate recited in claim 22, wherein the trenches include trenches with exposed surfaces thereof along the periphery of the substrate to couple to corresponding conductors of a socket. 24. The substrate recited in claim 22, wherein at least one embedded capacitor is substantially surrounded by the trenches that are substantially aligned end-to-end. 25. The substrate recited in claim 16, wherein trenches within the first group are substantially aligned end-to-end. 26. The substrate recited in claim 16, wherein trenches within the first and second groups are substantially aligned end-to-end. 27. The substrate recited in claim 16, wherein trenches alternately from the first and second groups are substantially aligned side-by-side. 28. The substrate recited in claim 27, wherein at least two embedded capacitors are on either side of the trenches that are substantially aligned side-by-side. 29. The substrate recited in claim 28, wherein each of the at least two embedded capacitors is within a cavity, and wherein the trenches that are substantially aligned side-by-side are each within a respective cavity. 30. The substrate recited in claim 16, wherein trenches within the first group are substantially aligned side-by-side. 31. The substrate recited in claim 16, wherein trenches within the first and second groups are substantially aligned side-by-side. 32. The substrate recited in claim 16, wherein a first group of vias is to conduct electrical signals, and wherein the first group of vias is between trenches from the first group. 33. The substrate recited in claim 32, wherein the trenches provide an electromagnetic reference plane for the first group of vias. 34. The substrate recited in claim 16, wherein a first group of vias are to conduct electrical signals, and wherein the first group of vias is adjacent to at least one trench from the first group. 35. The substrate recited in claim 34, wherein the at least one trench provides an electromagnetic reference plane for the first group of vias. 36. The substrate recited in claim 16, wherein at least one trench within the first group has an exposed surface along the periphery of the substrate to couple to a conductor of a socket. 37. The substrate recited in claim 16, wherein at least one trench within the first group and at least one trench within the second group have exposed surfaces along the periphery of the substrate to couple to corresponding conductors of a socket. 38. 
An electronic package comprising:a substrate comprising: a plurality of non-conductive layers, at least some of the layers comprising a plurality of traces, vias, and trenches; a first plurality of lands on a first surface thereof and coupled to a first group of traces, vias, and trenches; and a second plurality of lands on a second surface thereof and coupled to a second group of traces, vias, and trenches; wherein at least one of the first plurality of lands and one of the second plurality of lands are to couple to a first potential, wherein at least another one of the first plurality of lands and another one of the second plurality of lands are to couple to a second potential, wherein a first group of trenches is to conduct the first potential; and wherein a second group of trenches is to conduct the second potential; and an integrated circuit coupled to the first plurality of lands. 39. The electronic package recited in claim 38, wherein the substrate comprises a core region, and wherein the first and second groups of trenches provide direct current paths through the core region. 40. The electronic package recited in claim 38, wherein the first and second groups of trenches include trenches with exposed surfaces thereof along the periphery of the electronic package to couple to corresponding conductors of a socket. 41. The electronic package recited in claim 38, wherein a first group of vias are to conduct electrical signals, and wherein the first group of vias is adjacent to at least one trench. 42. The electronic package recited in claim 38, wherein the lands of the second plurality of lands to couple to the first and second potential, respectively, are positioned to be coupled to corresponding nodes at the first and second potential of an additional substrate subjacent to the substrate. 43. An electronic system comprising:a bus coupling components in the electronic system; a display coupled to the bus; external memory coupled to the bus; and a processor coupled to the bus and comprising an electronic assembly including: a substrate comprising: a plurality of non-conductive layers, at least some of the layers comprising a plurality of traces, vias, and trenches; a first plurality of lands on a first surface thereof and coupled to a first group of traces, vias, and trenches; and a second plurality of lands on a second surface thereof and coupled to a second group of traces, vias, and trenches; wherein at least one of the first plurality of lands and one of the second plurality of lands are to couple to a first potential, wherein at least another one of the first plurality of lands and another one of the second plurality of lands are to couple to a second potential, wherein a first group of trenches is to conduct the first potential; and wherein a second group of trenches is to conduct the second potential; and an integrated circuit coupled to the first plurality of lands. 44. The electronic system recited in claim 43, wherein the substrate comprises a core region, and wherein the first and second groups of trenches provide direct current paths through the core region. 45. The electronic system recited in claim 43, wherein the first and second groups of trenches include trenches with exposed surfaces thereof along the periphery of the electronic assembly to couple to corresponding conductors of a socket of the electronic system. 46. 
A method for fabricating a substrate, the method comprising:forming a core region of the substrate; forming a plurality of vias in the core region; forming a plurality of trenches in the core region; applying a conductive material to the vias and trenches; and forming a first buildup region above the core region. 47. The method recited in claim 46, the method further comprising:forming a first insulating region between the core region and the first buildup region; and forming vias and trenches in the first insulating region that couple to vias and trenches in the core region. 48. The method recited in claim 46, the method further comprising:forming a second buildup region below the core region. 49. The method recited in claim 48, the method further comprising:forming a second insulating region between the core region and the second buildup region; and forming vias and trenches in the second insulating region that couple to vias and trenches in the core region. 50. The method recited in claim 46, the method further comprising:forming at least one cavity in the core region, the at least one cavity comprising walls substantially parallel to the vias and trenches; forming a plurality of conductive plates on the walls of the at least one cavity; and providing a capacitor in the at least one cavity. 51. The method recited in claim 46, wherein the substrate comprises a plurality of exterior sidewalls, the method further comprising:forming a plurality of conductive plates on at least one of the exterior sidewalls, at least a portion of the conductive plates comprising surfaces to couple to corresponding conductors of a socket of an electronic assembly. 52. The method recited in claim 46, wherein the substrate comprises a plurality of exterior sidewalls, the method further comprising:forming a power plane and a ground plane within the substrate; and forming a plurality of conductive plates, located on one or more of the exterior sidewalls, a first conductive plate being coupled to the power plane, a second conductive plate being coupled to the ground plane, the first and second conductive plates comprising surfaces to couple to corresponding power and ground conductors of a socket of an electronic assembly. |
RELATED INVENTIONS
The present invention is related to the following inventions, which are assigned to the same assignee as the present invention:
Ser. No. 09/540,707, entitled "Discrete Device Socket and Method of Fabrication Therefor";
Ser. No. 09/606,882, entitled "Electronic Package Having Embedded Capacitors and Method of Fabrication Therefor";
Ser. No. 09/730,210, entitled "An Electronic Assembly Providing Shunting of Electrical Current"; and
Ser. No. 09/735,956, entitled "Electronic Circuit Housing with Trench Vias and Method of Fabrication Therefor".
TECHNICAL FIELD OF THE INVENTION
The present invention relates generally to electronics packaging. More particularly, the present invention relates to an electronic assembly that includes a substrate comprising conductive trenches for providing improved power delivery and signal integrity, and for reducing inductance, in an integrated circuit package for a high performance integrated circuit, and to manufacturing methods related thereto.
BACKGROUND OF THE INVENTION
Integrated circuits (IC's) are typically assembled into packages by physically and electrically coupling them to a substrate made of organic or ceramic material. One or more IC's or IC packages can be physically and electrically coupled to a substrate such as a printed circuit board (PCB) or card to form an "electronic assembly". The "electronic assembly" can be part of an "electronic system". An "electronic system" is broadly defined herein as any product comprising an "electronic assembly". Examples of electronic systems include computers (e.g., desktop, laptop, hand-held, server, etc.), wireless communications devices (e.g., cellular phones, cordless phones, pagers, etc.), computer-related peripherals (e.g., printers, scanners, monitors, etc.), entertainment devices (e.g., televisions, radios, stereos, tape and compact disc players, video cassette recorders, MP3 (Motion Picture Experts Group, Audio Layer 3) players, etc.), and the like.
In the field of electronic systems there is an incessant competitive pressure among manufacturers to drive the performance of their equipment up while driving down production costs. This is particularly true regarding the packaging of IC's on substrates, where each new generation of packaging must provide increased performance, particularly in terms of an increased number of components and higher clock frequencies, while generally being smaller or more compact in size.
An IC substrate may comprise a number of insulated metal layers selectively patterned to provide metal interconnect lines (referred to herein as "traces"), and one or more electronic components mounted on one or more surfaces of the substrate. The electronic component or components are functionally connected to other elements of an electronic system through a hierarchy of electrically conductive paths that include the substrate traces. The substrate traces typically carry signals that are transmitted between the electronic components, such as IC's, of the system. Some IC's have a relatively large number of input/output (I/O) terminals (also called "lands"), as well as a large number of power and ground terminals or lands. The large number of I/O, power, and ground terminals requires that the substrate contain a relatively large number of traces. Some substrates require multiple layers of traces to accommodate all of the system interconnections.
Traces located within different layers are typically connected electrically by vias (also called "plated through-holes") formed in the board. 
A via can be made by making a hole through some or all layers of a substrate and then plating the interior hole surface or filling the hole with an electrically conductive material, such as copper or tungsten.
One of the conventional methods for mounting an IC on a substrate is called "controlled collapse chip connect" (C4). In fabricating a C4 package, the electrically conductive terminations or lands (generally referred to as "electrical contacts") of an IC component are soldered directly to corresponding lands on the surface of the substrate using reflowable solder bumps or balls. The C4 process is widely used because of its robustness and simplicity.
As the internal circuitry of high performance ICs, such as processors, operates at higher and higher clock frequencies, and as ICs become more dense and operate at higher and higher power levels, a number of manufacturing and operational factors can reach unacceptable levels. These factors include manufacturing cost and complexity, package size, inductance and capacitance levels, heat dissipation, signal integrity, and product reliability.
For the reasons stated above, and for other reasons stated below which will become apparent to those skilled in the art upon reading and understanding the present specification, there is a significant need in the art for methods and structures for packaging a high performance IC on a substrate that provide increased power delivery and signal integrity, and decreased inductance levels.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of an electronic system incorporating at least one electronic assembly with improved power delivery and signal integrity in accordance with one embodiment of the invention;
FIG. 2 is a perspective view of an IC package in accordance with one embodiment of the invention;
FIG. 3 illustrates a top view of the IC package shown in FIG. 2;
FIG. 4 illustrates a cross-sectional representation of the IC package shown in FIG. 3 taken along line 69 of FIG. 3;
FIG. 5 illustrates a cross-sectional representation of the substrate of the IC package shown in FIG. 2 taken along line 65 of FIG. 2;
FIG. 6 illustrates a cross-sectional representation of an electronic assembly that includes an IC package substrate, in accordance with an embodiment of the invention;
FIGS. 7-10 are cross-sectional representations illustrating stages of fabricating an IC package substrate, in accordance with an embodiment of the invention;
FIG. 11 is a top view of the IC package fabrication stage shown in FIG. 10;
FIG. 12 is a top view of an IC package fabrication stage subsequent to that shown in FIG. 11;
FIGS. 13-21 are cross-sectional representations illustrating additional stages of fabricating an IC package substrate, in accordance with an embodiment of the invention;
FIG. 22 is a top view of a cross-section of the IC package fabrication stage shown in FIG. 21 taken along line 191 of FIG. 21;
FIG. 23 is a top view of an IC package fabrication stage subsequent to that shown in FIG. 21; and
FIGS. 24 and 25 together illustrate a flow diagram of a method of fabricating a substrate, in accordance with one embodiment of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
In the following detailed description of embodiments of the invention, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific preferred embodiments in which the invention may be practiced. 
These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that structural, mechanical, compositional, and electrical changes may be made without departing from the spirit and scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
The present invention provides a solution to performance and reliability problems that are associated with prior art packaging of integrated circuits (IC's) that have high circuit density and that operate at high clock speeds and high power levels, by employing conductive trenches or planes within a substrate to which an IC is mounted. Trench structures are provided at various locations within the substrate. The trenches provide increased current carrying capacity, lower inductance, higher capacitance, and single and/or dual reference planes for signal conductors primarily in, but not limited to, microstrip or strip-line configurations.
Trenches can be provided, for example, adjacent to or surrounding signal conductors such as vias or plated through-holes. These provide reference planes within the substrate that improve signal integrity. Ordinarily, signal vias are only surrounded by other signal vias, but in the present invention signal vias can be surrounded by at least two planes that provide power-ground, power-power, or ground-ground references.
Trenches can also be positioned within the substrate to provide low resistance DC paths through the package core.
In addition, closely spaced trenches at a supply voltage alternating with trenches at a ground potential provide low inductance through the package core.
Trenches can be formed around embedded capacitors. Trenches or conductive planes having an exposed surface can also be formed on the substrate periphery to couple the package to a socket.
Various embodiments are illustrated and described herein.
In one embodiment, a front surface of an IC is flip-chip mounted to an organic land grid array (OLGA) substrate using "controlled collapse chip connect" (C4) technology. In one embodiment, the substrate core contains a plurality of conductive trenches or conductive planes at various locations within the substrate core. Some of the conductive trenches are at a supply potential, while others are at a ground potential. Some of the trenches surround capacitors that are embedded within the substrate. Other trenches are located adjacent to signal conductors such as signal vias. Yet other trenches are formed on the substrate periphery to provide suitable structure for coupling to a socket located, for example, on a higher level of packaging, such as a motherboard.
FIG. 1 is a block diagram of an electronic system 1 incorporating at least one electronic assembly 4 with improved power delivery and signal integrity in accordance with one embodiment of the invention. Electronic system 1 is merely one example of an electronic system in which the present invention can be used. In this example, electronic system 1 comprises a data processing system that includes a system bus 2 to couple the various components of the system. System bus 2 provides communications links among the various components of the electronic system 1 and can be implemented as a single bus, as a combination of busses, or in any other suitable manner.
Electronic assembly 4 is coupled to system bus 2. 
Electronic assembly 4 can include any circuit or combination of circuits. In one embodiment, electronic assembly 4 includes a processor 6 which can be of any type. As used herein, "processor" means any type of computational circuit, such as but not limited to a microprocessor, a microcontroller, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a graphics processor, a digital signal processor (DSP), or any other type of processor or processing circuit.
Other types of circuits that can be included in electronic assembly 4 are a custom circuit, an application-specific integrated circuit (ASIC), or the like, such as, for example, one or more circuits (such as a communications circuit 7) for use in wireless devices like cellular telephones, pagers, portable computers, two-way radios, and similar electronic systems. The IC can perform any other type of function.
Electronic system 1 can also include an external memory 10, which in turn can include one or more memory elements suitable to the particular application, such as a main memory 12 in the form of random access memory (RAM), one or more hard drives 14, and/or one or more drives that handle removable media 16 such as floppy diskettes, compact disks (CDs), digital video disks (DVDs), and the like.
Electronic system 1 can also include a display device 8, one or more speakers 9, and a keyboard and/or controller 20, which can include a mouse, trackball, game controller, voice-recognition device, or any other device that permits a system user to input information into and receive information from the electronic system 1.
FIG. 2 is a perspective view of an IC package 40 in accordance with one embodiment of the invention. IC package 40 includes an IC or die 50 mounted in "flip-chip" orientation with its lands 52 facing downward to couple with corresponding lands on the upper surface of a substrate 60 through solder balls or bumps.
Substrate 60 is a multi-layer board, and it can include additional lands on its opposite surface for mating with additional packaging structure (not shown). Substrate 60 can be used for any packaging application. For example, it can form part of a chip package for packaging die 50. Alternatively, it can be part of a higher level packaging structure, such as a motherboard or printed circuit board (PCB).
As will be described in greater detail below, substrate 60 comprises a plurality of conductive trenches. As used herein, the term "trench" means a conductive plane or member, other than a via, partially or entirely extending through a substrate (typically, but not necessarily, perpendicular to the upper and lower surfaces of the substrate), or being formed on an exterior sidewall of the substrate. Trenches 61 and 62, for example, are segmented, conductive planes or terminals formed on the exterior sidewalls of substrate 60. Trenches 61 and 62 can be optionally employed to couple with corresponding conductors of a socket structure that is part of an electronic assembly or electronic system. In embodiments wherein substrate 60 comprises trenches, such as trenches 61 and 62, on one or more exterior sidewalls to couple to socket conductors, the lands on the bottom of substrate 60 can be reduced or eliminated entirely.
FIG. 3 illustrates a top view of the IC package 40 shown in FIG. 2. The IC package comprises a die 50 mounted on an organic land grid array (OLGA) substrate 60. 
While an OLGA substrate is shown, the present invention is not limited to use with an OLGA substrate, and any other type of substrate can be employed. The IC package illustrated in FIG. 3 can form part of electronic assembly 4 shown in FIG. 1. Die 50 can be of any type. In one embodiment, die 50 is a processor.
In FIG. 3, die 50 comprises a plurality of signal conductors (not shown) that terminate in lands 52 arranged in several rows near the periphery of the bottom surface of die 50. Die 50 also comprises a plurality of power and ground conductors (not shown) that terminate in lands 54 within the central region of die 50. Lands 52 can be coupled to corresponding lands or signal nodes (not shown) on substrate 60 by appropriate connections such as solder bumps or solder balls (56, FIG. 4). Likewise, lands 54 can be coupled to corresponding lands (not shown) on substrate 60 by appropriate connections such as solder balls (56, FIG. 4).
In FIG. 3 we are looking through die 50 at lands 52 and 54 (shown in dashed lines) on the bottom surface of die 50. Lands 52 represent signal nodes, while lands 54 represent power supply nodes. As used herein, the term "power supply node" refers to either a ground node (e.g. Vss) or to a power node at a potential different from ground (e.g. Vcc).
Also seen in FIG. 3 are a plurality of trenches 61-64 formed on the periphery of substrate 60. Trenches 61-64 have exposed external surfaces and are suitably formed to couple to corresponding connector structure, such as socket conductors 66, on another packaging structure 68. Trenches 61-64 typically conduct Vss or Vcc, but they can also conduct electronic signals. In one embodiment, some or all of trenches 61-64 serve to couple power and ground planes in the IC package 40 (FIG. 2) to corresponding Vss and Vcc socket conductors (e.g. socket conductors 66) on packaging structure 68. Any number of trenches 61-64 can be provided. Different numbers of trenches 61-64 can be provided on each side of substrate 60. Alternatively, trenches 61-64 can be omitted from the periphery of substrate 60.
FIG. 4 illustrates a cross-sectional representation of the IC package shown in FIG. 3 taken along line 69 of FIG. 3. Die 50 is coupled to substrate 60 after moving die 50 towards substrate 60 in the direction indicated by arrows 51.
Die 50 has lands 52 and 54 on its mounting surface. Solder bumps or balls 56 are used to couple lands 52 and 54 to corresponding lands 74 on the upper surface of substrate 60.
While the sequential fabrication of substrate 60 is discussed in detail below with reference to FIGS. 7-25, a description of the structure of substrate 60 may be helpful at this point. Substrate 60 comprises a central region 72, an upper buildup region 71, and a lower buildup region 73. Central region 72 includes a substrate core 70, an upper insulating layer 97, and a lower insulating layer 98. Central region 72, as well as buildup regions 71 and 73, can comprise low permittivity (Dk) material in order to enhance signal transmission speed within the substrate. Low permittivity material can include, for example, low Dk liquid crystal polymer (LCP) material, or any other suitable low Dk material. LCP material has an additional advantage in that the coefficient of thermal expansion (CTE) can be tailored to closely match the CTE of the die, thus reducing thermally-induced mechanical stresses on the die and package structure.
Buildup region 71 comprises one or more layers of insulating material. 
Each layer can comprise a plurality of conductive traces 76 and 78 for conducting signals, Vcc, and Vss throughout circuits that are situated between the central region 72 and the lands 74 on the upper surface of buildup region 71.
Likewise, buildup region 73 comprises one or more layers of insulating material, each comprising a plurality of conductive traces 75 and 77 for conducting signals, Vcc, and Vss throughout circuits that are situated between the central region 72 and the lands 79 on the lower surface of buildup region 73.
Central region 72 comprises a plurality of vias 86, which can be entirely or partially filled with conductive material. Vias 86 typically conduct signals through the central region 72, between signal traces situated within buildup region 71 and signal traces situated within buildup region 73, and vice versa.
Central region 72 further comprises a plurality of trenches 87-89. Central region 72 optionally comprises one or more cavities in which embedded capacitors correspondingly reside. As seen in FIG. 4, an embedded capacitor can include upper and lower conductive elements 81 and 83, respectively, separated by a dielectric element 82. Upper conductive elements 81 are coupled to appropriate traces in buildup region 71 by way of vias 84, while lower conductive elements 83 are coupled to appropriate traces in buildup region 73 by way of vias 85.
Substrate 60 optionally includes trenches 62 and 64 formed along its periphery. While trenches 62 and 64 are illustrated as extending from the upper surface of substrate 60 to the lower surface of substrate 60, in other embodiments trenches 62 and 64 can extend only partially between the upper and lower surfaces of substrate 60, for example, along the central region 72 or along the substrate core 70.
Trenches 62 and 64 can be coupled to conductors within any region or layer of substrate 60. For example, any of trenches 61-64 (FIGS. 3 or 5) can be coupled to one or more power conductors or power planes, to one or more ground conductors or ground planes, or to one or more signal conductors.
Trenches that are not located on the periphery of substrate 60 can also be coupled to conductors within any region or layer of substrate 60. For example, any interior trench, such as trenches 87-89, can be coupled to one or more power conductors or power planes, to one or more ground conductors or ground planes, or to one or more signal conductors.
While a BGA arrangement employing solder bumps or balls 56 is illustrated in FIG. 4 for coupling die 50 to substrate 60, the present invention is not limited to use with a BGA arrangement, and it can be used with any other type of packaging technology, e.g. land grid array (LGA), chip scale package (CSP), ceramic, flex, tape, or the like. Further, the present invention is not to be construed as limited to use in C4 packages, and it can be used with any other type of IC package where the herein-described features of the present invention provide an advantage.
FIG. 5 illustrates a cross-sectional representation of substrate 60 of IC package 40 shown in FIG. 2 taken along line 65 of FIG. 2. Looking downward on the cross-section, the reader sees a plurality of vias such as vias 86 and 92. In this illustration, vias 92 are plated through-holes (PTH's) that can be either partially or entirely filled with electrically conductive material, such as copper or tungsten.
Also seen in FIG. 5 are a plurality of trenches in various configurations. For example, trenches 61-64 are formed around the periphery of substrate 60. 
Some trenches, such as trenches 87, are substantially aligned end-to-end in rows. Other trenches, such as trenches 88 and 89, are substantially aligned side-by-side. In addition, other trenches, such as trenches 91 and 93, are on either side of a group of vias 92. A "group" of vias can be any number of vias.
In addition, yet other trenches, such as trenches 101-114, are formed around the embedded capacitors, of which only the upper conductive elements 81 can be seen in FIG. 5.
The various configurations of trenches used in the present invention and seen, for example, in FIG. 5 offer several significant advantages over known substrates currently in use, as will now be explained. Three significant factors in the choice and implementation of substrates relate to power delivery, inductance, and signal integrity. In typical known packages, current is carried to and from the IC by metallic (e.g. copper or aluminum) strips formed on the surface of a package layer and by metallization of the sidewalls of plated through-holes (PTH's) and vias.
To meet increasingly strict requirements of power delivery, inductance, and signal integrity in high performance IC's (such as microprocessors, microcontrollers, chipsets, and memories), smaller diameter vias, thinner traces, and higher numbers of discrete capacitors have been employed. These capacitors have been mounted on the surface of the package using known surface-mount and reflow technology. However, the greater number of capacitors utilizes package space that could otherwise have been used for signal, power, and ground routing. In addition, the greater number of capacitors adds significant cost to the packages. Further, the capacitor leads and long current loops drive inductance to levels in excess of those required for optimum IC performance.
A second problem is that poor signal references, resulting from referencing PTH's in the package core, often degrade signal performance.
A third problem with the known technology is that while the number of vias is increasing, and the trace dimensions are decreasing, the current per via or trace is also increasing. When current levels increase, trace and/or via metallization can overheat, crack, or delaminate, which results in degraded performance or catastrophic failure of the IC, IC package, and/or electronic assembly.
The above-mentioned problems are substantially lessened by the present invention. First, the present invention substantially lessens inductance in the package through the use of conductive trenches. By closely spacing trenches, and by alternating power and ground trenches, inductance can be minimized through the package core. One example of alternating power and ground trenches is illustrated in FIG. 5 by trenches 88 and 89, either of which can carry power while the other carries ground. Alternatively, both trenches could carry power, or both could carry ground.
Another example of alternating power and ground trenches is illustrated in FIG. 5 by trenches 101-114. Trenches 101-114 are very closely spaced. Alternating ones of trenches 101-114 carry power or ground. For example, trenches 101, 103, 105, 107, 110, and 113 can carry power, and trenches 102, 104, 106, 108, 109, 111, 112, and 114 can carry ground.
Secondly, the present invention substantially lessens signal degradation resulting from poor references. By positioning trenches adjacent to and/or surrounding one or more signal vias, reference planes are created that enhance signal integrity. An example of this is shown in FIG. 5 by trenches 91 and 93, which are positioned adjacent to and on either side of a group of signal vias 92. Signal vias, such as signal vias 92, would ordinarily be surrounded only by other signal vias. However, as indicated by FIG. 5, signal vias such as signal vias 92 can be surrounded by two conductive planes in the form of trenches 91 and 93. Both trenches in the pair of trenches 91/93 can be at ground; they can both be at power; or one can be at ground and the other at power. This provides a strip-line configuration, which contributes to a high level of signal integrity. The trenches provide electromagnetic reference planes.
In addition, a microstrip configuration can be achieved by placing a set of signal vias, such as signal vias 94, adjacent to just one trench, such as trench 93. The one trench provides an electromagnetic reference plane.
Likewise, groups of signal vias near the substrate periphery, such as signal vias 86, can be positioned adjacent to a trench 64 and a trench 87. Trenches 64 and 87 carry Vcc or Vss, depending upon what reference value signal vias 86 need.
By utilizing trenches to provide electromagnetic reference planes, the present invention lessens signal discontinuities in electrical signals traveling through signal vias, so that higher speed signals to and from the IC can be utilized, and the electronic assembly thus can provide relatively higher performance.
Thirdly, the present invention accommodates higher current values through the package, by utilizing trenches to provide DC shunts through the package. All of the trenches shown in FIG. 5 provide a low resistance path through the package for improved current carrying capability.
As mentioned earlier, trenches can be provided in various configurations. For example, trenches 88/89 are positioned in a side-by-side arrangement. Trenches 87/90 are positioned in an end-to-end arrangement. Trenches 101-108 surround an embedded capacitor.
In general, for the lowest inductance, adjacent trenches are not at the same potential. Thus, for example, each trench in trench pairs 87/90 and 88/89 is at a different potential. Likewise, within a capacitor cavity, trenches 101-108 alternate in potential, and trenches from adjacent cavities, such as trenches 104 and 113, also are at different potentials; i.e., a capacitor is adjacent to alternating ones of trenches at either ground or power potentials. As an additional example, each of the trenches 61, 62, 63, and 64 located along the respective sides of substrate 60 can be at a different potential from that of its neighbors along the same side of substrate 60.
While the embodiment depicted in FIG. 5 illustrates a variety of different trenches in a variety of different trench configurations, it will be understood that in other embodiments more or fewer trenches and/or trench configurations can be provided. For example, in one embodiment, trenches 61-64 at the substrate periphery are omitted. Similarly, in another embodiment, capacitors 81 and the trenches surrounding them, such as trenches 101-108, are omitted.
In addition, while trenches have been illustrated in an orthogonal pattern within substrate 60, they could be formed within substrate 60 at various angles other than 90 degrees (such as but not limited to 45 degrees) with respect to the top, bottom, and sides of substrate 60, or with respect to other trenches.
FIG. 6 illustrates a cross-sectional representation of an electronic assembly 150 that includes an IC package substrate 60, in accordance with an embodiment of the invention. 
The electronic assembly 150 comprises a die 50 mounted on a substrate 60. Die 50 can be of any type. In one embodiment, die 50 is a processor. Substrate 60 is a multi-layer substrate. Substrate 60 in this configuration can be referred to as an interposer.
Substrate 60 can be mounted to an additional substrate 152, such as a printed circuit board (PCB) or card. Substrate 60 can comprise, for example, a plurality of lands 79 that are positioned to be mechanically and electrically coupled to corresponding lands 154 of substrate 152 by suitable connectors such as ball grid array (BGA) solder balls 80.
As mentioned earlier, in embodiments wherein substrate 60 comprises trenches on one or more exterior sidewalls, such as trenches 62 and 64 (FIG. 4), to couple to socket conductors, the lands 79 on the bottom of substrate 60 can be reduced or eliminated entirely. In one embodiment, IC package 40 (FIG. 4) is coupled entirely through such exterior trenches to a mating socket on additional substrate 152.
While a BGA arrangement is illustrated in FIG. 6 for coupling substrate 60 to substrate 152, the present invention is not limited to use with a BGA arrangement, and it can be used with any other type of packaging technology. Further, the present invention is not to be construed as limited to use in C4 packages, and it can be used with any other type of IC package where the herein-described features of the present invention provide an advantage.
FABRICATION
FIGS. 7-10 are cross-sectional representations illustrating stages of fabricating an IC package substrate, in accordance with an embodiment of the invention. While the fabrication of an IC package substrate is illustrated, the present invention can be used in other types of substrates, such as motherboards and PCB's.
FIG. 7 illustrates a substrate core 70. Substrate core 70 can be made of any suitable material. In the embodiment illustrated in the fabrication sequence below, it is made of an organic material.
The substrate core 70, insulating layers 97 and 98, and buildup regions 71 and 73 of substrate 60 (FIG. 4) can be fabricated by conventional organic buildup techniques. For example, they can be fabricated from materials such as epoxies, acrylates, polyimides, polyurethanes, polysulfides, resin-glass weave (e.g. FR-4), nylons, and other similar materials. The layers can be constructed using familiar equipment for extruding, coating, spinning on, spraying, screen-printing, stenciling, and doctor-blading. Coating equipment such as a meniscus coater or curtain coater could be used.
If substrate 60 is fabricated of ceramic, conventional ceramic techniques can be used, such as but not limited to high temperature co-fired ceramic (HTCC) technology, high thermal coefficient of expansion (HITCE) technology, or glass ceramic technology. To ensure low equivalent series resistance (ESR) values, a low temperature silver or copper compatible co-fired ceramic technology may be used.
FIG. 8 illustrates substrate core 70 after vias 131 are formed therein. Vias 131 can be formed in substrate core 70 using techniques that are well known to those of ordinary skill in the art. For example, vias 131 can be drilled or punched. Alternatively, vias 131 can be formed by imprinting or microperforation techniques.
FIG. 9 illustrates substrate core 70 after trenches 132 and capacitor cavities 133 are formed therein. Trenches 132 and capacitor cavities 133 can be formed in substrate core 70 using techniques that are well known to those of ordinary skill in the art. 
For example, trenches 132 and cavities 133 can be drilled or punched. Alternatively, they can be formed by imprinting or microperforation techniques.
After this stage, vias 131, trenches 132, and cavities 133 are thoroughly cleaned using plasma or other known cleaning techniques.
FIG. 10 illustrates substrate core 70 after vias 131, trenches 132, and the sidewalls 134 of cavities 133 have been filled or coated with electrically conductive material, such as but not limited to copper or tungsten, using techniques that are well known to those of ordinary skill in the art. While vias 131 are generally illustrated herein as having only their interior walls plated, vias 131 can alternatively be completely filled, or some vias 131 can be partially filled while others can be completely filled.
FIG. 11 is a top view of the IC package fabrication stage shown in FIG. 10. FIG. 11 illustrates more clearly the plated sidewalls 134 of capacitor cavities 133.
FIG. 12 is a top view of an IC package fabrication stage subsequent to that shown in FIG. 11. Sidewalls 134 of capacitor cavities 133 have been segmented into individual conductive trenches, such as trenches 101-114. While eight trenches have been formed within each capacitor cavity 133 in the embodiment illustrated, more or fewer trenches could be formed. Sidewalls 134 are separated into individual trenches 101-114 through conventional subtractive techniques, such as etching or drilling. At this stage, each trench 101-114 is electrically isolated from its neighbor, both within the same cavity 133 and within any adjoining cavity 133. Likewise, each trench 132 is electrically isolated from each other trench 132.
FIGS. 13-21 are cross-sectional representations illustrating additional stages of fabricating an IC package substrate, in accordance with an embodiment of the invention. In general, FIGS. 13-16 illustrate operations that form the substrate central region 72 (FIG. 4), while FIGS. 17-21 illustrate the operations that form the upper buildup region 71 and lower buildup region 73 (FIG. 4).
FIG. 13 illustrates the substrate core after capacitors 181 have been inserted or fabricated in capacitor cavities 133 (FIG. 12). Each capacitor 181 comprises an upper conductive layer 146, a dielectric layer 147, and a lower conductive layer 148. In the embodiment shown, discrete capacitors 181 are inserted into cavities 133. A number of alternative embodiments are possible. For example, discrete capacitors of types other than the type shown can be used.
Various types of suitable discrete capacitors, suitable methods for embedding them within substrates, and suitable electrical contacts for coupling them to the electrical structure of the package, are described for example in Related Invention Ser. No. 09/540,707.
Moreover, capacitors, such as but not limited to planar chip capacitors or ceramic capacitors, can be fabricated within cavities 133 using, for example, the techniques described in Related Invention Ser. No. 09/606,882.
Also illustrated in FIG. 13 is the addition of a lower insulating or dielectric layer 142 (which corresponds to insulating layer 98 shown in FIG. 4).
FIG. 14 illustrates the substrate core structure after an upper insulating layer 144 has been formed on substrate core 70.
FIG. 15 illustrates the substrate core structure after vias 161 and trenches 162 have been formed in upper insulating layer 144, and after vias 171 and trenches 172 have been formed in lower insulating layer 142. 
The vias and trenches can be formed using the techniques mentioned earlier.
FIG. 16 illustrates the substrate core structure after metallization of vias 161/171 and trenches 162/172.
FIG. 17 illustrates the substrate core structure after formation of a first insulating layer 164 thereon for the upper buildup region 71 (FIG. 4).
FIG. 18 illustrates the formation of conductive traces 166 in insulating layer 164 of upper buildup region 71 (FIG. 4). Conductive traces 166 can be formed in insulating layer 164 using additive or subtractive techniques that are well known to those of ordinary skill in the art.
FIG. 19 illustrates the formation of an additional insulating layer 174 having conductive traces 176 therein for the upper buildup region 179 (corresponding to upper buildup region 71 of FIG. 4). In addition, connection nodes or lands 178 have been formed in or upon insulating layer 174.
FIG. 20 illustrates the formation of lower buildup region 180 (corresponding to lower buildup region 73 of FIG. 4) comprising insulating layer 182 having traces 184; insulating layer 186 having traces 188; and lands 189. Lower buildup region 180 can be fabricated in any suitable manner. For example, it can be fabricated before, during, or after the fabrication of upper buildup region 179.
The above description of the fabrication of one embodiment of the invention has been considerably simplified for purposes of illustration. It will be understood by those of ordinary skill in the art that ground and power planes can also be fabricated within any portion(s) of the substrate. In one embodiment, these are implemented by conductive layers that are coupled to Vcc or Vss in an alternating manner within the substrate, although other arrangements of ground and power planes can also be built. In one embodiment, the ground and power planes can have the structures, and be fabricated, as described in the above-mentioned Related Invention entitled "An Electronic Assembly Providing Shunting of Electrical Current".
FIG. 21 illustrates the substrate structure after metallization of the periphery, as depicted by metallized trenches 192 and 194. These are also depicted in FIG. 22, described immediately below.
FIG. 22 is a top view of a cross-section of the IC package fabrication stage shown in FIG. 21 taken along line 191 of FIG. 21. Metallized trenches 192 and 194 are seen prior to their segmentation.
FIG. 23 is a top view of an IC package fabrication stage subsequent to that shown in FIG. 21. In FIG. 23, unsegmented trench 192 has been segmented into a plurality of trenches 301-304, and unsegmented trench 194 has been segmented into a plurality of trenches 306-309. Likewise, but not illustrated in FIG. 23, a plurality of trenches 61 and 63 can be formed along the bottom and top, respectively, of the substrate 60 (refer to FIG. 5).
One method for fabricating a substrate comprising trench structures for providing improved power delivery and signal integrity will now be described.
FIGS. 24 and 25 together illustrate a flow diagram of a method 401 of fabricating a substrate, in accordance with one embodiment of the invention. The sequence of fabrication operations differs in some respects from that described with reference to FIGS. 7-23, in part to illustrate that many different fabrication sequences are possible.
In 403, a core region is formed, e.g. substrate core 70 (FIG. 7).
In 405, a plurality of vias (e.g. vias 131, FIG. 8) are formed in the core region.
In 407, a plurality of trenches (e.g. trenches 132, FIG. 9) are formed in the core region.
In 409, one or more cavities (e.g. cavities 133, FIG. 9) are formed in the core region. In one embodiment, each cavity comprises sidewalls that are substantially parallel to the sidewalls of the vias and trenches.
In 411, a conductive material is applied to the vias, trenches, and cavity walls, as depicted for example in FIG. 10.
In 413, the conductive material on the cavity walls is segmented into a plurality of conductive plates or trenches, as depicted for example in FIG. 12.
In 415, a capacitor is inserted into or fabricated within each cavity, as depicted for example in FIG. 13.
In 417, a first insulating region is formed over the core region, as depicted for example by insulating layer 144 in FIG. 14.
In 419, vias and trenches are formed in the first insulating region. These vias and trenches couple to corresponding vias and trenches in the core region, as depicted for example in FIG. 15.
In 421, a first buildup region (e.g. buildup region 179, FIG. 19) is formed above the first insulating region. The first buildup region includes traces and lands, as depicted for example by traces 166 and 176 and lands 178 in FIGS. 18-19.
In 423, a second insulating region is formed below the core region, as depicted for example by insulating layer 142 in FIG. 13.
In 425, vias and trenches are formed in the second insulating region. These vias and trenches couple to corresponding vias and trenches in the core region, as depicted for example in FIG. 15.
In 427, a second buildup region (e.g. buildup region 180, FIG. 20) is formed below the second insulating region. The second buildup region includes traces and lands, as depicted for example by traces 184 and 188 and lands 189 in FIG. 20.
In 429, a plurality of conductive plates or trenches are formed on one or more exterior sidewalls of the substrate structure, as depicted for example by conductive plates or trenches 301-304 and 306-309 in FIG. 23. These conductive plates or trenches can be employed, for example, to couple to corresponding conductors of a socket (e.g. socket conductors 66, FIG. 3) within another packaging structure, such as packaging structure 68 (FIG. 3). Socketable IC packages are commercially attractive, because they enable IC packages to be readily tested, replaced, and/or upgraded.
The method ends at 431 (the ordered sequence is also summarized in the sketch following this section).
The operations described above with respect to the methods illustrated in FIGS. 7-25 can be performed in a different order from those described herein.
The above-described and illustrated details relating to the composition, dimensions, number, and order of layers and their constituent parts are merely exemplary of the embodiments illustrated, and they are not meant to be limiting. The above-described and illustrated choice of materials; geometry; number, order, dimensions, and composition of layers; mechanisms for affixing; and assembly sequencing can all be varied by one of ordinary skill in the art to optimize the performance of the package.
Any suitable method, or combination of different methods, for depositing the electrically conductive materials can be used, such as plating, sputtering, vapor or vacuum deposition, electrical deposition, screening, stenciling, and chemical methods including chemical vapor deposition (CVD).
The particular implementation of the package is very flexible in terms of the orientation, size, number, order, and composition of its constituent elements. 
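Since method 401 is an ordered flow, it can be convenient to follow the step numbering programmatically. The Python sketch below is purely a bookkeeping aid invented for this document: the step numbers and descriptions come from the text above, and the function performs no actual fabrication.

```python
# Step numbers 403-429 follow the flow diagram of FIGS. 24 and 25.
METHOD_401_STEPS = [
    (403, "form the core region (e.g. substrate core 70)"),
    (405, "form vias in the core region (e.g. vias 131)"),
    (407, "form trenches in the core region (e.g. trenches 132)"),
    (409, "form capacitor cavities in the core region (e.g. cavities 133)"),
    (411, "apply conductive material to vias, trenches, and cavity walls"),
    (413, "segment the cavity-wall metal into conductive plates/trenches"),
    (415, "insert or fabricate a capacitor in each cavity"),
    (417, "form the first insulating region over the core"),
    (419, "form vias/trenches in the first insulating region, coupled to the core"),
    (421, "form the first buildup region with traces and lands"),
    (423, "form the second insulating region below the core"),
    (425, "form vias/trenches in the second insulating region, coupled to the core"),
    (427, "form the second buildup region with traces and lands"),
    (429, "form conductive plates/trenches on the exterior sidewalls"),
]

def walk_method_401():
    """Print the ordered operations of method 401 for reference."""
    for number, description in METHOD_401_STEPS:
        print(f"In {number}: {description}")
    print("The method ends at 431.")

walk_method_401()
```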
Various embodiments of the invention can be implemented using various combinations of substrate technology to achieve the advantages of the present invention. The structure of the package, including the types of materials used, dimensions, layout, geometry, and so forth, can be built in a wide variety of embodiments, depending upon the requirements of the electronic assembly of which it forms a part.

FIGS. 1-23 are merely representational and are not drawn to scale. Certain proportions thereof may be exaggerated, while others may be minimized. The drawings are intended to illustrate various implementations of the invention that can be understood and appropriately carried out by those of ordinary skill in the art.

CONCLUSION

The present invention provides a substrate for an electronic assembly, and methods of manufacture thereof, that minimize problems associated with high power delivery. An electronic system and/or data processing system that incorporates one or more electronic assemblies utilizing the present invention can handle the relatively high power densities and clock frequencies associated with high-performance integrated circuits, and such systems are therefore more commercially attractive.

By substantially increasing current delivery, lowering inductance, and providing improved signal references in substrates used for high-performance ICs, such electronic equipment can be operated at increased clock frequencies and with higher reliability.

As shown herein, the present invention can be implemented in a number of different embodiments, including a substrate, an integrated circuit package, an electronic assembly, an electronic system in the form of a data processing system, and various methods of fabricating a substrate. Other embodiments will be readily apparent to those of ordinary skill in the art. The elements, materials, geometries, dimensions, and sequence of operations can all be varied to suit particular packaging requirements.

While certain operations have been described herein relative to "upper" or "above", or to "lower" or "below", it will be understood that these descriptors are relative and would be reversed if the substrate or package were inverted. Therefore, these terms are not intended to be limiting.

Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement that is calculated to achieve the same purpose may be substituted for the specific embodiment shown. This application is intended to cover any adaptations or variations of the present invention. Therefore, it is manifestly intended that this invention be limited only by the claims and the equivalents thereof. |
A microelectronic device (100) contains a high voltage component (104) having an upper plate (132) and a lower plate (130). The upper plate is isolated from the lower plate, and from low voltage elements (106) at a surface of the substrate (102) of the microelectronic device, by a main dielectric (136). A lower-bandgap dielectric layer (140) is disposed between the upper plate and the main dielectric. The lower-bandgap dielectric layer contains at least one sub-layer (144) of silicon nitride having a refractive index between 2.11 and 2.23. The lower-bandgap dielectric layer extends beyond the upper plate continuously around the upper plate. The lower-bandgap dielectric layer has an isolation break (150) surrounding the upper plate at a distance of at least twice the thickness of the lower-bandgap dielectric layer from the upper plate. |
1. A microelectronic device, comprising:a lower plate of a high-voltage capacitor of the microelectronic device;an upper plate of the high-voltage capacitor;a main dielectric with a thickness of at least 2 microns disposed between the lower plate and the upper plate; anda lower-bandgap dielectric layer disposed between the main dielectric and the upper plate, wherein:the lower-bandgap dielectric layer includes at least a first sub-layer of silicon nitride with a refractive index in the range of 2.11-2.23;the lower-bandgap dielectric layer extends continuously across the upper plate and beyond the upper plate for a distance of at least twice the thickness of the lower-bandgap dielectric layer;the lower-bandgap dielectric layer has an isolation break, such that the lower-bandgap dielectric layer is discontinuous at the isolation break; andthe isolation break surrounds the upper plate.2. The microelectronic device according to claim 1, wherein the lower-bandgap dielectric layer further comprises a second sub-layer disposed between the first sub-layer and the lower plate, and the band gap energy of the second sub-layer is smaller than the band gap energy of the main dielectric.3. The microelectronic device of claim 2, wherein a portion of the main dielectric adjacent to the lower-bandgap dielectric layer includes a silicon dioxide-based dielectric material, and the second sub-layer includes silicon oxynitride.4. The microelectronic device according to claim 1, wherein the main dielectric includes a plurality of inter-metal dielectric (IMD) layers and interlayer dielectric (ILD) layers, the IMD layers including a silicon dioxide-based dielectric material, and the ILD layers including a silicon dioxide-based dielectric material.5. The microelectronic device according to claim 1, further comprising a low-voltage component disposed outside the isolation break.6. The microelectronic device according to claim 5, wherein the low-voltage component is a metal oxide semiconductor (MOS) transistor having a gate dielectric layer less than 70 nm thick.7. The microelectronic device according to claim 1, wherein the lower-bandgap dielectric layer includes a portion disposed outside the isolation break.8.
The microelectronic device according to claim 7, wherein the portion of the lower-bandgap dielectric layer disposed outside the isolation break is in contact with a low-voltage element of the microelectronic device.9. The microelectronic device according to claim 1, wherein an edge of the lower-bandgap dielectric layer at the isolation break is covered with a dielectric material.10. The microelectronic device of claim 1, wherein the silicon nitride of the first sub-layer has a thickness of about 600 nm.11. A method of forming a microelectronic device, comprising:forming a lower plate of a high-voltage component of the microelectronic device;forming a main dielectric at least 2 microns thick adjacent to the lower plate;forming a lower-bandgap dielectric layer adjacent to the main dielectric, opposite the lower plate, the lower-bandgap dielectric layer including a silicon nitride layer with a refractive index in the range of 2.11-2.23;forming an upper plate of the high-voltage component adjacent to the lower-bandgap dielectric layer; andforming an isolation break in the lower-bandgap dielectric layer, such that the lower-bandgap dielectric layer is discontinuous at the isolation break and the isolation break surrounds the upper plate.12. The method of claim 11, wherein the step of forming the lower-bandgap dielectric layer further comprises forming a silicon oxynitride layer between the silicon nitride and the main dielectric.13. The method of claim 12, wherein a portion of the main dielectric adjacent to the lower-bandgap dielectric layer includes a silicon dioxide-based dielectric material.14. The method of claim 11, wherein the main dielectric includes a plurality of IMD layers and ILD layers, the IMD layers including a silicon dioxide-based dielectric material, and the ILD layers including a silicon dioxide-based dielectric material.15. The method according to claim 11, further comprising forming a low-voltage component disposed outside the isolation break.16. The method according to claim 15, wherein the low-voltage component is a MOS transistor having a gate dielectric layer less than 70 nm thick.17. The method according to claim 11, wherein the step of forming the isolation break includes removing the lower-bandgap dielectric layer in a region for the isolation break, leaving a portion of the lower-bandgap dielectric layer disposed outside the isolation break.18. The method according to claim 17, wherein the portion of the lower-bandgap dielectric layer disposed outside the isolation break is in contact with a low-voltage component of the microelectronic device.19.
The method of claim 11, further comprising forming a dielectric material on an edge of the lower-bandgap dielectric layer at the isolation break.20. A device, comprising:a first semiconductor die and a second semiconductor die, each having a high-voltage capacitor, the high-voltage capacitor having:a lower plate;an upper plate;a main dielectric disposed between the lower plate and the upper plate; anda silicon nitride layer disposed between the main dielectric and the upper plate, wherein:the refractive index of the silicon nitride layer is in the range of 2.11-2.23;the silicon nitride layer extends continuously across the upper plate and around the upper plate for a distance of at least twice the thickness of the silicon nitride layer;the silicon nitride layer has an isolation break, such that the silicon nitride layer is discontinuous at the isolation break; andthe isolation break surrounds the upper plate; anda laminated inductor connected in parallel with the isolation barrier provided by the high-voltage capacitors of the first semiconductor die and the second semiconductor die. |
Techniques and methods for achieving high immunity to ultra-fast high-voltage transients across inorganic electrical isolation barriers.
TECHNICAL FIELD

The invention relates to the field of microelectronic devices, and more specifically to high-voltage components in microelectronic devices.

BACKGROUND

Microelectronic devices with high-voltage components having high-voltage nodes that operate at potentials greater than 100 volts may have a thin lower-bandgap dielectric layer between the high-voltage node and the main dielectric. The main dielectric, a few microns thick, separates the high-voltage node from low-voltage components. The lower-bandgap dielectric layer, whose thickness is generally less than 10% of the thickness of the main dielectric, has a band gap energy smaller than that of the main dielectric and provides reliability for the main dielectric by reducing the peak electric field at the corners of the high-voltage node. The lower-bandgap dielectric layer can enhance the high-voltage performance and reliability of the device, and the degree of enhancement can be tailored by changing the refractive index of the layer.

SUMMARY

The following presents a simplified summary in order to provide a basic understanding of one or more aspects of the invention. This summary is not an extensive overview of the invention; it is neither intended to identify key or important elements of the invention nor to delimit its scope. Its main purpose is to present some concepts of the invention in simplified form as a prelude to the more detailed description presented later.

A microelectronic device includes a high-voltage component having an upper plate and a lower plate. The upper plate is isolated from the lower plate by a main dielectric formed near the surface of the substrate of the microelectronic device. A lower-bandgap dielectric layer is disposed between the upper plate and the main dielectric. The lower-bandgap dielectric layer includes at least one sub-layer of silicon nitride. The refractive index (RI) of the silicon nitride sub-layer is between 2.11 and 2.24. The lower-bandgap dielectric layer extends beyond the upper plate continuously around the upper plate. The lower-bandgap dielectric layer has an isolation break surrounding the upper plate, and the distance between the isolation break and the upper plate is at least twice the thickness of the lower-bandgap dielectric layer.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a cross-section of an example microelectronic device containing a high-voltage component.

FIGS. 2A to 2F are cross-sections of the microelectronic device of FIG. 1 depicted in successive manufacturing stages.

FIGS. 3A to 3C are cross-sections of the microelectronic device of FIG. 1 at the isolation break, depicting alternative methods of forming the isolation break and high-voltage nodes.

FIG. 4 is a graph of breakdown voltage Vbd versus refractive index (RI).

FIG. 5 is a graph of failure rate versus peak voltage Vpk at various RIs.

FIGS. 6 to 10 are graphs of RI versus various deposition parameters.

FIG. 11 is a cross-section of another example microelectronic device containing a high-voltage component.

FIG. 12 is a three-dimensional (isometric) view of a multi-chip module (MCM) having a laminated inductor packaged with isolation devices that include the high-voltage components of FIGS. 1 and 11.

DETAILED DESCRIPTION

The present invention is described with reference to the drawings. The drawings are not drawn to scale; they serve only to illustrate the invention.
Several aspects of the present invention are described below with reference to example applications for ease of explanation. It should be understood that many specific details, relationships, and methods are set forth to provide an understanding of the invention. However, those skilled in the relevant art will readily recognize that the invention can be practiced without one or more of the specific details or with other methods. In other instances, well-known structures or operations are not shown in detail to avoid obscuring the invention. The invention is not limited by the order of the described acts or events, because some acts may occur in different orders and/or concurrently with other acts or events. Furthermore, not all illustrated acts or events are required to implement a method in accordance with the invention.

The microelectronic device contains a high-voltage capacitor having an upper plate (typically, a high-voltage node) and a lower plate (typically, a low-voltage node). The upper plate is isolated from the lower plate, and from low-voltage elements formed at the surface of the substrate of the microelectronic device, by a main dielectric. A lower-bandgap dielectric layer is disposed between the upper plate and the main dielectric. The lower-bandgap dielectric layer includes at least one sub-layer having a band gap energy smaller than that of the main dielectric. The lower-bandgap dielectric layer extends beyond the upper plate continuously around the upper plate. The lower-bandgap dielectric layer has an isolation break surrounding the upper plate, the distance between the isolation break and the upper plate being at least twice the thickness of the lower-bandgap dielectric layer. The isolation break is located between the upper plate and the low-voltage components of the microelectronic device.

As is common among IC manufacturers, efforts are continually made to simplify and optimize processes, both to reduce cost and to improve product reliability. As a result of such efforts, it was found that the number of metal levels could be reduced from seven to five while retaining high-voltage capability across almost all parameters. However, marginal failures were found in which the device did not meet the International Electrotechnical Commission's 8 kV electrostatic discharge (IEC-ESD) immunity standard (IEC/EN 61000-4-2, level 4). The IEC-ESD isolation barrier test is an ultra-fast transient voltage test performed at the system level; IC manufacturers do not usually perform this test at the component level. To improve IEC-ESD performance, many potential factors were studied, such as the thickness of the lower-bandgap dielectric layer, the thickness of the main capacitor dielectric, and thermal annealing of the lower-bandgap dielectric layer, but no solution emerged.

However, the inventors found that the IEC-ESD breakdown voltage of a high-voltage capacitor using a silicon nitride layer in the lower-bandgap layer improves unexpectedly and substantially when the silicon nitride has a lower refractive index (RI). A lower RI reduces protection against "surge" transients, which are roughly 1000 times slower, so it is counter-intuitive that a lower RI increases the ultra-fast transient breakdown voltage of this type of capacitor.
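As a back-of-envelope illustration (our arithmetic, not a computation from the patent; the rise times are those quoted for the two tests in the next paragraph), the disparity in time scales between the two transient types is easy to verify:

```python
# Illustrative check of the "1000 times slower" comparison. The rise times
# below are the ones quoted for the IEC-ESD and VDE-0884-11 "surge" pulses.
esd_rise_s = 1.2e-9     # IEC-ESD transient rise time, ~1.2 ns
surge_rise_s = 1.2e-6   # "surge" transient rise time, ~1.2 us
print(round(surge_rise_s / esd_rise_s))  # -> 1000
```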
FIG. 4 presents the IEC-ESD breakdown voltage (Vbd) as a function of RI for a representative non-production test structure, where the IEC-ESD Vbd is obtained using transient voltage pulses with 1.2 ns rise and 1.2 ns fall times, applying 12 pulses of positive polarity followed by 12 pulses of negative polarity. When the RI of the silicon nitride layer was reduced from 2.26 to 2.08, the Vbd characteristic showed a significant increase, from about 10 kV to about 13 kV. FIG. 5 shows the relationship between failure rate and peak voltage (Vpk) for a representative capacitive isolation device at five values of RI for "surge" capability, obtained using 25 voltage pulses with 1.2 μs rise and 50 μs fall times, followed by 25 pulses of opposite polarity with similar rise and fall times, as specified in the reinforced isolation standard VDE-0884-11. These figures show that although the best "surge" performance is achieved at higher RI values (>2.23), the best IEC-ESD capability is achieved when the RI is less than 2.23, for example about 2.0 to 2.1. The inventors therefore determined that the SiN layer under the top high-voltage (HV) capacitor plate provides excellent HV performance, but may not be optimized for both "surge" capability and IEC-ESD transient capability at the same time.

As described in detail below, the inventors determined that using silicon nitride with a refractive index in the range of 2.11 to 2.23 (for example, 2.17±0.04) in the lower-bandgap dielectric layer balances surge protection and IEC-ESD performance. A CVD process in which SiH4+NH3+Ar flows in a plasma can be used, with the SiH4/NH3 gas flow ratio selected to obtain a refractive index of about 2.17. Temperature, RF power, and chamber pressure also affect RI. FIGS. 6 to 10 show the general trends of RI with key manufacturing parameters, which may be applicable to many different deposition tools: FIG. 6 shows the relationship between RI and silane flow rate; FIG. 7 shows the relationship between RI and ammonia flow rate; FIG. 8 shows the relationship between RI and the distance between the reactant shower head and the substrate surface; FIG. 9 shows the relationship between RI and deposition pressure; and FIG. 10 shows the relationship between RI and deposition power.
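Purely as an illustration of how the FIG. 4 data might be used (the linear interpolation is our simplification; the measured dependence is not stated to be linear), the two quoted endpoints bracket the IEC-ESD capability across the 2.11-2.23 RI window:

```python
# Illustrative sketch only: a linear interpolation through the two IEC-ESD
# breakdown points quoted for FIG. 4 (about 13 kV at RI = 2.08 and about
# 10 kV at RI = 2.26). This merely shows how a target RI window such as
# 2.11-2.23 trades ESD capability against "surge" capability, which
# improves at RI > 2.23.

RI_LO, VBD_LO = 2.08, 13.0   # kV, from the FIG. 4 discussion
RI_HI, VBD_HI = 2.26, 10.0   # kV, from the FIG. 4 discussion

def est_iec_esd_vbd_kv(ri):
    """Linearly interpolated IEC-ESD breakdown voltage (kV) at a given RI."""
    frac = (ri - RI_LO) / (RI_HI - RI_LO)
    return VBD_LO + frac * (VBD_HI - VBD_LO)

for ri in (2.11, 2.17, 2.23):
    print(f"RI {ri:.2f}: ~{est_iec_esd_vbd_kv(ri):.1f} kV IEC-ESD Vbd (estimated)")
```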
Turning to FIG. 1, a cross-section of an example microelectronic device 100 containing a high-voltage component is presented. Various aspects of the device 100 are described, without implied limitation, in order to provide context for the lower-bandgap dielectric layer described below. In this example, the microelectronic device 100 is described as an integrated circuit 100. Other configurations for the microelectronic device 100, such as stand-alone components or hybrid circuits, are within the scope of this example. The microelectronic device 100 is formed on a substrate 102 such as a silicon wafer. The microelectronic device 100 includes a high-voltage component 104 (depicted in FIG. 1 as a high-voltage capacitor 104) and may include a low-voltage component 106 that operates at 24 volts or lower, depicted as a metal oxide semiconductor (MOS) transistor 106 having a gate dielectric layer 110 less than 70 nm thick. The microelectronic device 100 may optionally include a Faraday cage 108 surrounding the high-voltage component 104.

Field oxide 112 may be formed in the substrate 102 to laterally isolate elements of the microelectronic device 100. A pre-metal dielectric (PMD) layer 114 is formed on the substrate 102. Contacts 116 are disposed through the PMD layer 114 to provide electrical connections for the low-voltage component 106 and the Faraday cage 108.

A plurality of metal levels 118 is disposed above the PMD layer 114. The metal levels 118 include metal interconnects 120 connected to the low-voltage component 106 and the Faraday cage 108. An intra-metal dielectric (IMD) layer 122 of silicon dioxide-based dielectric material is disposed between the metal interconnects 120 in each metal level 118. Via levels 124 are disposed between the metal levels 118. The via levels 124 include metal vias 126 connecting the metal interconnects 120. The metal vias 126 are disposed through an interlayer dielectric (ILD) layer 128 of silicon dioxide-based dielectric material in each via level 124. Other dielectric materials for the IMD layers 122 and ILD layers 128, such as low-k materials, are within the scope of this example. The IMD layers 122 and ILD layers 128 may include cap layers and etch stop layers of different dielectric materials (such as silicon nitride). The IMD layers 122 may be portions of the corresponding ILD layers 128, depending on the process sequence used to form the plurality of metal levels 118.

The lower plate 130 of the high-voltage component 104 (depicted as the lower plate 130 of the high-voltage capacitor 104) is disposed in one of the metal levels 118, such as the first metal level 118 as depicted in FIG. 1. The upper plate 132 of the high-voltage component 104 (depicted as the upper plate 132 of the high-voltage capacitor 104) is disposed in another metal level 134, such as the top metal level 134 as depicted in FIG. 1. The combined IMD layers 122 and ILD layers 128 between the lower plate 130 and the upper plate 132 provide the main dielectric 136 of the high-voltage component 104. In this example, the main dielectric 136 is the capacitor dielectric 136 of the high-voltage capacitor 104. The thickness 138 of the capacitor dielectric 136 is at least 2 μm, for example 3 μm or more, and may be determined by the desired operating voltage of the upper plate 132 relative to the lower plate 130 and possibly the substrate 102. For example, a version of the high-voltage capacitor 104 in which the upper plate 132 is designed to operate at 1000 volts root mean square relative to the lower plate 130 may have a capacitor dielectric 136 with a thickness 138 of 16 μm to 20 μm. The use of silicon nitride with a refractive index in the range of 2.11 to 2.23 provides the unexpected benefit of significantly improved IEC-ESD performance together with the balanced surge protection discussed earlier.
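For rough physical context, a simple parallel-plate estimate (ours, using a typical relative permittivity of 3.9 for SiO2, a value the text does not state) shows that the quoted 1000-volt design keeps the average field comfortably below the roughly 10 MV/cm breakdown field commonly cited for thermal oxide:

```python
# A rough plausibility check (not from the patent): average electric field
# and parallel-plate capacitance per unit area for the quoted 1000 V (rms)
# design with a 16-20 um silicon-dioxide-based capacitor dielectric. The
# peak of a sine is sqrt(2) times the rms value.
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
K_SIO2 = 3.9       # relative permittivity of SiO2 (typical assumed value)

v_rms = 1000.0
v_pk = math.sqrt(2) * v_rms
for t_um in (16.0, 20.0):
    t_m = t_um * 1e-6
    e_avg = v_pk / t_m                 # average field, V/m (1 MV/cm = 1e8 V/m)
    c_area = EPS0 * K_SIO2 / t_m       # capacitance per unit area, F/m^2
    print(f"{t_um:4.0f} um: avg field ~{e_avg / 1e8:.2f} MV/cm, "
          f"C ~{c_area * 1e9:.0f} nF/m^2")
```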
The lower-bandgap dielectric layer 140 is disposed between the main dielectric 136 and the upper plate 132, opposite the lower plate 130. The lower-bandgap dielectric layer 140 includes at least one dielectric sub-layer having a band gap energy smaller than that of the portion of the main dielectric 136 adjacent to the upper plate 132. In this example, the lower-bandgap dielectric layer 140 includes a 200 nm to 600 nm thick silicon oxynitride first sub-layer 142 in contact with the main dielectric 136, and a 400 nm to 800 nm thick (for example, 600 nm) silicon nitride second sub-layer 144 between, and in contact with, the first sub-layer 142 and the upper plate 132. The band gap energy of the silicon oxynitride first sub-layer 142 is lower than that of the silicon dioxide-based dielectric material of the main dielectric 136, and the band gap energy of the silicon nitride second sub-layer 144 is lower than that of the first sub-layer 142.

The lower-bandgap dielectric layer 140 extends across the upper plate 132 and continuously surrounds the upper plate 132 out to a distance 146 that is at least twice the thickness 148 of the lower-bandgap dielectric layer 140. The portion of the lower-bandgap dielectric layer 140 in contact with the upper plate 132 has an isolation break 150; the isolation break 150 surrounds the upper plate 132 and is located no closer to the upper plate 132 than the distance 146. An optional low-voltage portion 152 of the lower-bandgap dielectric layer 140 may be disposed outside the isolation break 150, such that the low-voltage portion 152 is separated by the isolation break 150 from the lower-bandgap dielectric layer 140 contacting the upper plate 132. The low-voltage portion 152 may contact low-voltage components of the microelectronic device 100 that extend to the lower-bandgap dielectric layer 140, such as the Faraday cage 108. The isolation break 150 is located between the upper plate 132 and any low-voltage components of the microelectronic device 100, so that the lower-bandgap dielectric layer 140 in contact with the upper plate 132 does not contact any low-voltage component. The isolation break 150 advantageously prevents leakage current from passing along the interface of the lower-bandgap dielectric layer 140 from the upper plate 132 to the low-voltage components of the microelectronic device 100. The low-voltage portion 152 (if present) is laterally separated from the lower-bandgap dielectric layer 140 contacting the upper plate 132 by an isolation distance 154 of at least 1 μm, and possibly 10 μm to 25 μm, which advantageously provides process margin for the photolithography operation used to form the isolation break 150. Forming the lower-bandgap dielectric layer 140 with the isolation break 150 is particularly advantageous for instances of the high-voltage component 104 operating at 1000 volts or higher, because without the lower-bandgap dielectric layer 140 and the isolation break 150 the reliability of such a component would be very low, precluding useful embodiments of the microelectronic device 100.
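The geometric rules just stated can be collected into a small checker; this is an illustrative helper of ours (hypothetical function and parameter names), with the numeric limits taken from the description above:

```python
# Hedged sketch of the two geometric rules stated above: the isolation break
# must be no closer to the upper plate than twice the lower-bandgap layer
# thickness, and the low-voltage portion must be laterally separated by at
# least 1 um (10-25 um preferred for lithography margin).

def check_isolation_geometry(layer_thickness_um, break_distance_um,
                             separation_um):
    issues = []
    if break_distance_um < 2.0 * layer_thickness_um:
        issues.append("isolation break closer than 2x layer thickness")
    if separation_um < 1.0:
        issues.append("low-voltage portion separation under 1 um")
    elif not (10.0 <= separation_um <= 25.0):
        issues.append("separation legal but outside the preferred 10-25 um")
    return issues or ["ok"]

# Example: ~1 um thick lower-bandgap layer (e.g. 600 nm SiN + 400 nm SiON).
print(check_isolation_geometry(1.0, break_distance_um=3.0, separation_um=15.0))
```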
The upper plate 132 is disposed in an upper IMD layer 156, which covers the edge of the lower-bandgap dielectric layer 140 at the isolation break 150. The upper IMD layer 156 may include silicon dioxide, similar to the main dielectric 136. The upper plate 132 may be connected to, or may be part of, a pad 158 of the microelectronic device 100, as depicted in FIG. 1. A protective overcoat 160 of polyimide, silicon nitride, silicon oxynitride, and/or silicon dioxide may be disposed above the upper plate 132 and may overlap the edge of the upper plate 132, as depicted in FIG. 1. Electrical connection 162 to the upper plate 132 may be made by a wire bond 162. The low-voltage portion 152 of the lower-bandgap dielectric layer 140 can advantageously shield the low-voltage component 106 from the electric field of the electrical connection 162 to the upper plate 132.

During operation of the microelectronic device 100, when a high voltage potential difference is applied between the upper plate 132 and the lower plate 130, the lower-bandgap dielectric layer 140 advantageously provides reliability for the main dielectric 136 by reducing the electric field near the corners of the upper plate 132. The isolation break 150 advantageously provides reliability by preventing leakage current from the upper plate 132 through the lower-bandgap dielectric layer 140 to the low-voltage components of the microelectronic device 100.

FIGS. 2A to 2F are cross-sections of the microelectronic device of FIG. 1 depicted in successive manufacturing stages. Referring to FIG. 2A, the microelectronic device 100 is formed on a substrate 102. The substrate 102 may be a silicon wafer or other semiconductor substrate, or may be a dielectric substrate, such as sapphire or alumina ceramic. In versions of this example in which the substrate 102 is a semiconductor substrate, field oxide 112 may be formed to laterally isolate elements of the microelectronic device 100 in the substrate 102. The field oxide 112 may be formed by a shallow trench isolation (STI) process, a local oxidation of silicon (LOCOS) process, or other methods.

The low-voltage component 106 is formed in and on the substrate 102. The low-voltage component 106 can be close to the high-voltage component 104 and can be separated from the high-voltage component 104 by the Faraday cage 108.

The PMD layer 114 is formed on the substrate 102. The PMD layer 114 may include a stack of dielectric layers, including a 10 nm to 100 nm thick silicon nitride or silicon dioxide PMD liner formed by a plasma-enhanced chemical vapor deposition (PECVD) process; a silicon dioxide, phosphosilicate glass (PSG), or borophosphosilicate glass (BPSG) layer, typically 100 nm to 1000 nm thick, formed by a PECVD process and commonly planarized by a chemical mechanical polishing (CMP) process; and an optional PMD cap layer, typically 10 nm to 100 nm of a hard material such as silicon nitride, silicon carbide nitride, or silicon carbide formed by another PECVD process. Contact holes are formed through the PMD layer 114 to expose the substrate 102, for example in the low-voltage component 106 and the Faraday cage 108, and possibly in the high-voltage component 104. Contacts 116 are formed in the contact holes to provide electrical connections. The contacts 116 may be formed by forming a liner of titanium and titanium nitride using a sputtering process and a CVD process, respectively, forming a tungsten layer on the liner using a plasma CVD process to fill the contact holes, and removing the tungsten and liner from the top surface of the PMD layer 114 using an etchback and/or CMP process.

The metal levels 118 and IMD layers 122, as well as the via levels 124 and ILD layers 128, can be formed by any of several methods. In one version of this example, any of the metal levels 118 may be formed by forming an aluminum-based interconnect metal layer over the underlying PMD layer 114 or ILD layer 128.
The aluminum-based interconnect metal layer may include an adhesion layer of titanium, titanium tungsten, or titanium nitride; an aluminum layer containing a few percent silicon, titanium, and/or copper on the adhesion layer, with a thickness of 200 nm to several micrometers; and possibly an anti-reflective layer of titanium or titanium nitride on the aluminum layer. An interconnect etch mask including photoresist is formed over the interconnect metal layer so as to cover the areas of the metal interconnects 120, and an etch process (such as a plasma etch using chlorine radicals) removes the interconnect metal layer in the areas exposed by the mask, leaving the metal interconnects 120. The corresponding IMD layer 122 is then formed between the metal interconnects 120. The IMD layer 122 can be formed by depositing a layer of silicon dioxide-based dielectric material using a PECVD process with tetraethyl orthosilicate (also known as tetraethoxysilane, or TEOS), and then planarizing the dielectric material by a resist etchback process or a CMP process, so that the IMD layer 122 covers the metal interconnects 120, as shown in FIG. 1. The IMD layer 122 may alternatively include a silicon dioxide-based dielectric material formed by spin-coating the microelectronic device 100 with a solution containing methylsilsesquioxane (MSQ) and then baking the solution to remove volatile material.

In another version of this example, any of the metal levels 118 can be formed by a single damascene process, in which the IMD layer 122 is formed first and interconnect trenches are formed through the IMD layer 122 in the areas for the metal interconnects 120. The IMD layer 122 may be a stack of dielectric layers formed by sequential PECVD processes, the stack including an etch stop layer, a main layer, and a cap layer. A liner of tantalum nitride is formed over the IMD layer 122 by a plasma CVD process and extends into the interconnect trenches as a conformal liner. A seed layer of sputtered copper is formed on the liner, and electroplated copper is formed on the seed layer to fill the interconnect trenches. A copper CMP process removes the copper and liner from the top surface of the IMD layer 122, leaving the metal interconnects 120 in the interconnect trenches.

In a further version, the metal interconnects 120 may be formed by a lift-off process, in which a lift-off pattern of organic material such as photoresist, having openings for the metal interconnects 120, is formed over the corresponding underlying ILD layer 128. The metal layer for the metal interconnects 120 is deposited over the lift-off pattern and onto the ILD layer 128 in the openings. A solvent spray subsequently removes the lift-off pattern, carrying away the metal layer on the lift-off pattern and leaving the metal interconnects 120.

In one version of this example, any of the via levels 124, including the corresponding vias 126 and ILD layer 128, can be formed by a process similar to that described for the contacts 116.
In another version, a via level 124, including the corresponding vias 126 and ILD layer 128, may be formed by a single damascene process, as described for the metal levels 118 including the metal interconnects 120 and IMD layers 122.

In another version of this example, any of the metal levels 118 and the corresponding underlying via level 124 can be formed simultaneously by a dual damascene process. In the dual damascene process, an ILD layer 128 is formed and a corresponding IMD layer 122 is formed on the ILD layer 128. Through a sequence of patterning and etch steps, interconnect trenches are formed through the IMD layer 122 and via holes are formed through the ILD layer 128; the sequence may be, for example, a trench-first sequence, a via-first sequence, or a partial-via-first sequence. A liner, a seed layer, and electroplated copper fill metal are formed over the IMD layer 122, simultaneously filling the via holes and the interconnect trenches. A subsequent copper CMP process removes the copper and liner from the top surface of the IMD layer 122, leaving the metal interconnects 120 in the interconnect trenches and the vias 126 in the via holes.

In yet another version of this example, any of the metal levels 118 can be formed by a masked plating process. An adhesion layer of titanium and a seed layer of copper are formed on the top surface of the relevant ILD layer 128, the adhesion layer making electrical contact with the vias 126 or the underlying instances of the contacts 116. A plating mask of photoresist is formed over the seed layer so as to expose the areas for the metal interconnects 120. An electroplating operation plates copper to a desired thickness on the seed layer in the areas exposed by the plating mask. The plating mask is removed, for example by ashing or by dissolution in a solvent. The seed layer and adhesion layer outside the plated copper are removed, for example using a reactive ion etch (RIE) process, leaving the plated copper and the underlying seed layer and adhesion layer to provide the metal interconnects 120.

The lower plate 130 of the high-voltage component 104 is formed in one of the lower metal levels 118, possibly the lowermost metal level 118. The lower plate 130 may be formed simultaneously with the metal interconnects 120 in that metal level 118, or may be formed separately. The ILD layers 128 and IMD layers 122 above the lower plate 130 provide the main dielectric 136 of the high-voltage component 104.

Referring to FIG. 2B, the lower-bandgap dielectric layer 140 is formed on the ILD layers 128 and IMD layers 122 containing the main dielectric 136 of the high-voltage component 104. The lower-bandgap dielectric layer 140 includes at least one layer of silicon nitride. In this example, formation of the lower-bandgap dielectric layer 140 begins with a 200 nm to 600 nm thick first sub-layer 142 of silicon oxynitride (sometimes called silicon oxide nitride, or SiON), formed by a PECVD reaction using bis(tert-butylamino)silane (BTBAS) and TEOS, or N2O and NH3. The atomic fractions of nitrogen and oxygen in the first sub-layer 142 can be selected by adjusting the relative gas flows of the nitrogen-containing and oxygen-containing feed gases.
Formation of the lower-bandgap dielectric layer 140 continues by forming a 400 nm to 800 nm thick second sub-layer 144 of silicon nitride by a CVD process flowing SiH4+NH3+Ar in a plasma at about 375 degrees Celsius. In other versions of this example, the lower-bandgap dielectric layer 140 may consist of only one sub-layer of silicon nitride. Several key parameters affect RI, such as the gas ratio, RF power, and pressure; FIGS. 6 to 10 show the interaction between RI and these parameters. The RI of the silicon nitride is in the range of 2.11 to 2.24, and the silicon nitride can be formed using the parameters shown in Table 1.

Table 1

In a further version, the lower-bandgap dielectric layer 140 may have more than two sub-layers. Dielectric materials that can be used for the sub-layers of the lower-bandgap dielectric layer 140 include those of Table 2.

Table 2

Dielectric material: band gap range (electron volts)
Silicon oxynitride: ~7.5
Silicon nitride: 4.7 to ~6
Silicon oxycarbonitride: higher than silicon carbonitride
Silicon carbonitride: 3.8 to 4.7
Tantalum pentoxide: 3.8 to 5.3
Diamond-like carbon: 5.5
Titanium dioxide: 3.3
Aluminum nitride: 6.2
Aluminum oxide: 6.5 to 7.0
Silicon monoxide: lower than SiO2
Zinc oxide: 3.4

The band gaps of the variable-stoichiometry materials in Table 2, such as silicon oxynitride, silicon oxycarbonitride, and silicon carbonitride, depend on the relative atomic fractions of oxygen, nitrogen, and/or carbon. Silicon-rich versions of the silicon-containing dielectric materials may provide inferior performance as sub-layers of the lower-bandgap dielectric layer 140 due to lower-than-desired electrical impedance.

Referring to FIG. 2C, the vias 126 passing through the lower-bandgap dielectric layer 140 are formed after the lower-bandgap dielectric layer 140 is formed. The vias 126 passing through the lower-bandgap dielectric layer 140 may be formed by any of the methods described with reference to FIG. 2A.

Referring to FIG. 2D, the metal interconnects 120 and the upper plate 132 are formed over the lower-bandgap dielectric layer 140. The metal interconnects 120 above the lower-bandgap dielectric layer 140 may be formed using any of the methods described with reference to FIG. 2A. The upper plate 132 may be formed simultaneously with the metal interconnects 120 above the lower-bandgap dielectric layer 140, or may be formed separately.

Referring to FIG. 2E, the isolation break 150 is formed through the lower-bandgap dielectric layer 140. The isolation break 150 may be formed by forming an isolation etch mask over the lower-bandgap dielectric layer 140, the metal interconnects 120 above the lower-bandgap dielectric layer, and the upper plate 132, and etching through the lower-bandgap dielectric layer 140 into the underlying ILD layer 128, leaving the lower-bandgap dielectric layer 140 under the upper plate 132 and the low-voltage portion 152 of the lower-bandgap dielectric layer 140. Other methods of forming the isolation break 150 are discussed below.

Referring to FIG. 2F, the IMD layer 156 is formed above the lower-bandgap dielectric layer 140 and adjacent to the isolation break 150. The IMD layer 156 above the lower-bandgap dielectric layer 140 may be formed by any of the methods described with reference to FIG. 2A. Forming the IMD layer 156 adjacent to the isolation break 150 advantageously prevents leakage current from passing along the interface of the lower-bandgap dielectric layer 140 from the upper plate 132 to the low-voltage components of the microelectronic device 100. Formation of the microelectronic device 100 proceeds with formation of the protective overcoat 160, to subsequently provide the structure of FIG. 1.
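Drawing on Table 2 above, the band gap ordering described with reference to FIG. 1 (each sub-layer's gap below the main dielectric's, decreasing toward the upper plate) can be sketched as follows; the midpoint values and the ~8.9 eV gap assumed for SiO2 are typical literature numbers of ours, not values given in the text, and materials that Table 2 specifies only relatively are omitted:

```python
# Illustrative selection helper based on Table 2. Ranges are collapsed to
# midpoints purely for the sketch; materials given only relative band gaps
# (e.g. silicon oxycarbonitride, silicon monoxide) are omitted.

BANDGAP_EV = {
    "silicon oxynitride": 7.5,
    "silicon nitride": 5.3,        # midpoint of 4.7 to ~6
    "silicon carbonitride": 4.2,   # midpoint of 3.8 to 4.7
    "tantalum pentoxide": 4.6,     # midpoint of 3.8 to 5.3
    "diamond-like carbon": 5.5,
    "titanium dioxide": 3.3,
    "aluminum nitride": 6.2,
    "aluminum oxide": 6.75,        # midpoint of 6.5 to 7.0
    "zinc oxide": 3.4,
}
MAIN_DIELECTRIC_EV = 8.9  # SiO2, assumed typical value (not from the text)

def stack_is_valid(sublayers):
    """Sub-layers listed from the main dielectric toward the upper plate."""
    gaps = [BANDGAP_EV[name] for name in sublayers]
    below_main = all(g < MAIN_DIELECTRIC_EV for g in gaps)
    descending = all(a > b for a, b in zip(gaps, gaps[1:]))
    return below_main and descending

# The example stack of FIG. 1: SiON first sub-layer, then silicon nitride.
print(stack_is_valid(["silicon oxynitride", "silicon nitride"]))  # True
```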
FIGS. 3A to 3C are cross-sections of the microelectronic device of FIG. 1 at the isolation break, depicting an alternative method of forming the isolation break and the high-voltage node. Referring to FIG. 3A, the microelectronic device 100 is fabricated as described with reference to FIGS. 2A to 2C. The lower-bandgap dielectric layer 140 is formed above the ILD layer 128 at the top of the main dielectric 136. In this example, the lower-bandgap dielectric layer 140 includes a first sub-layer 142 formed on the ILD layer 128 and a second sub-layer 144 formed on the first sub-layer 142. After the second sub-layer 144 is formed, an oxidation process, such as an N2O plasma process, forms an oxygen-rich top region 164 at the top of the second sub-layer 144. The oxygen-rich top region 164 may be less than 30 nm thick. The lower region 166 of the second sub-layer 144 is substantially unchanged by the oxidation process.

An interconnect metal layer 168 is formed on the lower-bandgap dielectric layer 140. The interconnect metal layer 168 includes a 2 nm to 15 nm thick adhesion layer 170 of titanium, titanium tungsten, or titanium nitride, formed by a sputtering process or a reactive sputtering process. The interconnect metal layer 168 further includes an aluminum layer 172 formed on the adhesion layer 170. The aluminum layer 172 may include up to 2% silicon, titanium, and/or copper, may be 200 nm to several micrometers thick, and is formed by a sputtering process. The interconnect metal layer 168 also includes an anti-reflective layer 174 of titanium nitride, 10 nm to 20 nm thick, formed on the aluminum layer 172 by a reactive sputtering process. Other configurations for the interconnect metal layer 168 are within the scope of this example.

An interconnect mask 176 is formed over the interconnect metal layer 168 to cover the areas of the upper plate 132 and the metal interconnects 120 of FIG. 1 above the lower-bandgap dielectric layer 140. The interconnect mask 176 may include photoresist formed by a photolithographic process, and may also include an anti-reflection layer and/or a hard mask layer. FIG. 3A depicts the portion of the interconnect mask 176 over the subsequently formed upper plate 132.

Referring to FIG. 3B, an interconnect etch process removes the interconnect metal layer 168 in the areas exposed by the interconnect mask 176, leaving the upper plate 132 and the metal interconnects 120 of FIG. 1 above the lower-bandgap dielectric layer 140. In this example, the interconnect etch process further removes a part, but not all, of the second sub-layer 144 of the lower-bandgap dielectric layer 140 in the areas exposed by the interconnect mask 176. The interconnect mask 176 is then removed, for example by an ashing process. After the interconnect etch process is completed and the interconnect mask 176 is removed, at least 10 nm of the second sub-layer 144 remains in the areas exposed by the interconnect mask 176.
Referring to FIG. 3C, an isolation etch mask 178 is formed over the upper plate 132 and the lower-bandgap dielectric layer 140 to expose the area for the isolation break 150. The isolation etch mask 178 may include photoresist formed by a photolithographic process. The area for the isolation break 150 is laterally separated from the upper plate 132 by the distance 146, as described with reference to FIG. 1. The width 154 of the area for the isolation break 150 is as described with reference to FIG. 1; the width 154 may be 10 μm to 25 μm, which advantageously provides a desired level of process margin for the photolithographic process used to form the isolation etch mask 178. An isolation etch process removes the first sub-layer 142, the second sub-layer 144, and a portion of the ILD layer 128 in the area exposed by the isolation etch mask 178. The isolation etch mask 178 is subsequently removed, for example by an ashing process.

FIG. 11 is a cross-section of another example microelectronic device 1100 that shares some characteristics with the microelectronic device 100 of FIG. 1. In FIG. 11, structural features similar to those of FIG. 1 retain the same reference numerals, with the understanding that various material substitutions can be made within the scope of the foregoing discussion. The substrate 102 is omitted to save space. The device 1100 includes five metal levels M1-M5 and four via levels. For clarity, the reference numerals of the metal features and vias are omitted. As previously described, the metal features and vias are located in IMD layers 122 and ILD layers 128; for clarity, these dielectric layers are denoted by the combined reference 122/128. The high-voltage capacitor 104 includes a lower plate 130 formed in the M2 level and an upper plate 132 formed in the M5 level. The high-voltage capacitor 104 is surrounded by a Faraday cage 1110, which includes a continuous chain from M5 to M1 through the associated via levels and is grounded to the underlying substrate at unreferenced contacts. Circuitry 1120 outside the Faraday cage 1110 may support other functions of the device, such as analog-to-digital conversion, digital transmission across the high-voltage capacitor 104, or data reception. A scribe seal structure 1130 includes stacked M1-M5 features and associated vias. The upper IMD layer 156, as previously described (for example, 1.5 μm of SiO2), covers the M5 level. A first protective overcoat 160', for example 2.8 μm of SiON, covers the IMD layer 156, and a second protective overcoat 160", for example 10 μm of polyimide, covers the first protective overcoat 160'. In this example, the wire bond 162 is made directly to the upper plate 132.

The lower-bandgap dielectric layer 140 is located between the M5 features (including the upper plate 132) and the dielectric layers 122/128 on which the M5 level is formed. In this example, the lower-bandgap dielectric layer 140 includes a first sub-layer 142 of SiON and a second sub-layer 144 of silicon nitride, both of which can be formed as described above. As described above, the lower-bandgap dielectric layer 140 extends continuously across and around the upper plate 132 for the distance 146 and ends at the isolation break 150 surrounding the upper plate 132. The low-voltage portion 152 of the lower-bandgap dielectric layer 140 is separated from the portion of the lower-bandgap dielectric layer 140 extending from the upper plate 132 by the distance 154.
The low-voltage portion 152 extends to and across the scribe seal 1130.

FIG. 12 shows another example, a multi-chip module (MCM) 1200 that includes one or more high-voltage capacitors according to the examples described herein. A package substrate 1210 supports a plurality of device dies and a laminated transformer 1240 that can provide isolated power transfer between the device dies 1220, 1230. Each of the first device die 1220 and the second device die 1230 may include one or more instances of a high-voltage capacitor 1250 constructed in accordance with the principles described herein. In particular, the high-voltage capacitor 1250 includes the lower-bandgap dielectric layer 140 described earlier. The device 1200 is expected to benefit from the improved high-voltage performance associated with the silicon nitride sub-layer 144, which has a lower band gap energy and a refractive index in the range of 2.11 to 2.24 (e.g., 2.14±0.04). Compared to using an SiO2 capacitor that does not include the lower-bandgap dielectric layer 140, the high-voltage capacitor 1250 improves the overall IEC-ESD performance of the system; with this combination of the laminated transformer and the device dies 1220, 1230, an improvement of 2300 V can be obtained. Other types of MCMs with different device arrangements and/or functions are within the scope of the present disclosure.

Although various embodiments of the present invention have been described above, it should be understood that they are presented by way of example and not limitation. Numerous changes can be made to the disclosed embodiments in accordance with the disclosure herein without departing from the spirit or scope of the invention. Therefore, the breadth and scope of the present invention should not be limited by any of the above-described embodiments; rather, the scope of the invention should be defined by the appended claims and their equivalents. |
A method of fabricating a transistor includes providing a semiconductor substrate having a surface and forming a nitride layer outwardly of the surface of the substrate. The nitride layer is oxidized to form a nitrided silicon oxide layer comprising an oxide layer beneath the nitride layer. A high-K layer is deposited outwardly of the nitride layer, and a conductive layer is formed outwardly of the high-K layer. The conductive layer, the high-K layer, and the nitrided silicon oxide layer are etched and patterned to form a gate stack. Sidewall spacers are formed outwardly of the semiconductor substrate adjacent to the gate stack, and source/drain regions are formed in the semiconductor substrate adjacent to the sidewall spacers. |
What is claimed is: 1. A method of fabricating a transistor, comprising:providing a semiconductor substrate having a surface; forming a nitride layer outwardly of the surface of the substrate; oxidizing the nitride layer to form a nitrided silicon oxide layer comprising an oxide layer beneath the nitride layer; depositing a high-K layer outwardly of the nitride layer; forming a conductive layer outwardly of the high-K layer; patterning and etching the conductive layer, the high-K layer, and the nitrided silicon oxide layer to form a gate stack; forming sidewall spacers outwardly of the semiconductor substrate adjacent to the gate stack; and forming source/drain regions in the semiconductor substrate adjacent to the sidewall spacers. 2. The method of claim 1, wherein forming the nitride layer comprises subjecting the surface of the substrate to plasma nitridation.3. The method of claim 1, wherein the thickness of the nitrided silicon oxide layer is less than about 20 Angstroms.4. The method of claim 1, wherein the high-K dielectric layer comprises an oxygen-containing material.5. The method of claim 1, wherein the high-K dielectric layer comprises a material selected from the group consisting of Ta2O5, BaTiO3, TiO2, CeO2, and barium strontium titanate.6. The method of claim 2, wherein the plasma nitridation comprises high density plasma nitridation.7. The method of claim 2, wherein the plasma nitridation uses a nitrogen-containing precursor selected from the group consisting of N2 or NH3 or a mixture thereof with an inert gas.8. The method of claim 1, wherein the oxidizing occurs at a temperature in the range of 600 to 1000° C.9. The method of claim 1, further comprising removing an oxide layer from the surface of the substrate before forming the nitride layer outwardly of the surface of the substrate.10. A method of fabricating a transistor, comprising:providing a semiconductor substrate having a surface; forming a nitride layer outwardly of the surface of the substrate; oxidizing the nitride layer to form a nitrided silicon oxide layer comprising an oxide layer beneath the nitride layer, wherein the thickness of the nitrided silicon oxide layer is less than about 20 Angstroms; forming a conductive layer outwardly of the nitrided silicon oxide layer; patterning and etching the conductive layer and the nitrided silicon oxide layer to form a gate stack; forming sidewall spacers outwardly of the semiconductor substrate adjacent to the gate stack; and forming source/drain regions in the semiconductor substrate adjacent to the sidewall spacers. 11. The method of claim 10, wherein forming the nitride layer comprises subjecting the surface of the substrate to plasma nitridation.12. The method of claim 11, wherein said plasma nitridation comprises high density plasma nitridation.13. The method of claim 11, wherein the plasma nitridation uses a nitrogen-containing precursor selected from the group consisting of N2 or NH3 or a mixture thereof with an inert gas.14. The method of claim 10, further comprising removing an oxide layer from the surface of the substrate before forming the nitride layer outwardly of the surface of the substrate.15. The method of claim 14, wherein removing an oxide layer from the surface of the substrate comprises stripping the surface of the substrate with hydrofluoric acid. |
TECHNICAL FIELD OF THE INVENTION

This invention relates generally to the field of integrated circuit fabrication, and more particularly to a semiconductor with a nitrided silicon gate oxide and a method for forming same.

BACKGROUND OF THE INVENTION

Presently, there is a great demand for shrinking semiconductor devices to provide an increased density of devices on the semiconductor chip that are faster and consume less power. The scaling of devices in the lateral dimension requires vertical scaling as well so as to achieve adequate device performance.

Gate stacks may comprise a gate electrode overlying a gate dielectric. The gate dielectric may comprise silicon dioxide or, more recently, a nitrided gate oxide. Traditionally, plasma-assisted nitridation of silicon oxide to form nitrided gate oxide structures is achieved by creating a silicon dioxide layer on the surface of a substrate and reacting the silicon dioxide layer with ionized nitrogen generated by a plasma source.

SUMMARY OF THE INVENTION

A method of fabricating a transistor includes providing a semiconductor substrate having a surface and forming a nitride layer outwardly of the surface of the substrate. The nitride layer is oxidized to form a nitrided silicon oxide layer comprising an oxide layer beneath the nitride layer. A high-K layer is deposited outwardly of the nitride layer, and a conductive layer is formed outwardly of the high-K layer. The conductive layer, the high-K layer, and the nitrided silicon oxide layer are etched and patterned to form a gate stack. Sidewall spacers are formed outwardly of the semiconductor substrate adjacent to the gate stack, and source/drain regions are formed in the semiconductor substrate adjacent to the sidewall spacers.

Technical advantages of the present invention include an improved gate dielectric with low nitrogen incorporation in the substrate. The low nitrogen incorporation increases electron mobility and limits voltage shift. In addition, low nitrogen incorporation limits migration of oxygen atoms into the substrate and thus increases the efficiency of transistor components.

Certain embodiments may possess none, one, some, or all of these technical features and advantages and/or additional technical features and advantages. Other technical advantages will be readily apparent to one skilled in the art from the following figures, description, and claims.

BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the present invention and its advantages, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:

FIGS. 1A-1G are a series of schematic cross-sectional diagrams illustrating a method of fabricating a transistor in accordance with one embodiment of the present invention; and

FIG. 2 is a graph illustrating a profile of atomic percentage of nitrogen in the gate stack in accordance with one embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

FIGS. 1A-1G are a series of schematic cross-sectional diagrams illustrating a method of fabricating a transistor in accordance with one embodiment of the present invention. The method shown in FIGS. 1A-1G may be used in both positive metal oxide semiconductor (PMOS) and negative metal oxide semiconductor (NMOS) devices.

Referring to FIG. 1A, substrate 10 may comprise a silicon substrate or silicon epitaxial layer. However, other substrates may alternatively be used. Substrate 10 will conventionally have already undergone several processing steps.
For example, formation of isolation structures 12 may have been performed. An oxide layer 14 may have formed on the surface of the substrate due to exposure of the substrate to air or otherwise.
Referring to FIG. 1B, the substrate 10 is stripped with hydrofluoric acid (HF) 16 or otherwise treated or cleaned so as to remove oxide layer 14 and/or other impurities from the surface of the substrate.
Referring to FIG. 1C, nitride layer 18 is formed outwardly of the surface of the substrate 10. In one embodiment, nitride layer 18 may be formed outwardly of the substrate by being formed on the substrate. In another embodiment, nitride layer 18 may be formed outwardly of the substrate by being formed on an intermediate layer. In a particular embodiment, the nitride layer 18 may be formed by subjecting the surface of the substrate 10 to plasma 17. The source of nitrogen for the plasma 17 may be a nitrogen-containing precursor such as N2 or NH3, or a mixture thereof with a suitable inert gas (He, Ar, etc.) or oxidizing gas (NO, N2O, O2, etc.). The plasma is preferably a high density plasma. The plasma may be generated by a helical-resonator source, an electron-cyclotron resonance source, or an inductively coupled source.
During plasma nitridation, the substrate 10 may be unbiased, in which case the ionized species are accelerated by the plasma potential (on the order of 20 Volts) and then implanted into the substrate 10 surface. A bias can be applied to the substrate 10 to further accelerate the ions from the plasma and implant them deeper into the surface. Either a direct current (DC) or radio frequency (RF) bias may be applied to the substrate 10.
In a particular embodiment, the plasma nitridation process may comprise the following process conditions: a plasma density between 1×10^10 and 1×10^12 cm^-3; a nitrogen flow between 1-2000 sccm (preferably 1-100 sccm); a pressure on the order of 1-300 mTorr (preferably 1-50 mTorr); a temperature in the range of 77 K to 773 K (in particular embodiments, less than about 500° C.); a substrate bias in the range of 0 to 200 Volts; and a duration in the range of 1 to 300 seconds. Nitride layer 18 may comprise a mixture of Si3N4 and SiOxNy. In a particular embodiment, the nitride layer 18 may have a thickness of about 10-12 Angstroms.
Referring to FIG. 1D, after the formation of nitride layer 18, oxide layer 20 is formed beneath nitride layer 18. In a particular embodiment, oxide layer 20 is formed beneath nitride layer 18 by thermal oxidation of the substrate 10 and nitride layer 18. Thermal oxidation in a particular embodiment may take place at a temperature of about 600-1000° C. in an oxidizing ambient such as O2, N2O, NO, dilute steam, or another suitable oxidant. The oxide layer 20 may comprise SiO2, but may also comprise an amount of nitrogen in the form of SiOxNy or other compounds. Nitride layer 18 may retard the oxidation, resulting in control over the thickness of the oxide layer 20. Together, nitride layer 18 and oxide layer 20 form nitrided silicon oxide layer 22. The thickness of the nitrided silicon oxide layer 22 may be optimized for use of the nitrided silicon oxide as a gate oxide. In a particular embodiment, oxide layer 20 may have a thickness of about 2-10 Angstroms and nitrided silicon oxide layer 22 may have a thickness of about 14-20 Angstroms.
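For concreteness, the example nitridation and oxidation windows above can be collected into a single process table. The sketch below is illustrative only; the parameter names and the check_recipe helper are hypothetical, and the values simply restate the ranges given in the text.

    # Illustrative only: example process windows restated from the text above.
    # Parameter names and the check_recipe() helper are hypothetical.
    PLASMA_NITRIDATION = {
        "plasma_density_cm3": (1e10, 1e12),
        "nitrogen_flow_sccm": (1, 100),     # preferred window (allowed: 1-2000)
        "pressure_mtorr": (1, 50),          # preferred window (allowed: 1-300)
        "temperature_k": (77, 773),
        "substrate_bias_v": (0, 200),
        "duration_s": (1, 300),
    }

    THERMAL_OXIDATION = {
        "temperature_c": (600, 1000),       # oxidizing ambient: O2, N2O, NO, dilute steam
    }

    def check_recipe(recipe, windows):
        """Return True if every setting in `recipe` lies inside its process window."""
        for name, value in recipe.items():
            lo, hi = windows[name]
            if not (lo <= value <= hi):
                return False
        return True

    print(check_recipe({"temperature_k": 600, "duration_s": 30}, PLASMA_NITRIDATION))  # True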
Referring to FIG. 1E, in a particular embodiment, a gate dielectric 24 comprising a material with a high dielectric constant, or "high-K," is formed outwardly of nitride layer 18. High-K is used herein to refer to a dielectric material having a dielectric constant greater than about 7. In particular embodiments, materials having a dielectric constant from 7 to 30 may be used. The high-K dielectric layer may comprise an oxygen-containing material such as Ta2O5, BaTiO3, TiO2, CeO2, or barium strontium titanate. The high-K dielectric layer 24 may be formed by thermal or plasma-assisted processes, atomic layer epitaxy, or by any other suitable methods.
Referring to FIG. 1F, conductive layer 26 is formed outwardly of high-K layer 24. Conductive layer 26 may comprise polysilicon, metal, or another suitable gate material.
Referring to FIG. 1G, conductive layer 26, high-K dielectric layer 24, and nitrided silicon oxide layer 22 are patterned and etched to form gate stack 28 including the gate (layer 26) and the gate dielectrics (layers 22 and 24). Fabrication of transistor 40 may be completed by implanting drain extension regions 36, depositing and etching a dielectric to form sidewall spacers 30, and implanting source/drain regions 32.
FIG. 2 is a graph illustrating an example profile of atomic percentage of nitrogen in nitride layer 18 and oxide layer 20 in accordance with one embodiment of the present invention. Substrate 10, nitride layer 18, and oxide layer 20 are shown on FIG. 2 relative to depth. The profile of atomic percentage of nitrogen reflects the oxidation of the nitride layer 18 and substrate 10 as described in reference to FIG. 1D.
In the illustrated embodiment, the atomic percentage of nitrogen at the top of nitride layer 18 is about 9% (and this value ranges from 6% to 12% in particular embodiments), and the percentage peaks at 15% within nitride layer 18 (the peak may range from 10% to 20% in particular embodiments). The percentage decreases with depth within oxide layer 20, reaching about 11% at the boundary between substrate 10 and oxide layer 20 (and this value ranges from 8% to 14% in particular embodiments). Although example values and ranges of atomic percentage of nitrogen have been given, it should be understood that any appropriate values may be used in particular embodiments.
Although the present invention has been described with several embodiments, a myriad of changes, variations, alterations, transformations, and modifications may be suggested to one skilled in the art, and it is intended that the present invention encompass such changes, variations, alterations, transformations, and modifications as fall within the scope of the appended claims. |
The application relates to managing a mode to access a memory component or a logic component for machine learning computation in a memory sub-system. A first mode setting signal is received from a host system. The first mode setting signal indicates a first mode. The memory component is set to the first mode based on the first mode setting signal. In the first mode, memory cells of the memory component are exposed to the host system. A second mode setting signal is received from the host system. The second mode setting signal indicates a second mode. The memory component is set to the second mode based on the second mode setting signal. In the second mode, a machine learning operation component of the memory component is exposed to the host system. |
1. A system comprising:
a memory component, the memory component including a memory cell array and a machine learning operation component, the machine learning operation component to perform machine learning calculations in conjunction with the memory cell array; and
a processing device, operatively coupled with the memory component, to:
receive a first mode setting signal from a host system, the first mode setting signal indicating a first mode;
set the memory component to the first mode based on the first mode setting signal, wherein in the first mode the processing device exposes the memory cell array of the memory component to the host system;
receive a second mode setting signal from the host system, the second mode setting signal indicating a second mode; and
set the memory component to the second mode based on the second mode setting signal, wherein in the second mode the processing device exposes the machine learning operation component of the memory component to the host system.
2. The system of claim 1, wherein the memory component further comprises:
a mode selection component to provide a mode selection signal indicating whether the first mode or the second mode is used for the memory component; and
a decoding component coupled with the mode selection component, the decoding component to enable the host system to access the memory cell array or the machine learning operation component based on the mode selection signal from the mode selection component.
3. The system of claim 2, wherein:
to set the memory component to the first mode based on the first mode setting signal, the processing device is to cause the mode selection component to provide, to the decoding component of the memory component, a first mode selection signal based on the first mode setting signal, wherein the first mode selection signal represents the first mode and the decoding component enables the host system to access the memory cell array based on the first mode selection signal; and
to set the memory component to the second mode based on the second mode setting signal, the processing device is to cause the mode selection component to provide, to the decoding component of the memory component, a second mode selection signal based on the second mode setting signal, wherein the second mode selection signal represents the second mode and the decoding component enables the host system to access the machine learning operation component based on the second mode selection signal.
4. The system of claim 3, wherein the mode selection component comprises a mode register.
5. The system of claim 4, wherein the first mode setting signal and the second mode setting signal correspond to mode register set (MRS) commands.
6. The system of claim 3, wherein the mode selection component comprises an input pin dedicated to mode selection, and the memory component further comprises a pull-up resistor and a switch coupled to the input pin.
7. The system of claim 6, wherein:
the first mode setting signal corresponds to a switch control signal that opens the switch so that a voltage signal having a voltage level that satisfies a threshold condition is provided to the input pin as the first mode selection signal; and
the second mode setting signal corresponds to a switch control signal that closes the switch so that a voltage signal having a voltage level that does not satisfy the threshold condition is provided to the input pin as the second mode selection signal.
8. A non-transitory computer-readable storage medium comprising instructions that, when executed by a processor operatively coupled with a memory component, cause the processor to:
receive a first mode setting signal from a host system, the first mode setting signal indicating a first mode;
set the memory component to the first mode based on the first mode setting signal, wherein the memory component includes a memory cell array and a machine learning operation component, the machine learning operation component to perform machine learning calculations in conjunction with the memory cell array, and wherein in the first mode the processor exposes the memory cell array of the memory component to the host system;
receive a second mode setting signal from the host system, the second mode setting signal indicating a second mode; and
set the memory component to the second mode based on the second mode setting signal, wherein in the second mode the processor exposes the machine learning operation component of the memory component to the host system.
9. The non-transitory computer-readable storage medium of claim 8, wherein the memory component further comprises:
a mode selection component to provide a mode selection signal indicating whether the first mode or the second mode is used for the memory component; and
a decoding component coupled with the mode selection component, the decoding component to enable the host system to access the memory cell array or the machine learning operation component based on the mode selection signal from the mode selection component.
10. The non-transitory computer-readable storage medium of claim 9, wherein:
to set the memory component to the first mode based on the first mode setting signal, the processor is to cause the mode selection component to provide, to the decoding component of the memory component, a first mode selection signal based on the first mode setting signal, wherein the first mode selection signal represents the first mode and the decoding component enables the host system to access the memory cell array based on the first mode selection signal; and
to set the memory component to the second mode based on the second mode setting signal, the processor is to cause the mode selection component to provide, to the decoding component of the memory component, a second mode selection signal based on the second mode setting signal, wherein the second mode selection signal represents the second mode and the decoding component enables the host system to access the machine learning operation component based on the second mode selection signal.
11. The non-transitory computer-readable storage medium of claim 10, wherein the mode selection component comprises a mode register.
12. The non-transitory computer-readable storage medium of claim 11, wherein the first mode setting signal and the second mode setting signal correspond to mode register set (MRS) commands.
13. The non-transitory computer-readable storage medium of claim 10, wherein the mode selection component comprises an input pin dedicated to mode selection, and the memory component further comprises a pull-up resistor and a switch coupled to the input pin.
14. The non-transitory computer-readable storage medium of claim 13, wherein:
the first mode setting signal corresponds to a switch control signal that opens the switch so that a voltage signal having a voltage level that satisfies a threshold condition is provided to the input pin as the first mode selection signal; and
the second mode setting signal corresponds to a switch control signal that closes the switch so that a voltage signal having a voltage level that does not satisfy the threshold condition is provided to the input pin as the second mode selection signal.
15. A system comprising:
a memory component, the memory component including a memory cell array and a machine learning operation component, the machine learning operation component to perform machine learning calculations in conjunction with the memory cell array; and
a processing device, operatively coupled with the memory component, to:
receive a mode setting signal from a host system, the mode setting signal indicating one of a first mode or a second mode; and
in response to receiving the mode setting signal, cause the memory component to operate in the corresponding one of the first mode or the second mode, wherein:
in the first mode, the processing device receives input data for the machine learning calculation from the host system and routes the input data to one or more memory cells in the memory cell array of the memory component; and
in the second mode, the processing device receives an execution signal for performing the machine learning calculation from the host system and routes the execution signal to the machine learning operation component of the memory component.
16. The system of claim 15, wherein the execution signal indicates a type of model to be used for the machine learning calculation and the input data.
17. The system of claim 16, wherein in the second mode the processing device is further to provide, to the machine learning operation component, the model type and a corresponding address of the input data stored in one or more memory cells of the memory cell array.
18. The system of claim 15, wherein in the second mode the processing device is further to, in response to detecting that the machine learning operation component has generated output data from the machine learning calculation, provide the output data to the host system.
19. The system of claim 15, wherein the mode setting signal indicating the first mode and the mode setting signal indicating the second mode cause voltage signals having different voltage levels to be supplied to the memory component.
20. The system of claim 15, wherein the mode setting signal indicating the first mode and the mode setting signal indicating the second mode correspond to different mode register set (MRS) commands. |
Managing a mode to access a memory component or a logic component for machine learning computation in a memory sub-system
TECHNICAL FIELD
The embodiments of the present disclosure generally relate to a memory subsystem, and more specifically, relate to managing a mode to access a memory component or a logic component for machine learning computation in a memory subsystem.
BACKGROUND
A memory subsystem can be a storage device, a memory module, or a hybrid of a storage device and memory module. The memory subsystem can include one or more memory components that store data. The memory components can be, for example, non-volatile memory components and volatile memory components. In general, a host system can utilize the memory subsystem to store data at the memory components and to retrieve data from the memory components.
SUMMARY
In one aspect, the present application is directed to a system that includes: a memory component including a memory cell array and a machine learning operation component, the machine learning operation component performing machine learning calculations in conjunction with the memory cell array; and a processing device, operatively coupled with the memory component, to perform operations including: receiving a first mode setting signal from a host system, the first mode setting signal indicating a first mode; setting the memory component to the first mode based on the first mode setting signal, wherein in the first mode the processing device exposes the memory cell array of the memory component to the host system; receiving a second mode setting signal from the host system, the second mode setting signal indicating a second mode; and setting the memory component to the second mode based on the second mode setting signal, wherein in the second mode the processing device exposes the machine learning operation component of the memory component to the host system.
In another aspect, the present application is directed to a non-transitory computer-readable storage medium including instructions that, when executed by a processor operatively coupled with a memory component, cause the processor to perform operations including: receiving a first mode setting signal from a host system, the first mode setting signal indicating a first mode; setting the memory component to the first mode based on the first mode setting signal, wherein the memory component includes a memory cell array and a machine learning operation component that performs machine learning calculations in conjunction with the memory cell array, and wherein in the first mode the processor exposes the memory cell array of the memory component to the host system; receiving a second mode setting signal from the host system, the second mode setting signal indicating a second mode; and setting the memory component to the second mode based on the second mode setting signal, wherein in the second mode the processor exposes the machine learning operation component of the memory component to the host system.
In another aspect, the present application is directed to a system including: a memory component including a memory cell array and a machine learning operation component, the machine learning operation component performing machine learning calculations in conjunction with the memory cell array; and a processing device, operatively coupled with the memory component, to perform operations including: receiving a mode setting signal from the host system, the mode setting signal indicating one of a first mode or a second mode; and in response to receiving the mode
setting signal, causing the memory component to operate in the corresponding one of the first mode or the second mode, wherein: in the first mode, the processing device receives input data for the machine learning calculation from the host system and routes the input data to one or more memory cells in the memory cell array of the memory component; and in the second mode, the processing device receives an execution signal for performing the machine learning calculation from the host system and routes the execution signal to the machine learning operation component of the memory component.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure will be more fully understood from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure. The drawings, however, should not be taken to limit the disclosure to the specific embodiments, but are for explanation and understanding only.
FIG. 1 illustrates an example computing environment that includes a memory subsystem in accordance with some embodiments of the present disclosure.
FIG. 2 is a block diagram of an example memory subsystem in accordance with some embodiments of the present disclosure.
FIG. 3 is a block diagram of an example memory subsystem in accordance with some other embodiments of the present disclosure.
FIG. 4 is a flow diagram of an example method of setting a mode of a memory component in accordance with some embodiments of the present disclosure.
FIG. 5 is a flow diagram of an example method of operating a memory component in different modes in accordance with some embodiments of the present disclosure.
FIG. 6 is a block diagram of an example computer system in which embodiments of the present disclosure can operate.
DETAILED DESCRIPTION
Aspects of the present disclosure are directed to managing a mode to access a memory component or a logic component for machine learning computation in a memory subsystem. A memory subsystem can be a storage device, a memory module, or a hybrid of a storage device and memory module. An example of a storage device and a memory module is described with reference to FIG. 1. In general, a host system can utilize a memory subsystem that includes one or more memory components (also hereinafter referred to as "memory devices"). The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem.
A conventional memory subsystem contains only memory components for storing data provided by the host system. Accordingly, the interface between the host system and the memory subsystem (e.g., a system bus) only needs to handle data and commands directed to the memory components. For example, if the host system transmits a command (e.g., a write command) and corresponding data to the memory subsystem via the system bus, the command and corresponding data are automatically routed to a memory component. However, if the memory subsystem contains additional components other than the memory components, confusion can arise as to where commands and/or data received from the host system should be directed. Conventional memory subsystems lack a mechanism to determine which of several independent components a command and/or data should be directed to, and to route the command and/or data to the appropriate component.
Therefore, a large number of errors can occur because commands and/or data can be routed to the wrong component.
Aspects of the present disclosure address the above and other deficiencies by providing a memory subsystem with different operating modes that enable the host system to access different types of components contained in a memory component (e.g., a memory cell array and a logic component, such as logic gates or a resistor array used to perform machine learning calculations). The memory subsystem, or a memory component of the memory subsystem, can operate in one mode that exposes the memory cell array of the memory component to the host system. In a second operating mode, the memory subsystem or the memory component can instead expose the logic component disposed on the memory component to the host system.
Advantages of the present disclosure include, but are not limited to, maximizing utilization of the memory subsystem (i.e., of the memory components of the memory subsystem) by providing two separate operating modes: one operating mode for performing the traditional operations of storing and retrieving data, and the other operating mode for performing logical operations, such as machine learning calculations. In addition, by providing two separate operating modes, the memory subsystem requires only one interface per memory component. That is, two separate interfaces need not be implemented for the memory cell array and the logic component in the memory component. The present disclosure therefore simplifies the implementation of logic components in memory components.
FIG. 1 illustrates an example computing environment 100 that includes a memory subsystem 110 in accordance with some embodiments of the present disclosure. The memory subsystem 110 can include media, such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination thereof.
The memory subsystem 110 can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded multimedia controller (eMMC) drive, a universal flash storage (UFS) drive, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and a non-volatile dual in-line memory module (NVDIMM).
The computing environment 100 can include a host system 120 that is coupled to one or more memory subsystems 110. In some embodiments, the host system 120 is coupled to different types of memory subsystems 110. FIG. 1 illustrates one example of a host system 120 coupled to one memory subsystem 110. The host system 120 uses the memory subsystem 110, for example, to write data to the memory subsystem 110 and to read data from the memory subsystem 110. As used herein, "coupled to" generally refers to a connection between components, which can be an indirect communicative connection or a direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, and other connections.
The host system 120 can be a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, or such a computing device that includes a memory and a processing device.
The host system 120 can be coupled to the memory subsystem 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, a universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), etc. The physical host interface can be used to transmit data between the host system 120 and the memory subsystem 110. The host system 120 can further utilize an NVM Express (NVMe) interface to access the memory components (e.g., memory device 130) when the memory subsystem 110 is coupled with the host system 120 by the PCIe interface. The physical host interface can provide an interface for passing control, address, data, and other signals between the memory subsystem 110 and the host system 120.
The memory devices can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device 140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).
An example of non-volatile memory devices (e.g., memory device 130) includes NAND type flash memory. Each of the memory devices 130 can include one or more arrays of memory cells such as single level cells (SLCs) or multi-level cells (MLCs) (e.g., triple level cells (TLCs) or quad-level cells (QLCs)). In some embodiments, a particular memory component can include an SLC portion, as well as an MLC portion, a TLC portion, or a QLC portion of memory cells. Each of the memory cells can store one or more bits of data used by the host system 120. Furthermore, the memory cells of the memory devices 130 can be grouped as memory pages or memory blocks, which can refer to a unit of the memory component used to store data.
Although non-volatile memory components such as NAND type flash memory are described, the memory device 130 can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), magneto random access memory (MRAM), negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), and a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased.
The memory subsystem controller 115 can communicate with the memory devices 130 to perform operations such as reading data, writing data, or erasing data at the memory devices 130, and other such operations. The memory subsystem controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The memory subsystem controller 115 can be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.
The memory subsystem controller 115 can include a processor (processing device) 117 configured to execute instructions stored in a local memory 119.
In the illustrated example, the local memory 119 of the memory subsystem controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the host system 120.
In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, and the like. The local memory 119 can also include read-only memory (ROM) for storing microcode. While the example memory subsystem 110 in FIG. 1 has been illustrated as including the memory subsystem controller 115, in another embodiment of the present disclosure, a memory subsystem 110 may not include a memory subsystem controller 115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem).
In general, the memory subsystem controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices 130. The memory subsystem controller 115 can be responsible for other operations associated with the memory devices 130, such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address. The memory subsystem controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices 130, as well as convert responses associated with the memory devices 130 into information for the host system 120.
The memory subsystem 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory subsystem 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory subsystem controller 115 and decode the address to access the memory devices 130.
In some embodiments, the memory device 130 includes a local media controller 135 and a machine learning operation component 137. The local media controller 135 can operate in conjunction with the memory subsystem controller 115 to perform operations on one or more memory cells of the memory device 130. The machine learning operation component 137 can perform machine learning calculations in conjunction with the memory cells of the memory device 130. In some embodiments, the machine learning operation component 137 can be coupled to, or physically placed adjacent to, the memory cells so that the machine learning operation component 137 can quickly (and with less power) access data needed for the machine learning calculations from the memory cells. In other embodiments, the machine learning operation component 137 can be included in the memory subsystem controller 115 or the memory device 140.
In some other embodiments, the machine learning operation component 137 can be disposed within the memory subsystem 110 while being external to, but coupled with, the memory subsystem controller 115 and the memory devices 130 and 140.
The memory subsystem 110 includes a mode management component 113 that can configure the memory device 130 to operate in a memory operation mode that enables the host system 120 to access the memory cell array, or in a machine learning operation mode that enables the host system 120 to access the machine learning operation component 137. In some embodiments, the memory subsystem controller 115 includes at least a portion of the mode management component 113. For example, the memory subsystem controller 115 can include a processor 117 (processing device) configured to execute instructions stored in local memory 119 for performing the operations described herein. In some embodiments, the mode management component 113 is part of the host system 120, an application, or an operating system.
The mode management component 113 can receive a mode setting signal from the host system 120. In one embodiment, the mode setting signal can indicate the memory operation mode or the machine learning operation mode. In another embodiment, the mode management component 113 can determine the memory operation mode or the machine learning operation mode from the mode setting signal. The mode management component 113 can then set the operating mode of the memory device 130 to the memory operation mode or the machine learning operation mode based on the mode setting signal. In the memory operation mode, the mode management component 113 can expose the memory cell array of the memory device 130 to the host system 120. In the machine learning operation mode, the mode management component 113 can expose the machine learning operation component 137 of the memory device 130 to the host system 120. Additional details regarding the operations of the mode management component 113 are described below.
FIG. 2 is a block diagram of an example memory subsystem 200 in accordance with some embodiments of the present disclosure. In one embodiment, the memory subsystem 200 includes a memory component 210, a bus 230, a switch 242, and a pull-up resistor 244.
The memory component 210 can be a volatile memory device or a non-volatile memory device. In one embodiment, the memory component 210 includes a memory cell array 215, a machine learning operation component 217, a decoding component 220, and a mode selection pin 250. Although the memory component 210 has only one interface (i.e., bus 230) to the memory subsystem controller or the host system, the memory component 210 can operate in two separate modes (a memory operation mode and a machine learning operation mode) based on the signal provided through the mode selection pin 250. For example, in the memory operation mode, the memory component 210 exposes the memory cell array 215 to the host system 120 based on the mode selection signal 255. In the machine learning operation mode, the memory component 210 can expose the machine learning operation component 217 to the host system 120 based on a different mode selection signal 255.
The memory cell array 215 can include memory cells, which are the smallest units for storing data. For example, the memory cells can be SLCs, MLCs, TLCs, and/or QLCs. In some embodiments, the memory cells can store data associated with machine learning calculations or any other data received from the host system.
The machine learning operation component 217 performs machine learning calculations. In some embodiments, the machine learning operation component 217 can be included in the package of the memory component 210 or inside the memory component 210. The machine learning operation component 217 is coupled with the memory cell array 215 to access data needed for performing the machine learning calculations. The machine learning operation component 217 is also coupled with the decoding component 220 to receive an execution signal for initiating a machine learning calculation. In general, machine learning calculations are performed for image recognition or classification. A machine learning calculation involves using a machine learning model to process input data (e.g., image pixel data) and to output a prediction about the input data (e.g., a classification or classification probabilities).
A machine learning model is a mathematical representation that finds patterns in input data and classifies the input data or makes other predictions or decisions. Examples of machine learning models include deep neural networks, convolutional neural networks, and recurrent neural networks. A neural network can include an input layer that receives the input data, an output layer that generates the prediction, and one or more hidden layers between the input and output layers that perform computations on the input data (e.g., multiply-accumulate operations) to generate the prediction. Each layer is composed of multiple neurons, or nodes. Each node can be assigned a value and can be coupled to one or more nodes in a subsequent layer via edges with assigned weight values. Thus, when advancing from one layer to the next (i.e., from the input layer through the hidden layers to the output layer), one or more nodes of the current layer can be coupled with one or more nodes of the next layer, and the value of a node in the next layer corresponds to the result of a multiply-accumulate operation. For example, for each node of the current layer that is coupled to a node of the next layer, the product (i.e., multiplication) of the value assigned to the node of the current layer and the weight assigned to the edge coupling that node to the corresponding node of the next layer is computed, and the products are then added together (i.e., accumulated).
In some embodiments, the machine learning operation component 217 can correspond to digital logic used to perform the machine learning calculations. The digital logic can be implemented using digital logic gates or other such circuitry. For example, the digital logic can be used to implement the machine learning model, receive input data for the machine learning model, and store output data of the machine learning model. In some embodiments, the multiply-accumulate operations of the machine learning calculation can be performed by the digital logic of the machine learning operation component 217. Accordingly, when performing the machine learning calculation, the machine learning operation component 217 can access the machine learning model and the input data stored in the memory cell array 215.
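To make the multiply-accumulate step concrete, the following minimal sketch computes the node values of one layer of a fully connected neural network from the previous layer's node values and edge weights. It is an illustrative software model of the computation described above, not the circuit implemented by the machine learning operation component 217; the function and variable names are hypothetical.

    # Minimal sketch of the multiply-accumulate (MAC) operation described above.
    # Hypothetical names; models one layer of a fully connected network.
    def forward_one_layer(values, weights):
        """values: node values of the current layer (length m).
        weights: weights[i][j] is the weight of the edge from node i of the
        current layer to node j of the next layer (m x n matrix).
        Returns the node values of the next layer (length n)."""
        n = len(weights[0])
        next_values = [0.0] * n
        for j in range(n):                      # each node of the next layer
            acc = 0.0
            for i, v in enumerate(values):      # each coupled node of this layer
                acc += v * weights[i][j]        # multiply, then accumulate
            next_values[j] = acc
        return next_values

    # Example: 3 input nodes feeding 2 output nodes.
    print(forward_one_layer([1.0, 0.5, -2.0],
                            [[0.2, -0.1],
                             [0.4,  0.3],
                             [0.1,  0.6]]))     # -> [0.2, -1.15]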
In some other embodiments, the machine learning operation component 217 can correspond to a resistor array. For example, the multiply-accumulate operations of the machine learning calculation can be performed by the resistor array of the machine learning operation component 217. Each resistor of the machine learning operation component 217 can represent a node in a layer of the machine learning model, and the resistance value of a resistor can be programmed or fine-tuned to correspond to the weight value of an edge between a pair of nodes of the neural network. The inputs and outputs of the resistors can be used to process the multiply-accumulate operations of the machine learning calculation. In some embodiments, the outputs of the last layer of the machine learning model can be coupled with an analog-to-digital converter (ADC) to convert one or more analog signals, which are the values of the last layer of the machine learning model, into digital signals that can be used to represent the output of the machine learning model.
The decoding component 220 decodes an input signal received via the bus 230 and generates a decoded signal. The decoded signal can contain more bits than the input signal. The decoding component 220 is coupled with the bus 230 and, as such, receives input signals from the memory subsystem controller or the host system 120. Examples of such input signals include addresses, data (e.g., data associated with machine learning calculations, any control signals for setting the different modes of the memory component 210, and execution signals for machine learning calculations), and clock signals. The decoding component 220 decodes an input signal and generates a decoded signal (e.g., a decoded address, decoded data, or a decoded clock signal). In addition, the decoding component 220 can be coupled with the mode selection pin 250 and receive the mode selection signal 255 as an input signal. The decoding component 220 can then generate a decoded mode selection signal (not shown). For example, the decoding component 220 can receive a mode selection signal 255 having a value indicating the memory operation mode or the machine learning operation mode (e.g., a low value for the memory operation mode and a high value for the machine learning operation mode). The decoding component 220 can decode the mode selection signal 255 and generate a decoded mode selection signal having two or more bits that enables the host system 120 to access the memory cell array 215 or the machine learning operation component 217.
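As a rough behavioral model of this decoding step, the sketch below expands a one-bit mode selection signal into a decoded selection that routes host access to one of the two components. It follows the example convention just given (low for the memory operation mode, high for the machine learning operation mode); the names are hypothetical, and the real decoding component 220 is hardware, not software.

    # Simplified behavioral model of the decoding component 220 (hypothetical names).
    # Example convention above: 0 -> memory operation mode,
    #                           1 -> machine learning operation mode.
    MEMORY_CELL_ARRAY = 0b01        # decoded select: enable memory cell array 215
    ML_OPERATION_COMPONENT = 0b10   # decoded select: enable ML operation component 217

    def decode_mode_selection(mode_selection_signal: int) -> int:
        """Expand the 1-bit mode selection signal into a 2-bit decoded signal."""
        if mode_selection_signal == 0:
            return MEMORY_CELL_ARRAY
        return ML_OPERATION_COMPONENT

    assert decode_mode_selection(0) == 0b01   # host accesses the memory cell array
    assert decode_mode_selection(1) == 0b10   # host accesses the ML operation component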
The mode selection pin 250 is an input pin of the memory component 210 that is configured to provide a mode selection signal to the decoding component 220 in combination with the switch 242 and the pull-up resistor 244. In some embodiments, the switch 242 can be coupled to ground (GND) at one end and coupled to the pull-up resistor 244 and the mode selection pin 250 at the other end. The switch 242 can receive a control signal (i.e., the mode setting signal 240) that causes the switch to close or open. The mode setting signal 240 can be provided from the host system (e.g., via the memory subsystem controller over the bus 230). The pull-up resistor 244 can be a resistor having a high resistance (e.g., 10 kΩ).
For example, where the mode setting signal 240 corresponds to a control signal that opens the switch 242, the mode selection pin 250 is effectively coupled only to the pull-up resistor 244. Because the pull-up resistor has a high resistance and is coupled to the supply voltage (e.g., Vcc or Vdd), the mode selection pin 250 receives a relatively high voltage signal (e.g., close to 5V) and provides the high voltage signal to the decoding component 220 as the mode selection signal 255 indicating one of the memory operation mode and the machine learning operation mode. On the other hand, where the mode setting signal 240 provided to the switch 242 closes the switch, the mode selection pin 250 is coupled to GND as well as to the pull-up resistor 244. Accordingly, the mode selection pin 250 receives a relatively low voltage signal (e.g., close to 0V) and provides the low voltage signal to the decoding component 220 as the mode selection signal 255 indicating the other of the memory operation mode and the machine learning operation mode. In some other embodiments, instead of using the switch 242, the pull-up resistor 244 and the mode selection pin 250 can be coupled to a voltage source that carries a relatively high voltage signal (as a mode setting signal) for setting the memory component 210 to the memory operation mode and a relatively low voltage signal (as a mode setting signal) for setting the memory component 210 to the machine learning operation mode. The voltage source can be controlled by the host system, or by the memory subsystem controller on behalf of the host system.
In other embodiments, the host system can determine the current operating mode of the memory component 210. For example, the host system can determine the current operating mode by storing the last requested operating mode in local memory of the host system, or by requesting information about the current operating mode from the memory subsystem controller or from the local media controller of the memory component 210. Once the host system determines the current operating mode of the memory component 210, the host system can determine whether to change the operating mode of the memory component 210 depending on the desired operation. Where the host system determines to change the operating mode, the host system can generate the appropriate mode setting signal for the desired mode (the memory operation mode or the machine learning operation mode). In this way, the host system can ensure that the memory component is in the correct operating mode and route data via the bus 230 to the correct component (the memory cell array 215 or the machine learning operation component 217).
The bus 230 can be a data bus in the memory subsystem 200 that carries signals such as addresses, data (e.g., data associated with machine learning calculations, any control signals for setting the different modes of the memory component 210, and execution signals for machine learning calculations), and clock signals for read and/or write operations or machine learning calculation operations to be performed on the memory component 210, depending on the operating mode of the memory component 210. In some embodiments, the bus 230 interfaces the memory subsystem controller and the memory component 210.
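A small sketch of the pin-level convention described above: with the switch open, the pull-up resistor drives the mode selection pin near the supply voltage; with the switch closed, the pin is pulled to ground. The threshold value and function names are hypothetical, following the claims' convention, in which the voltage signal satisfying the threshold condition selects the first (memory operation) mode, and the roughly 2.5V threshold used as an example later in the description.

    # Hypothetical model of the mode selection pin 250 with switch 242 and
    # pull-up resistor 244. Open switch -> pin pulled up near Vcc (about 5V);
    # closed switch -> pin pulled to GND (about 0V).
    VCC = 5.0
    THRESHOLD_V = 2.5   # example decision threshold used later in the description

    def pin_voltage(switch_closed: bool) -> float:
        return 0.0 if switch_closed else VCC

    def mode_from_pin(voltage: float) -> str:
        """Map the pin voltage to an operating mode (one example convention)."""
        return "memory_operation" if voltage > THRESHOLD_V else "machine_learning_operation"

    assert mode_from_pin(pin_voltage(switch_closed=False)) == "memory_operation"
    assert mode_from_pin(pin_voltage(switch_closed=True)) == "machine_learning_operation"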
FIG. 3 is a block diagram of an example memory subsystem 300 in accordance with some other embodiments of the present disclosure. In one embodiment, the memory subsystem 300 includes a memory component 310 and a bus 330. Similar to the bus 230 of FIG. 2, the bus 330 can be a data bus in the memory subsystem 300 that carries signals such as addresses, data, and clock signals for read and/or write operations to be performed on the memory component 310 or for machine learning calculations performed by the machine learning operation component 317.
The memory component 310 can be a volatile memory device or a non-volatile memory device. The memory component 310 can include a memory cell array 315, a machine learning operation component 317, a decoding component 320, and a mode register 350. In some embodiments, the memory component 310 can operate in two separate modes (a memory operation mode and a machine learning operation mode). When the memory component 310 operates in the memory operation mode, the memory component 310 can expose the memory cell array 315 to the host system 120. When the memory component 310 operates in the machine learning operation mode, the memory component 310 can expose the machine learning operation component 317 to the host system 120.
The memory cell array 315 can include memory cells, which are the smallest units for storing data in the memory operation mode. For example, the memory cells can be SLCs, MLCs, TLCs, and/or QLCs.
The machine learning operation component 317 performs machine learning calculations. Similar to the machine learning operation component 217 of FIG. 2, the machine learning operation component 317 can be included in the package of the memory component 310 or inside the memory component 310. In some embodiments, the machine learning operation component 317 can correspond to digital logic used to perform the machine learning calculations. In some other embodiments, the machine learning operation component 317 can correspond to a resistor array.
The decoding component 320 decodes input signals in a manner similar to the decoding component 220 of FIG. 2 and generates decoded signals. The decoding component 320 is coupled with the bus 330 and therefore receives input signals (e.g., addresses, data, clock signals, and the mode selection signal 355) from the memory subsystem controller or the host system. In response, the decoding component 320 can generate decoded signals (e.g., a decoded address, decoded data, a decoded clock signal, or a decoded mode selection signal (not shown)). For example, the decoding component 320 can receive the mode selection signal 355 (e.g., a low value for the memory operation mode and a high value for the machine learning operation mode) and provide a decoded mode selection signal of two or more bits that enables the host system to access the memory cell array 315 or the machine learning operation component 317.
The mode register 350 operates to configure the mode of the memory component 310. The mode register 350 can be coupled with the bus 330, or with a separate memory control bus (a bus carrying control signals in the memory subsystem, not shown), to receive control signals from the memory subsystem controller or the host system. In some embodiments, the control signal can be the mode setting signal 340 (e.g., a mode register set (MRS) command). The MRS command can indicate in which mode (the memory operation mode or the machine learning operation mode) the memory component 310 should operate. In response, the mode register 350 can generate the mode selection signal 355. In some embodiments, the mode selection signal 355 can correspond to a bit that indicates the memory operation mode or the machine learning operation mode to the decoding component 320.
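The following sketch models the mode register 350 responding to a mode setting command. The command encodings are invented for illustration (the text does not define an MRS bit layout), and a real mode register is a hardware register, not a class.

    # Hypothetical model of the mode register 350. The MRS command encodings
    # below are invented for illustration; the text does not define a bit layout.
    MRS_MEMORY_OPERATION = 0x0
    MRS_MACHINE_LEARNING = 0x1

    class ModeRegister:
        def __init__(self):
            self.mode_selection_signal = 0   # bit provided to decoding component 320

        def apply_mrs(self, mrs_command: int) -> None:
            """Latch the mode indicated by a mode register set (MRS) command."""
            if mrs_command == MRS_MACHINE_LEARNING:
                self.mode_selection_signal = 1   # machine learning operation mode
            else:
                self.mode_selection_signal = 0   # memory operation mode

    reg = ModeRegister()
    reg.apply_mrs(MRS_MACHINE_LEARNING)
    assert reg.mode_selection_signal == 1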
In one embodiment, the host system can determine the current operating mode of the memory component 310 before providing a mode setting signal. As described above with respect to FIG. 2, the host system can confirm the current operating mode to ensure that the correct data is routed to the correct component (the memory cell array 315 or the machine learning operation component 317). If the memory component 310 is not operating in the desired mode, the host system can provide the appropriate mode setting signal to change the operating mode of the memory component 310.
In another embodiment, the memory component 310 can include both a mode register 350 and a mode selection input pin (e.g., the mode selection pin 250 of FIG. 2). The mode selection input pin is configured similarly to the mode selection pin 250 (i.e., coupled with a switch and a pull-up resistor). In such cases, the decoding component 320 can be coupled with both the mode selection input pin and the mode register 350, in a manner similar to that described above with respect to FIGS. 2 and 3, and can receive two separate mode selection signals. For example, for the memory operation mode, the mode register 350 can receive an MRS command indicating that mode and provide the corresponding mode selection signal to the decoding component 320. At the same time, the switch associated with the mode selection input pin can be opened so that the mode selection input pin causes a high voltage signal (e.g., close to 5V) to be provided to the decoding component 320. On the other hand, for the machine learning operation mode, when the mode register 350 receives another MRS command for the machine learning operation mode and provides the corresponding mode selection signal to the decoding component 320, the switch can be closed so that the mode selection input pin causes a low voltage signal (e.g., close to 0V) to be provided to the decoding component 320.
FIG. 4 is a flow diagram of an example method 400 of setting a mode of a memory component in accordance with some embodiments of the present disclosure. The memory component can include an array of memory cells and a machine learning operation component. The memory cell array stores data. The machine learning operation component performs machine learning calculations in conjunction with the memory cell array.
The method 400 can be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 400 is performed by the mode management component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.
When performing the method 400, the processing device can be coupled with a memory component of the memory subsystem. In some embodiments, such a memory component can include a memory cell array and a machine learning operation component. The memory cells of the memory cell array can store any data received from the processing device. The machine learning operation component can perform machine learning calculations in conjunction with the memory cell array.
Within the memory component, the machine learning operation component can be coupled to, or physically placed adjacent to, the memory cell array so that the machine learning operation component can quickly (and with less power) access data needed for the calculations from the memory cells of the array.
At operation 410, the processing device receives a mode setting signal from the host system. The mode setting signal can indicate the memory operation mode of the memory component. For example, the mode setting signal can correspond to a sequence of binary digits indicating the memory operation mode (as opposed to the machine learning operation mode). In another example, the mode setting signal can correspond to a control signal that causes a voltage signal satisfying a threshold condition (e.g., a voltage signal higher (or lower) than 2.5V) to be supplied to the memory component. For example, such a voltage signal having a voltage level higher than 2.5V can indicate that the operating mode of the memory component is the memory operation mode. On the other hand, if the mode setting signal does not result in a voltage signal satisfying the threshold condition, the mode setting signal can be determined to indicate the machine learning operation mode. Additional details of the mode setting signal are discussed below in relation to operation 420.
At operation 420, the processing device sets the memory component to the memory operation mode based on the mode setting signal. That is, in the memory operation mode, the processing device can expose the memory cell array of the memory component to the host system. In some embodiments, the host system can provide the mode setting signal to the memory subsystem (i.e., to the processing device), and the processing device can use the mode setting signal to cause the memory component to operate in the memory operation mode.
In some embodiments, the memory component can include a mode selection component and a decoding component. The mode selection component can provide a mode selection signal to select the first mode or the second mode for the memory component. The processing device can cause the mode selection component to provide the appropriate mode selection signal based on the mode setting signal. For example, the mode selection component can receive a mode setting signal indicating the memory operation mode and, based on the mode setting signal, provide a mode selection signal selecting the memory operation mode (and likewise for the machine learning operation mode). In some embodiments, the mode selection component can provide a mode selection signal representing '0' for the memory operation mode and '1' for the machine learning operation mode. In some other embodiments, the mode selection signal can be a voltage signal, where, for example, a voltage signal having a voltage value higher than 2.5V represents selection of the memory operation mode (and a voltage signal having a voltage value equal to or lower than 2.5V can represent the machine learning operation mode). Accordingly, depending on the mode selection signal, the mode selection component can configure the decoding component to enable the host system to access the memory cell array (e.g., in the memory operation mode) or the machine learning operation component (e.g., in the machine learning operation mode).
Thus, the mode selection component can indicate the selected mode to the decoding component. In some embodiments, the mode selection component may include a dedicated pin of the memory component (i.e., an input pin of the memory component dedicated to mode selection) to provide the mode selection signal to the decoding component. For example, the dedicated pin can be coupled with a pull-up resistor. The pull-up resistor can be arranged on the memory subsystem outside the memory component. The pull-up resistor may be a resistor with high resistance (for example, 10kΩ). The pull-up resistor can cause the dedicated pin to provide the mode selection signal to the decoding component. In some embodiments, the dedicated pin can be coupled to the pull-up resistor and to a switch, which can couple the dedicated pin and the pull-up resistor to ground (GND). The pull-up resistor can be coupled to the dedicated pin at one end and to the supply voltage (Vcc) at the other end (see FIG. 2 or 3). The processing device may provide the mode setting signal as a control signal for opening or closing the switch. When the switch is open, the dedicated pin is effectively coupled only to the pull-up resistor. Because little or no current flows through the pull-up resistor in this case, the voltage drop across the pull-up resistor is relatively small, so that a voltage close to 5V is supplied to the dedicated pin. On the other hand, the processing device may provide a mode setting signal to close the switch. When the switch is closed, the dedicated pin is coupled to ground as well as to the pull-up resistor. In this case, the pull-up resistor passes a small amount of current through the closed switch to ground, so that a low voltage of about 0V is supplied to the dedicated pin. Therefore, when the processing device provides a mode setting signal that opens the switch, a voltage signal satisfying the threshold condition (for example, a high voltage signal having a voltage level higher than 2.5V) is supplied to the dedicated pin via the pull-up resistor, and the operation mode is set to the memory operation mode. The dedicated pin can provide the high voltage signal as a mode selection signal to indicate the memory operation mode to the decoding component. When the processing device provides a different mode setting signal that closes the switch, the resulting voltage signal (for example, a low voltage signal having a voltage value close to 0V) does not satisfy the threshold condition, and the dedicated pin of the memory component receives a low voltage signal with a voltage value close to 0V. The dedicated pin can provide the low voltage signal as a mode selection signal to indicate the machine learning operation mode to the decoding component. In other embodiments, the mode selection component may include a mode register. The mode register may be coupled to the memory control bus (the bus carrying control signals in the memory subsystem) or the data bus (for example, the bus 230 in FIG. 2 or the bus 330 in FIG. 3) to receive a control signal, such as the mode setting signal (for example, a mode register set (MRS) command).
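The pull-up/switch behavior just described can be modeled in a few lines of C. This is only an illustrative sketch of the voltage logic; the 2.5V threshold comes from the examples above, while the function names and the idealized 5V/0V levels are assumptions.

```c
#include <stdbool.h>

#define VCC_VOLTS        5.0  /* supply rail behind the pull-up resistor */
#define GND_VOLTS        0.0  /* level seen when the switch is closed    */
#define MODE_THRESHOLD_V 2.5  /* threshold condition from the examples   */

/* Idealized pin voltage as a function of the switch control signal:
 * switch open -> pulled up near Vcc; switch closed -> pulled to ground. */
static double dedicated_pin_voltage(bool switch_closed)
{
    return switch_closed ? GND_VOLTS : VCC_VOLTS;
}

/* A voltage above the threshold selects the memory operation mode;
 * otherwise the machine learning operation mode is indicated. */
static bool memory_mode_selected(double pin_volts)
{
    return pin_volts > MODE_THRESHOLD_V;
}
```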
The MRS command can include certain command signals, such as /CS (chip select), /RAS (row address strobe), /CAS (column address strobe), and /WE (write enable), to indicate which mode the memory component should be set to. In response to receiving the MRS command, the mode register may generate a corresponding mode selection signal (for example, a control signal indicating the memory operation mode or the machine learning operation mode in the form of one or more bits). Therefore, the mode register can provide the mode selection signal to the decoding component. The decoding component of the memory component can decode the received mode selection signal and generate a decoded mode selection signal (for example, a decoded mode selection signal containing more bits than the mode selection signal), which enables the host system to access the memory cell array or the machine learning operation component. In some embodiments, the decoded mode selection signal may be two bits (while the mode selection signal has one bit, '0' for the memory operation mode and '1' for the machine learning operation mode). For the memory operation mode, the decoded mode selection signal can be provided to the memory cell array or the local controller of the memory component, so that the host system can access the memory cell array. For the machine learning operation mode, the decoded mode selection signal can be provided to the machine learning operation component or the local controller of the memory component, so that the host system can access the machine learning operation component. Thus, the mode selection signal from the mode selection component can be used to enable the host system to access either the memory cell array of the memory component or the machine learning operation component. At operation 430, the processing device receives another mode setting signal from the host system. The mode setting signal may indicate the machine learning operation mode of the memory component. For example, the mode setting signal may be an MRS command predefined for the machine learning operation mode. In another example, the mode setting signal may cause a voltage signal having a voltage level lower than, for example, 2.5V (that is, a voltage signal that does not satisfy the threshold condition of exceeding 2.5V) to be supplied to the memory component via the dedicated pin of the memory component. At operation 440, the processing device sets the memory component to the machine learning operation mode based on the mode setting signal received at operation 430. The mode setting signal may indicate the machine learning operation mode. Therefore, the processing device may expose the machine learning operation component of the memory component to the host system. Similar to operation 420, the processing device may provide an MRS command indicating the machine learning operation mode to the mode selection component (i.e., the mode register 350 in FIG. 3) of the memory component so that the memory component can be set to the machine learning operation mode. In some other embodiments, the processing device may transmit a control signal for closing the switch (i.e., the switch 242 in FIG. 2) coupled with the mode selection component (i.e., the mode selection input pin 250 in FIG. 2).
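As a minimal sketch of the decoding behavior described above, the following C fragment expands a one-bit mode selection signal into a wider decoded signal with one enable per destination; the bit assignments are assumptions for illustration only.

```c
#include <stdint.h>

/* Hypothetical decoded mode selection signal: one enable bit per
 * destination, so the decoded signal has more bits than the input. */
#define ENABLE_MEMORY_CELL_ARRAY (1u << 0) /* host accesses the cell array   */
#define ENABLE_ML_COMPONENT      (1u << 1) /* host accesses the ML component */

/* '0' selects the memory operation mode; '1' selects the machine
 * learning operation mode, per the encoding described above. */
static uint32_t decode_mode_select(uint32_t mode_select_bit)
{
    return (mode_select_bit == 0u) ? ENABLE_MEMORY_CELL_ARRAY
                                   : ENABLE_ML_COMPONENT;
}
```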
Based on the mode setting signal, the mode selection component may provide an appropriate mode selection signal (in this case, a mode selection signal having a voltage level close to 0V, selecting the machine learning operation mode) to the decoding component. Therefore, the mode selection component may configure the decoding component to enable the host system to access the machine learning operation component (for example, in the machine learning operation mode). FIG. 5 is a flowchart of an example method 500 of operating a memory component in different modes according to some embodiments of the present disclosure. The memory component may include an array of memory cells and a machine learning operation component. The memory cell array stores data. The machine learning operation component operates in conjunction with the memory cell array to perform machine learning calculations. The method 500 can be performed by processing logic that can include hardware (for example, a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (for example, instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 500 is performed by the mode management component 113 of FIG. 1. Although shown in a specific sequence or order, unless otherwise specified, the order of the processes can be modified. Therefore, the described embodiments should be understood as examples only, and the described processes may be performed in a different order, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are also possible. At operation 510, the processing device receives a mode setting signal from the host system. As described above with respect to FIG. 4, the mode setting signal may indicate a memory operation mode or a machine learning operation mode. In some embodiments, the mode setting signal indicating the memory operation mode and the mode setting signal indicating the machine learning operation mode may correspond to control signals that cause voltage signals with different voltage levels (e.g., approximately 5V and 0V, respectively) to be supplied to the memory component via the dedicated pin of the memory component. In some other embodiments, the mode setting signals indicating the memory operation mode and the machine learning operation mode may correspond to different mode register set (MRS) commands. At operation 520, in response to receiving the mode setting signal, the processing device causes the memory component to operate in one of the memory operation mode or the machine learning operation mode. For example, the host system may communicate with the processing device to perform image recognition on a picture of an animal using a deep neural network model to determine the type of animal species and/or the probability that the subject of the picture is that type of animal species. To initiate image recognition, the host system may first cooperate with the memory component (i.e., the memory cell array in the memory component) to store the pixel data of the picture in the memory operation mode. Then, the host system can request the memory component (i.e., the machine learning operation component) to perform image recognition on the pixel data.
The result of the image recognition can then be provided in the machine learning operation mode (for example, the type of animal species is "cat" and/or the probability that the subject of the picture is a "cat" is 0.97). Therefore, the processing device may receive a mode setting signal for the memory operation mode from the host system at operation 510. Next, at operation 520, the processing device may cause the memory component to operate in the memory operation mode and store the pixel data. Subsequently, the processing device may receive a mode setting signal for the machine learning operation mode from the host system. Next, at operation 520, the processing device may cause the memory component to operate in the machine learning operation mode and perform machine learning calculations. In some embodiments, the processing device may only receive a mode setting signal indicating the machine learning operation mode for performing machine learning calculations. In such cases, the host system can provide the processing device with a machine learning calculation execution request along with the input data. In the memory operation mode, at operation 530, the processing device receives input data for machine learning calculations from the host system. In some embodiments, the input data may be in the form of pixel data representing a picture to be processed for image recognition via machine learning calculations. In addition, in the memory operation mode, the processing device routes the input data to memory cells in the memory cell array of the memory component for write operations. In the machine learning operation mode, at operation 540, the processing device receives an execution signal for executing machine learning calculations (for example, image recognition) from the host system. In some embodiments, the execution signal may indicate the type of model (for example, a deep neural network model) and the input data (for example, a picture of an animal) to be used for the machine learning calculation. In some other embodiments, the execution signal may additionally include the input data. Also, at operation 540, the processing device routes the execution signal to the machine learning operation component of the memory component. In response, the machine learning operation component can initiate a machine learning calculation by accessing and loading the indicated model and input data from the memory cell array based on the execution signal. In some embodiments, the processing device may alternatively provide the addresses of the model and the input data together with the execution signal, or as part of the execution signal, to the machine learning operation component. After processing the input data through the model (i.e., performing multiply-accumulate operations on the input data), the machine learning operation component can provide the output data (e.g., the category of the input data (e.g., the type of animal species is "cat") and/or the probability that the input data belongs to the category (for example, 0.97)) to the output buffer of the memory component associated with the machine learning operation component. Therefore, the processing device can detect that the machine learning operation component has generated output data from the machine learning calculation by accessing the output buffer. Then, the processing device can provide the output data to the host system.
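Putting operations 510-540 together, the image recognition example above could look roughly like the following host-side C sequence; every function here is a hypothetical placeholder for the corresponding signal, not an API from this disclosure.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical host-side primitives for the signals described above. */
extern void set_mode(uint32_t mode);                    /* mode setting signal  */
extern void write_cells(const uint8_t *data, size_t n); /* memory-mode write    */
extern void send_execute(uint32_t model_id);            /* execution signal     */
extern int  output_ready(void);                         /* output buffer status */
extern void read_output(uint8_t *out, size_t n);        /* fetch output data    */

void classify_image(const uint8_t *pixels, size_t n_pixels,
                    uint8_t *result, size_t n_result)
{
    set_mode(0u);                   /* memory operation mode              */
    write_cells(pixels, n_pixels);  /* store pixel data in the cell array */

    set_mode(1u);                   /* machine learning operation mode    */
    send_execute(1u);               /* e.g., a deep neural network model  */

    while (!output_ready())         /* wait for the multiply-accumulate   */
        ;                           /* results to reach the output buffer */
    read_output(result, n_result);  /* e.g., category "cat", p = 0.97     */
}
```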
In the case where the execution signal contains the input data, the machine learning operation component may first store the input data in the memory cell array before executing the machine learning calculation. In another embodiment, in response to receiving another mode setting signal for the memory operation mode from the host system while operating in the machine learning operation mode, the memory component may switch to the memory operation mode after operating in the machine learning operation mode. For example, if the estimated execution time does not meet a threshold condition (for example, less than 2 milliseconds), the processing device may receive another mode setting signal for the memory operation mode so that the processing device may expose the memory cell array to the host system while waiting for the machine learning calculation to be completed. During the waiting period, in the memory operation mode, the processing device can enable the host system to access the execution code or operating system stored in the memory cell array. After detecting that the output data of the machine learning calculation is ready, the processing device can immediately notify the host system of the completion. In response, the host system can provide a mode setting signal for the machine learning operation mode to the processing device so that the host system can access the output data. The processing device may detect the completion of the machine learning calculation by determining that the output data is stored in the output buffer associated with the machine learning operation component. FIG. 6 illustrates an example machine of a computer system 600 within which a set of instructions can be executed for causing the machine to perform any one or more of the methodologies discussed herein. In some embodiments, the computer system 600 may correspond to a host system (for example, the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (for example, the memory subsystem 110 of FIG. 1), or may be used to perform the operations of a controller (for example, to execute an operating system to perform operations corresponding to the mode management component 113 of FIG. 1). In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. In addition, although a single machine is described, the term "machine" should also be considered to encompass any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or RDRAM, etc.), a static memory 606 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 618, which communicate with each other via a bus 630. The processing device 602 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit, and so on. More specifically, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The processing device 602 may also be one or more dedicated processing devices, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, and so on. The processing device 602 is configured to execute instructions 626 for performing the operations and steps discussed herein. The computer system 600 may additionally include a network interface device 608 to communicate over a network 620. The data storage system 618 may include a machine-readable storage medium 624 (also referred to as a computer-readable medium) on which is stored one or more sets of instructions 626 or software embodying any one or more of the methodologies or functions described herein. The instructions 626 may also completely or at least partially reside in the main memory 604 and/or the processing device 602 during execution thereof by the computer system 600, the main memory 604 and the processing device 602 also constituting machine-readable storage media. The machine-readable storage medium 624, the data storage system 618, and/or the main memory 604 may correspond to the memory subsystem 110 of FIG. 1. In one embodiment, the instructions 626 include instructions for implementing functionality corresponding to a mode management component (e.g., the mode management component 113 of FIG. 1). Although the machine-readable storage medium 624 is shown as a single medium in the example embodiment, the term "machine-readable storage medium" should be considered to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be considered to include any medium capable of storing or encoding a set of instructions for execution by a machine and causing the machine to perform any one or more of the methodologies of the present disclosure. Therefore, the term "machine-readable storage medium" should be considered to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities.
These quantities usually, though not necessarily, take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. However, it should be borne in mind that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer-readable storage medium, such as, but not limited to, any type of disk (including floppy disks, optical disks, CD-ROMs, and magneto-optical disks), read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method described. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It should be understood that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions that can be used to program a computer system (or other electronic device) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer)-readable storage medium, such as a read-only memory ("ROM"), a random access memory ("RAM"), disk storage media, optical storage media, flash memory components, and so forth. In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made to the present disclosure without departing from the broader spirit and scope of the embodiments of the present disclosure as set forth in the following claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense.
A system-on-chip (SoC) 30 has a non-debug domain and a debug domain. A debug framework provides non-debug domain system resets. This is implemented by a system debug trigger interface (DTI) 50, such as a cross-trigger interface, triggering a reset control unit (RCU) 38 to reset the non-debug domain. The non-debug domain, e.g., all non-debug intellectual property (IP) blocks, is reset to a known state before debugging without affecting the debug domain, e.g., debug logic already initialized for debugging. By leveraging the DTI and its associated signaling, the RCU resets all endpoints of SoC 30 except for the RCU and the debug domain. The DTI uses registers to issue non-debug domain system reset requests to the RCU and to monitor the status of the non-debug domain system reset received from the RCU. Input/output connections between the DTI and the RCU use inverters. Via the registers, a debugger 24 initiates a non-debug domain system reset and polls the DTI.
WHAT IS CLAIMED IS: 1. A system-on-chip comprising: a reset control unit configured to reset a non-debug domain; and a debug domain that includes a debug trigger interface connected to the reset control unit, wherein the debug trigger interface is configured to trigger the reset control unit to reset the non-debug domain. 2. The system-on-chip of claim 1, wherein the debug trigger interface has a trigger output connected to a non-debug domain system reset request of the reset control unit. 3. The system-on-chip of claim 2, wherein the trigger output is connected to the non-debug domain system reset request via an inverter. 4. The system-on-chip of any preceding claim, wherein the debug trigger interface is further configured to monitor a status of the non-debug domain system reset. 5. The system-on-chip of claim 4, wherein the debug trigger interface has a trigger input connected to a non-debug domain system reset status of the reset control unit. 6. The system-on-chip of claim 5, wherein the trigger input is connected to the non-debug domain system reset status via an inverter. 7. The system-on-chip of any preceding claim, wherein the debug trigger interface includes an application trigger register configured to cause the debug trigger interface to issue a non-debug domain system reset request to the reset control unit. 8. The system-on-chip of any preceding claim, wherein the debug trigger interface includes an application trigger in status register configured to indicate a status of the non-debug domain system reset received from the reset control unit. 9. The system-on-chip of any preceding claim, wherein the debug trigger interface includes an application pulse register configured to cause the debug trigger interface to issue a non-debug domain system reset request to the reset control unit for a defined time. 10. A method for enabling a non-debug domain system reset of a system, wherein the system includes a debug trigger interface connected to a reset control unit, the method comprising: issuing a non-debug domain system reset request from the debug trigger interface to the reset control unit, such that the debug trigger interface triggers the reset control unit to reset a non-debug domain of the system. 11. The method of claim 10, wherein the non-debug domain system reset request is issued from a trigger output of the debug trigger interface connected to a non-debug domain system reset request of the reset control unit. 12. The method of claim 11, further comprising inverting the non-debug domain system reset request issued to the reset control unit. 13. The method of any one of claims 10 to 12, wherein the debug trigger interface includes an application trigger register, and the debug trigger interface issues the non-debug domain system reset request based on a state of the application trigger register. 14. The method of any one of claims 10 to 13, further comprising monitoring, by the debug trigger interface, a non-debug domain system reset status. 15. The method of claim 14, wherein the monitoring includes observing a status of a trigger input of the debug trigger interface connected to a non-debug domain system reset status of the reset control unit. 16. The method of claim 15, further comprising inverting a non-debug domain system reset status signal received by the trigger input from the reset control unit. 17.
The method of any one of claims 14 to 16, wherein the debug trigger interface includes an application trigger in status register, and the debug trigger interface sets a state of the application trigger in status register based on a non-debug domain system reset status received from the reset control unit. 18. A non-transitory media encoded with logic that includes code for execution and, when executed by a processor, operable to perform operations comprising: defining a non-debug domain system reset request channel between a debug trigger interface and a reset control unit; and configuring the debug trigger interface to trigger the reset control unit to reset the non-debug domain. 19. The non-transitory media of claim 18, the operations further comprising monitoring a status of the non-debug domain system reset. 20. The non-transitory media of claim 19, the operations further comprising: setting a state of an application trigger register to cause the debug trigger interface to issue a non-debug domain system reset request to the reset control unit; and polling a state of an application trigger in status register, wherein the state of the application trigger in status register is based on a non-debug domain system reset status received from the reset control unit.
DEBUG TRIGGER INTERFACE FOR NON-DEBUG DOMAIN SYSTEM RESET
TECHNICAL FIELD
[0001] The present disclosure relates generally to system-on-chips (SoCs), and more particularly, to debug environments for SoCs.
BACKGROUND
[0002] Debuggers are specialized software (and its associated supporting hardware) that detect and correct any errors (bugs) in a target system, such as a system-on-chip. Debuggers prefer to provide a clean debug session by bringing the target system to a known state. This is best accomplished by resetting the target system's non-debug domain to a known state before beginning a debug session without affecting the target system's debug domain. Although existing non-debug domain system reset mechanisms have been generally adequate for their intended purposes, they have not been entirely satisfactory in all respects.
BRIEF DESCRIPTION OF DRAWINGS
[0003] The present disclosure is best understood from the following detailed description when read with the accompanying figures. It is emphasized that, in accordance with the standard practice in the industry, various features are not drawn to scale and are used for illustration purposes only. In fact, the dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion. [0004] FIGURE 1 is a schematic block diagram of an exemplary debug environment according to various aspects of the present disclosure. [0005] FIGURE 2 is a schematic block diagram of an exemplary debug trigger interface system reset mechanism, which can be implemented in the debug environment of FIGURE 1, according to various aspects of the present disclosure. [0006] FIGURE 3 is a flowchart of an exemplary method that can be implemented for providing a non-debug domain reset in a debug environment, such as the debug environment of FIGURE 1, according to various aspects of the present disclosure. [0007] FIGURE 4 is a flowchart of an exemplary method that can be implemented for providing a non-debug domain system reset in a debug environment, such as the debug environment of FIGURE 1, according to various aspects of the present disclosure.
OVERVIEW OF EXAMPLE EMBODIMENTS
[0008] A system, such as a system-on-chip, has a non-debug domain and a debug domain. The debug domain has a debug framework that enables a debugger-driven, non-debug domain system reset. The system includes a reset control unit, and a debug trigger mechanism that includes a debug trigger interface (DTI) connected to the reset control unit. The DTI is configured to trigger the reset control unit to reset the non-debug domain. The DTI may be further configured to monitor a status of the non-debug domain system reset. [0009] In some implementations, the DTI has a trigger output connected to a non-debug domain system reset request of the reset control unit, and a trigger input connected to a non-debug domain system reset status of the reset control unit. The trigger output may be connected to the non-debug domain system reset request via an inverter, and the trigger input may be connected to the non-debug domain system reset status via an inverter. The DTI can include an application trigger register configured to cause the DTI to issue (assert) a non-debug domain system reset request to the reset control unit. The DTI can further include an application trigger in status register configured to indicate a status of the non-debug domain system reset.
A debugger connected to the system uses the application trigger register and the application trigger in status register for non-debug domain system reset assertion operations. For example, the debugger can program the application trigger register to a state that causes the DTI to issue (assert) the non-debug domain system reset request to the reset control unit. The debugger can also monitor the application trigger in status register to determine the status of the non-debug domain system reset.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0010] Debuggers are specialized software (and its associated supporting hardware) that detect and correct any errors (bugs) in a target system. Debuggers prefer to provide a clean debug session by bringing all intellectual property (IP) blocks of the target system, such as a system-on-chip (SoC), to a known state. This is best accomplished by resetting the target system's non-debug domain (for example, all non-debug IP blocks) to a known state before beginning a debug session without affecting the target system's debug domain, such as any debug logic that has already been initialized to perform debug operations. [0011] Debuggers typically interact with a debug framework, such as the ARM® CoreSight™ debug and trace framework, associated with the target system to accomplish desired debug operations. Currently, SoC debug frameworks do not support debuggers directly providing a non-debug domain system reset. For example, a SoC configured with the ARM® CoreSight™ debug and trace framework may include a debug access port that has direct control/status signaling and handshake mechanisms for enabling debug domain power-up, non-debug domain power-up, and debug domain reset, but not non-debug domain system reset. In some configurations, an ARM® CoreSight™ debug and trace system model requires a core processor of the SoC to perform a specific process that enables a debugger to indirectly initiate a non-debug domain system reset. However, indirect non-debug domain system resets can cause problems since the operations associated with indirect system resets typically involve components that are affected by the system reset, such that the operations cannot be completed without protocol violations and/or errors. [0012] To address such issues, the following disclosure proposes a target system debug framework configured with a non-debug domain system reset mechanism that enables a debugger to bring a non-debug domain of the target system to a known state. The debug framework leverages a debug trigger interface (such as a cross-trigger interface provided by the ARM® CoreSight™ debug and trace framework) to create control/status signaling and handshake mechanisms for enabling the non-debug domain system reset. Different embodiments may have different advantages, and no particular advantage is necessarily required of any of the embodiments described herein. [0013] FIGURE 1 is a schematic block diagram of an exemplary debug environment 10 for performing debugging and tracing operations according to various aspects of the present disclosure. As described below, debug environment 10 provides a debugger-driven, non-debug domain system reset. FIGURE 1 has been simplified for the sake of clarity to better understand the inventive concepts of the present disclosure.
Additional features can be added in debug environment 10, and some of the features described can be replaced or eliminated in debug environment 10. [0014] In FIGURE 1, a debug host system 20 (also referred to as a debug host or an external debugger) includes a processor 22 that can execute software, such as a debugger 24, for debugging and tracing various components of a target system connected thereto. Debugger 24 can communicate with a non-debug domain and a debug domain associated with the target system to facilitate the debugging and tracing operations. In various implementations, debug host system 20 sends various debug and trace requests to a debug and trace system associated with the target system, which can execute such requests and send information related to such requests to debug host system 20. [0015] For purposes of the following discussion, the target system is depicted as a system-on-chip (SoC) 30, where components of the target system are integrated in a single chip. SoC 30 includes a system interconnect 32 that interconnects various components of SoC 30. For example, SoC 30 may include a processor 34, a processor 36, a memory 37, and various other components connected to system interconnect 32, such that processor 34, processor 36, memory 37, and the various other components can communicate with one another via system interconnect 32. In the depicted embodiment, processor 36 is a digital signal processor (DSP). The various components of SoC 30 can provide various systems, including but not limited to, memory systems, video/graphics systems, audio systems, power management systems, security systems, input/output systems, wired/wireless connectivity systems, or a combination thereof. [0016] A reset control unit (RCU) 38 is configured to reset SoC 30 and/or various components of SoC 30, such as processor 34 and/or processor 36, upon a hardware-triggered event and/or a software-triggered event. Reset control unit 38 can control how SoC 30, and its various components, enter and exit reset, including a hardware reset, a system reset, a processor-only reset, and/or another type of reset. Reset generally refers to a known, initial state of SoC 30 and/or various components of SoC 30, and system reset (also referred to as a non-debug domain system reset) generally refers to setting all components of SoC 30, except reset control unit 38 and a debug domain of SoC 30, to their associated default state. To initiate a system reset, reset control unit 38 can communicate reset signaling via system interconnect 32 to processor 34, processor 36, and/or other components of SoC 30. In various implementations, reset control unit 38 includes a reset control for triggering a non-debug domain system reset. The reset control may be implemented as a control register that includes a bit (or bits) that controls asserting/deasserting a non-debug domain system reset. As described further below, debug environment 10 is configured such that debugger 24 can communicate a non-debug domain system reset request to reset control unit 38, and thus debugger 24 can initiate a non-debug domain system reset of SoC 30. The non-debug domain system reset can set all (or portions, in some embodiments) of the non-debug domain of SoC 30 to a known, default state without affecting the debug domain. [0017] A debug and trace system 40 of SoC 30 enables debug host system 20 to access and control various components of SoC 30 to accomplish debugging and tracing of various components of SoC 30.
In various implementations, debug and trace system 40 can be based on the ARM® CoreSight™ debug and trace framework, which is modified as described herein to achieve a non-debug domain system reset from a debug domain of SoC 30. Debug and trace system 40 includes a debug access port (DAP) 42 that provides access to SoC 30. In various implementations, debug access port 42 may be implemented with a joint test action group (JTAG) debug port, a serial wire debug (SWD) port, another suitable debug port, or a combination thereof. Debug host system 20 (particularly, debugger 24) can connect to and communicate with SoC 30 through debug access port 42 to perform debugging and tracing operations on SoC 30, and debug and trace system 40 can communicate debug information, trace information, and/or other information to debug host system 20 through debug access port 42. For example, in the depicted embodiment, debug host system 20 can access system resources (such as processor 34, processor 36, and/or system memory 37) through system interconnect 32 connected to debug access port 42; debug host system 20 can access debug components and trace components of SoC 30 (making up debug and trace system 40) through a debug bus 43 connected to debug access port 42; and debug host system 20 can access trace data and trace information through a trace bus 44 connected to debug access port 42. Debug bus 43 is configured to connect debug and trace components of SoC 30, facilitating transfer of debug data and debug information across SoC 30. Trace bus 44 is configured to connect various trace components of SoC 30, facilitating transfer of trace data and trace information across SoC 30. In various implementations, debug bus 43 can be an Advanced Peripheral Bus (APB) or other suitable debug bus, and trace bus 44 can be an Advanced Trace Bus (ATB) or other suitable trace bus. In various implementations, debug and trace system 40 can include a debug memory 45 that stores information about each debug component and/or trace component connected to debug bus 43. For example, a read-only memory (ROM) table can store a location of each debug component and/or trace component (such as processor 34, processor 36, debug access port 42, and other debug and/or trace components) connected to debug bus 43. [0018] Debug and trace system 40 further includes a debug trigger mechanism (such as an embedded cross trigger provided by the ARM® CoreSight™ debug and trace framework) for communicating debug events across SoC 30. For example, processor 34, processor 36, and various other components of debug and trace system 40 can communicate debug events to one another via the debug trigger mechanism. Debug events can include instruction breakpoints, data breakpoints, watchpoints, and other messaging associated with debugging. In various implementations, the debug trigger mechanism enables communicating debug events to various endpoints of SoC 30 for halting processor cores and/or triggering trace capture.
The debug trigger mechanism includes various debug trigger interfaces (DTIs) 46 (such as cross trigger interfaces provided by the ARM® CoreSight™ debug and trace framework) and a debug trigger interface interconnect 48 (such as a cross trigger matrix provided by the ARM® CoreSight™ debug and trace framework) that interconnects the various DTIs 46. In FIGURE 1, a DTI 46a interconnects processor 34 with DTI interconnect 48, a DTI 46b interconnects processor 36 with DTI interconnect 48, and a DTI 46c interconnects various trace components of debug and trace system 40 with DTI interconnect 48. Each DTI enables its associated SoC component (such as processor 34, processor 36, a debug component, or a trace component) to broadcast and respond to debug events (triggers) on DTI interconnect 48, where DTI interconnect 48 broadcasts debug events from one DTI to the other DTIs. For example, each DTI maps trigger event inputs received from its associated system (such as processor 34 for DTI 46a) onto channels associated with DTI interconnect 48, and maps channel inputs received from DTI interconnect 48 to trigger event outputs for its associated system. Each DTI can also include associated DTI registers (not shown), which debugger 24 can use to generate internal trigger event inputs for DTIs 46, thus facilitating software-triggered events. [0019] Typically, debug and trace system 40 implements debug trigger interfaces (such as DTI 46a, DTI 46b, and DTI 46c) and their associated signaling mechanisms for communicating debug events and controlling debug actions corresponding to such debug events. The present disclosure recognizes that, since a debug trigger interface is connected to a debug domain reset only and thus is not affected by any non-debug domain system reset, the debug trigger interface and its associated signaling mechanisms can be configured to provide a non-debug domain system reset, which is not a typical debug event or a typical debug action. In particular, debug environment 10 can leverage a debug trigger interface and its associated signaling to enable debugger 24 to request and monitor a non-debug domain system reset of a target system, such as SoC 30. For example, in FIGURE 1, the debug trigger mechanism further includes a system DTI 50 connected to reset control unit 38, where system DTI 50 is configured to request and monitor a status of a non-debug domain system reset, as described further below. [0020] In many respects, system DTI 50 is similar to DTIs 46. System DTI 50 enables its associated system (such as reset control unit 38 or another component of SoC 30 (for example, a trigger routing unit 52)) to broadcast and respond to debug events on DTI interconnect 48. For example, similar to DTIs 46, system DTI 50 can map trigger event inputs received from reset control unit 38 and/or trigger routing unit 52 onto channels associated with DTI interconnect 48, and map channel inputs received from DTI interconnect 48 to trigger event outputs for reset control unit 38 and/or trigger routing unit 52. System DTI 50 further includes associated system DTI registers (not shown in FIGURE 1) that allow debugger 24 to generate internal trigger event inputs for system DTI 50.
In various implementations, debugger 24 can configure the system DTI registers via debug bus 43 to provide a software-triggered non-debug domain system reset, utilizing system DTI 50 to request that reset control unit 38 perform a system reset and thereafter monitor a status of the non-debug domain system reset based on system reset status signaling received from reset control unit 38. [0021] Where SoC 30 includes a system master halt/restart control (not shown) for triggering a system halt/system restart for system masters (such as processor 34, processor 36, a DMA controller, etc.) and a system peripheral halt/restart control (not shown) for triggering a system halt/system restart for system peripherals (such as a general-purpose timer, a watchdog timer, a pulse-width modulator, etc.), system DTI 50 can further be connected to the system master halt/restart control and the system peripheral halt/restart control, allowing system DTI 50 to initiate system master halt/restart and system peripheral halt/restart. The system master halt/restart control and the system peripheral halt/restart control may be implemented as control registers that respectively include a bit (or bits) that controls asserting/deasserting a system master halt/restart and a bit (or bits) that controls asserting/deasserting a system peripheral halt/restart. In some implementations, debugger 24 can configure the system DTI registers so that system DTI 50 can trigger system master halt/restart and/or system peripheral halt/restart. In some implementations, debugger 24 can monitor the system DTI registers for a status of system master halt/restart and/or a status of system peripheral halt/restart. [0022] Turning to FIGURE 2, FIGURE 2 is a schematic block diagram of an exemplary debug trigger interface system reset mechanism, which can be implemented in debug environment 10 of FIGURE 1, according to various aspects of the present disclosure. In FIGURE 2, the debug trigger interface system reset mechanism leverages system DTI 50 and its associated signaling to enable debugger 24 to request and monitor a non-debug domain system reset. Leveraging the signaling from system DTI 50 provides a seamless solution for initiating a system reset directly from debug logic to reset control unit 38. In some implementations, system DTI 50 is an ARM® CoreSight™ cross-trigger interface (CTI), where the ARM® CoreSight™ CTI's signaling (including the ARM® CoreSight™ CTI registers) is leveraged to initiate the non-debug domain system reset. FIGURE 2 has been simplified for the sake of clarity to better understand the inventive concepts of the present disclosure. Additional features can be added in the debug trigger interface system reset mechanism, and some of the features described can be replaced or eliminated in other embodiments of the debug trigger interface system reset mechanism. [0023] System DTI 50 includes a trigger in interface configured to receive various trigger inputs from reset control unit 38 and/or another associated system (such as trigger routing unit 52), and a trigger out interface configured to send various trigger event outputs to reset control unit 38 and/or another associated system. For example, system DTI 50 has m trigger inputs and n trigger outputs, where m is a total number of trigger inputs associated with the trigger in interface, and n is a total number of trigger outputs associated with the trigger out interface. In various implementations, system DTI 50 has a trigger input and a trigger output configured for non-debug domain system reset signaling.
In FIGURE 2, a trigger input M (DTITRIGIN[M]) is connected to system reset status signaling of reset control unit 38 (M being an integer from one to m), such that system DTI 50 can receive a system reset status from reset control unit 38, and a trigger output N (DTITRIGOUT[N]) is connected to system reset request signaling of reset control unit 38 (N being an integer from one to n), such that system DTI 50 can request that reset control unit 38 initiate (assert) a non-debug domain system reset. Debugger 24 can issue a non-debug domain system reset request by configuring system DTI 50 to issue (assert) a non-debug domain system reset request via trigger output N (DTITRIGOUT[N]) to reset control unit 38, and can further observe a status of the non-debug domain system reset by monitoring a non-debug domain system reset status received by system DTI 50 via trigger input M (DTITRIGIN[M]) from reset control unit 38. In some implementations, trigger input M (DTITRIGIN[M]) is connected to a system reset status output of reset control unit 38 via an inverter 54, and trigger output N (DTITRIGOUT[N]) is connected to a system reset request input via an inverter 56. For example, inverting non-debug domain system reset signaling can be implemented when reset control unit 38 uses active-low inputs for system reset purposes, such as system reset request logic. [0024] System DTI 50 further includes a system DTI register set 60, which debugger 24 can configure to generate system reset signaling and observe system reset status signaling. Each system DTI register may be a 32-bit register, though the present disclosure contemplates any size of system DTI registers. In the depicted embodiment, system DTI register set 60 includes a DTI Trigger to Channel Enable Register (DTIINEN) 62a, a DTI Channel to Trigger Enable Register (DTIOUTEN) 62b, a DTI Application Trigger Set Register (DTIAPPSET) 62c, a DTI Application Trigger Clear Register (DTIAPPCLEAR) 62d, a DTI Application Pulse Register (DTIAPPPULSE) 62e, a DTI Trigger In Status Register (DTITRIGINSTATUS) 62f, and/or other various system DTI registers. In some implementations, system DTI register set 60 is an ARM® CoreSight™ cross-trigger interface (CTI) register set that includes a CTI Trigger to Channel Enable register, a CTI Channel to Trigger Enable register, a CTI Application Trigger Clear register, a CTI Trigger In Status register, a CTI Application Trigger Set register, a CTI Application Pulse register, and/or other CTI registers. As described below, debugger 24 can issue a non-debug domain system reset request by configuring (for example, writing to) DTI Application Trigger Set Register 62c. Further, debugger 24 can observe a status of the non-debug domain system reset by monitoring (for example, reading) DTI Trigger In Status Register 62f. In the depicted embodiment, reset control unit 38 implements active-low states for non-debug domain system reset purposes. Accordingly, by inverting a trigger output signal from trigger output N and connecting the inverted trigger output signal to a system reset request of reset control unit 38, debugger 24 can issue a non-debug domain system reset request by configuring (for example, writing to) DTI Application Trigger Set Register 62c.
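Before continuing with the individual registers, the register set just enumerated can be sketched as a C memory map. The layout below is modeled on the ARM® CoreSight™ CTI register layout, but the base address and offsets here are assumptions for illustration, not values specified in this disclosure.

```c
#include <stdint.h>

#define SYS_DTI_BASE    0x80001000u              /* hypothetical base address */
#define DTIAPPSET       (SYS_DTI_BASE + 0x014u)  /* application trigger set   */
#define DTIAPPCLEAR     (SYS_DTI_BASE + 0x018u)  /* application trigger clear */
#define DTIAPPPULSE     (SYS_DTI_BASE + 0x01Cu)  /* application pulse         */
#define DTIINEN(m)      (SYS_DTI_BASE + 0x020u + 4u * (m)) /* per trigger in  */
#define DTIOUTEN(n)     (SYS_DTI_BASE + 0x0A0u + 4u * (n)) /* per trigger out */
#define DTITRIGINSTATUS (SYS_DTI_BASE + 0x130u)  /* trigger in status         */

/* 32-bit register accessors, here assumed to be memory-mapped over the
 * debug bus; in a real debugger these would be debug-port transactions. */
static inline void dti_write(uint32_t addr, uint32_t val)
{
    *(volatile uint32_t *)(uintptr_t)addr = val;
}

static inline uint32_t dti_read(uint32_t addr)
{
    return *(volatile uint32_t *)(uintptr_t)addr;
}
```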
Further, by inverting a system reset signal from reset control unit 38 and connecting the inverted system reset signal to trigger input M, debugger 24 can observe a status of the non-debug domain system reset by monitoring (for example, reading) DTI Trigger In Status Register 62f. [0025] DTI Trigger to Channel Enable Register 62a is a read/write register that enables signaling of an event on a channel of DTI interconnect 48 when reset control unit 38 or another associated system (such as trigger routing unit 52) issues a trigger event input to system DTI 50. Each trigger input of system DTI 50 may have an associated DTI Trigger to Channel Enable Register 62a. For example, system DTI register set 60 may include m DTI Trigger to Channel Enable Registers 62a. DTI Trigger to Channel Enable Register 62a includes an enable trigger in bit (or bits) associated with each channel of DTI interconnect 48, which can be set to a first state, such as a LOW state (for example, a digital 0), or a second state, such as a HIGH state (for example, a digital 1). For a given channel, an enable trigger in bit set to the first state disables a trigger input from generating an event on the channel; and the enable trigger in bit set to the second state enables the trigger input to generate an event on the channel. In the present example, DTI Trigger to Channel Enable Register 62a may be associated with trigger input M (DTITRIGIN[M]). Since trigger input M is designated for receiving non-debug domain system reset status signaling from reset control unit 38, each enable trigger in bit can be set to the first state (for example, a digital 0) to ensure that a channel event is not generated when system DTI 50 receives a system reset status signal on trigger input M from reset control unit 38. Accordingly, for non-debug domain system reset purposes, system DTI 50 includes a trigger input (here, trigger input M) that will not be mapped to any channel. [0026] DTI Channel to Trigger Enable Register (DTIOUTEN) 62b is a read/write register that defines which channel(s) of DTI interconnect 48 can generate a trigger output. Generally, DTI Channel to Trigger Enable Register 62b maps application triggers, such as software-triggered events from debugger 24, to trigger outputs of system DTI 50. Each trigger output from system DTI 50 may have an associated DTI Channel to Trigger Enable Register 62b. For example, system DTI register set 60 may include n DTI Channel to Trigger Enable Registers 62b. DTI Channel to Trigger Enable Register 62b includes an enable trigger out bit (or bits) associated with each channel of DTI interconnect 48, which can be set to the first state (LOW state) or the second state (HIGH state). For a given channel, an enable trigger out bit set to the first state disables a channel input from being routed to the trigger output; and the enable trigger out bit set to the second state enables routing the channel input to the trigger output. Changing an enable trigger out bit from the first state to the second state enables a channel event for the channel to generate a trigger event on the trigger output. In the present example, DTI Channel to Trigger Enable Register 62b may be associated with trigger output N (DTITRIGOUT[N]), and DTI Channel to Trigger Enable Register 62b may include an enable trigger out bit associated with a channel X of DTI interconnect 48 that is assigned as a system reset request channel.
Since trigger output N is designated for sending non-debug domain system reset request signaling to reset control unit 38, the enable trigger out bit associated with channel X may be set to the second state (for example, a digital 1) to ensure that any channel event triggered on channel X by debugger 24 is routed via trigger output N to reset control unit 38. Accordingly, for non-debug domain system reset purposes, system DTI 50 includes a trigger output (here, trigger output N) that will be mapped to a channel (here, channel X) used by debugger 24 for triggering the non-debug domain system reset. [0027] DTI Application Trigger Set Register (DTIAPPSET) 62c is a read/write register that enables an application, such as debugger 24, to raise a channel event. DTI Application Trigger Set Register 62c includes an application trigger bit (or bits) associated with each channel of DTI interconnect 48, which can be set to the first state (LOW state) or the second state (HIGH state). For a given channel, debugger 24 can set an application trigger bit to the second state to generate a channel event for the channel. Otherwise, the application trigger bit is set to the first state, indicating that the application trigger for the channel is inactive. In the present example, DTI Application Trigger Set Register 62c may include an application trigger bit associated with channel X of DTI interconnect 48, which has been assigned as the system reset request channel. Accordingly, for non-debug domain system reset purposes, debugger 24 can initiate a non-debug domain system reset by setting the application trigger bit associated with channel X to the second state (for example, a digital 1), causing a system reset channel event to be raised on channel X. Since a channel event on channel X will be routed to trigger output N connected to the system reset request input of reset control unit 38 (as a result of how debugger 24 configured DTI Channel to Trigger Enable Register 62b), debugger 24 can issue a non-debug domain system reset request by writing to DTI Application Trigger Set Register 62c. [0028] DTI Application Trigger Clear Register (DTIAPPCLEAR) 62d is a read/write register that enables an application, such as debugger 24, to clear a channel event. DTI Application Trigger Clear Register 62d includes an application trigger clear bit (or bits) associated with each channel of DTI interconnect 48, which can be set to the first state (LOW state) or the second state (HIGH state). For a given channel, debugger 24 can set an application trigger clear bit to the second state to clear a channel event for the channel. Otherwise, the application trigger clear bit is set to the first state. In the present example, DTI Application Trigger Clear Register 62d may include an application trigger clear bit associated with channel X. Accordingly, for non-debug domain system reset purposes, debugger 24 can clear the non-debug domain system reset by setting the application trigger clear bit associated with channel X to the second state (for example, a digital 1), causing the system reset channel event to be cleared on channel X. [0029] Alternatively, DTI Application Pulse Register (DTIAPPPULSE) 62e can be used for asserting and deasserting the non-debug domain system reset. DTI Application Pulse Register 62e is a write-only register that enables an application, such as debugger 24, to raise a channel event pulse for some clock period, such as a clock period of the debug trigger mechanism.
In contrast to DTI Application Trigger Set Register 62c, DTI Application Pulse Register 62e clears itself so that the application, such as debugger 24, does not have to clear the channel event once raised. DTI Application Pulse Register 62e includes an application pulse bit (or bits) associated with each channel of DTI interconnect 48, which can be set to the first state (LOW state) or the second state (HIGH state). For a given channel, debugger 24 can set an application pulse bit to the second state to generate a channel event pulse for the channel for some time, such as a clock period of the debug trigger mechanism. Otherwise, the application pulse bit is set to the first state, indicating that the application trigger for the channel is inactive. In the present example, DTI Application Pulse Register 62e may include an application pulse bit associated with channel X of DTI interconnect 48. Accordingly, for non-debug domain system reset purposes, debugger 24 can initiate a non-debug domain system reset by setting the application pulse bit associated with channel X to the second state (for example, a digital 1), causing a system reset channel event pulse to be raised on channel X for a defined time. Since a channel event on channel X will be routed to trigger output N connected to the system reset request input of reset control unit 38, debugger 24 can issue a non-debug domain system reset request by writing to DTI Application Pulse Register 62e, which will automatically be cleared after the defined time. [0030] DTI Trigger In Status Register 62f is a read-only register that includes trigger in status bits that indicate a status of trigger inputs to system DTI 50. DTI Trigger In Status Register 62f can include a trigger in status bit (or bits) associated with each trigger input to system DTI 50. A trigger in status bit is set to the first state (LOW state) when its associated trigger input is inactive, and to the second state (HIGH state) when its associated trigger input is active. In the present example, DTI Trigger In Status Register 62f can include a trigger in status bit corresponding with trigger input M (DTITRIGIN[M]). Since trigger input M is designated for receiving non-debug domain system reset status signaling from reset control unit 38, debugger 24 can monitor a non-debug domain system reset status by evaluating (for example, reading) a state of the trigger in status bit corresponding with trigger input M. [0031] In an exemplary operation of debug environment 10, when debugger 24 connects (attaches) to SoC 30, debugger 24 can configure a debug domain of SoC 30. For example, debugger 24 can initialize configuration settings for any accessible debug logic of SoC 30. Before performing a debug session, debugger 24 can initiate a non-debug domain system reset that brings SoC 30 to a known state without affecting the debug domain of SoC 30, such as any debug logic that has already been initialized for performing debug operations. As described above, debugger 24 can assign a channel of the SoC's debug trigger mechanism for non-debug domain system reset signaling (such as channel X), map a trigger output of system DTI 50 (such as trigger output N) to the channel assigned for non-debug domain system reset signaling (for example, DTIOUTEN[N]=='channel X enabled'), and ensure that a trigger input of system DTI 50 (such as trigger input M) is not mapped to any channel (for example, DTIINEN[M]==4'b0).
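For concreteness, the following is a minimal C sketch of this configuration together with the assert/poll/deassert sequence described next, written as if the registers were memory-mapped from the debugger's point of view. The register names (DTIINEN, DTIOUTEN, DTIAPPSET, DTIAPPCLEAR, and the trigger in status register) follow the disclosure, but the base address, register offsets, bit layouts, and channel/trigger indices are hypothetical assumptions for illustration only, not part of the disclosed hardware.

#include <stdint.h>

/* Hypothetical memory map for system DTI 50; all addresses are assumptions. */
#define DTI_BASE        0x40001000u
#define DTIINEN(m)      (*(volatile uint32_t *)(DTI_BASE + 0x00u + 4u * (m)))
#define DTIOUTEN(n)     (*(volatile uint32_t *)(DTI_BASE + 0x20u + 4u * (n)))
#define DTIAPPSET       (*(volatile uint32_t *)(DTI_BASE + 0x40u))
#define DTIAPPCLEAR     (*(volatile uint32_t *)(DTI_BASE + 0x44u))
#define DTITRIGINSTATUS (*(volatile uint32_t *)(DTI_BASE + 0x48u))

#define CHANNEL_X  2u  /* channel assigned for system reset requests (assumed index) */
#define TRIG_OUT_N 1u  /* trigger output wired to the reset control unit (assumed)   */
#define TRIG_IN_M  3u  /* trigger input wired to the reset status signal (assumed)   */

static void non_debug_domain_reset(void)
{
    /* Map trigger output N to channel X; leave trigger input M unmapped. */
    DTIOUTEN(TRIG_OUT_N) = 1u << CHANNEL_X;
    DTIINEN(TRIG_IN_M)   = 0u;

    /* Confirm the reset is not already asserted (DTITRIGIN[M] == 0). */
    while (DTITRIGINSTATUS & (1u << TRIG_IN_M))
        ;

    /* Raise the system reset channel event on channel X (DTIAPPSET[X] = 1). */
    DTIAPPSET = 1u << CHANNEL_X;

    /* Poll until the reset control unit reports the reset as asserted. */
    while (!(DTITRIGINSTATUS & (1u << TRIG_IN_M)))
        ;

    /* Clear the request (DTIAPPCLEAR[X] = 1), then wait for deassertion. */
    DTIAPPCLEAR = 1u << CHANNEL_X;
    while (DTITRIGINSTATUS & (1u << TRIG_IN_M))
        ;

    /* The SoC is now in a known state; the debug session can begin. */
}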
After polling (observing) DTI Trigger In Status Register 62f, specifically the trigger in status bit corresponding with trigger input M, to confirm that a non-debug domain system reset has not been asserted (for example, DTITRIGIN[M]==0), debugger 24 can assert a non-debug domain system reset by writing to DTI Application Trigger Set Register 62c, specifically the application trigger bit associated with channel X (for example, DTIAPPSET[X]==1). This causes a system reset channel event to be raised on channel X, which provides non-debug domain system reset request signaling to reset control unit 38. [0032] Reset control unit 38 can then initiate a non-debug domain system reset upon receiving the non-debug domain system reset request signaling, resetting all endpoints of SoC 30 except for the reset control unit 38 and the debug domain, including any debug logic of the endpoints. Debugger 24 can poll (observe) DTI Trigger In Status Register 62f, specifically the trigger in status bit corresponding with trigger input M, until it indicates that the non-debug domain system reset has been asserted (for example, DTITRIGIN[M]==1). Once debugger 24 confirms that the non-debug domain system reset has been asserted, debugger 24 can clear the non-debug domain system reset request by writing to DTI Application Trigger Set Register 62c, specifically the application trigger bit associated with channel X (for example, DTIAPPSET[X]==0). Alternatively, debugger 24 can write to DTI Application Trigger Clear Register 62d, specifically the application trigger bit associated with channel X (for example, DTIAPPCLEAR[X]==1), to deassert the system reset. Debugger 24 can then poll (observe) DTI Trigger In Status Register 62f again, specifically the trigger in status bit corresponding with trigger input M, to ensure that the non-debug domain system reset has been deasserted (for example, DTITRIGIN[M]==0). Debugger 24 then knows that SoC 30 is in a known state, and debugger 24 can perform the debug session. [0033] Returning to FIGURE 1, SoC 30 further includes trigger routing unit (TRU) 52, where system DTI 50 is connected to trigger routing unit 52. In various implementations, remaining trigger inputs and/or trigger outputs from system DTI 50, such as trigger inputs and trigger outputs not connected to reset control unit 38, may be connected to trigger routing unit 52. Trigger routing unit 52 can provide system-level sequence control and system-level synchronization for SoC 30 without core intervention, for example, from processor 34 and/or processor 36. In various implementations, trigger routing unit 52 maps trigger masters (trigger generators) to trigger slaves (trigger receivers). [0034] SoC 30 can further include a system watchpoint unit (SWU) 70 configured for transaction monitoring, which can provide debug support. System watchpoint unit 70 can generate events (such as a trace message, a trigger, or an interrupt) based on monitoring transactions at the system slaves. In various implementations, system watchpoint unit 70 uses various watchpoint match groups for transaction monitoring. [0035] To facilitate non-invasive, real-time debugging techniques, debug and trace system 40 can capture trace information associated with operation of SoC 30, which can be analyzed by debugger 24. The trace information can include instruction information from various components of SoC 30, data information from various components of SoC 30, bus transaction information, and/or other information associated with operation of SoC 30.
For example, in various implementations, debug and trace system 40 can observe software executing on processor 34 and processor 36, collecting trace information associated with the software execution. In FIGURE 1, debug and trace system 40 can include various trace components, such as trace bus 44, a trace module for each processing element (such as a trace module 72a associated with processor 34 and a trace module 72b associated with processor 36), a system trace module 74, a trace buffer 76 for storing trace data, a trace port 78 for enabling debug host 20 to capture trace data, and a serial wire output (SWO) 80. Each trace module (TM) can enable tracking and storing of real-time instruction flow, data flow, and/or program flow. Each trace module can be implemented as an embedded trace macrocell (ETM), a program trace macrocell (PTM), an instruction trace macrocell (ITM), or other suitable trace macrocell. [0036] Turning to FIGURE 3, FIGURE 3 is a flowchart of an exemplary method 100 that can be implemented for providing a non-debug domain system reset in a debug environment, such as that described with reference to FIGURE 1 and FIGURE 2, according to various aspects of the present disclosure. In various implementations, method 100 can be implemented by a system that includes a debug trigger interface and a reset control unit. At block 102, the debug trigger interface is connected to the reset control unit. In some implementations, a non-debug domain system reset request channel may be defined between a debug trigger interface and a reset control unit. At block 104, the debug trigger interface is configured to trigger the reset control unit to reset a non-debug domain. Additional steps can be provided before, during, and after method 100, and some of the steps described can be replaced or eliminated for other embodiments of method 100. [0037] Turning to FIGURE 4, FIGURE 4 is a flowchart of an exemplary method 110 that can be implemented for providing a non-debug domain system reset in a debug environment, such as that described with reference to FIGURE 1 and FIGURE 2, according to various aspects of the present disclosure. In various implementations, method 110 can be implemented during operation of debug environment 10. For example, when debugger 24 connects (attaches) to SoC 30, debugger 24 can configure a debug domain of SoC 30 (such as initialize configuration settings for any accessible debug logic of SoC 30), and then implement method 110 to bring SoC 30 to a known state before performing a debug session, without affecting the debug domain of SoC 30, such as any initialized debug logic. Additional steps can be provided before, during, and after method 110, and some of the steps described can be replaced or eliminated for other embodiments of method 110. [0038] At block 112, a non-debug domain system reset request channel is defined between a debug trigger interface and a reset control unit. For example, debugger 24 can assign a channel X of the SoC's debug trigger mechanism for non-debug domain system reset signaling. At block 114, a trigger output of the debug trigger interface is mapped to the non-debug domain system reset request channel. For example, debugger 24 can map trigger output N of system DTI 50 to channel X, which was assigned for non-debug domain system reset signaling (for example, DTIOUTEN[N]=='channel X enabled').
Method 110 can further include ensuring that a trigger input of the debug trigger interface used for monitoring a status of the non-debug domain system reset is not mapped to any channel. For example, debugger 24 can further ensure that trigger input M of system DTI 50, which is used for monitoring non-debug domain system reset status signaling from reset control unit 38, is not mapped to any channel. [0039] At block 116, a non-debug domain system reset is asserted. For example, debugger 24 can assert a non-debug domain system reset by writing to DTI Application Trigger Set Register 62c, specifically the application trigger bit associated with channel X (for example, DTIAPPSET[X]==1). This causes a system reset channel event to be raised on channel X, which provides non-debug domain system reset request signaling to reset control unit 38. Reset control unit 38 can then initiate a non-debug domain system reset upon receiving the non-debug domain system reset request signaling, resetting the non-debug domain of SoC 30, except for the reset control unit 38 and the debug domain of SoC 30, including any debug logic that was initialized before initiating the non-debug domain system reset. In various implementations, before asserting the non-debug domain system reset, method 110 can include checking a non-debug domain system reset status to ensure that a non-debug domain system reset has not already been initiated. For example, debugger 24 can read DTI Trigger In Status Register 62f, specifically the trigger in status bit corresponding with trigger input M, to confirm that a non-debug domain system reset has not been asserted (for example, DTITRIGIN[M]==0). [0040] At block 118, a status of the asserted non-debug domain system reset can be checked. For example, debugger 24 can read DTI Trigger In Status Register 62f, specifically the trigger in status bit corresponding with trigger input M, until it indicates that the non-debug domain system reset has been asserted (for example, DTITRIGIN[M]==1). At block 120, the non-debug domain system reset is deasserted. For example, once debugger 24 confirms that the non-debug domain system reset has been asserted (block 118), debugger 24 can clear the non-debug domain system reset request by writing to DTI Application Trigger Set Register 62c, specifically setting the application trigger bit associated with channel X to a low state (for example, DTIAPPSET[X]==0). Alternatively, debugger 24 can write to DTI Application Trigger Clear Register 62d, specifically the application trigger bit associated with channel X (for example, DTIAPPCLEAR[X]==1), to deassert the system reset. In various implementations, method 110 can further include checking the non-debug domain system reset status to ensure that the non-debug domain system reset is no longer asserted. For example, debugger 24 can read DTI Trigger In Status Register 62f again, specifically the trigger in status bit corresponding with trigger input M, to ensure that the non-debug domain system reset has been deasserted (for example, DTITRIGIN[M]==0). Debugger 24 then knows that SoC 30 is in a known state, and debugger 24 can perform the debug session. [0041] In various implementations, when implementing method 110, debugger 24 can use DTI Application Pulse Register (DTIAPPPULSE) 62e for asserting and deasserting the non-debug domain system reset, as sketched below.
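The following is a minimal sketch of this self-clearing, pulse-based variant, reusing the hypothetical register map, channel index, and trigger indices assumed in the earlier sketch; the DTIAPPPULSE offset is likewise an assumption for illustration only.

/* Pulse-based variant using DTIAPPPULSE, which self-clears after a defined time. */
#define DTIAPPPULSE (*(volatile uint32_t *)(DTI_BASE + 0x4Cu)) /* assumed offset */

static void non_debug_domain_reset_pulse(void)
{
    /* Raise a self-clearing system reset channel event pulse on channel X. */
    DTIAPPPULSE = 1u << CHANNEL_X;

    /* Optionally poll until the reset control unit reports assertion... */
    while (!(DTITRIGINSTATUS & (1u << TRIG_IN_M)))
        ;

    /* ...and then until deassertion; no explicit clear is needed because
       the pulse register clears itself after the defined time. */
    while (DTITRIGINSTATUS & (1u << TRIG_IN_M))
        ;
}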
For example, debugger 24 can assert a non-debug domain system reset (block 116) by setting the application pulse bit associated with channel X to a state that causes a system reset channel event pulse to be raised on channel X for a defined time. Since DTI Application Pulse Register 62e will automatically be cleared after the defined time, the non-debug domain system reset will automatically deassert without further action from debugger 24. In such implementations, debugger 24 can still check a status of the asserted non-debug domain system reset (block 118) by polling DTI Trigger In Status Register 62f, specifically the trigger in status bit corresponding with trigger input M, until it indicates that the non-debug domain system reset has been asserted. [0042] In various implementations, components of the target system are implemented in the same device. Alternatively, components of the target system can be distributed among various integrated circuits and/or devices interconnected with each other, such that the components of the target system are integrated to provide a debug environment. In various implementations, the various circuits and/or components of the FIGURES can be implemented on a board of an associated electronic device. The board can be a general circuit board that can hold various components of an internal electronic system of the electronic device and, further, provide connectors for other peripherals. The board can provide the electrical connections by which the other components of the system can communicate electrically. Any suitable processors (inclusive of digital signal processors, microprocessors, supporting chipsets, etc.), memory elements, etc. can be suitably coupled to the board based on particular configuration needs, processing demands, computer designs, other considerations, or a combination thereof. Other components, such as external storage, sensors, controllers for audio/video display, and peripheral devices may be attached to the board as plug-in cards, via cables, or integrated into the board itself. In various implementations, the various circuits and/or components of the FIGURES can be implemented as stand-alone modules (for example, a device with associated components and circuitry configured to perform a specific application or function) or implemented as plug-in modules into application specific hardware of electronic devices. Note that particular embodiments of the present disclosure may be readily included in a system-on-chip (SOC) package, either in part, or in whole. An SOC represents an integrated circuit that integrates components of a computer or other electronic system into a single chip. It may contain digital, analog, mixed-signal, and often radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip module (MCM), with a plurality of separate ICs located within a single electronic package and configured to interact closely with each other through the electronic package.
In various other embodiments, the various functions described herein may be implemented in one or more semiconductor cores (such as silicon cores) in application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), other semiconductor chips, or combinations thereof. [0043] The various functions outlined herein may be implemented by logic encoded in one or more non-transitory and/or tangible media (for example, embedded logic provided in an application specific integrated circuit (ASIC), as digital signal processor (DSP) instructions, software (potentially inclusive of object code and source code) to be executed by a processor or other similar machine, etc.). In some of these instances, a memory element can store data used for the operations described herein. This includes the memory element being able to store logic (for example, software, code, processor instructions) that is executed by a processor to carry out the activities described herein. The processor can execute any type of instructions associated with the data to achieve the operations detailed herein. In various implementations, the processor can transform an element or an article (such as data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (such as software/computer instructions executed by the processor), and the elements identified herein can be some type of a programmable processor (such as a DSP), programmable digital logic (e.g., an FPGA, an erasable programmable read only memory (EPROM), an electrically erasable programmable ROM (EEPROM)), or an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof. [0044] Note that the activities discussed above with reference to the FIGURES are applicable to any integrated circuits that involve signal processing, particularly those that can execute specialized software programs or algorithms, some of which may be associated with processing digitized real-time data. Certain embodiments can relate to multi-DSP signal processing, floating point processing, signal/control processing, fixed-function processing, microcontroller applications, etc. In certain contexts, the features discussed herein can be applicable to medical systems, scientific instrumentation, wireless and wired communications, radar, industrial process control, audio and video equipment, current sensing, instrumentation (which can be highly precise), and other digital-processing-based systems. Moreover, certain embodiments discussed above can be provisioned in digital signal processing technologies for medical imaging, patient monitoring, medical instrumentation, and home healthcare. This could include pulmonary monitors, accelerometers, heart rate monitors, pacemakers, etc. Other applications can involve automotive technologies for safety systems (e.g., stability control systems, driver assistance systems, braking systems, and infotainment and interior applications of any kind). Furthermore, powertrain systems (for example, in hybrid and electric vehicles) can use high-precision data conversion products in battery monitoring, control systems, reporting controls, maintenance activities, etc.
In yet other example scenarios, the teachings of the present disclosure can be applicable in industrial markets that include process control systems that help drive productivity, energy efficiency, and reliability. In consumer applications, the teachings of the signal processing circuits discussed above can be used for image processing, auto focus, and image stabilization (e.g., for digital still cameras, camcorders, etc.). Other consumer applications can include audio and video processors for home theater systems, DVD recorders, and high-definition televisions. Yet other consumer applications can involve advanced touch screen controllers (e.g., for any type of portable media device). Hence, such technologies could readily be a part of smartphones, tablets, security systems, PCs, gaming technologies, virtual reality, simulation training, etc. [0045] The specifications, dimensions, and relationships outlined herein have been offered for purposes of example and teaching only. Each of these may be varied considerably without departing from the scope of the appended claims. The specifications apply only to non-limiting examples and, accordingly, they should be construed as such. In the foregoing description, example embodiments have been described with reference to particular processor and/or component arrangements. Various modifications and changes may be made to such embodiments without departing from the scope of the appended claims. The description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. [0046] Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more processing components. However, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated in any suitable manner. Along similar design alternatives, any of the illustrated components, modules, circuits, and elements of the FIGURES may be combined in various possible configurations, all of which are clearly within the broad scope of this Specification. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of processing components. It should be appreciated that the processing components of the FIGURES and their teachings are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the processing system and/or components as potentially applied to a myriad of other architectures. [0047] Further, note that references to various features (e.g., elements, structures, modules, components, steps, operations, characteristics, etc.) included in "one embodiment", "example embodiment", "an embodiment", "another embodiment", "some embodiments", "various embodiments", "other embodiments", "alternative embodiment", and the like are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments.
It is further noted that "coupled to" and "coupled with" are used interchangeably herein, and that references to a feature "coupled to" or "coupled with" another feature include any communicative coupling means, electrical coupling means, mechanical coupling means, other coupling means, or a combination thereof that facilitates the feature functionalities and operations, such as the security check mechanisms, described herein. [0048] Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words "means for" or "steps for" are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims. [0049] OTHER NOTES, EXAMPLES, AND IMPLEMENTATIONS [0050] A system is provided that can include means for issuing a non-debug domain system reset request from the debug trigger interface to the reset control unit, such that the debug trigger interface triggers the reset control unit to reset a non-debug domain. In various implementations, a system can include means for defining a non-debug domain system reset request channel between a debug trigger interface and a reset control unit, means for configuring the debug trigger interface to trigger the reset control unit to reset the non-debug domain, and/or means for monitoring a status of the non-debug domain system reset. The 'means for' in these instances can include (but is not limited to) using any suitable component discussed herein, along with any suitable software, circuitry, hub, computer code, logic, algorithms, hardware, controller, interface, link, bus, communication pathway, etc. In various implementations, the system includes memory that includes instructions that when executed cause the system to perform any of the activities discussed herein. |
An integrated circuit package may include a semiconductor die on a first side of the integrated circuit package, a first ball grid array (BGA) connection on the first side of the integrated circuit package, and a second BGA connection on a second side of the integrated circuit package. The integrated circuit package may include one or more traces that route data between the first BGA connection and the second BGA connection. |
1. An integrated circuit (IC) package, comprising: a semiconductor die provided on a first side of the integrated circuit (IC) package; a first ball grid array (BGA) connection portion provided on the first side of the IC package; and a second BGA connection portion provided on a second side of the IC package, wherein one or more traces are configured to route data via the first BGA connection portion and the second BGA connection portion. 2. The integrated circuit package of claim 1, wherein the first BGA connection portion includes one or more ball grid array (BGA) solder balls. 3. The integrated circuit package of claim 1, wherein the second BGA connection portion includes one or more ball grid array (BGA) pads. 4. The integrated circuit package of claim 3, wherein the one or more BGA pads are configured to be communicatively coupled to at least one memory device, wherein the at least one memory device includes a static random access memory (SRAM), an embedded dynamic random access memory (EDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), a graphics double data rate synchronous dynamic random access memory (GDDR SDRAM), or a combination thereof. 5. The integrated circuit package of claim 4, wherein the semiconductor die is configured to communicate with the at least one memory device via the one or more BGA pads. 6. The integrated circuit package of claim 1, wherein the first BGA connection portion includes one or more ball grid array (BGA) solder balls, and wherein the semiconductor die is configured to communicate, via the one or more BGA solder balls, with one or more devices arranged on a printed circuit board (PCB), wherein the one or more BGA solder balls are connected to the PCB. 7. The integrated circuit package of claim 6, wherein the semiconductor die is configured to communicate with the one or more devices via the one or more traces and the one or more BGA solder balls. 8. A printed circuit board (PCB) assembly, comprising: an integrated circuit (IC) package, including: a semiconductor die provided on a first side of the integrated circuit (IC) package; a first ball grid array (BGA) connection portion provided on the first side of the IC package; and a second BGA connection portion provided on a second side of the IC package, wherein one or more traces are configured to route data via the first BGA connection portion and the second BGA connection portion; and a first memory device, wherein the semiconductor die is configured to communicate with the first memory device via the first BGA connection portion. 9. The PCB assembly of claim 8, wherein the first BGA connection portion includes one or more ball grid array (BGA) solder balls. 10.
The PCB assembly of claim 9, wherein the second BGA connection portion includes one or more ball grid array (BGA) pads. 11. The PCB assembly of claim 10, wherein the one or more BGA pads are configured to be communicatively coupled to the first memory device. 12. The PCB assembly of claim 11, wherein the semiconductor die is configured to communicate with the first memory device via the one or more BGA pads. 13. The PCB assembly of claim 11, wherein the semiconductor die is configured to communicate with a second memory device via the one or more BGA solder balls. 14. A field programmable gate array (FPGA) package, comprising: one or more ball grid array (BGA) balls provided on a first side of the FPGA package; one or more ball grid array (BGA) pads provided on a second side of the FPGA package; and one or more channels configured to communicatively couple the one or more BGA balls to the one or more BGA pads. 15. The FPGA package of claim 14, wherein the first side of the FPGA package is coupled to a printed circuit board (PCB). 16. The FPGA package of claim 15, wherein the second side of the FPGA package is coupled to a semiconductor die. 17. The FPGA package of claim 16, wherein the semiconductor die is configured to use the one or more channels to communicate with one or more memory devices coupled to a BGA pad. 18. The FPGA package of claim 16, wherein the one or more BGA balls are configured to be communicatively coupled to one or more devices provided on the PCB. 19. The FPGA package of claim 18, wherein the one or more devices provided on the PCB are configured to communicate with the semiconductor die via the one or more channels. 20. A system, comprising: an integrated circuit package including a plurality of channels; a first ball grid array (BGA) connection portion provided on a first side of the integrated circuit package, wherein the first BGA connection portion is coupled to the plurality of channels, and wherein the first BGA connection portion includes BGA pads; a second BGA connection portion provided on a second side of the integrated circuit package, wherein the second BGA connection portion is coupled to the plurality of channels to enable communication between the first BGA connection portion and the second BGA connection portion, and wherein the second BGA connection portion includes BGA solder balls; a first semiconductor die disposed on the first side of the integrated circuit (IC) package and communicatively coupled to the plurality of channels, wherein the first semiconductor die includes field programmable gate array circuitry, and wherein the first semiconductor die is communicatively coupled to the plurality of channels via a connection portion different from the first BGA connection portion; and a second semiconductor die disposed on the first side of the IC package and communicatively coupled to the plurality of channels via the first BGA connection portion, wherein the second semiconductor die includes a first memory device; wherein the first semiconductor die or the second semiconductor die is configured to communicate, via the second BGA connection portion, with a second memory device provided on a printed circuit board attached to the integrated circuit package. |
Multi-ball grid array (BGA) configuration for a single integrated circuit (IC) package. Background. The present disclosure relates to integrated circuit packages suitable for supporting multiple product types. More particularly, the present disclosure relates to a package configuration that supports communication between an integrated circuit die and a memory device, where the memory device may be an on-package device or an out-of-package device. This section is intended to introduce the reader to various aspects of art that may be related to the various aspects of the present disclosure described and/or claimed below. This discussion is believed to help provide the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art. Integrated circuit devices are used in many electronic systems. Computers, handheld devices, portable phones, televisions, industrial control systems, robotics, and telecommunications networks, to name a few, all use integrated circuit devices. Integrated circuit devices can be formed using photolithography techniques, which pattern circuits onto a substrate wafer that is cut to form multiple (usually identical) individual integrated circuit dies. Each integrated circuit die can include many different components, such as programmable logic structures, digital or analog signal transmission circuits, digital signal processing circuits, dedicated data processing circuits, memory, and so on. Multiple integrated circuit dies and components can be packaged on a substrate to form an integrated circuit package. The package may include electrical connections to connect the die and other components to a printed circuit board (PCB), and pins or leads that can be used for electrical connections to circuits, power, and ground outside the integrated circuit. Therefore, the package can serve as an interface between the die and the PCB. Generally speaking, the components included in integrated circuits and packages can be based on different underlying technologies. In other words, different packages can be used across various sets of technical specifications, resulting in a range of package sizes and configurations. As a result, the various packaging specifications for different technologies may result in a different tape-out for each of the various packaging specifications. These different tape-out solutions may increase costs and require more design and production time. Brief Description of the Drawings. Various aspects of the present disclosure may be better understood upon reading the following detailed description and upon reference to the accompanying drawings, in which: FIG. 1 is a block diagram of a programmable logic device programmed with a circuit design, according to an embodiment; FIG. 2 is a block diagram of a package including a programmable logic device, according to an embodiment, in which a fabric die and a base die are vertically stacked; FIG. 3 is a block diagram of a circuit card assembly (CCA), according to an embodiment, showing the programmable logic device of FIG. 2 and a memory device mounted on a printed circuit board (PCB) of the CCA using different packages; FIG. 4 is a block diagram of the circuit card assembly (CCA) of FIG.
3, according to an embodiment, showing the programmable logic device and the memory device on the same package; and FIG. 5 is a side block diagram of a package with the programmable logic device and memory device of FIG. 4, according to an embodiment. Detailed Description. One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure. When introducing elements of various embodiments of the present disclosure, the articles "a," "an," and "the" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to "one embodiment" or "an embodiment" of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Furthermore, the phrase A "based on" B is intended to mean that A is based at least in part on B. Moreover, unless expressly stated otherwise, the term "or" is intended to be inclusive (e.g., logical 'or' (OR)) and not exclusive (e.g., logical 'exclusive or' (XOR)). In other words, the phrase A "or" B is intended to mean A, B, or both A and B. As device space becomes increasingly constrained, demands on device performance continue to increase. For example, wireless devices operating in high-speed networks are driving packaging solutions toward various compact architectures. Generally speaking, integrated circuit devices can be implemented as a system of individual integrated circuit dies that communicate signals between one another in an efficient manner. For example, in a packaging solution with two or more dies, the number of connections available between the dies depends on the amount of space available for routing circuits between different locations on a single monolithic integrated circuit. To save space on the integrated circuit, as discussed herein, multiple integrated circuit dies may be stacked vertically using various interconnections to facilitate communication between the dies. In some embodiments, the integrated circuit die may communicate with other components provided on the same package as the corresponding integrated circuit die, or with other circuit devices provided on the PCB and not on the same package as the corresponding integrated circuit die (e.g., outside the package). By way of example, one or more integrated circuit dies on the package may communicate with a memory device provided outside the package, such as a memory chip. Memory chips can be classified based on type and application.
For example, double data rate synchronous dynamic random access memory (DDR SDRAM) provides a higher data transfer rate by strictly controlling the timing of electrical data and clock signals, achieving nearly twice the bandwidth of single data rate (SDR) SDRAM at the same clock frequency. Similarly, graphics double data rate (GDDR) SDRAM is a type of memory tailored specifically for use on graphics cards. Both DDR and GDDR memories can be out-of-package and can use longer trace paths between the die provided on the package and the memory components provided outside the package. In addition, additional memory devices with which the integrated circuit die can communicate include, but are not limited to, static random access memory (SRAM) and embedded dynamic random access memory (EDRAM). In contrast to dynamic random access memory (DRAM), SRAM is a static form of RAM that is not constantly refreshed. SRAM is usually used for auxiliary device operations, such as cache memory and storage registers. EDRAM is DRAM integrated on the same die or in a multi-chip module (MCM) of the integrated circuit. Moreover, these memory devices that sit outside the package consume space on the PCB that could otherwise be used for additional or other circuit components. An integrated circuit die or multi-chip system can also communicate with memory components placed on the same package, for example through a package-on-package (PoP) architecture, which is commonly used in wireless device applications. The package-on-package architecture involves stacking two or more packages vertically on top of each other so that signals can be routed vertically between the packages. In either case, integrated circuit dies that communicate with memory components on or off the package use separate tape-outs, and each tape-out has its own photomask cost. What is desired is an integrated circuit package architecture that maintains communication between components on the package (e.g., one or more dies) and other devices both on the same package and provided outside the package (e.g., memory devices). The tape-out is the final product of the design process of an integrated circuit or printed circuit board (PCB) before the integrated circuit or PCB is sent for manufacture. Specifically, the tape-out is the point at which the graphic design of the photomask for the circuit is sent to the fabrication facility. A photolithography mask is a layer pattern used to create integrated circuits. As discussed above, different types of integrated circuit applications may include a die on a package that communicates with a memory device on, or outside of, the package on which the die is disposed. These different types of integrated circuit applications can use separate tape-outs for each package architecture and, therefore, use separate photomasks with separate corresponding costs. In addition, each tape-out has a corresponding test interface unit (TIU), with its own test lead times and corresponding test cost. Given these various package architectures, a designed package may be incompatible with communication between the die and memory devices on the same package or outside the package. To make effective use of the integrated circuit package, the package may include ball grid array (BGA) connection portions that can communicate with a memory device provided on the package, or provided outside the package via circuit connections on the PCB.
For example, in one embodiment, a BGA pad on the top of the package may enable the integrated circuit die to communicate, via the BGA pad, with one or more memory devices also provided on the package. In addition, the integrated circuit die can maintain the ability to communicate with memory devices outside the package via the circuit connections provided on the PCB. Therefore, a package containing BGA pads along with BGA balls on either side of the package can allow one or more dies on the package to communicate with components both on and outside of the package. As a result, by making a single package architecture that can facilitate communication with memory devices on and outside the package, photomask and production costs can be controlled without the need to manufacture multiple separate package designs. In addition, many of the previously mentioned electronic systems, such as portable phones or other wireless devices, may include integrated circuit dies that communicate with various other devices. For example, various field programmable gate array (FPGA) devices may include an FPGA die that routes signals (e.g., via conductive traces) to communicate with other components on the package (e.g., the same package as the FPGA die) or outside the package (e.g., an off-chip memory device on the PCB). As discussed above, additional components can use additional space on the PCB, and conserving PCB space is particularly beneficial for complex devices; for example, wireless devices operating in networks with bandwidth requirements beyond those of lower-bandwidth standards (such as 3G or 4G) may require additional memory devices and components to be placed on the PCB. In view of the foregoing, FIG. 1 shows a block diagram of a system 10 that can employ a programmable logic device 12 with one or more dies that can communicate with devices on the same package or on different packages (for example, elsewhere on the PCB). Using the system 10, a designer can implement circuit design functions on an integrated circuit, such as a reconfigurable programmable logic device 12 (for example, a field programmable gate array (FPGA)). The designer may use design software 14, such as a version available from Intel Corporation of Santa Clara, California, to implement the circuit design to be programmed onto the programmable logic device 12. The design software 14 may use the compiler 16 to generate a low-level circuit design defined by the bitstream 18, sometimes also referred to as a program object file and/or configuration program, for programming the programmable logic device 12. Thus, the compiler 16 can provide the programmable logic device 12 with machine-readable instructions representing the circuit design. For example, the programmable logic device 12 may receive one or more configuration programs (bitstreams) 18 that describe the hardware implementation that should be stored in the programmable logic device 12. A configuration program (for example, a bitstream) 18 may be programmed into the programmable logic device 12 as a program configuration 20. In some cases, the program configuration 20 may represent accelerator functions for performing specialized tasks, such as video processing, voice recognition, image recognition, car-to-car communication, or other highly specialized tasks.
These specialized tasks can be used in wireless applications, such as wireless devices operating in 5G networks. To use the packaging architecture of the present disclosure to perform application tasks, the programmable logic device 12 may include a fabric die that communicates with a base die. The base die can perform dedicated tasks, while the fabric die can be used for general purposes. For example, the fabric die may be configured with an accelerator function topology that cooperates with dedicated circuitry in the base die. In this way, and in one embodiment, the programmable logic device 12 may be a fabric die stacked on a base die, creating a 3D stack to perform dedicated tasks, such as wireless application tasks. In another example, the fabric die may be an FPGA, and the base die may be a high-speed transceiver for wireless applications. In some applications, the base die and the fabric die may be side by side and connected to each other via an interposer or bridge (for example, an embedded multi-die interconnect bridge (EMIB)) in a 2.5D form. As discussed above, multiple ball grid array (BGA) connections (e.g., BGA balls on the bottom side of the package and BGA pads on the top side of the package) can allow the base die to communicate with memory devices on and outside the package. Although the examples provided below may refer to a base die that communicates with memory devices or components on and/or outside the package, other types of devices or components that communicate with the base die on an integrated circuit package can also benefit from this disclosure. These components may include on-board power measurement circuitry (e.g., voltage regulators, oscillators, etc.). An example of the programmable logic device 12 is shown in FIG. 2, although any suitable programmable logic device can be used. In the example of FIG. 2, the programmable logic device 12 includes a fabric die 22 and a base die 24, which are connected to each other via micro bumps 26. Although the fabric die 22 and the base die 24 appear in FIG. 2 in a one-to-one relationship, other relationships may be used. For example, a single base die 24 may be attached to several fabric dies 22, or several base dies 24 may be attached to a single fabric die 22, or several base dies 24 may be attached to several fabric dies 22 (e.g., in a staggered pattern along the x and/or y directions). The peripheral circuit 28 may be attached to the base die 24, embedded within the base die 24, and/or provided on top of the base die 24, and the heat sink 30 may be used to reduce heat accumulation on the programmable logic device 12. The heat sink 30 may appear above the package, as shown, and/or below the package (e.g., as a double-sided heat sink). The base die 24 may be attached to the package 32 substrate via C4 bumps or BGA solder balls 34. As discussed above, the package includes electrical connections (e.g., pins) to support communication between a component (e.g., base die 24) and the PCB. In the example shown in FIG. 2, two pairs of fabric die 22 and base die 24 are shown communicatively connected to each other via a silicon bridge 36 (e.g., an embedded multi-die interconnect bridge (EMIB)), a silicon bridge interface 39, and micro bumps 38.
The silicon bridge 36 also represents an interposer using BGA solder balls 34, and the BGA solder balls 34 can be electrically connected to other circuitry, such as the PCB 52. Although the micro bumps 26 and the micro bumps 38 are described as being applied between the fabric die 22 and the base die 24 or between edge devices, for example, between the silicon bridge 36 and the silicon bridge interface 39, it should be noted that micro bumps may be used at any suitable position between the components of the programmable logic device 12. For example, the micro bumps can be incorporated into any suitable position (e.g., middle, edge, diagonal) between the fabric die 22 and the base die 24. Likewise, the micro bumps can be combined in any suitable pattern or amorphous shape to facilitate the interconnection between the various components described herein. It should be understood that FIG. 2 shows a 3D arrangement representing a specific embodiment, in which the fabric die 22 is stacked on top of the base die 24, and the interconnection points or micro bumps 26 can be directly connected to corresponding interconnection structures of the base die 24. In another embodiment, the fabric die 22 and the base die 24 may be connected in a 2.5D arrangement that uses a silicon bridge 36 to connect the fabric die 22 and the base die 24. As previously described, one or more dies of the integrated circuit package, such as the base die 24, can communicate with memory devices on the package 32 or on a different package on the PCB 52 outside of the package 32. To adapt a package design from an integrated circuit that communicates with memory on the package 32 to a different integrated circuit that communicates with a memory device outside the package 32, a new tape-out and photomask with a corresponding package architecture would be created. However, by adding BGA connections on both the top side of the package and the bottom side of the package, this multifunctional package architecture can be utilized by a variety of integrated circuit device design types. The plurality of BGA connections may include BGA solder balls 34 on the bottom side (for example, the land side) of the package connected to the PCB, and BGA pads on the top side (for example, the die side) of the package connected to device components (for example, one or more dies, memory devices, etc.). In addition, this multifunctional package architecture can be used for integrated circuit dies that communicate with memory devices on and outside of the package. Further, a single tape-out and photomask can be produced for this multifunctional package. To aid in explanation, FIG. 3 depicts a block diagram of a circuit card assembly (CCA) 50 including an integrated circuit device, a memory device, and other components. In short, the CCA 50 may include an assembled PCB 52 with components. As shown, the CCA 50 includes an integrated circuit device 37 (for example, the programmable logic device 12 of FIGS. 1 and 2) and one or more memory devices 54 mounted on the PCB 52. The integrated circuit device 37 is mounted on the PCB 52 using a package 32 (not shown), and the one or more memory devices 54 are mounted on the PCB 52 separate from the package holding the integrated circuit device 37. The package supporting the integrated circuit device 37 has its own photomask and tape-out.
However, if the integrated circuit device 37 were to communicate with a memory device 54 integrated on the same package as itself, a different photomask and tape-out would be used for the package to achieve the corresponding communication. In addition, the integrated circuit device 37, which accelerates dedicated tasks, can use the off-package memory device 54 to access stored data for performing such tasks. Since the memory device 54 is outside the package, bandwidth and/or latency constraints may occur when transferring data to or from the memory device 54 outside the package. The package 32 can be modified, for example, by adding additional BGA connections on the other side of the existing package 32 (for example, the side without BGA solder balls 34), to alleviate these latency and package architecture constraints and to allow a memory device 54 to communicate with the integrated circuit device 37 on the same package 32, while still allowing communication with another memory device 54 outside the package 32. For illustration, FIG. 4 depicts the memory device 54 and the integrated circuit device 37 of FIG. 3 on the same package 32, using BGA connections on both the top and bottom sides of the package 32. In this way, a memory device 54 previously mounted on the PCB 52 and not on the package 32 of the integrated circuit device 37 can utilize the same package 32, creating an additional PCB area 55 that can be reserved for other devices or components. In addition, since the integrated circuit device 37 and the memory device 54 are on the same package 32, data transfer between the memory device 54 and the integrated circuit device 37 can avoid the use of PCB 52 traces. Rather, the additional BGA connection allows the integrated circuit device 37 to communicate with the memory device 54 through the traces of the BGA, allowing faster data exchange between devices. Although the following descriptions represent a specific embodiment of a package 32 modified with BGA solder balls 34 on the bottom side of the package 32 and BGA pads 35 on the top side of the package 32, it should be noted that the modified package architecture described herein can use BGA solder balls 34 on one or more sides of the package, and BGA pads 35 on one or more sides of the package 32, so that the design of a single package 32 can allow the integrated circuit device 37 to communicate with components or devices (e.g., memory device 54) on and outside the package 32. To illustrate in detail the BGA connections on the top side of the package 32, connectable to one or more dies of the integrated circuit device 37, and on the bottom side of the package 32, connectable to the PCB 52, FIG. 5 depicts a block diagram 60 of the package 32, with BGA solder balls 34 on the bottom side of the package and BGA pads 35 on the top side of the package. BGA pads may refer to solder-mask-defined or non-solder-mask-defined surface mount pads (for example, solder-mask-defined pads (SMD) or non-solder-mask-defined pads (NSMD)). As shown, the package 32 integrates BGA pads 35 and BGA solder balls 34 into a single modified multifunctional package 32 design, allowing devices connected to the BGA pads 35 on the top side of the package 32 to communicate with devices connected to the BGA solder balls 34 on the bottom side of the package 32. In this example, the memory device 54 is connected to the BGA pads 35 on the top side of the package 32 along with the integrated circuit die 25 (e.g., a semiconductor die).
As previously described, the modified package 32 allows communication between all devices connected to the various BGA connections of the package 32. Channels and/or paths may be routed between BGA areas to allow signal communication between the various devices connected to the package 32. In short, a channel is used to arrange the traces that communicate signals between devices. As shown, a first channel 62 (Ch0_D0) (referring to channel 0, pin D0 of package 32) can be used for traces between the BGA solder balls 34 and the BGA pads 35, between the BGA solder balls 34 and the die 25, and between the BGA pads 35 and the die 25, allowing the traces to run through each device connected to each side of the package 32 via the first channel 62. Similarly, a second channel 64 (Ch0_D1) (referring to channel 0, pin D1 of package 32) can be used for traces between the BGA solder balls 34 and the BGA pads 35, between the BGA solder balls 34 and the die 25, and between the BGA pads 35 and the die 25, allowing the traces to run through each device connected to each side of the package 32 via the second channel 64. In this way, using the traces, the die 25 can communicate with the memory device 54 on the top side of the package 32 and with a memory device 54 (not shown) outside of the package 32, as described above. Moreover, other out-of-package devices on the PCB 52 can also communicate with the die 25 and the memory device 54 via the BGA solder balls 34. The technical effects of using the modified multifunctional integrated circuit package 32 architecture disclosed herein include the use of multiple BGA connections on a single package (for example, with the top side connected to the base die 24 and the bottom side connected to the PCB 52) to allow devices previously provided outside the package 32 to be arranged on the package 32. Thus, this modification increases the PCB area 55 available to other circuit components. For example, without increasing the size of the PCB 52 and/or the CCA, other components, such as power delivery components or other dedicated components, can be added to the PCB 52. Maintaining a compact PCB size may be particularly beneficial for systems where form factors are constrained (for example, wireless devices that are constrained to smaller packages for mobility). As mentioned earlier, 5G applications may particularly benefit from the package architecture described herein, because after moving an off-package device onto the package, additional components and devices can use the freed PCB area. The methods and devices of the present disclosure can be incorporated into any suitable circuit. For example, these methods and devices can be incorporated into many types of devices, such as microprocessors or other integrated circuits. Exemplary integrated circuits include programmable array logic (PAL), programmable logic arrays (PLAs), field programmable logic arrays (FPLAs), electrically programmable logic devices (EPLDs), electrically erasable programmable logic devices (EEPLDs), logic cell arrays (LCAs), field programmable gate arrays (FPGAs), application specific standard products (ASSPs), application specific integrated circuits (ASICs), and microprocessors, to name a few. The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible, or purely theoretical.
The technical effects of using the modified multi-function integrated circuit package 32 architecture disclosed herein include using multiple BGA connections on a single package (e.g., the top side connected to the base die 24 and the bottom side connected to the PCB 52) to allow devices previously provided outside the package 32 to be arranged on the package 32. This modification thus increases the PCB area 55 available to other circuit components. For example, without increasing the size of the PCB 52 and/or CCA, other components, such as power-delivery components or other dedicated components, can be added to the PCB 52. Maintaining a compact PCB size may be particularly beneficial for systems with constrained form factors (e.g., wireless devices constrained to smaller packages for mobility). As mentioned earlier, 5G applications may particularly benefit from the package architecture described herein, because after moving an off-package device onto the package, additional components and devices can occupy the freed PCB area. The methods and devices of the present disclosure can be incorporated into any suitable circuit. For example, these methods and devices can be incorporated into many types of devices, such as microprocessors or other integrated circuits. Exemplary integrated circuits include programmable array logic (PAL), programmable logic arrays (PLA), field programmable logic arrays (FPLA), electrically programmable logic devices (EPLD), electrically erasable programmable logic devices (EEPLD), logic cell arrays (LCA), field programmable gate arrays (FPGA), application-specific standard products (ASSP), application-specific integrated circuits (ASIC), and microprocessors, to name a few. The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible, or purely theoretical. In addition, if any claims appended to the end of this specification contain one or more elements designated as "means for [perform]ing [a function]…" or "step for [perform]ing [a function]…", it is intended that such elements be interpreted under 35 U.S.C. § 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements not be interpreted under 35 U.S.C. § 112(f). While the embodiments set forth in the present disclosure may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the disclosure is not intended to be limited to the particular forms disclosed. Rather, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure as defined by the appended claims. |
An apparatus includes an array of bit cells (202, 204, 206, 208) that include a first row of bit cells and a second row of bit cells. The apparatus also includes a first global read word line (240) configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The apparatus further includes a second global read word line (244) configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The apparatus also includes a global write word line (242) configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The first global read word line, the second global read word line, and the global write word line are located in a common metal layer (M4). |
1. A storage device comprising: a bit cell array, the bit cell array comprising a first row of bit cells and a second row of bit cells; a first global read word line configured to be selectively coupled to the first row of bit cells and to the second row of bit cells; and a second global read word line configured to be selectively coupled to the first row of bit cells and to the second row of bit cells; wherein the first global read word line and the second global read word line are located in a common metal layer, and wherein the storage device further comprises a global write word line configured to be selectively coupled to the first row of bit cells and to the second row of bit cells, the global write word line being located in the common metal layer.
2. The storage device of claim 1, further comprising row selection logic configured to: receive a selection signal; couple the first global read word line, the second global read word line, and the global write word line to the first row of bit cells if the selection signal has a first logic value; and couple the first global read word line, the second global read word line, and the global write word line to the second row of bit cells if the selection signal has a second logic value.
3. The storage device of claim 1, wherein the common metal layer is a fourth metal layer.
4. The storage device of claim 1, wherein the bit cell array is fabricated using a semiconductor fabrication process of less than 14 nanometers (nm).
5. The storage device of claim 4, wherein the semiconductor fabrication process is a 10 nm process.
6. The storage device of claim 5, wherein a pitch of the first global read word line is approximately 80 nm, wherein a pitch of the second global read word line is approximately 80 nm, and wherein a pitch of the global write word line is approximately 80 nm.
7. The storage device of claim 4, wherein the semiconductor fabrication process is a 7 nm process.
8. The storage device of claim 1, further comprising: a first local read word line coupled to the first row of bit cells, the first local read word line being formed in a second metal layer; a second local read word line coupled to the first row of bit cells, the second local read word line being formed in the second metal layer; and a first local write word line coupled to the first row of bit cells, the first local write word line being formed in a third metal layer.
9. The storage device of claim 8, further comprising: a third local read word line coupled to the second row of bit cells, the third local read word line being formed in the second metal layer; a fourth local read word line coupled to the second row of bit cells, the fourth local read word line being formed in the second metal layer; and a second local write word line coupled to the second row of bit cells, the second local write word line being formed in the third metal layer.
10. The storage device of claim 1, wherein the first row of bit cells comprises 3-port static random access memory (SRAM) bit cells.
11. The storage device of claim 1, wherein the bit cell array, the first global read word line, and the second global read word line are integrated in a static random access memory (SRAM) device, and wherein the SRAM device is integrated in a mobile communication device.
12. The storage device of claim 1, wherein the bit cell array, the first global read word line, and the second global read word line are integrated in a static random access memory (SRAM) device, and wherein the SRAM device is integrated in a communication unit.
13. A method for operating a storage device, comprising: receiving a selection signal at row selection logic; coupling a first global read word line and a second global read word line to a first row of bit cells if the selection signal has a first logic value; and coupling the first global read word line and the second global read word line to a second row of bit cells if the selection signal has a second logic value; wherein the first global read word line and the second global read word line are located in a common metal layer, and wherein the method further comprises selectively coupling a global write word line to the first row of bit cells and the second row of bit cells, the global write word line being located in the common metal layer.
14. The method of claim 13, further comprising: coupling the global write word line to the first row of bit cells if the selection signal has the first logic value; and coupling the global write word line to the second row of bit cells if the selection signal has the second logic value.
15. The method of claim 13, wherein the common metal layer is a fourth metal layer.
16. The method of claim 13, wherein the first row of bit cells and the second row of bit cells are fabricated using a semiconductor fabrication process of less than 14 nanometers (nm).
17. The method of claim 16, wherein the semiconductor fabrication process is a 7 nm process or a 10 nm process.
18. A non-transitory computer readable medium comprising instructions that, when executed by a processor, cause the processor to: initiate coupling of a first global read word line and a second global read word line to a first row of bit cells if a received selection signal has a first logic value; and initiate coupling of the first global read word line and the second global read word line to a second row of bit cells if the received selection signal has a second logic value; wherein the first global read word line and the second global read word line are located in a common metal layer, and wherein the non-transitory computer readable medium further comprises instructions that, when executed by the processor, cause the processor to selectively couple a global write word line to the first row of bit cells and the second row of bit cells, the global write word line being located in the common metal layer.
19. The non-transitory computer readable medium of claim 18, further comprising instructions that, when executed by the processor, cause the processor to: initiate coupling of the global write word line to the first row of bit cells if the received selection signal has the first logic value; and initiate coupling of the global write word line to the second row of bit cells if the received selection signal has the second logic value.
20. The non-transitory computer readable medium of claim 18, wherein the common metal layer is a fourth metal layer.
21. The non-transitory computer readable medium of claim 18, wherein the first row of bit cells and the second row of bit cells are fabricated using a fabrication process of less than 14 nanometers (nm).
22. The non-transitory computer readable medium of claim 18, wherein the first row of bit cells comprises 3-port static random access memory (SRAM) bit cells.
23. The non-transitory computer readable medium of claim 22, wherein each 3-port SRAM bit cell comprises a first read port, a second read port, and a write port, wherein a first local read word line couples the first global read word line to the first read port, wherein a second local read word line couples the second global read word line to the second read port, wherein a local write word line couples the global write word line to the write port, wherein the first local read word line and the second local read word line are located in a second metal layer, and wherein the local write word line is located in a third metal layer.
24. A storage device comprising: first means for performing a read operation, configured to be selectively coupled to a first row of bit cells and to a second row of bit cells; and second means for performing a read operation, configured to be selectively coupled to the first row of bit cells and to the second row of bit cells; wherein the first means for performing a read operation and the second means for performing a read operation are located in a common metal layer, and wherein the storage device further comprises means for performing a write operation configured to be selectively coupled to the first row of bit cells and to the second row of bit cells, the means for performing a write operation being located in the common metal layer.
25. The storage device of claim 24, wherein the common metal layer is a fourth metal layer.
26. The storage device of claim 24, wherein the first row of bit cells comprises 3-port static random access memory (SRAM) bit cells.
27. The storage device of claim 26, wherein each 3-port SRAM bit cell comprises a first read port, a second read port, and a write port.
28. The storage device of claim 27, wherein a first local read word line couples the first means for performing a read operation to the first read port, wherein a second local read word line couples the second means for performing a read operation to the second read port, and wherein a local write word line couples the means for performing a write operation to the write port. |
3-Port Bit Cell Array with Shared First and Second Global Read Word Lines and a Global Write Word Line on the Same Metal Layer

I. Priority Claim

The present application claims the benefit of a commonly owned U.S. patent application (Serial No. …).

II. Field

The present disclosure generally relates to read word lines and write word lines for bit cells.

III. Description of Related Technology

Technological advances have produced smaller and more powerful computing devices. For example, there currently exist a wide variety of portable personal computing devices, including wireless phones that are small, lightweight, and easily carried by users, such as mobile and smart phones, tablets, and laptop computers. These devices communicate voice and data packets over wireless networks. In addition, many such devices incorporate additional functionality, such as digital still cameras, digital video cameras, digital recorders, and audio file players. Also, such devices can process executable instructions, including software applications, such as web browser applications, that can be used to access the Internet. As such, these devices can include significant computing power.

An electronic device, such as a wireless telephone, can include a memory that includes a memory array made up of one or more memory units. One type of memory unit that can be used for a memory (e.g., a memory cache) is a 3-port bit cell. A 3-port bit cell can include two read ports and one write port and can be used in a static random access memory (SRAM) device. In 14 nanometer (nm) complementary metal oxide semiconductor (CMOS) technology, 3-port SRAM bit cells can be fabricated using fin field effect transistors (FinFETs) and two metal layers (referred to as the M1 and M2 layers) patterned by a dual-mask lithography-etch-lithography-etch (LELE) process. The top metal layer M2 can be patterned in a non-linear manner and can include "jogs" (e.g., coils). For manufacturing processes below 14 nm (e.g., 10 nm or 7 nm), self-aligned double patterning (SADP) may be preferred over LELE for forming M1 and M2 because of the reduced cost and improved process control (e.g., more accurate line width and line spacing control) that SADP provides. However, SADP may not support non-linear patterns that include concave and convex portions.

IV. Overview

The present disclosure provides a design that includes a bit cell array sharing common global word lines in a single metal layer. For example, the bit cell array can include a first bit cell and a second bit cell. The first bit cell can be in a first row of the bit cell array, and the second bit cell can be in a second row of the bit cell array. The first row can include two local read word lines and one local write word line. The second row can also include two local read word lines and one local write word line. The local read word lines may be in the second metal layer (M2), and the local write word lines may be in the third metal layer (M3). In a particular example, each bit cell (e.g., each row) can have a width of approximately 132 nm (e.g., approximately twice the contacted poly pitch (CPP) of the bit cells, i.e., twice the distance between contacted poly (gate) lines).

The first global read word line, the second global read word line, and the global write word line may be in a common metal layer (e.g., a fourth metal layer (M4)). The pitch of each global word line can be approximately 80 nm.
The global word lines can be placed in M4 across the width of the first bit cell and the width of the second bit cell (e.g., a combined width of approximately 264 nm). Row select logic may be coupled to the global word lines to control whether the global word lines are coupled to the first bit cell (e.g., the first row) or to the second bit cell (e.g., the second row). Thus, all of the global word lines can be located in a single metal layer (M4), as opposed to one global word line per metal layer, which improves routing among the different components within the bit cells. For example, the sixth metal layer (M6) and the eighth metal layer (M8) may be relatively open for routing because each global word line is in M4. Additionally, the global word lines can have a relatively large pitch (e.g., 80 nm), which can reduce read/write latency due to reduced word line resistive-capacitive (RC) impedance.

In a particular aspect, an apparatus includes an array of bit cells, the array including a first row of bit cells and a second row of bit cells. The apparatus also includes a first global read word line configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The apparatus further includes a second global read word line configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The apparatus also includes a global write word line configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The first global read word line, the second global read word line, and the global write word line are located in a common metal layer.

In another particular aspect, a method includes receiving a selection signal at row selection logic. The method also includes coupling a first global read word line, a second global read word line, and a global write word line to a first row of bit cells if the selection signal has a first logic value. The method further includes coupling the first global read word line, the second global read word line, and the global write word line to a second row of bit cells if the selection signal has a second logic value. The first global read word line, the second global read word line, and the global write word line are located in a common metal layer.

In another particular aspect, a non-transitory computer readable medium includes instructions that, when executed by a processor, cause the processor to couple a first global read word line, a second global read word line, and a global write word line to a first row of bit cells if a received selection signal has a first logic value. The instructions are also executable to cause the processor to couple the first global read word line, the second global read word line, and the global write word line to a second row of bit cells if the received selection signal has a second logic value. The first global read word line, the second global read word line, and the global write word line are located in a common metal layer.

In another particular aspect, an apparatus includes first means for performing a read operation configured to be selectively coupled to a first row of bit cells and to a second row of bit cells. The apparatus also includes second means for performing a read operation configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The apparatus further includes means for performing a write operation configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The first means for performing a read operation, the second means for performing a read operation, and the means for performing a write operation are located in a common metal layer.

One particular advantage provided by at least one of the disclosed embodiments is improved routing among the different components within the bit cells. For example, the upper metal layers (M6 and M8) may be relatively open for routing because the global word lines (e.g., two global read word lines and one global write word line) are placed in a single metal layer (M4). In addition, because the global word lines are placed across the width of two bit cells (as opposed to one bit cell), the global word lines can have a relatively large width, which reduces read/write latency due to reduced word line RC impedance. Other aspects, advantages, and features of the present disclosure will become apparent after review of the entire application, including the following sections: Brief Description of the Drawings, Detailed Description, and Claims.

V. Brief Description of the Drawings

FIGS. 1A and 1B are circuit diagrams of an illustrative embodiment of a 3-port bit cell;
FIG. 2 is a layout diagram of a 3-port SRAM array with shared global read word lines and a shared global write word line;
FIG. 3 is an illustrative embodiment of row select logic for a 3-port SRAM array with shared global read word lines and a shared global write word line;
FIG. 4 is a flow diagram of a particular illustrative embodiment of a method of operating a 3-port SRAM array having shared global read word lines and a shared global write word line;
FIG. 5 is a block diagram of an electronic device including a 3-port SRAM array with shared global read word lines and a shared global write word line; and
FIG. 6 is a data flow diagram of a particular illustrative embodiment of a fabrication process for fabricating an electronic device including a 3-port SRAM array having shared global read word lines and a shared global write word line.

VI. Detailed Description

Scaling below 14 nm technology can present challenges. For example, for 14 nm and larger technology nodes, the width of a 3-port bit cell can be limited to less than or equal to twice the contacted poly pitch (CPP, the distance between contacted poly (gate) lines). For 14 nm, the CPP can be about 80-90 nm. As used herein, the cell "width" runs perpendicular to the poly direction and along the fin direction. For technology nodes below 14 nm, the CPP is reduced, which results in a reduced bit cell width (e.g., a bit cell width of approximately 132 nm). When the bit cell width is reduced (i.e., narrowed), the write word line and the read word lines in the bit cell are also narrowed, resulting in increased read/write latency due to increased word line resistive-capacitive (RC) impedance.

In a conventional bit cell, the global word lines may be located in the fourth metal layer (M4), the sixth metal layer (M6), and the eighth metal layer (M8). For example, each global word line can have a width of approximately 80 nm, which results in a single global word line per metal layer. To illustrate, the first global read word line can be located in M4, the second global read word line can be located in M6, and the global write word line can be located in M8. Placing global word lines in M4, M6, and M8 reduces the routing capability within the bit cells.
For example, routing between different components and layers within a bit cell using M4, M6, and M8 may be degraded because each of those layers includes a relatively large global word line.

To circumvent this problem, the present disclosure provides global word lines (e.g., a first global read word line, a second global read word line, and a global write word line) in a common metal layer (e.g., M4). The pitch of each global word line can be approximately 80 nm, and the global word lines can be placed in the common metal layer across the width of two bit cells (e.g., 132 nm × 2 = 264 nm). Row select logic may be coupled to the global word lines to control whether the global word lines are coupled to a first bit cell (e.g., a first row) or to a second bit cell (e.g., a second row).

Particular embodiments of the present disclosure are described below with reference to the drawings. In the description and the drawings, common features are indicated by common reference numerals for clarity of the embodiments as depicted and described.

Referring to FIGS. 1A and 1B, a circuit diagram of a first illustrative embodiment of a bit cell 100 is shown. The bit cell 100 includes a storage latch 110. The storage latch 110 can include a pair of cross-coupled inverters 112, 114. Each of the inverters 112, 114 may include a p-type metal oxide semiconductor (PMOS) transistor and an n-type metal oxide semiconductor (NMOS) transistor, as shown in FIG. 1B.

The storage latch 110 can be coupled to a first write transistor 121 and a second write transistor 122. The write transistors 121, 122 can be NMOS transistors, as shown. The first write transistor 121 may be connected to a first write bit line (WBL1) 135 and a write word line (WWL) 137, and the second write transistor 122 may be connected to a second write bit line (WBL2) 136 and the write word line (WWL) 137. The first write transistor 121 and the second write transistor 122 may be complementary write transistors of the write port of the bit cell 100. When the write word line 137 and one of the write bit lines 135 or 136 are asserted, the write port can be used to write a logic 0 (e.g., low) value to the storage latch 110. When the write word line 137 and the other of the write bit lines 135 or 136 are asserted, the write port can be used to write a logic 1 (e.g., high) value to the storage latch 110.

The storage latch 110 can also be connected to a first read drive transistor 123 and a second read drive transistor 124. The first read drive transistor 123 can be coupled to a first read transistor 125, and the second read drive transistor 124 can be coupled to a second read transistor 126. The read drive transistors 123, 124 and the read transistors 125, 126 may be NMOS transistors, as shown. The first read transistor 125 is connectable to a first read bit line (RBL1) 131 and a first read word line (RWL1) 133. The second read transistor 126 can be coupled to a second read bit line (RBL2) 132 and a second read word line (RWL2) 134. Transistors 123 and 125 may correspond to a first read port of the bit cell 100, and transistors 124 and 126 may correspond to a second read port of the bit cell 100. The read word lines 133 and/or 134 may be asserted during a read operation, and these read ports may be complementary read ports. For example, when the data value at the first read port is a logic 0, the data value at the second read port is a logic 1, and vice versa. In the example of FIG. 1B, the first read port (left side) is shown reading a logic 0 ("0") value, and the second read port (right side) is shown reading a logic 1 ("1") value.

The bit cell 100 may thus include two read ports and one write port and may alternatively be referred to as a "3-port" bit cell. Because the bit cell 100 includes ten transistors, the bit cell 100 can also be referred to as a "10T" bit cell. In a particular embodiment, the bit cell 100 is included in a static random access memory (SRAM) device and provides high-speed parallel memory access. As an illustrative, non-limiting example, an SRAM device including the bit cell 100 can be used in an L1 and/or L2 cache of a processor. The SRAM device can include one or more bit cell arrays arranged in a grid-like manner, including multiple rows and multiple columns of bit cells.
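Behaviorally, the ports described above can be summarized in a short sketch. The following model is purely illustrative, not the disclosed circuit: it assumes ideal devices, uses our own naming, and adopts the arbitrary convention that WBL1 carries the written datum. It does, however, capture the complementary write bit lines and the complementary read ports of FIGS. 1A and 1B:

```python
# Minimal behavioral sketch of the 3-port (10T) bit cell of FIGS. 1A/1B.
# Hypothetical model assuming ideal devices; the API names are ours.

class ThreePortBitCell:
    def __init__(self, value: int = 0):
        # State held by the cross-coupled inverters 112/114.
        self.value = value

    def write(self, wwl: int, wbl1: int, wbl2: int) -> None:
        """Write port (transistors 121/122, gated by WWL 137).

        Asserting WWL together with one of the complementary write bit
        lines (WBL1 135 / WBL2 136) stores a 0 or a 1 in the latch.
        Convention assumed here: WBL1 carries the written datum.
        """
        if wwl and wbl1 != wbl2:  # complementary drive required
            self.value = wbl1

    def read(self, rwl1: int, rwl2: int):
        """Two read ports (transistors 123/125 and 124/126).

        The ports are complementary: when one port reads a logic 0,
        the other reads a logic 1 (as in the FIG. 1B example).
        """
        port1 = self.value if rwl1 else None        # RWL1 133 asserted
        port2 = (1 - self.value) if rwl2 else None  # RWL2 134 asserted
        return port1, port2

cell = ThreePortBitCell()
cell.write(wwl=1, wbl1=1, wbl2=0)           # store a logic 1
assert cell.read(rwl1=1, rwl2=1) == (1, 0)  # complementary read ports
```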
As further described herein, the bit cell 100 has a height (H) and a width (W). Depending on the technology used, the width (W) may be approximately twice the contacted poly pitch (CPP) associated with the bit cell 100, where the CPP corresponds to the distance between contacted poly (gate) lines. The CPP is alternatively referred to as the gate pitch. For example, the CPP is the distance from an edge of one poly line to the corresponding edge of an adjacent poly line (e.g., top edge to top edge, or bottom edge to bottom edge). The CPP can therefore also be considered equal to the sum of one poly width and one poly spacing. In a 10 nm semiconductor fabrication process (e.g., a process with a minimum available line distance/feature size of 10 nm), the CPP can be approximately equal to 60-66 nm. For comparison purposes, the CPP for a 14 nm process (e.g., a process with a minimum available line distance/feature size of 14 nm) may be about 80-90 nm.

To maintain a bit cell width of 2*CPP (e.g., 132 nm) or less for a sub-14 nm process (e.g., a 10 nm process or a 7 nm process) while improving routing among the different components of the bit cell, the techniques of the present disclosure describe (e.g., with reference to FIG. 2) a plurality of bit cell rows (e.g., a first bit cell row and a second bit cell row) that share common global word lines in a single metal layer. For example, the first global read word line, the second global read word line, and the global write word line can be located in the fourth metal layer (M4). The pitch of each global word line can be approximately 80 nm. Because the width of two bit cell rows is approximately 264 nm (e.g., 2 × 132 nm), the three global word lines can be patterned using a width less than the width of the two bit cell rows. For example, the total width occupied by the three word lines (e.g., 3 × 80 nm = 240 nm) is less than the width of two bit cell rows.
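This fit can be checked with simple arithmetic. The sketch below is illustrative only, using the example 10 nm figures quoted above (the constant names are ours); it verifies that three global word lines at an 80 nm pitch span less than two bit cell rows of 2*CPP each:

```python
# Fit check for the shared global word lines, using the example figures
# quoted above for a 10 nm process (illustrative values, not limits).

CPP_NM = 66               # contacted poly (gate) pitch, ~60-66 nm at 10 nm
BIT_CELL_WIDTH_NM = 132   # 2 * CPP per bit cell row
GLOBAL_WL_PITCH_NM = 80   # pitch of each global word line in M4
NUM_GLOBAL_WLS = 3        # RWL1, RWL2, and WWL share the M4 layer

available = 2 * BIT_CELL_WIDTH_NM               # 264 nm across two rows
required = NUM_GLOBAL_WLS * GLOBAL_WL_PITCH_NM  # 240 nm for three lines

assert BIT_CELL_WIDTH_NM == 2 * CPP_NM
assert required <= available, "global word lines would not fit in M4"
print(f"{required} nm of global word line pitch fits in {available} nm")
```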
As further described with respect to FIG. 2, the selection logic can selectively couple the global word lines to the first bit cell row or to the second bit cell row. Thus, all of the global word lines can be located in a single metal layer (M4), as opposed to one global word line per metal layer, which improves routing among the different components within the bit cells. For example, the sixth metal layer (M6) and the eighth metal layer (M8) may be relatively open for routing because each global word line is in the fourth metal layer (M4). Additionally, the global word lines can have a relatively large pitch (e.g., 80 nm), which can reduce read/write latency due to reduced word line resistive-capacitive (RC) impedance.

Referring to FIG. 2, a layout diagram 200 of a 3-port SRAM array with shared global read word lines and a shared global write word line is shown. The layout diagram 200 includes a first bit cell 202, a second bit cell 204, a third bit cell 206, and a fourth bit cell 208. Each bit cell 202-208 can have the circuit layout shown in FIGS. 1A and 1B. The first bit cell 202 and the third bit cell 206 can be included in a first array of the 3-port SRAM array, and the second bit cell 204 and the fourth bit cell 208 can be included in a second array of the 3-port SRAM array. The first array (e.g., the first and third bit cells 202, 206) may have a width equal to twice the CPP of one of the bit cells 202-208, and the second array (e.g., the second and fourth bit cells 204, 208) may also have a width equal to twice the CPP of one of the bit cells 202-208. For example, in a 10 nm semiconductor fabrication process, the first array and the second array can each have a width of approximately 132 nm. Thus, the combined width of the first array and the second array can be approximately equal to 264 nm.

As fabricated, the bit cells 202-208 may include various components/layers, such as fins (of FinFETs, including source/drain regions), transistor gates (alternatively referred to as poly lines), middle-of-line contacts (MD) for transistor source/drain regions (e.g., local interconnects), middle-of-line contacts (MP) for gates/poly lines (e.g., local interconnects), a first metal layer (M1), vias connecting MD and MP to M1 (via 0), a second metal layer (M2), vias connecting M1 to M2 (via 1), a third metal layer (M3), and vias connecting M2 to M3 (via 2).

FIG. 2 illustrates a second metal layer (M2) and a third metal layer (M3). The second metal layer (M2) may be coupled to the bit cells 202-208, and the third metal layer (M3) may be patterned over the second metal layer (M2). A first local read word line 220 can be included in the second metal layer (M2). For the bit cells 202, 206 in the first array, the first local read word line 220 may correspond to the first read word line (RWL1) 133 of FIGS. 1A and 1B. For example, the first local read word line 220 can be coupled to the gate of a transistor in the first bit cell 202 (corresponding to transistor 125 of FIGS. 1A and 1B) and can be coupled to the gate of a transistor in the third bit cell 206 (corresponding to transistor 125).

A first local write word line 222 can be included in the third metal layer (M3). For the bit cells 202, 206 in the first array, the first local write word line 222 may correspond to the write word line (WWL) 137 of FIGS. 1A and 1B. For example, the first local write word line 222 can be coupled to the gates of transistors in the first bit cell 202 (corresponding to transistors 121, 122 of FIGS. 1A and 1B) and can be coupled to the gates of transistors in the third bit cell 206 (corresponding to transistors 121, 122).

A second local read word line 224 can also be included in the second metal layer (M2). For the bit cells 202, 206 in the first array, the second local read word line 224 may correspond to the second read word line (RWL2) 134 of FIGS. 1A and 1B. For example, the second local read word line 224 can be coupled to the gate of a transistor in the first bit cell 202 (corresponding to transistor 126 of FIGS. 1A and 1B) and can be coupled to the gate of a transistor in the third bit cell 206 (corresponding to transistor 126).

A third local read word line 230 can also be included in the second metal layer (M2). For the bit cells 204, 208 in the second array, the third local read word line 230 may correspond to the first read word line (RWL1) 133 of FIGS. 1A and 1B. For example, the third local read word line 230 can be coupled to the gate of a transistor in the second bit cell 204 (corresponding to transistor 125 of FIGS. 1A and 1B) and can be coupled to the gate of a transistor in the fourth bit cell 208 (corresponding to transistor 125).

A second local write word line 232 can also be included in the third metal layer (M3). For the bit cells 204, 208 in the second array, the second local write word line 232 may correspond to the write word line (WWL) 137 of FIGS. 1A and 1B. For example, the second local write word line 232 can be coupled to the gates of transistors in the second bit cell 204 (corresponding to transistors 121, 122 of FIGS. 1A and 1B) and can be coupled to the gates of transistors in the fourth bit cell 208 (corresponding to transistors 121, 122).

A fourth local read word line 234 can also be included in the second metal layer (M2). For the bit cells 204, 208 in the second array, the fourth local read word line 234 may correspond to the second read word line (RWL2) 134 of FIGS. 1A and 1B. For example, the fourth local read word line 234 can be coupled to the gate of a transistor in the second bit cell 204 (corresponding to transistor 126 of FIGS. 1A and 1B) and can be coupled to the gate of a transistor in the fourth bit cell 208 (corresponding to transistor 126).

In a standard bit cell that includes poly gates having lengths oriented in the lateral direction, the first metal layer may have lengths oriented in the longitudinal direction, the second metal layer may have lengths oriented in the lateral direction (as illustrated in the embodiment of FIG. 2), and the third metal layer may have lengths oriented in the longitudinal direction. However, because the lengths in the third metal layer (M3) of FIG. 2 are oriented in the lateral direction, the third metal layer (M3) is a "wrong-direction" layer. As a result, the pitch of the third metal layer (M3) can be approximately equal to 126 nm. Because the first metal layer (M1) (not shown) and the second metal layer (M2) of FIG. 2 are "correct-direction" layers (e.g., layers whose lengths are oriented in the same direction as the corresponding layers in a standard bit cell), the first metal layer (M1) and the second metal layer (M2) have a relatively low pitch (e.g., approximately equal to 42 nm).

When migrating from a 14 nm process to a 10 nm process, SADP may be preferred for patterning each metal layer of the bit cells 202-208. Because SADP may not be suitable for jogs/coils, the metal layers of the bit cells 202-208 may correspond to linear-only patterns. When linear-only patterns are used at 10 nm, three independently accessible word lines (two read word lines and one write word line) can be patterned in the second and third metal layers (M2, M3) of each bit cell 202-208.

As described above, the second metal layer (M2) is a "correct-direction" layer and has a relatively low pitch. Thus, the two read word lines (RWL1, RWL2) 133, 134 can be patterned in the second metal layer (M2) without extending the width of the bit cells 202-208. For example, each of the read word lines (RWL1, RWL2) 133, 134 may have a width of about 23 nm (satisfying the pitch requirement of the second metal layer (M2)) and may be accommodated within the width of the bit cells 202-208 (e.g., 2*CPP, or 132 nm).

As described above, the third metal layer (M3) is a "wrong-direction" layer and has a relatively high pitch. Thus, a single write word line (WWL) 137 can be patterned in the third metal layer (M3) of each bit cell 202-208 without extending the width of the bit cells 202-208. Because a single write word line (WWL) 137 is patterned in the third metal layer (M3) (as opposed to the two read word lines (RWL1, RWL2) 133, 134, which would increase the width of the bit cells 202-208), the write word line (WWL) 137 can have a relatively large width. For example, the write word line (WWL) 137 can have a width of about 66 nm (satisfying the pitch requirement of the third metal layer (M3)) and can be accommodated within the width of the bit cells 202-208. The relatively large width of the write word line (WWL) 137 reduces the write latency of the bit cells 202-208. For example, the increased width of the write word line (WWL) 137 can reduce the RC impedance of the write word line (WWL) 137, resulting in reduced latency.
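As a rough consistency check on these dimensions (treating pitch as line width plus spacing), the sketch below verifies that the stated local word line widths satisfy their layers' pitches and fit within one bit cell width. The values are the illustrative figures quoted above, and the naming is ours:

```python
# Rough consistency check of the local word line dimensions quoted above,
# treating pitch as line width plus spacing (illustrative values only).

BIT_CELL_WIDTH_NM = 132  # 2 * CPP

# (layer description, line width nm, layer pitch nm, lines per bit cell)
LOCAL_WORD_LINES = [
    ("M2, read word lines RWL1/RWL2", 23, 42, 2),   # "correct-direction" layer
    ("M3, write word line WWL",       66, 126, 1),  # "wrong-direction" layer
]

for layer, width_nm, pitch_nm, count in LOCAL_WORD_LINES:
    assert width_nm <= pitch_nm, f"{layer}: width exceeds the layer pitch"
    # `count` lines at the layer pitch must span no more than the cell width.
    assert count * pitch_nm <= BIT_CELL_WIDTH_NM, f"{layer}: does not fit"
    print(f"{layer}: {count} line(s) at a {pitch_nm} nm pitch fit within "
          f"{BIT_CELL_WIDTH_NM} nm")
```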
FIG. 2 also illustrates a fourth metal layer (M4). A first global read word line 240, a global write word line 242, and a second global read word line 244 may be included in the fourth metal layer (M4). The fourth metal layer (M4) may be a "correct-direction" layer (e.g., oriented in the same manner as the corresponding layer in a standard bit cell) and may have a relatively low pitch requirement. For example, in a 10 nm fabrication process, the pitch requirement of the fourth metal layer (M4) may be about 80 nm. Thus, the pitch of each global word line 240-244 can be approximately 80 nm. Because the combined width of the first array and the second array is about 264 nm (e.g., 2 × 132 nm), the three global word lines 240-244 can be patterned using a width smaller than the combined width of the first array and the second array. For example, the total width occupied by the three global word lines 240-244 (e.g., 3 × 80 nm = 240 nm) is less than the combined width of the first and second arrays.

Row select logic 250 can be configured to control whether the global word lines 240-244 are coupled to the first array or to the second array. For example, based on a logic value (e.g., a voltage level) of a select signal, the row select logic 250 can couple the global word lines 240-244 to the corresponding local word lines 220-224 in the first array or to the corresponding local word lines 230-234 in the second array. The operation of the row select logic 250 is described in more detail with respect to FIG. 3.

The layout diagram 200 of FIG. 2 can provide improved routing among the different components within the bit cells 202-208. For example, in contrast to a bit cell architecture having one global word line in the fourth metal layer (M4), one global word line in the sixth metal layer (M6), and one global word line in the eighth metal layer (M8), the layout diagram 200 includes all three global word lines 240-244 in the fourth metal layer (M4). Thus, the upper metal layers (e.g., the sixth metal layer (M6) and the eighth metal layer (M8)) can be relatively open for routing because the global word lines 240-244 are placed in a single metal layer (e.g., the fourth metal layer (M4)). In addition, because the global word lines 240-244 are placed across the width of the two arrays (as opposed to a typical bit cell architecture in which global word lines are placed across the width of a single array), the global word lines 240-244 can have relatively large widths, which reduces read/write latency due to the reduced word line RC impedance.

Referring to FIG. 3, a particular illustrative embodiment of the row select logic 250 of FIG. 2 is illustrated. The row select logic 250 includes a first logical NAND gate 302, a second logical NAND gate 304, a third logical NAND gate 306, a first logical AND gate 312, a second logical AND gate 314, and a third logical AND gate 316.

The row select logic 250 can be configured to control whether the global word lines 240-244 are coupled to the first bit cell array (e.g., the first and third bit cells 202, 206 of FIG. 2) or to the second bit cell array (e.g., the second and fourth bit cells 204, 208 of FIG. 2). To illustrate, a select signal 320 can be provided to a first input of each of the logical NAND gates 302-306 and to a second input of each of the logical AND gates 312-316. The first global read word line 240 can be coupled to a second input of the first logical NAND gate 302 and to a first input of the first logical AND gate 312. The global write word line 242 can be coupled to a second input of the second logical NAND gate 304 and to a first input of the second logical AND gate 314. The second global read word line 244 can be coupled to a second input of the third logical NAND gate 306 and to a first input of the third logical AND gate 316.

If the first global read word line 240 has a logic high voltage level and the select signal 320 has a logic low voltage level, the first logical NAND gate 302 provides a logic high voltage level to the first local read word line 220 (e.g., to "couple" the first global read word line 240 to the first local read word line 220), and the first logical AND gate 312 provides a logic low voltage level to the third local read word line 230 (e.g., to "decouple" the first global read word line 240 from the third local read word line 230). If the first global read word line 240 has a logic high voltage level and the select signal 320 has a logic high voltage level, the first logical NAND gate 302 provides a logic low voltage level to the first local read word line 220 (e.g., to "decouple" the first global read word line 240 from the first local read word line 220), and the first logical AND gate 312 provides a logic high voltage level to the third local read word line 230 (e.g., to "couple" the first global read word line 240 to the third local read word line 230).

If the global write word line 242 has a logic high voltage level and the select signal 320 has a logic low voltage level, the second logical NAND gate 304 provides a logic high voltage level to the first local write word line 222 (e.g., to "couple" the global write word line 242 to the first local write word line 222), and the second logical AND gate 314 provides a logic low voltage level to the second local write word line 232 (e.g., to "decouple" the global write word line 242 from the second local write word line 232).
If the global write word line 242 has a logic high voltage level and the select signal 320 has a logic high voltage level, the second logical NAND gate 304 provides a logic low voltage level to the first local write word line 222 (e.g., to "decouple" the global write word line 242 from the first local write word line 222), and the second logical AND gate 314 provides a logic high voltage level to the second local write word line 232 (e.g., to "couple" the global write word line 242 to the second local write word line 232).

If the second global read word line 244 has a logic high voltage level and the select signal 320 has a logic low voltage level, the third logical NAND gate 306 provides a logic high voltage level to the second local read word line 224 (e.g., to "couple" the second global read word line 244 to the second local read word line 224), and the third logical AND gate 316 provides a logic low voltage level to the fourth local read word line 234 (e.g., to "decouple" the second global read word line 244 from the fourth local read word line 234). If the second global read word line 244 has a logic high voltage level and the select signal 320 has a logic high voltage level, the third logical NAND gate 306 provides a logic low voltage level to the second local read word line 224 (e.g., to "decouple" the second global read word line 244 from the second local read word line 224), and the third logical AND gate 316 provides a logic high voltage level to the fourth local read word line 234 (e.g., to "couple" the second global read word line 244 to the fourth local read word line 234).

The row select logic 250 may thus enable the global word lines 240-244 to be selectively coupled to the respective local word lines 220-224, 230-234. The row select logic 250 may enable the global word lines 240-244 to be placed in a single metal layer (M4), as opposed to three different metal layers (e.g., the fourth metal layer (M4), the sixth metal layer (M6), and the eighth metal layer (M8)). Thus, the upper metal layers (e.g., the sixth metal layer (M6) and the eighth metal layer (M8)) can be relatively open for routing because the global word lines 240-244 are placed in a single metal layer (e.g., the fourth metal layer (M4)). The row select logic 250 can also enable the global word lines 240-244 to have a relatively large pitch, which can reduce read/write latency due to reduced word line RC impedance.
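Viewed behaviorally, each global word line and its NAND/AND gate pair acts as a one-to-two demultiplexer steered by the select signal 320. The following sketch (a minimal model under our own naming; only the asserted-global-line cases enumerated above are exercised) captures that behavior:

```python
# Behavioral sketch of the row select logic 250 of FIG. 3, modeling the
# NAND/AND gate pairs exactly as enumerated above. The function names are
# ours; only the asserted-global-line cases described in the text are
# exercised (the disclosure does not enumerate the de-asserted cases).

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def and_(a: int, b: int) -> int:
    return 1 if (a and b) else 0

def couple(select: int, global_wl: int):
    """Fan one global word line out to a local line in each array.

    A NAND gate (302/304/306) drives the first array's local line and an
    AND gate (312/314/316) drives the second array's local line, so a low
    select signal 320 steers an asserted global line to the first array
    and a high select signal steers it to the second array.
    """
    local_first = nand(select, global_wl)   # e.g., local lines 220/222/224
    local_second = and_(select, global_wl)  # e.g., local lines 230/232/234
    return local_first, local_second

# Select low: an asserted global word line reaches only the first array.
assert couple(select=0, global_wl=1) == (1, 0)
# Select high: the same global word line reaches only the second array.
assert couple(select=1, global_wl=1) == (0, 1)
```

In hardware, of course, this steering is performed by the gates themselves rather than by software; the sketch only fixes the intended truth behavior of the enumerated cases.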
Referring to FIG. 4, a flow diagram of a particular illustrative embodiment of a method 400 of operating a 3-port SRAM array having shared global read word lines and a shared global write word line is shown. The method can be performed using the row selection logic 250 of FIGS. 2 and 3.

The method 400 includes, at 402, receiving a selection signal. For example, referring to FIG. 3, the row selection logic 250 can receive the select signal 320. The select signal 320 can be provided to a first input of each of the logical NAND gates 302-306 and to a second input of each of the logical AND gates 312-316.

At 404, if the select signal has a first logic value, the first global read word line, the second global read word line, and the global write word line can be coupled to the first row of bit cells. For example, referring to FIGS. 2 and 3, if the first global read word line 240 has a logic high voltage level and the select signal 320 has a logic low voltage level, the first logical NAND gate 302 provides a logic high voltage level to the first local read word line 220 (e.g., to "couple" the first global read word line 240 to the first local read word line 220), and the first logical AND gate 312 provides a logic low voltage level to the third local read word line 230 (e.g., to "decouple" the first global read word line 240 from the third local read word line 230). The first local read word line 220 is coupled to the first row of bit cells (e.g., the first bit cell array of FIG. 2).

As another example, if the global write word line 242 has a logic high voltage level and the select signal 320 has a logic low voltage level, the second logical NAND gate 304 provides a logic high voltage level to the first local write word line 222 (e.g., to "couple" the global write word line 242 to the first local write word line 222), and the second logical AND gate 314 provides a logic low voltage level to the second local write word line 232 (e.g., to "decouple" the global write word line 242 from the second local write word line 232). The first local write word line 222 is coupled to the first row of bit cells (e.g., the first bit cell array of FIG. 2). As another example, if the second global read word line 244 has a logic high voltage level and the select signal 320 has a logic low voltage level, the third logical NAND gate 306 provides a logic high voltage level to the second local read word line 224 (e.g., to "couple" the second global read word line 244 to the second local read word line 224), and the third logical AND gate 316 provides a logic low voltage level to the fourth local read word line 234 (e.g., to "decouple" the second global read word line 244 from the fourth local read word line 234). The second local read word line 224 is coupled to the first row of bit cells (e.g., the first bit cell array of FIG. 2).

At 406, if the select signal has a second logic value, the first global read word line, the second global read word line, and the global write word line can be coupled to the second row of bit cells. For example, referring to FIGS. 2 and 3, if the first global read word line 240 has a logic high voltage level and the select signal 320 has a logic high voltage level, the first logical NAND gate 302 provides a logic low voltage level to the first local read word line 220 (e.g., to "decouple" the first global read word line 240 from the first local read word line 220), and the first logical AND gate 312 provides a logic high voltage level to the third local read word line 230 (e.g., to "couple" the first global read word line 240 to the third local read word line 230). The third local read word line 230 is coupled to the second row of bit cells (e.g., the second bit cell array of FIG. 2).

As another example, if the global write word line 242 has a logic high voltage level and the select signal 320 has a logic high voltage level, the second logical NAND gate 304 provides a logic low voltage level to the first local write word line 222 (e.g., to "decouple" the global write word line 242 from the first local write word line 222), and the second logical AND gate 314 provides a logic high voltage level to the second local write word line 232 (e.g., to "couple" the global write word line 242 to the second local write word line 232). The second local write word line 232 is coupled to the second row of bit cells (e.g., the second bit cell array of FIG. 2). As another example, if the second global read word line 244 has a logic high voltage level and the select signal 320 has a logic high voltage level, the third logical NAND gate 306 provides a logic low voltage level to the second local read word line 224 (e.g., to "decouple" the second global read word line 244 from the second local read word line 224), and the third logical AND gate 316 provides a logic high voltage level to the fourth local read word line 234 (e.g., to "couple" the second global read word line 244 to the fourth local read word line 234). The fourth local read word line 234 is coupled to the second row of bit cells (e.g., the second bit cell array of FIG. 2).

The first global read word line 240, the global write word line 242, and the second global read word line 244 are located in a common metal layer (e.g., the fourth metal layer (M4) of FIG. 2). Thus, the method 400 of FIG. 4 provides a technique for coupling the global word lines 240-244 to the respective local word lines 220-224, 230-234 such that the global word lines 240-244 can be placed in a common metal layer.

Referring to FIG. 5, a block diagram of a particular illustrative embodiment of an electronic device is depicted and generally designated 500. The electronic device 500 includes a processor 510, such as a digital signal processor (DSP) or central processing unit (CPU), coupled to a memory 532.

The processor 510 can be coupled to an SRAM device 564 that includes an array of bit cells having shared global word lines. For example, the SRAM device 564 can include the bit cells 202-208 of FIG. 2 and can include a metal layer configuration as described with respect to FIG. 2. In a particular embodiment, the SRAM device 564 may also include the row select logic 250 of FIGS. 2-3. In another particular embodiment, the functionality of the row select logic 250 may be implemented by the processor 510. It should be noted that although FIG. 5 illustrates the SRAM device 564 coupled to the processor 510, this should not be considered limiting. An SRAM device (such as the SRAM device 564) in accordance with the present disclosure may be included in any type of memory of any type of electronic device.

FIG. 5 shows a display controller 526 coupled to the processor 510 and to a display 528. An encoder/decoder (CODEC) 534 can also be coupled to the processor 510. A speaker 536 and a microphone 538 can be coupled to the CODEC 534. FIG. 5 also indicates that a wireless controller 540 can be coupled to the processor 510 and to an antenna 542. In a particular embodiment, the processor 510, the display controller 526, the memory 532, the CODEC 534, and the wireless controller 540 are included in a system-in-package or system-on-chip device 522 (e.g., a mobile station modem (MSM)). In a particular embodiment, an input device 530 and a power supply 544 are coupled to the system-on-chip device 522. Moreover, in a particular embodiment, as illustrated in FIG. 5, the display 528, the input device 530, the speaker 536, the microphone 538, the antenna 542, and the power supply 544 are external to the system-on-chip device 522. However, each of the display 528, the input device 530, the speaker 536, the microphone 538, the antenna 542, and the power supply 544 can be coupled to a component of the system-on-chip device 522, such as an interface or a controller.

Although the SRAM device 564 is depicted in the wireless device 500 of FIG. 5, in other embodiments the SRAM device 564 can be included in other devices.
As a non-limiting example, SRAM device 564 can be included in a set top box, an entertainment unit, a navigation device, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, Digital music player, portable music player, video player, digital video player, digital video disc (DVD) player, portable digital video player, or any other device.In conjunction with the described embodiments, an apparatus includes a first device for performing a read operation configured to be selectively coupled to a first row bit unit and a second row bit unit. For example, a first device for performing a read operation can include the first global read word line 240 of FIGS. 2-3, the SRAM device 564 of FIG. 5, one or more other devices configured to perform a read operation, or any thereof combination.The apparatus also includes a second device for performing a read operation configured to be selectively coupled to the first row bit unit and the second row bit unit. For example, a second device for performing a read operation can include the second global read word line 244 of FIGS. 2-3, the SRAM device 564 of FIG. 5, one or more other devices configured to perform a read operation, or any thereof combination.The apparatus also includes means for performing a write operation configured to be selectively coupled to the first row bit unit and the second row bit cell. For example, means for performing a write operation may include global write word line 242 of FIGS. 2-3, SRAM device 564 of FIG. 5, one or more other devices configured to perform a write operation, or any combination thereof. A first device for performing a read operation, a second device for performing a read operation, and a device for performing a write operation may be located in a common metal layer (eg, the fourth metal layer (M4) of FIG. 2).The devices and functionality disclosed above can be designed and configured in a computer file (eg, RTL, GDSII, GERBER, etc.) stored on a computer readable medium. Some or all of such documents may be provided to a manufacturing process personnel who manufacture devices based on such documents. The resulting product includes a semiconductor wafer that is subsequently diced into semiconductor dies and packaged into semiconductor chips. These chips can be used in electronic devices. FIG. 6 depicts a particular illustrative embodiment of an electronic device manufacturing process 600. For example, fabrication process 600 can be used to fabricate an electronic device that includes a bit cell array in accordance with the shared global word line technique described with respect to Figures 2-3.Physical device information 602 is received at manufacturing process 600, such as at research computer 606. The physical device information 602 can include design information representative of at least one physical property of the bit cell array in accordance with the shared global word line technique described with respect to Figures 2-3. For example, physical device information 602 can include physical parameters, material properties, and structural information that are input via a user interface 604 that is coupled to research computer 606. Research computer 606 includes a processor 608, such as one or more processing cores, coupled to a computer readable medium (eg, a non-transitory computer readable medium), such as memory 610. 
Memory 610 can store computer readable instructions that can be executed to cause processor 608 to transform physical device information 602 to follow a certain file format and generate library file 612.In a particular embodiment, library file 612 includes at least one data file including transformed design information. For example, library file 612 can include a library of bit cells that are provided for use with electronic design automation (EDA) tool 620, including a bit cell array in accordance with the shared global word line technique described with respect to Figures 2-3.Library file 612 can be used in conjunction with EDA tool 620 at design computer 614, which includes a processor 616, such as one or more processing cores, coupled to memory 618. EDA tool 620 can be stored as processor-executable instructions at memory 618 to enable a user of design computer 614 to design a bit cell array of library file 612 that includes shared global word line techniques as described with respect to Figures 2-3. Circuit. For example, a user of design computer 614 can input circuit design information 622 via user interface 624 coupled to design computer 614. Circuit design information 622 may include design information representative of at least one physical property of a bit cell array in accordance with the shared global word line technique described with respect to FIGS. 2-3. To illustrate, the circuit design properties may include the identification of a particular circuit and its relationship to other components in the circuit design, positioning information, feature size information, interconnect information, or representation of a shared global word line technique as described with respect to Figures 2-3. Additional information on the physical properties of the bit cell array.Design computer 614 can be configured to transform design information (including circuit design information 622) to follow a certain file format. To illustrate, the file format can include a database binary file format, such as a Graphics Data System (GDSII) file format, that represents planar geometry, textual indicia, and other information about the circuit layout in a layered format. Design computer 614 can be configured to generate a data file including the transformed design information, such as a GDSII file 626 including information describing the bit cell array according to the shared global word line technique described with respect to Figures 2-3, as well as other circuitry or information. . To illustrate, the data file can include information corresponding to a system on a chip (SOC) including a bit cell array in accordance with the shared global word line technique described with respect to FIGS. 2-3, and also including additional electronic circuitry within the SOC And components.The GDSII file 626 can be received at the manufacturing process 628 to fabricate a bit cell array according to the shared global word line technique described with respect to Figures 2-3 in accordance with the transformed information in the GDSII file 626. For example, the device fabrication process can include providing GDSII file 626 to mask manufacturer 630 to create one or more masks, such as a mask for use with lithography processing, which is illustrated as representative mask 632. Mask 632 can be used to create one or more wafers 633 during the fabrication process, which can be tested and divided into dies, such as representative dies 636. 
The die 636 includes circuitry including a device that includes a bit cell array in accordance with the shared global word line technique described with respect to FIGS. 2-3.

For example, the manufacturing process 628 can include a processor 634 and a memory 635 to initiate and/or control the manufacturing process 628. The memory 635 can include executable instructions, such as computer-readable instructions or processor-readable instructions. The executable instructions can include one or more instructions that are executable by a computer, such as the processor 634.

The manufacturing process 628 can be implemented by a fully automated or partially automated fabrication system. For example, the manufacturing process 628 can be automated according to a schedule. The fabrication system can include fabrication equipment (e.g., processing tools) to perform one or more operations to form a semiconductor device. For example, the fabrication equipment can be configured to deposit one or more materials using chemical vapor deposition (CVD) and/or physical vapor deposition (PVD), pattern materials using a single-mask or multiple-mask litho-etch process (e.g., dual-mask LELE), pattern materials using a litho-freeze-litho-etch (LFLE) process, pattern materials using a self-aligned double patterning (SADP) process, epitaxially grow one or more materials, conformally deposit one or more materials, apply a hard mask, apply an etching mask, perform etching, perform planarization, form a dummy gate stack, form a gate stack, perform a standard clean 1 type cleaning, etc. In a particular embodiment, the manufacturing process 628 corresponds to a semiconductor manufacturing process associated with a technology node smaller than 14 nm (e.g., 10 nm, 7 nm, etc.). The specific process or combination of processes used to fabricate a device (e.g., a bit cell array including the shared global word line technique as described with respect to FIGS. 2-3) may be based on design constraints and available materials/equipment. Thus, in particular embodiments, different processes may be used during fabrication of the device than are described herein.

The fabrication system (e.g., an automated system that performs the manufacturing process 628) may have a distributed architecture (e.g., a hierarchy). For example, the fabrication system can include one or more processors, such as the processor 634, one or more memories, such as the memory 635, and/or controllers that are distributed according to the distributed architecture. The distributed architecture can include a high-level processor that controls or initiates operations of one or more low-level systems. For example, a high-level portion of the manufacturing process 628 can include one or more processors, such as the processor 634, and the low-level systems can each include or be controlled by one or more corresponding controllers. A particular controller of a particular low-level system can receive one or more instructions (e.g., commands) from a particular high-level system, can issue sub-commands to subordinate modules or processing tools, and can in turn communicate status data back to the particular high-level system. Each of the one or more low-level systems may be associated with one or more corresponding pieces of fabrication equipment (e.g., processing tools). In a particular embodiment, the fabrication system can include multiple processors that are distributed in the fabrication system.
For example, the controller of a low-level system component can include a processor, such as the processor 634. Alternatively, the processor 634 may be a part of a high-level system, subsystem, or component of the fabrication system. In another embodiment, the processor 634 includes distributed processing at various levels and components of the fabrication system.

The executable instructions included in the memory 635 can enable the processor 634 to form (or to initiate formation of) a bit cell array in accordance with the shared global word line technique described with respect to FIGS. 2-3. The die 636 can be provided to a packaging process 638 where the die 636 is incorporated into a representative package 640. For example, the package 640 can include the single die 636 or multiple dies, such as a system-in-package (SiP) arrangement. The package 640 can be configured to conform to one or more standards or specifications, such as Joint Electron Device Engineering Council (JEDEC) standards.

Information regarding the package 640 can be distributed to various product designers, such as via a component library stored at a computer 646. The computer 646 can include a processor 648, such as one or more processing cores, coupled to a memory 650. A printed circuit board (PCB) tool can be stored as processor-executable instructions at the memory 650 to process PCB design information 642 received from a user of the computer 646 via a user interface 644. The PCB design information 642 may include physical positioning information of a packaged semiconductor device on a circuit board, the packaged semiconductor device corresponding to the package 640 including a bit cell array in accordance with the shared global word line technique described with respect to FIGS. 2-3.

The computer 646 can be configured to transform the PCB design information 642 to generate a data file, such as a GERBER file 652, with data that includes physical positioning information of a packaged semiconductor device on a circuit board as well as the layout of electrical connections such as traces and vias, where the packaged semiconductor device corresponds to the package 640 including a bit cell array in accordance with the shared global word line technique described with respect to FIGS. 2-3. In other embodiments, the data file generated from the transformed PCB design information may have a format other than the GERBER format.

The GERBER file 652 can be received at a board assembly process 654 and used to create PCBs, such as a representative PCB 656, manufactured in accordance with the design information stored within the GERBER file 652. For example, the GERBER file 652 can be uploaded to one or more machines to perform various steps of a PCB production process. The PCB 656 can be populated with electronic components, including the package 640, to form a representative printed circuit assembly (PCA) 658.

The PCA 658 can be received at a product manufacturing process 660 and integrated into one or more electronic devices, such as a first representative electronic device 662 and a second representative electronic device 664. For example, the first representative electronic device 662, the second representative electronic device 664, or both, may include or correspond to the electronic device 500 of FIG. 5, or a component thereof, such as the SRAM device 564.
As an illustrative, non-limiting example, the first representative electronic device 662, the second representative electronic device 664, or both, may include a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a satellite phone, a computer, a tablet device, a portable computer, a processor (or other electronic device) within a vehicle, or a desktop computer. Alternatively or additionally, the first representative electronic device 662, the second representative electronic device 664, or both, may include a set-top box, an entertainment unit, a navigation device, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a video player, a digital video player, a digital video disc (DVD) player, a portable digital video player, any other device that stores or retrieves data or computer instructions, or a combination thereof, into which a bit cell array in accordance with the shared global word line technique described with respect to FIGS. 2-3 is integrated. As another illustrative, non-limiting example, one or more of the electronic devices 662 and 664 may include remote units, such as mobile phones, hand-held personal communication systems (PCS) units, portable data units such as personal data assistants, Global Positioning System (GPS)-enabled devices, navigation devices, fixed location data units such as meter reading equipment, or any other device that stores or retrieves data or computer instructions, or any combination thereof. Although FIG. 6 illustrates remote units according to teachings of the present disclosure, the disclosure is not limited to these illustrated units. Embodiments of the present disclosure may be suitably employed in any device that includes active integrated circuitry including memory and on-chip circuitry.

A device that includes a bit cell array in accordance with the shared global word line technique described with respect to FIGS. 2-3 may be fabricated, processed, and incorporated into an electronic device, as described in the illustrative process 600. One or more aspects of the various embodiments disclosed with respect to FIGS. 1A-6 may be included at various processing stages, such as within the library file 612, the GDSII file 626 (e.g., a file having the GDSII format), and the GERBER file 652 (e.g., a file having the GERBER format), as well as stored at the memory 610 of the research computer 606, the memory 618 of the design computer 614, the memory 650 of the computer 646, the memory of one or more other computers or processors (not shown) used at the various stages (such as at the board assembly process 654), and also incorporated into one or more other physical embodiments such as the mask 632, the die 636, the package 640, the PCA 658, other products such as prototype circuits or devices (not shown), or any combination thereof. Although various representative stages of production from a physical device design to a final product are depicted, in other embodiments fewer stages may be used or additional stages may be included. Similarly, the process 600 may be performed by a single entity or by one or more entities performing various stages of the process 600.

Although one or more of FIGS. 1A-6 may illustrate systems, apparatuses, and/or methods according to the teachings of the present disclosure, the disclosure is not limited to these illustrated systems, apparatuses, and/or methods.
Embodiments of the present disclosure may be suitably employed in any device that includes integrated circuitry including memory, a processor, and on-chip circuitry. One or more functions or components of any of FIGS. 1A-6 as illustrated or described herein may be combined with one or more other portions of another of FIGS. 1A-6. Accordingly, no single embodiment described herein should be construed as limiting, and embodiments of the disclosure may be suitably combined without departing from the teachings of the disclosure.

The various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software executed by a processor, or combinations of both. Various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or processor-executable instructions depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of non-transitory storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). The ASIC may reside in a computing device or a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a computing device or a user terminal.

The previous description of the disclosed embodiments is provided to enable a person skilled in the art to make or use the disclosed embodiments. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other embodiments without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope possible consistent with the principles and novel features as defined by the appended claims. |
A system 400 and method 1200 are disclosed for a fast-locking (e.g., within 1.5 sync bit times, or the first data transition) clock and data recovery (CDR) system used in high-speed data communications applications (e.g., ASIC and microprocessor chips). The CDR circuit takes multiple (e.g., 8) phases of the local clock, which are offset (e.g., by 45 degrees), uses the multiple phases to latch the state of data at multiple times, and uses the latched data to determine which of the multiple phases captured a data transition. The CDR circuit compares the indicated phase to the phase used to capture a previous data transition and uses such information to produce a stable selection of a clock phase. The selected clock phase is then employed to provide recovered clock and data signals (CLK_OUT and DATA_OUT) in association with the incoming serial data stream, independent of jitter and free of metastable conditions. |
What is claimed is: 1. A system for producing recovered clock and data signals for serial data communications operations, comprising: a 1<st >logic system operable to receive a decoded single ended serial data stream, and a plurality of clock phases, which are evenly spaced and offset between successive phases derived from the local clock signal, wherein the 1<st >logic system is configured to detect whether a transition of the data stream took place corresponding with one of the plurality of clock phases and independent of an event caused by metastability, to generate one of a corresponding plurality of data transition detections based upon the serial data stream, and the plurality of clock phases; a 2<nd >logic system operable to receive a plurality of data transition detections, a plurality of clock phases, and a plurality of previous clock phase selections, to determine which phases corresponded to a data stream transition, to compare the new data transition to a last phase selection transition, and to generate a plurality of phase determinations based upon the comparisons; a 3<rd >logic system operable to receive the plurality of phase determinations, to apply amplification, hysteresis and feedback, and to generate a plurality of processed phase selections based upon the plurality of phase determinations, and an elimination of metastable conditions and multiple phase selections; and a phase selection logic operable to receive the plurality of clock phase signals, and to receive and to use the plurality of processed phase selections to make a best phase selection which is used to securely latch and recover the clock and data signals, whereby the recovered clock and data signals are free of metastable conditions and multiple phase selections, and independent of jitter and wander. 2. The system of claim 1, wherein the 1<st >logic system of the CDR circuit comprises: a 1<st >latch, comprising a plurality of FFs individually adapted to receive the decoded single ended serial data stream, and one of a plurality of clock phase signals, and generate one of a corresponding plurality of data state indications, whereby the 1<st >latch FFs are individually triggered by one of the plurality of clock phases, to latch a logical state of the data stream at a transition of the one of the plurality of clock phase signals corresponding therewith; and a 1<st >logic component, comprising a plurality of XOR gates and OR gates, individually adapted to receive one of the plurality of data state indications from the 1<st >latch, and generate one of a corresponding plurality of data transition detections. 3.
The system of claim 1, wherein the 2<nd >logic system of the CDR circuit comprises: a 2<nd >latch, comprising a plurality of FFs individually adapted to receive one of the plurality of data transition detections, and the one of the corresponding plurality of clock phase signals, and generate one of a corresponding plurality of latched transition detections, whereby the 2<nd >latch FFs are individually triggered and latched by the one of the plurality of clock phase signals corresponding therewith; and a 2<nd >logic component, comprising a plurality of AND gates operable to receive the plurality of latched transition detections from the 2<nd >latch, and a plurality of previous clock phase selections in the form of feedback, to determine which phases detected a data stream transition, to select a new phase associated with the transition detections, and to compare the new data transition to a last phase selection, and to generate a plurality of phase determinations based upon the comparisons. 4. The system of claim 1, wherein the 3<rd >logic system of the CDR circuit comprises: a 3<rd >latch, comprising a plurality of FFs individually adapted to receive one of the plurality of phase determinations, and the one of the corresponding plurality of clock phase signals, and generate one of a corresponding plurality of latched phase determinations, whereby the 3<rd >latch FFs are individually triggered and latched by the one of the plurality of clock phase signals corresponding therewith; and a 3<rd >logic component, comprising a plurality of AND gates and FFs individually adapted to receive one of the plurality of latched phase determinations, to eliminate metastable conditions and multiple phase selections by the amplification, hysteresis and feedback of the selections, and to generate a plurality of processed phase selections based upon the plurality of latched phase determinations, and the elimination of metastable conditions and multiple phase selections. 5. The system of claim 1, wherein the plurality of clock phases, which are evenly spaced and offset between successive phases derived from the local clock signal, comprises 8 phases. 6.
A system for producing recovered clock and data signals for serial data communications operations, comprising: a 1<st >logic system operable to receive a decoded single ended serial data stream, and a plurality of clock phases, which are evenly spaced and offset between successive phases derived from the local clock signal, wherein the 1<st >logic system is configured to detect whether a transition of the data stream took place corresponding with one of the plurality of clock phases and to generate one of a corresponding plurality of data transition detections based upon the serial data stream, and the plurality of clock phases; a 2<nd >logic system operable to receive a plurality of data transition detections, a plurality of clock phases, and a plurality of previous clock phase selections, to determine which phases corresponded to a data stream transition, to compare the new data transition to a last phase selection transition to produce a plurality of phase determinations based upon the comparisons, and to generate a plurality of phase selections based upon the plurality of data transition detections, the plurality of clock phases, and the plurality of previous clock phase selections; and a phase selection logic operable to receive the plurality of clock phase signals, and to receive and to use the plurality of phase selections to make a best phase selection which is used to securely latch and recover the clock and data signals, whereby the recovered clock and data signals are free of metastable conditions and multiple phase selections, and independent of jitter and wander. 7. The system of claim 6, wherein the 1<st >logic system of the CDR circuit comprises: a 1<st >latch, comprising a plurality of FFs individually adapted to receive the decoded single ended serial data stream, and one of a plurality of clock phase signals, and generate one of a corresponding plurality of data state indications, whereby the 1<st >latch FFs are individually triggered by one of the plurality of clock phases, to latch a logical state of the data stream at a transition of the one of the plurality of clock phase signals corresponding therewith; and a 1<st >logic component, comprising a plurality of XOR gates and OR gates, individually adapted to receive one of the plurality of data state indications from the 1<st >latch, and generate one of a corresponding plurality of data transition detections. 8.
The system of claim 6, wherein the 2<nd >logic system of the CDR circuit comprises: a 2<nd >latch, comprising a plurality of FFs individually adapted to receive one of the plurality of data transition detections, and the one of the corresponding plurality of clock phase signals, and generate one of a corresponding plurality of latched transition detections, whereby the 2<nd >latch FFs are individually triggered and latched by the one of the plurality of clock phase signals corresponding therewith; a 2<nd >logic component, comprising a plurality of AND gates operable to receive the plurality of latched transition detections from the 2<nd >latch, and a plurality of previous clock phase selections in the form of feedback, to determine which phases detected a data stream transition, to select a new phase associated with the transition detections, and to compare the new data transition to a last phase selection, and to generate a plurality of phase determinations based upon the determinations and comparisons; and a 3<rd >latch, comprising a plurality of FFs individually adapted to receive one of the plurality of phase determinations, and the one of the corresponding plurality of clock phase signals, and generate one of the corresponding plurality of phase selections, whereby the 3<rd >latch FFs are individually triggered and latched by the one of the plurality of clock phase signals corresponding therewith. 9. A method of fast locking and recovery of a clock and data signal from a serial data stream in a communications device which has a high jitter tolerance and requires a small gate count implementation, while eliminating the effects of metastability, comprising the steps of: receiving a decoded single-ended serial communications data stream into a 1<st >logic system; receiving a plurality of clock phases which are evenly spaced and offset between successive clock phases; latching the logical state of the data stream into a 1<st >logic system, corresponding in time to each phase of the plurality of clock phases, thereby recording a plurality of logical states of the data stream, individually corresponding with one of the plurality of clock phase transitions; detecting whether there was a data transition corresponding to one of the plurality of clock phases, by examining whether there was a change of state of the data stream recorded between successive phases; latching, in a 2<nd >logic system, the results of the data transition detections at a clock phase which is offset by +3 clock phases, thereby offsetting the phase selection to about the middle of the data waveform to avoid the effects of jitter or metastability; determining the quantity of phases which were detected and which phases of the plurality of clock phases have detected a new data transition; comparing the results of the new data transition determinations to the last phase selection, thereby producing a plurality of phase determination results; latching, in a 3<rd >logic system, the comparison phase determination results at a clock phase which is offset by +4 clock phases, thereby offsetting the phase selection to about the middle of the data waveform to avoid the effect of jitter or metastability, and compensate for calculation time to provide a plurality of latched phase determinations; processing the plurality of latched phase determination results through a plurality of logic circuits individually adapted to eliminate the effects of metastable conditions and multiple phase selections, thereby producing a plurality of
acceptable phase selections; selecting a single phase of the plurality of acceptable phase selections to latch the data and synchronize the clock; latching the data stream data and synchronizing the clock with the single phase selection; and outputting a recovered clock and data signal within one clock data cycle, which is free of metastable conditions and independent of jitter and wander, while maintaining a small and simple CDR design. 10. The method of claim 9, wherein detecting whether there was a data transition corresponding to one of the plurality of clock phases, by examining whether there was a change of state of the data stream recorded between successive phases comprises: a) resetting a phase counter and a data transition counter to a zero count; b) determining if a data transition occurred before a current phase, by comparing the last state of the data stream detected and the current state; c) outputting a logical "1" state, which is to be latched at a clock phase (N+3), if the determination indicated that a transition was detected; d) incrementing the data transition counter; e) otherwise, outputting a logical "0" state, which is to be latched at the clock phase (N+3), if the determination indicated that a transition was not detected; f) determining if the current phase count is at a maximum count of 8; g) incrementing the phase counter if the phase count is not at a maximum count of 8, and repeating steps (b) through (f) until the phase count is equal to 8; and h) ending the data transition detection operation when the maximum phase count is achieved. 11. The method of claim 9, wherein determining the quantity of phases which were detected and which phases of the plurality of clock phases have detected a new data transition comprises: determining if any data transitions were detected; selecting the previous phase if it is determined that there were no data transitions detected, and ending the quantity determination operation; otherwise, determining if one data transition was detected; selecting the one phase indicated if it is determined that one data transition was detected, and ending the quantity determination operation; otherwise, determining if 2 data transitions were detected; selecting the first of the 2 phases indicated if it is determined that 2 data transitions were detected, and ending the quantity determination operation; otherwise, determining if 3 data transitions were detected; selecting the center phase of the 3 phases indicated if it is determined that 3 data transitions were detected, and ending the quantity determination operation; and otherwise, selecting the previous phase if >3 data transitions were detected, and ending the quantity determination operation. 12.
The method of claim 9, wherein comparing the results of the new data transition determinations to the last phase selection, thereby producing a plurality of phase determination results, comprises: a) comparing the new data transition which was detected to the last phase selected; b) determining if there was a phase change of >2 phases between the new data transition and the last phase selected, if the result of the comparison step (a) indicated a case "B" situation, in which the data transition was later than the transition of the last phase; c) outputting a logic comparison result for a new phase selection if it is determined in step (b) that there was not a phase change of >2 phases between the new data transition and the last phase selected, and ending the phase comparison operation; d) outputting a logic comparison result for a last phase selection if it is determined in step (b) that there was a phase change of >2 phases between the new data transition and the last phase selected, and ending the phase comparison operation; e) determining if there was a phase change of >-1 phases between the last phase selected and the new data transition, if the result of the comparison step (a) indicated a case "C" situation, in which the data transition was earlier than the transition of the last phase; f) outputting a logic comparison result for a new phase selection if it is determined in step (e) that there was not a phase change of >-1 phases between the last phase selected and the new data transition, and ending the phase comparison operation; g) processing the transition detections through a logic circuit which eliminates phase change indications of >-1 phase changes, if it is determined in step (e) that there was a phase change of >-1 phases between the last phase selected and the new data transition, and outputting a logic comparison result for a last phase selection, and ending the phase comparison operation; and h) otherwise, outputting a logic comparison result for a new phase selection, when the result of the comparison step (a) indicates a case "A" situation, in which the data transition occurred at about the same time as the transition of the last phase selection, and ending the phase comparison operation. 13.
The method of claim 9, wherein processing the plurality of latched phase determination results through a plurality of logic circuits individually adapted to eliminate the effects of metastable conditions and multiple phase selections, thereby producing a plurality of acceptable phase selections, comprises: amplifying and applying hysteresis to the PHT(N) logic results output, which may contain the effects of metastable conditions; determining if a PHT(N) phase determination result input has an amplitude greater than the logic gate switching threshold voltage; outputting a logical "1" state, which is to be latched at a clock phase (N+4), if the determination indicated that the logic gate switching threshold voltage was achieved, thereby indicating that the current phase being processed was acceptable as a potential phase selection; otherwise, outputting a logical "0" state, which is to be latched at the clock phase (N+4), if the determination indicated that the logic gate switching threshold voltage was not achieved, thereby indicating that the current phase being processed was unacceptable as a potential phase selection; continuing to amplify the logic results through additional gates; applying feedback to the logic gates to hold the state of the acceptable phase selection; disabling the next phase PH(N+1) with the feedback and logic, to eliminate the possibility of multiple phase selections; and latching the final phase selection result with the PH(N+2) clock phase. |
TECHNICAL FIELD OF INVENTION
The present invention relates generally to serial data communication and transmission applications in the manufacture of integrated circuits needed as a physical interface to any type of serial bus (in this example, USB). More particularly, the present invention relates to clock and data recovery logic for a serial data stream, which supplies a sync lock within 1.5 bit times, ensuring clock and data information is recovered in these applications. The CDR function is implemented as a plesiochronous technique with no feedback to a PLL. It also has no lock detection, loss-of-lock detection, or loss-of-sync detection. In the intended application, those functions are integrated into the logic coupled to the recovered CLK and DATA.

BACKGROUND OF THE INVENTION
With the recent increased speed of computers and the need for high-performance peripherals, the use of high-speed serial data communications applications in integrated circuits built to physically interface to any given bus has increased correspondingly.

USB (Universal Serial Bus) 1.1 has been the de facto external connectivity standard between computers and their peripherals in serial communications up to 12 Mbps (million bits per second). As the need for faster communications and higher-performance peripherals has grown, computer and peripheral manufacturers have responded with a new higher-speed standard: USB 2.0.

USB 2.0 increases the device data throughput up to 480 Mbps, 40 times faster than USB 1.1 devices, while maintaining or improving on other USB 1.1 specifications such as the Microsoft Plug and Play feature, and numerous other technical specifications, some of which will be discussed in relation to the present invention. USB 2.0 even challenges FireWire (IEEE 1394), currently at 400 Mbps, as the serial interface of the future. Three speed modes are available under the new USB 2.0 standard: high-speed (480 Mbps), full-speed (12 Mbps), and low-speed (1.5 Mbps).

Conventionally, an incoming serial data stream may be NRZI (Non-Return-to-Zero Inverted) encoded and bit-stuffed. NRZI is a data transmission method in which the polarity of the bit is reversed whenever a 0 bit is encountered, and a static voltage level is transmitted whenever a 1 bit is encountered, as illustrated in FIG. 1 and designated at reference numeral 110. NRZI thus uses the presence or absence of a transition to signify a bit (indicating a logical 0 by inverting the state). Combined with bit-stuffing, where an extra 0 bit is inserted after every six consecutive 1 bits, this data encoding causes a guaranteed transition every 7 bit times even when a data payload would be all 1 bits (a brief sketch of this encoding appears below). Every transition gives the CDR circuit phase information that it uses to align its recovered clock to the phase of the incoming data. The less time between transitions, the less phase error caused by frequency offset is to be expected. Other techniques used are, for example, 8b-10b coding, similar to 1394 and Ethernet.

The structure of the data stream follows a specific communications protocol, which defines the rules for sending a block of data (each known as a Protocol Data Unit (PDU)) (e.g., 150 of FIG. 2) from one node in a network to another node. Each exchanged PDU comprises three parts: a sync sequence 160, a packet payload (also known as a Service Data Unit (SDU)) 170, and an End of Packet (EOP) 180. The protocol does not define or constrain the data carried in the payload portion 170 of the data block.
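As an illustrative aside (ours, not part of the patent; the Python rendering and function names are illustrative), the NRZI-plus-bit-stuffing encoding just described can be sketched as follows: stuff a 0 after every six consecutive 1 bits, then invert the line level for each 0 bit and hold it for each 1 bit.

def bit_stuff(bits):
    # Insert a 0 after every six consecutive 1 bits (USB-style stuffing),
    # guaranteeing a transition at least every 7 bit times.
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 6:
            out.append(0)
            run = 0
    return out

def nrzi_encode(bits, level=1):
    # NRZI: invert the line level on a 0 bit, hold it on a 1 bit,
    # so every logical 0 produces a transition the CDR can use.
    out = []
    for b in bits:
        if b == 0:
            level ^= 1
        out.append(level)
    return out

# Seven 1-bits in a row: a 0 is stuffed after the sixth, forcing a transition.
stuffed = bit_stuff([1, 1, 1, 1, 1, 1, 1])   # -> [1, 1, 1, 1, 1, 1, 0, 1]
line = nrzi_encode(stuffed)                  # -> [1, 1, 1, 1, 1, 1, 0, 0]

With such a guaranteed transition density, a CDR circuit always receives phase information within a bounded number of bit times.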
The protocol does, however, specify the format of the sync sequence.

Packet switching refers to protocols in which a longer message (the data) exceeding a network-defined maximum length is divided into short message packets before they are transmitted. Each packet, with an associated header containing information for routing the packet from origination to destination, is then transmitted individually and can even follow different routes to its destination. Once all the packets forming a message arrive at the destination, they are recompiled into the original message. Most modern Wide Area Network (WAN) protocols, including the successful TCP/IP protocol, as well as X.25 and Frame Relay, are based on packet-switching technologies.

A fundamental difference between packet communication and conventional, continuous-type communication is that the data is formed into packets as described above. When there is no data to be sent, the bus is put into an idle state that shows no change in voltage levels. Continuous-type protocols would fill the idle time within a frame with well-known "idle" patterns, which are used to occupy the link when there is no data to be communicated. Packet network equipment discards the "idle" patterns between packets and processes the entire packet as one piece of data. The equipment examines the packet header information (PCI) and then either removes the header (in an end system) or forwards the packet to another system. If the out-going link is not available, then the packet is placed in a queue until the link becomes free. A packet network is formed by links which connect packet network equipment.

In the packet switching used in USB 2.0 at 480 Mbps, one portion of the packet header 160 will contain at least 12 sync bits indicated by an alternating pattern, intended to allow the sending and receiving clocks time to synchronize. The packet payload 170 will contain up to 1024 bits, while the end-of-packet 180 contains 8 bits.

The incoming data stream is assumed to be sent with a clock of the same frequency as the local clock used in the receiving system, but shows all jitter components of an electrical transmission over a bandwidth-limited medium (e.g., data-dependent cycle-to-cycle jitter).

A conventional linear clock and data recovery (CDR) circuit attempts to recover the original transmitting clock by utilizing a phase detector (PD), or alternatively a phase-frequency detector (PFD), to source a charge pump followed by a VCO of an analog PLL. The resulting change in phase and frequency is sourced back to the PD/PFD to be compared to the next data. These conventional linear techniques use an analog PLL, which needs an undefined number of transitions and is dependent on the PLL's bandwidth, the data-rate-to-VCO-frequency ratio, and more. In addition, the number of transitions needed by these conventional linear techniques cannot be guaranteed by the USB sync packet (typically N*10e3 needed vs. 6 available in USB FS mode).

The capture range of a PLL is typically narrow, and usually requires the help of a frequency acquisition aid and special training sequences, which have the disadvantage of limited availability.

Other conventional plesiochronous techniques give unreliable phase information because of metastable readings. To minimize this effect, most of these techniques try to average the results before selecting a new phase.
This also requires a continuous bitstream that is not available in USB applications.

The analog types require many special analog components, including rectifier component(s), differentiator component(s), etc. These components are difficult to implement in ASIC devices, and when not carefully designed may not function properly under all conditions. The digital implementations have at most a +/-50% usable frequency range, but are often narrower depending on the implementation and the statistics of the input data.

For a number of reasons, such as bus turn-around timing (the time measured at the USB host controller, from the sending of a request to the farthest bus subscriber, until receipt of an acknowledge package), a USB HUB is allowed to strip off a defined number of sync bits during the HS repeater mode, which results in a minimum sync pattern of 12 alternating bits at the receiver of a subscriber. Under FS conditions the sync field consists of 6 bits from the start. This is not enough sync bits for conventional CDR techniques.

Another prior art CDR methodology is illustrated in FIG. 3 and designated at reference numeral 200. The CDR 200 uses a crystal oscillator 220 to drive a PLL along with frequency dividers 230 to produce two phases of a local clock (CLK and CLK(NOT)) 235 which enter the CDR circuit 210. The serial data stream is decoded from a USB transmitter/receiver 240 to a single ended signal DATA 245 which also enters the CDR circuit 210. Two 4-bit shift registers 260, 270 are incorporated to store 8 bits of the serial data. A voting logic circuit 290 is employed to select one of the clock phases, while averaging sample points 280, 285 of the 8 data bits to minimize the effects of metastable conditions in the CDR circuit 210. The selected phase 291 from the voting logic 290 then provides feedback 291 to control the PLL clock frequency 230 and is then used by gate logic 295 to gate the data stream to recover the clock and the data 297. The disadvantage of this scheme is that there is an 8-bit time delay while the bits fill both shift registers before the clock frequency and phase can be established. In addition, and as described above, after a number of rerouting operations there may not be enough sync bits remaining to provide lock, which may cause a loss of data. Here again, jitter in the data stream or isolated bit errors may also cause the PLL to lose lock, as the PLL frequency and lock are dependent on the feedback loop from the voting logic 290.

Accordingly, considering the substantially higher data rates used in the new USB 2.0 at 480 Mbps, the new 350 ps cycle-to-cycle jitter specification under HS conditions (1 bit = 2.08 ns), and the increased use of hubs and routers, there is a need for a CDR circuit which is able to quickly lock to a serial data stream, has a high jitter tolerance, and yet eliminates the effects of metastable conditions inherent in CDR circuits used in high-speed serial data communications applications of ASIC and microprocessor chips.

SUMMARY OF THE INVENTION
The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is intended neither to identify key or critical elements of the invention nor to delineate the scope of the invention.
Its primary purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.

The invention is directed to a quick-locking (e.g., within two sync bit times) clock and data recovery (CDR) circuit used in high-speed data communications applications (e.g., ASIC and microprocessor chips). The CDR circuit takes multiple (e.g., 8) phases of the local clock, which are offset (e.g., by 45 degrees), uses the multiple phases to latch the state of data at multiple times, and uses the latched data to determine which of the multiple phases captured a data transition. The CDR circuit compares the indicated phase to the phase used to capture a previous data transition and uses such information to produce a stable selection of a clock phase. The selected clock phase is then employed to provide a recovered clock and data signal in association with the incoming serial data stream, independent of jitter and free of metastable conditions.

In accordance with the present invention, a CDR circuit for a serial data stream is disclosed. The CDR circuit requires only one data transition on the incoming data stream in order to pick one of the 8 clock phases for accurate data recovery. In one exemplary aspect of the invention, for every transition, there is a decision made as to the phase to be selected, in order to enable subsequent related logic to securely latch the data to produce a recovered clock and data signal. Thus, a feature of the present invention is that nearly "instant lock" is provided, as only the first bit of the incoming data stream is lost to achieve pattern lock, while other designs need many more pattern bits to lock.

The CDR circuit of this invention, therefore, provides recovered clock and data signals with a quasi-fixed phase relationship even though the serial data jitters. Also, as there is no feedback to the VCO or a PLL, the CDR system of the present invention avoids the usual PLL feedback loop problems previously discussed.

The present invention utilizes a plurality of input phases (e.g., 8) of the local clock running at approximately the same nominal frequency as the transmitting clock. The phase offset between successive phases when the number of phases is eight is about 45 degrees. Therefore, the CDR circuit of the current invention may be used in any application where multiple phases of the receiving/sending clock, offset from one another (e.g., by about 45 degrees), are available (e.g., from a local VCO).

The CDR circuit of the present invention also provides about 2 phase differences (e.g., about 2*260 ps = 520 ps) of cycle-to-cycle jitter tolerance at 480 MHz, which is substantially greater than the 350 ps cycle-to-cycle jitter tolerance required by USB 2.0, and permits standard ASIC FFs (flip-flops) to be used. According to one exemplary aspect of the invention, the incoming data stream may experience frequency wander far greater than specified without causing loss of lock or loss of data. Thus, no loss-of-lock or loss-of-data circuitry is required. However, if two different frequencies are used in such an exemplary case, a periodic phase shift on the recovered CLOCK_OUT will result. This, in principle, would add to the jitter transfer function of a CDR circuit.
However, in any application that provides a deserializer function (serial-to-parallel conversion), this figure of merit is irrelevant, as the deserializer can handle those events using FIFOs.

An advantage of the present invention is that the CDR does not average sample points of the data as most other CDR circuits do. Averaging in CDR circuits is done to avoid the effect of metastable conditions, which cannot be avoided in CDRs. The CDR described in accordance with the present invention has an alternative way to handle and eliminate the metastable conditions which avoids averaging and allows fast locking.

Another advantage of the present invention is that the CDR integrated circuit implementation may be small (e.g., about 300 gates).

Still another advantage of the present invention is that the CDR solution works well at low frequencies and at high frequencies of over 480 MHz (multifrequency CDR).

To the accomplishment of the foregoing and related ends, the invention comprises the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative embodiments of the invention. These embodiments are indicative, however, of but a few of the various ways in which the principles of the invention may be employed. Other objects, advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a waveform comparison between NRZ and NRZI encoded serial data;
FIG. 2 illustrates a protocol data unit (PDU) and a basic format associated therewith used in packet switching serial data communications;
FIG. 3 is a simplified block diagram of a conventional clock and data recovery (CDR) system used in combination with a USB transmitter/receiver to produce recovered clock and data signals;
FIG. 4 is a block diagram of an exemplary fast-locking clock and data recovery system used in combination with a differential receiver to produce recovered clock and data signals in accordance with an aspect of the invention;
FIG. 5 is a block diagram of an exemplary clock phase generator circuit which may be used to produce multiple clock phases used in the CDR system of FIG. 4 in accordance with an aspect of the invention;
FIG. 6 is a simplified block diagram of an exemplary fast-locking CDR circuit wherein clock and data signals may be recovered from a serial data stream, in which various aspects of the invention may be carried out;
FIG. 7 is a schematic illustration of an exemplary 1<st >logic circuit and a 2<nd >latch, wherein a transition of the data stream may be detected and the transition detection latched in accordance with an aspect of the invention;
FIG. 8 is a partial schematic illustration of the exemplary 1<st >logic circuit of FIG. 7 and a portion of a data stream signal waveform, wherein the transition detection of the data stream is indeterminate at clock phase N, which may produce a metastable condition in a FF;
FIG. 9 is a simplified timing diagram illustrating exemplary CDR circuit data transition detection timings relative to the detecting phase N and the selected phase N+3 in which various aspects of the invention may be carried out;
FIG. 10 is a simplified timing diagram illustrating an effect of 350 ps of data jitter in exemplary CDR circuit data transition detection timings relative to a selection of a phase in accordance with an aspect of the invention;
FIG. 11 illustrates three waveform comparison cases of the CDR 2<nd >logic component for comparing the last phase selection to the new data transition detected, in accordance with an aspect of the invention;
FIG. 12 illustrates a comparison of waveforms for the case of the CDR 2<nd >logic component when the new phase selection jumps (lags) the last phase selection by greater than N-2, in accordance with an aspect of the invention;
FIG. 13 is a simplified timing diagram illustrating a source of a potential metastable condition in a FF, and the resulting glitch which may be produced in the CLK_OUT signal in an exemplary CDR circuit;
FIG. 14 is a simplified schematic illustration of an exemplary CDR 3<rd >logic component "hold and disable next phase" circuit solution for the metastable condition of FIG. 13, intended for elimination of metastable conditions and multiple phase selections by disabling the next phase, in accordance with another aspect of the invention;
FIG. 15 is a simplified timing diagram illustrating a waveform with a potential metastable condition in FF PHT3, and other timings as presented to the "hold and disable next phase" circuit to eliminate metastable conditions in the exemplary CDR 3<rd >logic component circuit in accordance with an aspect of the invention;
FIG. 16 is a schematic illustration of an exemplary CDR 3<rd >logic component "hold and disable next phase" circuit solution for the metastable condition of FIG. 15, intended for elimination of metastable conditions and multiple phase selections by the use of hysteresis, amplification, feedback, and disabling the next phase, in accordance with another aspect of the present invention;
FIG. 17 is a flow diagram illustrating an exemplary method for fast locking clock data recovery operation in association with an aspect of the present invention;
FIG. 18 is a flow diagram illustrating an exemplary method for the transition detection step 1230 of FIG. 17 for the fast locking clock data recovery operation in association with an aspect of the present invention;
FIG. 19 is a partial flow diagram illustrating an exemplary method for the data transition detection quantity determination and phase selection step 1250 of FIG. 17 for the fast locking clock data recovery operation in association with an aspect of the present invention;
FIG. 20 is a partial flow diagram illustrating an exemplary method for the data transition detection comparison to last phase selection step 1250 of FIG. 17 for the fast locking clock data recovery operation in association with an aspect of the present invention; and
FIG. 21 is a flow diagram illustrating an exemplary method for the elimination of metastable conditions and multi-phase selections step 1270 of FIG. 17 for the fast locking clock data recovery operation in association with an aspect of the present invention.

DETAILED DESCRIPTION OF THE INVENTION
The present invention will now be described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. The present invention relates to a fast locking clock and data recovery system in which a plurality of clock phases and a first logic system are used to detect the particular time within the clock cycle in which a data transition occurs (phase detection). The CDR system also uses a 2<nd >logic system to determine which of the phases from the 1<st >logic system detected the data transition (as more than one phase may have detected a transition, in the case of a metastable condition).
The second logic system selects one of the phases, compares the phase to a previous phase selection associated with a previous data transition, and may change the selected phase in response to the comparison based on predetermined criteria. The CDR system also makes use of a 3<rd >logic system to eliminate metastable conditions of the data transition detections and multiple phase selections. The CDR system also uses a phase selection logic to select one of the plurality of clock phases, which is used to securely latch and output recovered clock and data signals (CLK_OUT and DATA_OUT). Thus, the present invention provides a fast locking CDR system for producing recovered clock and data signals which are free of metastable conditions, with high jitter tolerance for the serial data stream. As this detection and the selection of a valid phase, which ideally is centered in the eye of the incoming data stream, take exactly 1.5 bit times, the first bit is lost, while the selection is available for the second bit of the incoming serial data stream, latched at its eye center.

As previously discussed in regard to FIG. 3, a prior art CDR system required 8 bits of sample point averaging of the data stream (and an associated 8-bit delay) before a clock phase could be determined for latching and recovering data. Averaging in the prior art CDR system 200 is done to avoid the effect of metastable conditions, which in many cases cannot be avoided in CDRs because, for example, the initial phase relation of the incoming data stream to the local clock cannot be known. In fact, most conventional CDRs have the constant threat of metastable conditions even after they have locked to the incoming data stream. This leads to false readings and constant frequency and phase updates in their system, causing unnecessary transfer jitter. As was already discussed, a method of eliminating the effect of metastable conditions is averaging (mostly used in plesiochronous systems with slow locking and frequency tracking abilities).

Switching transition time and metastable conditions cannot be avoided in CDR systems. The present invention, however, avoids the delay limitation of conventional CDR systems with a method of forcing more (e.g., 8 bits more) information out of each bit cycle, and providing a method of selecting a stable clock phase on the first data transition. Essentially, according to one exemplary aspect of the present invention, the method breaks up a bit cycle into a plurality of smaller time periods with a plurality of clock phases. The clock phases are offset from each other by (1/N)·360° of the individual clock phase.

Clock phase offsetting is also found in prior art plesiochronous CDR techniques. However, the main difference lies in the combined phase detection, which according to the present invention is made resistant to metastable conditions, and in the voting (phase selection), which does not require multiple bit readings to select a valid output clock phase. Within each of these smaller time periods, a decision is made as to the current state of the serial data stream, wherein the rising edge of each clock phase signal of the plurality of clock phases serves as the trigger point for the data state decision. The first phase which records a change of state of the data stream has therefore recorded a transition which took place in the data stream, and unless a metastable condition is detected, this first phase is used to determine the proper clock phase to lock the data until the next transition occurs.
In this way, the determination as to the actual point in time at which the transition took place is narrowed to within the time period of the offset of a clock phase.

According to one aspect of the current invention, 8 clock phases are used, offset by:

(1/N)·2π = (1/8)·2π = π/4

Therefore, according to one aspect of the current invention, the 8 clock phases also resolve the data stream transition to within 1/8th of the clock cycle and data rate period; at 480 Mbps, this is:

(1/N)·period = (1/8)·(1/480 Mbps) ≈ 260 ps

This means that the data transition may be resolved to within 260 ps even at the USB 2.0 HS (High Speed) data rate of 480 Mbps, and provides a lock on the clock and data occurring 8 times sooner than with the conventional fast locking CDR system illustrated in FIG. 3. (A short numeric sketch of this arithmetic is given following the discussion of FIGS. 4 and 5 below.)

FIG. 4 illustrates an exemplary CDR system 300, in which several aspects of the current invention may be accomplished. A serial data stream 310 enters a differential receiver (or transceiver) 315 and outputs a single ended serial data stream 317 into a CDR circuit 320. A 4-stage voltage controlled oscillator (VCO) 325 generates a local clock signal running at approximately the same frequency as the transmitter clock. The 4-stage VCO 325 produces 8 phases 328 of the clock signal which, in this example, are evenly spaced and offset by π/4 between successive phases, and are supplied to the CDR circuit 320.

Even though 8 phases have been used in this example, it will be apparent to anyone skilled in the art that any number of phases could be used, which may be evenly spaced and offset by (1/N)·360° between successive phases. According to the present invention, 8 phases works sufficiently with the 350 ps cycle-to-cycle jitter requirement of USB 2.0. Alternately, 16 phases offset by π/8 could be used to locate the data transition nearly twice as accurately, generating less jitter transfer. However, jitter transfer is not a figure of merit (cannot be measured) for deserializer circuits, as long as integrated serial-to-parallel conversion accounts for local jitter on the clock signal by holding sufficient FIFO to compensate for singular duty cycle distortions.

The 8 clock phases 328 of the example, together with the single ended data stream 317, are input to the CDR circuit 320, which is operable to detect a data stream transition to within 1/8th of a clock cycle time, select a phase associated with the transition detection, compare the initially selected phase to a previous phase selection associated with a previous data transition, and make a final phase selection determination based upon the comparison. The phase selection is then used to securely latch and recover the clock and data signals CLOCK & DATA 330, free of metastable conditions and multiple phase selections, and independent of input jitter, as will be discussed in greater detail below.

FIG. 5 is an exemplary implementation 335 of the 4-stage VCO circuit of FIG. 4. A VCO 340 generates a local clock signal running at approximately the same frequency as the transmitted clock, and outputs a CLK 345 & CLK(NOT) 350 signal (ΦN-ΦN(NOT)) forming a VCO stage 360, which feeds 3 other successive VCO stages 360. The final VCO stage feeds back to the first VCO stage to produce 8 phases 370 (Φ0-Φ7) derived from the clock signals 345 & 350, whereby the 8 phases are evenly spaced and offset by π/4 (380) between successively numbered phases 370. The phase difference between ΦN and ΦN(NOT) is, in the example, π, or 180°.
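To make the offset and resolution arithmetic above concrete, here is a minimal numeric sketch (ours, for illustration only; the phase count and bit rate are simply the example values from the text, and the variable names are arbitrary):

import math

N = 8                        # number of clock phases in the example
bit_rate = 480e6             # USB 2.0 high-speed data rate, bits/s

offset_rad = 2 * math.pi / N        # offset between successive phases (rad)
offset_deg = 360.0 / N              # the same offset in degrees -> 45.0
resolution = (1.0 / bit_rate) / N   # transition resolution -> ~2.6e-10 s

print(f"phase offset = {offset_deg:.1f} deg ({offset_rad:.4f} rad)")
print(f"resolution   = {resolution * 1e12:.0f} ps")   # -> 260 ps

Doubling N to 16 halves the resolution figure to about 130 ps, which is the "nearly twice as accurately" trade-off mentioned above for a 16-phase variant.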
The phase difference between ΦN and ΦN+1 is, in the example, π/4 or 45°.

The 4-stage VCO comprises a plurality (e.g., 4) of VCO stages 360 individually adapted to receive the CLK & CLK(NOT) signals 345 & 350, respectively, and produce a CLK & CLK(NOT) (inverted CLK) for each of the individual VCO stages 360. FIG. 5 also illustrates 8 clock phases 370, evenly spaced and offset by π/4 (45°) (380) between successively numbered phases 370. With the 8 clock phases 370 (Φ0-Φ7) running at 480 MHz, the phase-to-phase offset 380 is ≈260 ps. The 8 phases are coupled from the 4-stage VCO to the CDR circuit as designated at reference numeral 390.

FIG. 6 illustrates an exemplary CDR circuit 400, in which several aspects of the current invention may be accomplished. The CDR circuit 400 is operable to receive a single-ended serial data stream 405, and 8 phases 410 of the local clock signal which are evenly spaced by 45° (π/4) between successive phases.

In one exemplary aspect of the present invention, the CDR circuit comprises a 1st logic system 430 which is operable to receive the single-ended serial data stream 405 and the plurality of clock phases 410, and detect whether a transition of the data stream took place corresponding with one, or between two, of the plurality of clock phases 410. The 1st logic system is further operable to generate a plurality of data signals (each separated by π/4 in the time domain) corresponding to the data transition detection 427, based upon the serial data stream 405.

The CDR circuit 400 further comprises a 2nd logic system 450 which is operable to receive the plurality of data transition detections 427, the plurality of clock phases 410, and a plurality of previous clock phase selections 465, to determine which phase corresponds to the current data transition. The 2nd logic system 450 is further operable to compare the new data transition to the clock phase selection associated with the previous data transition, and generate a plurality of phase determinations 447 based upon the comparison.

The CDR circuit 400 of FIG. 6 further comprises a 3rd logic system 480 which is operable to resolve any metastable conditions and/or multiphase selections associated with the clock phase determinations 447, for example, by applying amplification, hysteresis and feedback to generate a plurality of processed phase selections 475 based upon the plurality of phase selections 447.

Lastly, the CDR circuit 400 comprises a phase selection logic circuit 485 operable to receive the plurality of clock phase signals 410, and to select one of the plurality of clock phase signals 410, based on the plurality of processed phase selections 475, as the best phase selection 490. The best phase selection 490 is then used to securely latch 492 and recover the data signal DATA_OUT 495, whereby the recovered clock and data signals are free of metastable conditions and multiple phase selections.

In another aspect of the present invention, the 1st logic system 430 of the CDR circuit of FIG. 6 (see also 430 of FIG. 7 for further circuit details) comprises a 1st latch 415 (515), comprising a plurality of D-FFs individually adapted to receive the serial data stream 405 (505) and the plurality of clock phase signals 410 (510).
The logical state of the data stream is latched by the plurality of D-FFs at, for example, the rising edge of each clock phase of the plurality of clock phase signals 410 (510), which serves as the trigger point for a data state indication 420 at each of the plurality of D-FFs.

The 1st logic system 430 further comprises a 1st logic component 425 (525 & 527), comprising a plurality of exclusive-OR (EX-OR) gates 525 and OR gates 527 adapted to receive the plurality of data state indications 420 (520) from the 1st latch 415 (515), and generate a plurality of data transition determinations 440 (540). The EX-OR gates compare the outputs of two D-FFs. In case there was a data transition at a time at, or between, two adjacent phases latching the data, the output of the EX-OR would be logic HIGH. To avoid the impact of a metastable condition in one of the D-FFs (there can be only one at a time), DPH(N) is compared not only to DPH(N+1) and DPH(N-1), but also to DPH(N+2) and DPH(N-2). When OR'ing the results of the EX-ORs, there is always a valid result at the outputs of the OR gates. This result can be either a single logic one, or up to 3 adjacent logic ones, at the time the outputs of the OR gates get latched into the second column of D-FFs 535. The latter would cause a multiphase selection, which is taken care of later.

FIG. 7 illustrates further exemplary circuit details of the 1st logic system 500, corresponding to the 1st logic system 430 of FIG. 6, plus a 2nd latch 535, corresponding to the 2nd latch 435 of FIG. 6. DPH0 through DPH7 (520) illustrate the data state signals which are latched by the positive-going edge of the associated clock phase signals 510. As the phases are offset, in the 8-phase example by 45°, triggering of the D-FFs 515 by the 8 phases 510 will ripple downward through the 8 FFs of the 1st latch 515, and repeat back to the top FF on a continuous basis. For this reason, the labels A, B, & C (550) illustrate interconnections from the bottom of the schematic to the top.

Thus, it becomes apparent from FIG. 7 that the data transition determination, and the ultimate phase selection process of the present invention, is an ongoing, continuous process that yields a new phase selection (or at least a determination to stay with the last phase selection) on every data transition. Therefore, faster and tighter lock control is achieved, while eliminating metastable conditions with every data transition, and providing recovered clock and data signals independent of cycle-to-cycle data stream jitter of up to 0.25 UI and of frequency wander. This nearly instant lock feature means that only the first bit is lost, whereas conventional CDRs require substantially more pattern to lock.

The plurality of transition detection signals, SELPH4 through SELPH3 (540), are latched by triggering the 2nd latch FFs 535 with the plurality of clock phase signals Φ3 through Φ2 (510). So, in the same way as the 1st latch 515, the 2nd latch 535 is continuously updated with new transition detection signals 540 for the phase selection process, which continuously yields recovered clock signals.
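The EX-OR/OR arrangement can be viewed as a bit-level computation over the latched states. The Python sketch below is a simplified behavioral reading of that network, not the FIG. 7 netlist: it compares each sample only against the two samples latched before it (the disclosed circuit also compares in the forward direction, against DPH(N+1) and DPH(N+2)), and the function name and list encoding are assumptions.

# Illustrative sketch of the 1st logic component's idea: XOR each latched
# sample against its neighbors one and two phases earlier, OR the results,
# so a single metastable flip-flop cannot suppress a genuine transition.

def transition_determinations(prev_two, dph):
    """prev_two: last two samples of the previous cycle (history window);
    dph: the 8 latched states DPH0..DPH7. Returns one flag per phase."""
    window = prev_two + dph
    out = []
    for k in range(2, len(window)):
        x = (window[k] ^ window[k - 1]) | (window[k] ^ window[k - 2])
        out.append(x)
    return out

# Clean 0 -> 1 transition between phases 2 and 3:
print(transition_determinations([0, 0], [0, 0, 0, 1, 1, 1, 1, 1]))
# -> [0, 0, 0, 1, 1, 0, 0, 0]  (adjacent ones bracketing the edge)

As in the text above, a clean edge yields a small group of adjacent ones, and a metastable sample can widen the group to as many as three, which the later voting logic resolves.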
FIG. 8 is a partial schematic illustration 600 of the exemplary 1st logic circuit 500 of FIG. 7 and a portion of a data stream signal waveform 660, wherein in the illustrated example the transition detection of the data stream 660 is indeterminate at clock phase N. When a data transition occurs coincident with the rising edge of a clock phase signal ΦN, a metastable condition may be produced in the associated FF, wherein the state of the FF has not yet settled on one state or the other. That is, when the switching transition period of the FF data occurs coincident with the rising edge trigger of its respective clock phase signal ΦN, a metastable condition may take place in the associated FF. This condition may manifest itself as a reduced-amplitude signal (a.k.a. a runt signal), or as an indeterminate delayed logic state (1 or 0). Thus, a runt signal, or an indeterminate logical state, produces a metastable condition "M" in successive logic circuits.

FIG. 8 also illustrates the logical state 670 of the data stream 660 and the resultant 1st latch logical state 620, which was latched into the 1st latch 615 via the clock phases (ΦN-2-ΦN+3). In this example, the 1st latch 615 logical states 620 illustrate a worst-case situation, with one metastable state "M" and three "1" states latched. The metastable state "M" is also displayed as 0/1 in 630 and 640 to indicate that either a "0" or a "1" may result at the output of the data transition determinations 640. Fortunately, the data state condition usually results in only 1 or 2 "1" states produced at the transition determinations 640. For a preview, FIG. 19 shows how up to 3 data transitions are processed, according to one aspect of the invention, by the 2nd logic system (450 of FIG. 6). Therefore, the first logic circuit 500, 600 detects when the input data stream 405, 505, 605 experiences a data transition, by the logical state of the transition determination 640 of FIG. 8.

FIG. 9 is a simplified timing diagram 700 illustrating exemplary CDR circuit data transition detection timings relative to the detecting phase ΦN area (770) and the selected phase ΦN+3 area (780), in which various aspects of the invention may be carried out. A plurality of clock phase timings (Φ0-Φ7) 705 are illustrated across the top, continuously repeating, and the clock phase signals (Φ0-Φ7) 720 are shown down the left side of the timing diagram. At the rising edge of each of the clock phase signals (Φ0-Φ7) 720, the associated clock phase timing (Φ0-Φ7) 705 is indicated. The clock phases as they relate to the detecting phase ΦN are illustrated by the second row of phase indications (ΦN-2-ΦN+5) 750. The input data stream DATAIN 710 and associated logical states 760, in this present example, illustrate a positive-going data transition 772 occurring after phase timing Φ1, but just before phase timing Φ2. Thus, the rising edge 774 of Φ2 triggers its associated FF to latch the first "1" state of the data 710, and Φ2 is identified as the detecting phase ΦN (750). The 45° phase offsets 730 between successive phases are illustrated at the bottom of the diagram, along with the associated offset time of 260 ps (reference numeral 740) at 480 MHz.

Referring back to FIG. 6, it should be noted that there are intentional phase offsets placed between the 1st latch 415 and the 2nd latch 435, as well as between the 2nd latch 435 and the 3rd latch 455. Note that the first (top) FF of the 1st latch 415 uses Φ0 to trigger the FF, while the first (top) FF of the 2nd latch 435 is triggered by Φ3, and the first (top) FF of the 3rd latch 455 is triggered by Φ7.
Thus, a 3-phase offset is used between the 1st and 2nd latches, and a 4-phase offset is used between the 2nd and 3rd latches. Now, referring back again to FIG. 9, the reason for these offsets will become more apparent. Φ2 just became the detecting phase ΦN (750), as the first phase to record a "1" state of the data 710. However, Φ2 should not be selected as the phase to latch DATAIN 710, as this phase is, by definition, very close to the data transition. If Φ2 were used, any data jitter which occurs may cause data to be lost. Φ5 (705), however, which is also ΦN+3 (750) (within 780), occurs close to the middle of the positive half of the DATAIN waveform. Therefore, Φ5 (705) becomes a much better choice, with a 3-phase offset from Φ2 to Φ5. In this way, any jitter 790 which may occur will not cause the "1" state indication to change in the middle of the waveform (a.k.a. the "eye opening").

Statistically, the offset between the detected data transition and the mid-point of the data waveform would be 4 phase offsets; however, as it takes a while to calculate which phase should be used, the offset cannot be set too short. The shorter the time to calculate the right phase, the better, as there could be a jitter event at any time; if the clock is not adjusted before such an event occurs, a bit may be lost. In order to keep the time to calculate the resulting clock phase short, it was decided by the inventor to use the shorter 3-phase offset between the first two latches, and the 4-phase offset between the second pair of latches, to account for the calculation time. The clock phase chosen needs to be close enough to the data transition to keep the calculation time short, yet close enough to the middle of the waveform to be independent of jitter.

With this method, the CDR system of the present invention does not need multiple transitions of DATAIN 710 to select a clock phase, providing a nearly "instant lock", with recovered CLK_OUT and DATA_OUT signals that have a fixed phase relationship with the local clock even though DATAIN jitters.

FIG. 10 is a simplified timing diagram 800 illustrating the effect of 350 ps of data jitter on the exemplary CDR circuit data transition detection timings, relative to the selection of a phase, in accordance with an aspect of the invention. As with FIG. 9, a plurality of exemplary clock phase timings (Φ0-Φ7) 820 are illustrated across the top in a continuously repeating manner. The clock phases as they relate to the detecting phase ΦN are illustrated by the second row of phase indications (ΦN-2-ΦN+5) 850. The input data stream DATAIN 810 illustrates a positive-going data transition 872 occurring after phase timing Φ1, but just before phase timing Φ3. In FIG. 9, the rising edge 774 of Φ2 triggers its associated FF to latch the first "1" state, and Φ2 becomes the detecting phase ΦN (750). However, with the 350 ps of jitter 890 possible at the leading edge 872 of DATAIN 810, either Φ2 or Φ3 may become the detecting phase ΦN. Once again, this illustrates the need for offsetting the selection phase to Φ5 (ΦN+3), viewed in the area of interest 880 with the rising edge 884 of Φ5. Thus, Φ5 is close enough to the middle 882 of the positive half of the DATAIN 810 waveform to keep the recovered clock and data signals independent of any jitter 890 which may reside within the serial input data.
Returning briefly to FIG. 6, recall that the second logic system 450 is operable to receive the signals 427 which indicate a transition of the serial input data 405, and latches the state of the signals 427 via the second latch 435 to generate the data transition determination signals SELPH4-SELPH3 (440). A second logic component 445 uses the data transition determination signals 440 to ascertain or determine the phase associated with the data transition, and then consequently selects an initial clock phase that will lie in the "sweet spot" of the data based on the determination, as illustrated in FIG. 10. In addition, the second logic component 445 is operable to compare the initial clock phase to the final clock phase associated with the previous data transition, via a data feedback 465, as illustrated in FIG. 6. In that manner, if the newly selected initial clock phase differs too much from the previous clock phase, a modification of the initially selected clock phase may be made.

FIG. 11 illustrates three waveform comparison cases of the CDR 2nd logic component for comparing the last or previous phase selection to the new initial clock phase selection associated with the data transition, in accordance with an aspect of the invention. FIG. 11 illustrates the three exemplary case comparisons A, B, & C, which represent one of the two logic tasks accomplished by the 2nd logic component 445. That is, FIG. 11 illustrates the comparison that is made between the initial phase selection and a previous selection. The other task is to determine the number of data transitions identified by the 2nd latch. Exemplary methods of implementing both of these tasks are shown in FIGS. 19 & 20, and will be discussed in greater detail later.

Case A of FIG. 11 illustrates a comparison which is made in the CDR 2nd logic component (445 of FIG. 6). If a new data (PHA) transition is found to occur at the same time as the last phase selection (PHT) (i.e., "Data Same"), then the initial clock phase determination matches the previous clock phase, and a decision is made to select that same phase.

Case B of FIG. 11 asks whether a clock phase associated with the new data (PHA) transition is found to occur later than the last phase selection (PHT) (i.e., "Data Later"); if so, a decision is made to select the new phase, as long as the phase jump is not more than 2 phase offsets (90°, or ≈520 ps at 480 MHz). If, however, the new data transition is offset by more than 2 phase offsets, a decision is made to use the previous phase selection as the new phase selection.

Case C of FIG. 11 asks whether a clock phase associated with the new data (PHA) transition is found to occur before the last phase selection (PHT) (i.e., "Data Early"); if so, a decision is made to select the new phase, as long as the phase jump is not more than 1 phase offset (45°, or ≈260 ps at 480 MHz). If the phase jump is greater than -1 phase offset, data could be lost, as the clock cycle CLK_OUT would be too short. In this case of a phase jump greater than -1, an extra logic circuit in the 2nd logic component 445 is provided to prevent this action, which results in a decision to use the last phase selection as the new phase selection.
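The Case A/B/C rule reduces to a compact decision on the signed phase jump. The following Python sketch is an illustrative reading of FIGS. 11 and 20, not the disclosed gate-level logic; the 8-phase indexing, the helper name, and the modulo-8 signed-difference convention (positive = later, negative = earlier) are assumptions.

# Illustrative sketch of the 2nd logic component's phase-update rule.

N_PHASES = 8

def update_phase(last_phase, new_phase):
    """Phase selection given the previous selection and the phase
    indicated by the new data transition (cases A, B, C of FIG. 11)."""
    half = N_PHASES // 2
    jump = ((new_phase - last_phase + half) % N_PHASES) - half
    if jump == 0:
        return new_phase        # Case A "Data Same": keep the same phase
    if 0 < jump <= 2:
        return new_phase        # Case B "Data Later": accept up to +2 jumps
    if jump == -1:
        return new_phase        # Case C "Data Early": accept a -1 jump only
    return last_phase           # larger jumps: hold the previous selection

# Example: from phase 5, a +2 jump to phase 7 is accepted,
# while a -3 jump to phase 2 is rejected.
print(update_phase(5, 7), update_phase(5, 2))  # -> 7 5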
FIG. 12 illustrates a comparison of waveforms for Case C ("Data Early") of the CDR 2nd logic component, when the new data (PHA) transition selection jumps (lags) the last phase selection by a phase jump greater than -1, in accordance with an aspect of the invention, as shown in the exemplary method of FIG. 20 at steps 1262 & 1264. If a phase jump of greater than -1 were allowed to take place, FIG. 12 shows what would happen. Namely, at a phase jump of -2, a much too short 1.5 ns cycle would result, while at a phase jump of -3, a complete cycle would be lost. Thus, additional logic is provided to eliminate this condition.

FIG. 13 is a simplified timing diagram illustrating the source of a potential metastable condition in a FF, and the resulting glitch which may be produced in the CLK_OUT signal in an exemplary CDR circuit as a result thereof. Metastability is the condition of a piece of logic, particularly a FF, during which the logical output state is indeterminate between a "0" and a "1" for some period of time. In other words, the output of a logic device may be unstable, or have only a slight margin of stability. PHT_N represents a signal input to a FF, with ΦN attempting to trigger the FF. If PHT_N happens to be transitioning 1000 to a low state at just the moment when ΦN attempts to trigger the FF, a metastable condition may occur at the output of the FF, as shown by the CLK_OUT signal, and produce a glitch 1010 or a low-amplitude (runt) signal output. If the output signal amplitude is too low, or the glitch makes the signal time too short, then the next phase PHT_N+1 will take over the final clock phase selection. In this case, PHASE_N would be selected for only the time PHT_N is active, and when it has died out, PHASE_N+1 would be selected by PHT_N+1. There is only a very short period of time during which this condition is actually true (e.g., picoseconds), but the risk is there.

The present invention resolves the above problem with another exemplary circuit, as illustrated in FIG. 14 (1050). In the circuit 1050, which may be incorporated into the 3rd logic component 470 as may be desired, runt signals (signals inadequate in amplitude or time duration) at PHT_N are amplified via gates 1060 & 1080 and held in state via hysteresis and feedback circuitry 1070, once the selection of PHASE_N has taken place. At the same time, the invention provides for disallowing the next phase PHASE_N+1 from being selected, via the inverter 1090 and NAND gate 1095, as PHT_N would eventually go away and PHT_N+1 would take over anyway. In this way, two adjacent phases will not be selected during the same period.

FIG. 14 illustrates 2 exemplary partial phase stages of an 8-phase CDR 3rd logic component 470 of FIG. 6, in accordance with an aspect of the invention, which, when OR'ed together at 1096 by the OR gate 1097, provide the CLK_OUT signal 1098. The exemplary 3rd logic component represents a "hold and disable next phase" circuit solution for the potential metastable condition of FIG. 13. Therefore, the circuitry 1050 within the 3rd logic component 470 eliminates any potential metastable conditions that may arise at the FFs 455, and eliminates any potential multiple phase selections by picking the first phase and disabling any subsequently selected phases, if any.

FIG. 15 is a simplified timing diagram illustrating exemplary specific phases of a waveform with a potential metastable condition 1100 in a FF PHT3, and other timings as presented to the "hold and disable next phase" circuit to eliminate metastable conditions in the exemplary CDR 3rd logic component circuit 470, in accordance with another exemplary aspect of the invention. FIG. 15 also illustrates the phase offset between the successive phases PHT3 and PH3, and between PHT4 and PH4.
The description of these timings is the same as that of FIG. 13 above, but with more circuit-specific labels for greater understanding.

FIG. 16 is a schematic illustration of another, alternative exemplary CDR 3rd logic component "hold and disable next phase" circuit 1105, a solution for the potential metastable condition of FIGS. 15 or 13, intended for the elimination of metastable conditions and multiple phase selections by the use of hysteresis (e.g., shown symbolically within gates 1110 & 1120), amplification, feedback, and disabling of the next phase, in accordance with another aspect of the present invention. Eight (8) of the "hold and disable next phase" circuits 1105, similar to the one shown in FIG. 16, are combined to make a complete 3rd logic component (e.g., 470 of FIG. 6). This combination of the 8 "hold and disable next phase" circuits 1105 produces a plurality of processed phase selections (475 of FIG. 6, or 1096 of FIG. 14).

The output of the exemplary 3rd logic component "hold and disable next phase" circuit 1105 is the CLK_OUT_PH3 signal 1198. The CLK_OUT_PH3 signal 1198 output of this circuit is processed for the elimination of metastable conditions on PHT3 for the selection of phase 3 (PH3), and for the disabling of the "next phase", phase 4 (PH4), and subsequent multiple phase selections. When 8 of the individual circuits 1105 are OR'ed together, as shown, for example, by the OR gate 1097 and the plurality of inputs 1096 of FIG. 14, a complete phase selection circuit (e.g., 485 of FIG. 6) may also be provided, which is operable to produce a best phase selection CLK_OUT signal (490 of FIG. 6, or 1098 of FIG. 14).

In operation of the circuit 1105 of FIG. 16, with the timing of FIG. 15, PHT3 attempts to select the phase PH3 via the OR gate 1110 and AND gate 1120, just as PHT3 is dying out 1100. This condition may begin to produce a metastable runt signal, until the high amplification and hysteresis of the OR gate 1110 and AND gate 1120 quickly take effect. When the switching threshold voltage of either of the gates has been reached, feedback 1130 from the output of the AND gate 1120 will latch and hold the input of the OR gate 1110 high. To prevent PHT4 from also being selected as PHT3 eventually goes away and PHT4 takes over, the output of the AND gate 1120 causes the AND gate 1180 to be enabled and, via the inverter 1140 and PH5, causes the FF 1150 to be triggered. FF 1150 disables AND gate 1190, and therefore disables the next phase PH4 from the OR gate 1195. FF 1150 holds this "disable next" condition in place just long enough that two adjacent phases will not be selected during the same period. Tie-off 1160 provides a decoupled logical high and low state for the circuit 1105.

Thus, the present invention avoids the effects of metastable conditions of conventional CDR systems with an exemplary CDR 3rd logic component containing a plurality of "hold and disable next phase" circuits 1105 similar to that of FIG. 16.
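Behaviorally, each "hold and disable next phase" stage acts as a latch with a lockout on its neighbor. The Python sketch below is a loose digital abstraction of that behavior only; it does not model the analog amplification and hysteresis of gates 1110 and 1120, or the FF timing of FIG. 16, and the class and function names are hypothetical.

# Loose behavioral model: once a (possibly runt) selection input PHT(N)
# is accepted, it is held via a feedback latch and the adjacent phase
# N+1 is locked out, so two adjacent phases never drive CLK_OUT in the
# same period.

class HoldAndDisableStage:
    def __init__(self):
        self.held = False      # feedback latch: selection captured and held
        self.disabled = False  # set by the previous stage's lockout

    def evaluate(self, pht):
        """pht: the (possibly brief) phase-select input for this stage.
        Returns True while this stage drives the recovered clock."""
        if self.disabled:
            return False
        if pht:
            self.held = True   # amplify-and-hold the selection
        return self.held

def clk_out(stages, pht_inputs):
    """Evaluate the stages in phase order; an accepted stage disables its
    neighbor, and all stage outputs are OR'ed to form CLK_OUT."""
    out = False
    n = len(stages)
    for k, stage in enumerate(stages):
        if stage.evaluate(pht_inputs[k]):
            stages[(k + 1) % n].disabled = True  # disable the next phase
            out = True
    return out

# A brief pulse on PHT3 is captured and held; PHT4 is then ignored.
stages = [HoldAndDisableStage() for _ in range(8)]
print(clk_out(stages, [0, 0, 0, 1, 1, 0, 0, 0]))  # -> True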
Other variations of the exemplary "hold and disable next phase" circuit are contemplated as falling within the scope of the present invention, whereby the use of amplification, hysteresis, or feedback to latch the selected phase and disable the next phase avoids the effects of metastable conditions in a CDR system.

One advantage of the present invention is that all the functions of the exemplary CDR system discussed above may be accomplished with a relatively small implementation of about 300 gates, for example.

Still another advantage of the present invention is that, unlike conventional CDR systems, the present invention does not use the selected phase as feedback to the local oscillator or VCO. In a conventional CDR system, if bit errors or metastable conditions occur, phase lock may also be lost, causing the local oscillator to wander without controlled feedback, and additional bits may be lost. By not using phase lock feedback, the present invention has the advantage of keeping the frequency of the VCO fixed, at a more stable frequency, while providing independence from the clock. The recovered data, however, does maintain a quasi-fixed phase relationship to the local clock, as the plurality of clock phases are derived from the local clock.

A CDR circuit is used to recover a clock and data signal from a stream of serial data communications decoded by a receiver circuit of an ASIC or microprocessor device. Sample point averaging of multiple data bits, along with voting logic methods, is typically used in the CDR circuit because it eliminates the effect of the inevitable metastable conditions of the FFs which latch the data stream bits, and yet makes for a simple system. Other conventional plesiochronous CDR circuits, however, must wait until, for example, 8 bits are stored in a shift register before voting on the best phase choice to provide phase lock (while there is no guarantee that there was a transition at all within those 8 data bits, so a continuous phase update, and thus the ability to follow a frequency offset, cannot be calculated; also, an instant lock, mandatory for a burst-traffic bus, is not an option at all). Because of this long delay before phase lock, or even because of jitter, conventional CDR systems may lose many bits before achieving phase lock, yet must still resolve any metastable conditions long after these events actually take place.
The increasing use of higher-speed serial data communications devices, such as those used in computers, peripherals, repeaters, routers and hubs, together with the standards set forth in the new USB 2.0 initiative, illustrates the need for a faster-locking local clock oscillator, as used in a CDR system, to recover clock and data signals which are free of metastabilities and jitter, while maintaining a small and simple circuit design.

By contrast, the present invention identifies and resolves every data transition (rather than just one in 8 bits) to within the narrow window of time provided by a plurality (e.g., 8) of phase offsets (e.g., about 260 ps), providing a much faster phase acquisition and lock than that attained by a conventional CDR system.

In an alternate implementation of the present invention, some or all of the circuits and system functions described herein may be accomplished via high-speed software techniques, whereby a plurality of clock phases, evenly spaced and offset between successive phases, are used to identify a transition of a serial communications data stream, to select a phase for the fast locking of a CDR circuit, and to eliminate multiple phase selections and metastable conditions of the data, in order to recover clock and data signals which are free from data jitter and frequency wander.

The system 400 therefore receives a decoded serial communications data stream into a plurality of FFs comprising a 1st latch, along with a plurality of clock phases which are evenly spaced and offset from each other. The 1st latch FFs are individually adapted to be triggered by a clock phase of the plurality of clock phases, and latch a plurality of states of the data stream as each clock phase to the 1st latch goes high in succession. The plurality of states of the data stream identifies a data transition point, from a "0" state transitioning to a "1" state, as detected by a 1st logic component comprising, for example, a plurality of XOR and OR gates adapted to receive the plurality of data states from the 1st latch and generate a plurality of data transition determinations. The data transition determinations are received and latched by a 2nd latch, which is coupled to a 2nd logic component operable to determine how many, and which, phases have detected a data stream transition. The 2nd logic component then compares these resulting latched data transition determinations to the last or previous phase selection, and generates a plurality of phase determinations based upon the comparison.

The plurality of phase determinations are latched by a 3rd latch, which supplies the 2nd logic component with the last phase selections and produces a plurality of latched phase determination choices for a 3rd logic component. The 3rd logic component is operable to eliminate metastabilities by amplification, hysteresis, and feedback of the selections, is further operable to eliminate multiple phase selections via a plurality of AND gate and FF circuits which also disable the next phase selections for a period of time, and generates a plurality of processed phase selections.
A single new phase selection is then made, which is used to latch and recover a clock and data signal which is free of metastable conditions and independent of data jitter and frequency wander.

Another aspect of the present invention provides a methodology for fast locking of the clock and data recovery operation in data communication and transmission applications in the manufacture of ASIC and microprocessor chips, as illustrated and described herein, as well as with other such devices. Referring now to FIG. 17, an exemplary method 1200 is illustrated for a fast locking clock and data recovery operation in a data communications device, in association with an aspect of the present invention. While the exemplary method 1200 is illustrated and described herein as a series of acts or events, it will be appreciated that the present invention is not limited by the illustrated ordering of such acts or events, as some steps may occur in different orders and/or concurrently with other steps apart from those shown and described herein, in accordance with the invention. In addition, not all illustrated steps may be required to implement a methodology in accordance with the present invention. Moreover, it will be appreciated that the method 1200 may be implemented in association with the apparatus and systems illustrated and described herein, as well as in association with other systems not illustrated.

The method 1200 comprises receiving, from a differential receiver, for example, a decoded single-ended serial communications data stream, together with a plurality of clock phases which are, for example, evenly spaced and offset from each other, into a plurality of FFs forming a latch. The latch FFs are triggered in succession by their associated clock phases to latch a plurality of states of the data stream, wherein the plurality of states are used to detect whether there was a transition in the data. The plurality of data transition detections are then latched and used to determine the quantity of phases which indicate a transition. The transition indications are then compared with the last or previous phase selection, and the results are latched to indicate a plurality of latched phase determinations. The logical results are processed, for example, through a plurality of gates and FF logic to amplify and latch the selection, eliminating any metastable conditions or multiple phase selections. A single phase is then selected to latch the data and synchronize the clock, thus recovering the data signal free of metastable conditions and jitter, while maintaining a small and simple CDR design.

The fast locking clock and data recovery operation method begins at step 1205. At 1210, a decoded single-ended serial communications data stream is received, for example, as the output of a differential receiver, into a 1st latch comprising a plurality of D-FFs. At 1215, a plurality of clock phases which are, for example, evenly spaced and offset from each other, are received from a clock phase generator coupled to a VCO local oscillator, and are also input to the 1st latch. The 1st latch records the states of the data stream as each of the FFs is triggered in succession by the plurality of clock phases at 1220.

The states of the data in the latch are examined at step 1230 for data transitions between each data detection associated with a phase. The results of the data transition detections are latched at 1240. A determination is made at 1250 as to the quantity of phases, and which phases, have detected a data transition.
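The quantity determination at 1250 reduces in practice to a small voting rule over the latched transition detections, described in detail with FIG. 19 below. The Python sketch that follows is an illustrative rendering of that rule, not the disclosed logic itself; the function name and the list encoding of the detections are assumptions.

# Illustrative sketch of the step-1250 voting rule (cf. FIG. 19):
# detections holds the phase indices that flagged a transition within
# the last N phases, in phase order.

def vote_phase(detections, last_phase):
    if len(detections) == 0:
        return last_phase        # no transition: keep the previous phase
    if len(detections) == 1:
        return detections[0]     # one detection: use the indicated phase
    if len(detections) == 2:
        return detections[0]     # two detections: use the first of the two
    if len(detections) == 3:
        return detections[1]     # three detections: use the center phase
    return last_phase            # more than three: keep the previous phase

print(vote_phase([2, 3, 4], last_phase=6))  # -> 3 (center of three)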
At 1260, the results of the new data transition detections are compared to the last phase selection. At 1268, the logic results are latched to indicate which phase selections are determined to be acceptable choices. The results are also processed at 1270, through a plurality of logic circuits individually adapted to eliminate metastable conditions in the data and to eliminate multiple phase selections. A single phase is selected from the plurality of processed phase selections at 1280. At 1285, the selected single phase is used to latch the data stream and synchronize the clock.

Thereafter, at step 1290, a clock and data signal which is free of metastable conditions and jitter is recovered, in a fast locking clock and data recovery circuit, from the serial data stream of a communications receiver device used in the manufacture of ASIC and microprocessor chips, while maintaining a small and simple CDR design. At step 1290, a determination may also be made whether the fast locking CDR operation is still enabled. If the operation is still enabled, the fast locking CDR operation continues at 1210; otherwise, the operation ends at 1295, and the method 1200 may be repeated for subsequent fast locking CDR operations of a communications device.

FIG. 18 is a flow diagram illustrating an exemplary method for the transition detection step 1230 of the method 1200 of FIG. 17 for the fast locking clock data recovery operation, in association with an aspect of the present invention. The transition detection operation of step 1230 begins with step 1231, where a "phase counter" and a "data transitions counter" are reset to "0". The phase counter keeps track of the present clock phase number which is being used to detect a transition in the data stream, while the data transitions counter counts the number of transitions which have been detected thus far within N clock phases.

A determination is made at step 1232 whether a data transition has taken place at or before the current phase tested (note that, as this is actually a continuous process, the last state for a data transition determination is known at all times). If a data transition has taken place, a "1" state is output from the 1st logic component at step 1233, and is latched into SELPH(N+4) at phase PH(N+3). At step 1234, the "data transitions counter" is incremented, and the method continues to step 1236. If a data transition has not taken place at step 1232, then a "0" state is output from the 1st logic component at step 1235, and is latched into SELPH(N+4) at phase PH(N+3). In either case, the method continues at step 1236 with a check of the present count of the clock phase counter. If the present count is not equal to the maximum phase count of N, then the phase counter is incremented at step 1237 and the method returns to step 1232. Otherwise, the maximum phase count of N (e.g., 8 phases) has been reached, the next data transition is awaited, and the method continues to step 1240, where the data transition detections are latched by the 2nd latch.
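The FIG. 18 loop maps directly onto a counting iteration. The Python sketch below is an illustrative rendering of steps 1231 through 1237; transition_at() stands in for the 1st logic component's output for a given phase and is hypothetical, as is the function name.

# Illustrative sketch of the step-1230 transition-detection loop.

def detect_transitions(transition_at, n_phases=8):
    detections = []    # latched SELPH outputs, one per phase
    count = 0          # "data transitions counter" (reset at step 1231)
    for phase in range(n_phases):      # "phase counter" loop (1236/1237)
        if transition_at(phase):       # step 1232
            detections.append(1)       # step 1233: latch a "1"
            count += 1                 # step 1234
        else:
            detections.append(0)       # step 1235: latch a "0"
    return detections, count           # on to step 1240 (2nd latch)

# Toy example: a single transition seen at phase 2.
print(detect_transitions(lambda k: k == 2))
# -> ([0, 0, 1, 0, 0, 0, 0, 0], 1)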
FIG. 19 further illustrates an exemplary method for the data transition detection quantity determination and phase selection step 1250 of the method 1200 of FIG. 17 for the fast locking clock data recovery operation, in association with an aspect of the present invention. The data transition detection quantity determination and phase selection operation of step 1250 begins with step 1251, wherein a determination is made as to whether there were any transitions within the last N (e.g., 8) phases. If no data transitions were detected, the previous (last) phase used is selected at step 1252, and the method continues to FIG. 17 and step 1260.

Otherwise, if transitions were detected (e.g., count > 0), the data transition count is examined at step 1253 to determine whether one transition was detected. If one transition was detected, the indicated phase is selected at step 1254, and the method continues to FIG. 17 and step 1260. Otherwise, if more than 1 transition was detected, the data transition count is examined at step 1255 to determine whether 2 transitions were detected. If 2 transitions were detected at step 1255, the first of the 2 phases is selected at step 1256, and the method continues to FIG. 17 and step 1260. Otherwise, if more than 2 transitions were detected, the data transition count is examined at step 1257 to determine whether 3 transitions were detected. If 3 transitions were detected at step 1257, the center of the 3 phases is selected at step 1258, and the method continues to FIG. 17 and step 1260. Otherwise, if more than 3 transitions were detected, too many data transitions were detected; the previous (last) phase which was selected is used at step 1259, and the method continues to FIG. 17 and step 1260.

FIG. 20 further illustrates an exemplary method for the comparison of the data transition detection to the last phase selection, step 1260 of the method 1200 of FIG. 17, for the fast locking clock data recovery operation, in association with another aspect of the present invention. Step 1260 begins with step 1261, wherein a comparison is made as to whether the new data transition occurred later than (case "B"), at the same time as (case "A"), or earlier than (case "C") the last selected phase transition.

If the new data transition occurred later than the last selected phase transition (case "B"), then another determination is made at step 1262, whether there were more than 2 phase jumps (e.g., 2 phase changes, or a time difference of 2 phase offsets) between the new data transition and the last phase selection. If there were not more than 2 phase jumps, then the 2nd logic component outputs a logic result indicating that the new phase should be selected at step 1264, and the method continues back to FIG. 17 and step 1268. Otherwise, if there were more than 2 phase jumps between the new data transition and the last phase selection, then the 2nd logic component outputs a logic result indicating that the last phase should be selected at step 1266.

If the new data transition occurred at the same time as the last selected phase transition (case "A"), then the 2nd logic component outputs a logic result indicating that the new (same) phase should be selected at step 1264, and the method continues to FIG. 17 and step 1268.

If the new data transition occurred earlier than the last selected phase transition (case "C"), then another determination is made at step 1263, whether there was more than a -1 phase jump between the new data transition and the last phase selection.
If there was not more than a -1 phase jump, then the 2nd logic component outputs a logic result indicating that the new phase should be selected at step 1264, and the method continues to FIG. 17 and step 1268. Otherwise, if there was more than a -1 phase jump between the new data transition and the last phase selection, then the data is processed through a special logic circuit (not shown) which prohibits phase jumps of greater than -1 at step 1265, the 2nd logic component outputs a logic result indicating that the last phase should be selected at step 1266, and the method continues to FIG. 17 and step 1268.

FIG. 21 further illustrates an exemplary method for the elimination of metastables and multi-phase selections, step 1270 of FIG. 17, for the fast locking clock data recovery operation, in association with an aspect of the present invention. The elimination of metastables and multi-phase selections, step 1270, begins with step 1271, wherein amplification and hysteresis are applied, in a plurality of such circuits, to the PHT(N+9) logic result output, which may contain a metastability.

A determination is made at step 1272 whether the PHT(N) input is greater than the gate input switching threshold voltage. If the PHT(N) input is greater than the gate input threshold voltage, a "1" state is output from the 3rd logic component at step 1273 to be latched, and the method continues to step 1275. Otherwise, a "0" state is output from the 3rd logic component at step 1274 to be latched. At step 1275, additional amplification is applied to the phase selection PHT(N). The current phase output result is held with feedback at step 1276. At step 1277, the next phase PH(N+1) is disabled, by gating the PH(N) selection signal with the next phase PH(N+1) signal. Finally, the result of the elimination of metastables and multi-phase selections operation of step 1270 is latched at step 1278, with phase PH(N+2), and the method continues to FIG. 17 and step 1280.

The methodology 1200 thus provides for a fast locking clock and data recovery system used in data communications applications of ASIC and microprocessor devices, in which the CDR circuit uses a plurality of clock phases, evenly spaced and offset between successive phases, to identify a data transition point of the data stream within one data transition time period, a metastable and multi-phase elimination circuit to provide a plurality of processed phase selections, and a phase selection circuit to provide a single best clock phase selection, wherein the single best clock phase selection is used to latch and produce a recovered clock and data signal which is free of metastable conditions and independent of data jitter and frequency wander. Other variants of the methodology may be provided in accordance with the present invention, whereby fast locking clock and data recovery is accomplished employing a plurality of clock phase signals which are evenly spaced and offset between successive phases, and are used to identify the data transition within one data transition time period and provide a quasi-fixed phase relationship to the local clock in a CDR circuit, together with a metastable and multi-phase elimination circuit.

Although the invention has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon reading and understanding this specification and the annexed drawings.
In particular regard to the various functions performed by the above-described components (assemblies, devices, circuits, etc.), the terms (including a reference to a "means") used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (i.e., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein-illustrated exemplary implementations of the invention. In addition, while a particular feature of the invention may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations, as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms "includes", "having", "has", "with", or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising." |
PROBLEM TO BE SOLVED: To provide a connection probe between an integrated circuit and an inspection device that reduces the inspection cost and inspection time for one or a plurality of integrated circuits formed on a wafer. SOLUTION: A relay inserter is introduced between an inspection probe, which has a contact arrangement that is general-purpose or standardized to the inspection device, and the integrated circuit under inspection. One of the two surfaces of this inserter conforms to the contact arrangement of the inspection probe, and the contact arrangement of the second surface conforms to the specific pin arrangement of the integrated circuit. The contact points of these two surfaces are connected by conductive vias, whereby an inspection can be executed by changing only this inserter, even when the integrated circuit under inspection is changed. The contact points on the integrated circuit side of the inserter are made such that a good electrical connection can be constantly ensured, by introducing ultrasonic energy to break through the oxide film on the surface of the integrated circuit. |
A probe combination member for simultaneously providing electrical connection between one or more integrated circuits on a semiconductor wafer and circuit test equipment, comprising: a relay inserter comprising a dielectric material having two major surfaces; a plurality of protruding contact components on one major surface of the relay inserter, each corresponding to a test pad on the one or more integrated circuits; a plurality of conductive vias connecting each of the contact components to metallized pads on the second surface; a plurality of conductive leads fanning out from the metallized pads into a standardized array of relay inserter connectors; a layer of soft material underlying either or both of the contact components on the first surface and the relay inserter connectors on the second surface of the relay inserter; a probe card having an array of connectors corresponding to the relay inserter connector array; and a means for mounting the probe card on the relay inserter.

A method for forming a probe combination member for simultaneously providing an electrical connection between one or more integrated circuits on a semiconductor wafer and a circuit tester, comprising: providing a dielectric relay inserter having thermal expansion characteristics similar to silicon and having a plurality of conductive vias at positions corresponding to the spacing of the chip contact pads, the vias extending from the first major surface to the second major surface of the relay inserter; providing a layer of highly conductive metal on each major surface; patterning an array of pads corresponding to the chip contact pads on the first surface; patterning pads at the via exit points on the second surface, together with an array of conductive lead wires terminating in a standardized pattern; forming chip contact components on each of the patterned contact pads on the first surface; providing a layer of soft material underlying either or both of the chip contact components and the probe connectors on the inserter; bonding a connector component to the end of each lead wire; preparing a probe card having connectors to be joined to the connectors on the relay inserter; aligning the connectors; and mechanically mounting the connectors. |
BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to integrated circuit testing, and more particularly to a probe card device for simultaneously testing a plurality of integrated circuit chips, and a method of manufacturing the same.

Integrated circuits (ICs) are formed on semiconductor crystal wafers as identical, discrete chips. Each integrated circuit chip is typically inspected, before the semiconductor wafer is cut into individual chips, to determine whether it functions as intended. Typically, the chips are tested by computer-driven test equipment, which exercises the circuitry on the chips using a test process commonly referred to as multiple probe testing.

Conventional multiple probe testing uses a probe card, which includes a plurality of electrical lead wires with needle-shaped ends, which in turn contact the various input/output contacts of the circuit components on the integrated circuit chip under test. The chip contacts are often pads that are electrically connected to the next level of circuitry, and are called bond pads. The prior art typically builds probe cards by attaching metal needles, such as tungsten or tungsten-rhenium, to conductive traces on a polymer ring. The needle or probe components are adhesively attached to the ring, or they are welded to a single blade. An opening is provided in the center of the ring, through which a plurality of needles extend, to align the needles with the bond pads on the chip. The card is located at the tip of the prober, which is in electrical communication with the control computer, and which brings the needles into mechanical contact with the bond pads on the chip.

The needles must all land in the same plane to ensure that each makes electrical contact with its particular input/output contact or bond pad on the integrated circuit. This is done by mounting the needles on the probe card and then bending them, which is labor-intensive, time-consuming and expensive. Even after such adjustment, a needle may be moved by pressure when the needles are pushed back toward their original positions, or as a result of the scratching action used to ensure penetration of the oxide film or dirt on the bond pads.

Moreover, the close spacing required to test some chips cannot be achieved with conventional needle contacts. The close placement of the probe needles, and their protrusion angles, is very difficult to manufacture and consequently expensive. Furthermore, maintenance of such cards makes the inspection cycle very long. As a result, many attempts have been made to provide alternative probe card technologies. Much of the new technology focuses on contact mechanisms mounted on a polymer film by means of thin-film photo-etching and plating, or using springs. Photolithographically building the lead wires adds cost to the inspection procedure: not only does it involve an initial cost and multiple process steps, but new tooling and/or new process steps and masks are needed with each design change, and therefore additional turnaround time for manufacturing. Each of these approaches must also have a means of providing uniform pressure, so that the membrane makes uniform contact with the chip.
Uniform contact, as well as alignment problems, are exacerbated by the thermal expansion of the membrane, as the chips often generate a significant amount of heat during the inspection procedure.

In addition, the probe tips oxidize repeatedly due to contact and heat, which requires cleaning or replacement and increases the inspection time. Recent developments in probe tips have found that using a noble metal, which does not oxidize like tungsten or rhenium, results in a more stable contact resistance. See Broz, J.J. and Rincon, R., "High Temperature Wafer Inspection," EE-Evaluation Engineering, September 1999; and Broz, J.J., et al., "Probe Contact Variations During Elevated Temperature Wafer Test," Proceedings of the 30th IEEE International Test Conference, Atlantic City, New Jersey, September 1999, pp. 396-405.

However, the biggest wafer inspection problem remains the long inspection time. Each chip is inspected in turn, requiring realignment and repositioning each time the probe is lowered. The complexity of the test and the time required will vary from circuit to circuit, but the alignment and positioning times can be comparable to, or greater than, the test itself.

In addition, semiconductor wafer sizes have increased and circuit geometries have decreased, resulting in an increase in the number of chips per wafer. Since there are a plurality of wafers in one manufacturing lot, and the inspection time for each wafer increases, the wafer-fabrication processing time required for one lot can become shorter than the inspection time. As a result, inspection time delays product shipment, and this, together with the costs associated with the correspondingly expensive inspection machines, has become a very significant problem in the industry.

Given the problems with conventional wafer probe technology mentioned above, the prospect that test time will increase further with future integrated circuits, and the pressure of ever-narrower bond pad pitches, it would be very beneficial to the industry to have a probe device capable of significantly reducing test time, and a means by which such devices, with high-density and durable contacts, can be rapidly manufactured.

SUMMARY OF THE INVENTION

One object of the present invention is to provide a wafer probe card component for simultaneously inspecting one or more integrated circuits, whereby the time required to inspect a single wafer can be significantly reduced.

Another object of the present invention is to provide a probe card component capable of making electrical contact between high-density chip input/output contact pads and a probe card connected to an integrated circuit (IC) tester.

Yet another object of the present invention is to provide means for making electrical contact between chip contact pads and a probe card having connections standardized to a general-purpose family of integrated circuit devices under test.
Another object of the present invention is to provide a probe card contact device that can be manufactured quickly and inexpensively.

Still another object of the present invention is to provide a probe card device which has a coefficient of thermal expansion equivalent to that of the semiconductor element under inspection, preventing the contacts from being degraded as a result of chip heating during inspection.

Yet another object of the present invention is to provide a high-performance probe card contact device that is highly reliable.

The objects of the present invention are realized by providing a probe assembly for wafer inspection which includes a relay inserter having, on one surface, a plurality of protruding contact components aligned to correspond to a pattern for electrically contacting a plurality of chips on a wafer; an array of contact pads on a second surface; and conductive vias penetrating the relay inserter, which is an electrical insulator, for connecting the chip contact components and the contact pads to each other. The conductive traces on the second surface of the relay inserter fan out from the vias and terminate in connectors that are arranged in a universal or standardized pattern. The contact components on one or both surfaces are placed on a soft material, so that sufficient pressure can be applied to ensure a good electrical connection. The relay inserter is attached to a probe card having a plurality of connectors corresponding to the connectors on the second surface of the relay inserter. The universal or standardized pattern on the probe card is specific to the structure of the tester, and is common to the series of circuits under test, thereby significantly reducing probe card inventory, cost and mounting time.

The probe contact components and the relay inserter are designed to simultaneously inspect one or more adjacent chips. The arrangement is preferably less than two chips wide, so that the lead wires can be fanned out and aligned with the probe card contacts; the manufacturing capabilities of probe cards typically do not provide contacts of very high density like those of integrated circuits.

The high-density chip contact components are manufactured as protruding structures, such as noble or non-oxidizing metal studs or fine wires, placed on a surface of a soft material and connected to the conductive vias in the relay inserter.

On the second surface of the relay inserter, the vias terminate in conductive pads, which are subsequently routed to the connector components. These vias are formed directly through the relay inserter and/or are configured to run horizontally, allowing fan-out from closely spaced chip contacts to a second set of contact points. The relay inserter may include one or more grounded, or other performance-enhancing, buried metal plates in contact with selected vias.

The relay inserter is fixedly mounted on a universal probe card having connectors that mate with the connectors on the relay inserter.
The probe card contact pattern is standardized to a general-purpose pattern for a family of integrated circuit device types and a specific tester. Alternatively, for single-chip inspection applications, an interposer having at least one soft surface underneath the connectors and having a contoured edge is snapped or press fit into the universal probe card, providing a low-cost, durable chip contact assembly. The foregoing and other objects, features and advantages will become more apparent from the following detailed description of the preferred embodiments of the invention, which proceeds with reference to the accompanying drawings.
FIG. 1 shows a multi-chip probe assembly 100 according to the present invention. The assembly 100 includes an interposer 10 having contact components 21 for inspecting two or more integrated circuit chips on a semiconductor wafer 1, a probe card 50 having universal or standardized contacts, and means 30 for attaching the interposer 10 to the probe card 50. The multi-chip probe assembly 100 is suitable for inspecting wafers at high temperatures. During inspection of the chips on the wafer 1, the multi-chip probe assembly provides means for contacting the input/output pads on the chips and means for connecting to the corresponding pads on the probe card 50 mounted on the test equipment (not shown). The interposer of the multi-chip probe assembly minimizes the adverse effects of mismatched coefficients of thermal expansion. FIG. 2 is a more detailed view of an interposer having contact components on both major surfaces. A plurality of chip contact components 21 project from the first surface 111 of the interposer 10, and a plurality of connectors 22 for electrically contacting the probe card are arranged on the second surface 112. The chip contact components 21 are arranged as a mirror image of the input/output pads of the one or more integrated circuits (not shown) under test. The chip contact components 21 comprise noble metal or non-oxidizing metal protrusions, such as stud protrusions 24 mounted on metal pads 23, which in turn are located on the surface of a soft material 27. The term noble or non-oxidizing metal is used here to include metals that form a thin oxide film which itself limits further oxidation and which is easily penetrated with minimal contact force. The soft material 27, which has a relatively low modulus of elasticity, is disposed in a recess beneath the contact components to absorb stress when the contacts are pressed against the chip pads, and is thus designed to prevent damage to either surface. Conductive vias (not shown) penetrate the soft material. Alternatively, as in FIG. 3, the soft material is a membrane 33 with the metal pads and contact components on its surface and vias through the membrane. The protruding component is suitably formed as a stud 24 and is mounted with ultrasonic welding equipment similar to that used for mechanical or wire-bonded semiconductor devices. "Stud" is a term applied to a metal sphere formed by a wire welder: the sphere is welded to a pad and the excess wire is removed to form a protrusion, which in some cases is partially flattened to control its "z"-axis dimension. Alternatively, the chip contact components may be plated micro spring wires or other types of metal protrusions mounted to the metallized pads 23.
Micro spring wire technology is available from Precision Art Coordinators, 22 Almeida Avenue, East Providence, RI 02914. Non-oxidizing metal probes, and probes whose metal oxide is self-limiting, are known to minimize the amount of scrubbing or excessive movement required to make good electrical contact with aluminum or copper bond pads on integrated circuit chips. (Broz, J.J. and Rincon, R., "Probe Contact Resistance Variation During High Temperature Wafer Inspection," Proceedings of the International Test Conference, September 1999, Atlantic City, NJ, pp. 396-405.) Patterning the pads 23 for the contact components is done by photolithography and/or laser ablation. Features larger than 100 micrometers are patterned by the photolithography processes typically used in printed circuit and flex film technology, while finer features are patterned by laser ablation, which removes all or part of the unwanted metal. A software input of the design to a computer-controlled laser ablates the excess metal from the metal-coated surface of the interposer. The metallization is preferably a layer of tin over a copper alloy, or another low-resistance metal or layered structure, deposited by vapor deposition on the first surface of the interposer. After the pads are defined, a thin film of noble metal, preferably gold, is plated over the metal conductors. A combination of photo-etching and laser ablation of a metal film is a suitable method for patterning the pads and lead wires of this device; alternative methods known in the art include photo-etching of thin-film metallization followed by plating to the required thickness. On the opposite surface 112, a series of connectors 22 are arranged in a pattern corresponding to the universal or standardized connector pattern on the probe card. Each connector 22 includes a metal pad 26 and a connector component 25. The connector component is preferably a somewhat thicker metal shape capable of pressure contact to provide an electrical connection. Alternatively, the connector component may be a stud or micro spring connector. The connectors and pads are arranged on the surface of a layer of soft material 28 inset into the interposer, as with the chip contact components, or alternatively, as shown in FIG. 3, on a soft film 34 provided with vias. In yet another embodiment, no soft material is needed on the probe card side of the interposer because the connectors themselves absorb enough stress to avoid damage. The conductive connectors 22 are adapted to be aligned with and to contact corresponding connectors on the probe card. The mechanical joining of the connectors provides a very low contact resistance between the mating connectors. The connectors 22 are often offset from the corresponding chip contact components 21 to allow them to be aligned directly with the universal connectors on the probe card. The offsetting and fan-out to the interposer connectors 22 is accomplished both by vias 35 inside the interposer and by routing metal conductors on the interposer surface or on a soft membrane over the interposer surface. Some or all of the vias pass straight through the interposer, and all routing to the desired locations is done by patterned metal conductors. The interposer 10 is a dielectric material having a coefficient of thermal expansion of 2 to 10 PPM, substantially equal to that of a silicon wafer.
The interposer ensures that the positioning of the plurality of chip contact components 21 on the contact pads of the integrated circuit chips can be reliably maintained during thermal excursions, and, as shown in FIG. 2, a plurality of vias 35 beneath the pads 23 on the first surface 111 electrically connect them to the pads on the second surface 112. Techniques for making conductive vias and conductors in organic media are becoming widely available, for example as a result of area array packages such as CSPs (chip scale packages) and BGAs (ball grid arrays) and the circuit boards needed to accommodate such devices. Moreover, multiple metal levels and plates are routinely available to carry common power and ground connections. These buried metal levels can also route the offsets and fan-out from the chip contacts 21 to the connectors 22. FIG. 4 is an example of an array of connectors 22 and lead wires 29 on the second surface 112 of the interposer 10 providing connection of the contacts 21 for four chips to the probe card. The lead wires 29 fan out from the via exits to the universal connectors 22. In FIG. 4 it can be seen that the lead wiring 29 fans out toward the periphery in four separate patterns in order to make contact with the probe card connectors. An array of apertures 31 is provided near the periphery of the interposer so that the interposer 10 can be secured to the probe card, preferably with screw-like elements. The example shown in FIG. 4 is a 2x2 chip arrangement, but it should be appreciated that the technique is the same for other arrangements, the number of chips being limited mainly by the lead-wire extraction capability needed to make probe contact possible. The arrangement is most easily extended to arrays that are "Y" chips long by two chips wide. Matching the universal or standardized connector pattern on the probe card is accomplished on the interposer 10 by a combination of routing the vias outward and physically routing the lead wires 29 on the second surface 112 to the native positions of the universal connectors. The routing of the lead wires 29 connecting the vias to the connectors 22 is preferably done by coating the interposer surface with a highly conductive material, such as tin over copper, which can then be etched chemically or with a fine laser beam. Features larger than 100 micrometers are preferably defined by photolithography, as is common in the printed circuit board industry, with finer features laser ablated. Alternatively, the metal pattern is formed by laser ablating the unwanted metal. Electroless plating of a precious metal protects the metal leads and pads. In yet another embodiment, the leads are formed by photolithographically patterning a thin-film metal layer, etching, and then plating the metal leads to the desired thickness. A high-performance embodiment of the probe interposer is realized by providing a ground plate inside the interposer and/or by custom designing the lead wire dimensions to exhibit or approach a particular impedance level.
FIG. 5 is a cross-sectional view of a typical portion of the probe card assembly 100, showing the interposer 10 with the soft material 27 inset into one surface and the contact components 21 on the chip side, vias connecting the chip contacts 21 to the probe card connectors 22 on the second surface, and a probe card 50 with mating connectors 51 corresponding to the connectors of the interposer. The probe card 50 of the present invention is manufactured using printed wiring board construction, enabling the use of technologies currently available throughout the probe card industry. The universal probe card 50 includes vias 53 that provide electrical connection between the universal connectors 51, which mate with the interposer, and conductive traces 54 on the opposite surface of the probe card. The metal traces 54 on the upper surface are the conductors to which the test equipment is connected. The connectors 51 on the probe card and the connectors 22 on the interposer are arranged in a pattern standardized for a plurality of circuits under test. The positions of the connectors 51 on the card correspond to the connectors 22 on the interposer. A probe card equipped with universal connectors makes it possible to inspect a large number of devices using the same card, simply by exchanging the interposer carrying the contacts for a specific chip. Universal probe cards are specific to a particular type of test equipment. The main components of the probe card assembly, namely the interposer 10 and the probe card 50, are held securely by a series of fixtures, such as screws 30 located at the periphery of the interposer, which apply mechanical force between the connectors 22 on the interposer and the mating probe card connectors 51. Alternatively, the probe card and interposer can be secured together by a mechanical locking mechanism such as a cam ring. The firm mechanical contact between the connectors establishes the electrical connection between the components. The probe card assembly 100 is put into service by mounting the interposer 10, with its standardized probe connectors 22, on the probe card 50 with its mating connectors 51. The chip contact components on the assembly are aligned with the input/output pads of one or more chips on the semiconductor wafer, using a microscope to check the alignment before the probe head makes contact. The probe card is connected to the appropriate tester using conventional connection methods. The application of ultrasonic pulses provides a means of removing surface oxides or contaminants from the pads and contact components, whereby the vertically oriented, durable chip contact components of the probe assembly achieve intimate contact with the pads without excessive x-y movement, thus minimizing damage to the bond pads on thin, fragile ICs. The ultrasonic technique was previously disclosed in U.S. Patent Application Serial No. 09/443,033, filed November 18, 1999, which is hereby incorporated by reference. A preferred embodiment of the probe card assembly has been described and illustrated for multiple-chip testing. However, the technique is also applicable to single-chip inspection; the embodiment shown in FIG. 6 includes an interposer 60 with contact components 61 on a soft layer 67 on its first surface and conductive vias 68 connecting them to a universal or standardized array of connectors 62 on its second surface. The interposer 60 is mounted on a probe card (not shown) having an array of corresponding connectors.
In one embodiment, the interposer is mechanically attached to the probe card by threaded members, such as machine screws inserted through the interposer as shown in FIG. 1. In another embodiment, an interposer with contoured edges as shown in FIG. 6 is placed in an opening or groove in the probe card designed to accommodate the size and shape of the interposer, and is pressed into contact. In yet another embodiment, illustrated in FIG. 7a, the probe assembly includes an array of durable chip contact components 71, different from those described above, formed on a flexible membrane 74 and attached to a series of conductive traces 75, which in turn are attached to the interposer 70. Each trace forms a continuous lead along the vertical contour of the interposer and connects to a connector 72 on the second surface 712. The connectors 72 are arranged in a pattern corresponding to the arrangement of universal connectors on the probe card. The interposer 70, with its contoured edges, is snapped or press fit into a probe card (not shown). Preferably, the metal traces 75 are patterned by laser ablation, etching, and bonding of a highly conductive, ductile metal such as copper. FIGS. 7b and 7c illustrate the first and second surfaces 711 and 712 of the interposer with the chip contact components 71, the probe card connectors 72, and the connecting metal traces 75, respectively. The durable chip contact components of each of the described embodiments can be used for full-chip inspection of the input/output pads on one or more chips, as well as for inspecting process control or other test structures scribed onto the semiconductor wafer. The probe assembly of the present invention is manufactured by combining individual processing steps well known in the art. Although the preferred method includes the following sequence of steps, the invention is not limited to this combination and can include alternative methods and modifications known in the art. An assembly for providing simultaneous probe contact between one or more integrated circuit chips and a tester is preferably made by: (a) providing a dielectric interposer having thermal expansion characteristics similar to silicon, the interposer including a plurality of conductive vias arranged to correspond to the spacing of the chip contact pads, the vias extending from the first major surface to the second major surface of the interposer; (b) providing a soft material below the locations of the chip contact components; (c) depositing a layer of highly conductive metal on each major surface; (d) patterning and etching conductors of design width greater than 100 micrometers; (e) laser ablating conductors finer than about 100 micrometers, as well as excess unetched metal, on both surfaces; (f) bonding contact components to each patterned contact pad on the first surface and connector components to the ends of the lead wires on the second surface; (g) providing a probe card having connectors that mate with the connectors on the interposer; and (h) aligning the connectors and fixing or screwing the main components together so that they are in electrical contact. The conductor pattern formed in steps (d) and (e) includes an array of pads corresponding to the chip contact pads on the first surface and all necessary conductive leads to the vias on the first surface.
Also included are an array of pads on the second surface at the via exit points and an array of conductive leads terminating in a standardized pad pattern for the connectors. The present invention provides a number of innovative features to the semiconductor industry. Testing multiple chips simultaneously reduces the cycle time needed to complete the devices and the time spent on expensive test equipment. Durable, high-density contact components on the interposer reduce the cost of probe contacts and reduce maintenance, and the relatively low cost and fast cycle time of the manufacturing method allow both new and revised chip designs to be introduced at a fast pace. Software input of precise pad positions and dimensions, based exactly on the chip design, together with the use of precious metal contacts, minimizes the amount of scrubbing or excessive movement needed to break down the oxide on aluminum bond pads to obtain good electrical contact. The use of ultrasonic energy effectively enables the vertical contact required to inspect multiple chips simultaneously, with minimal scrubbing. A universal probe card that can be used for multiple circuits, with connectors that mate with the connectors on the interposer, reduces setup time and probe card cost. Although the present invention has been described with reference to particular embodiments, it is not intended that the scope be limited to the particular forms set forth herein; on the contrary, it is intended to cover such alternatives, modifications, and equivalents as fall within the spirit of the invention as set forth in the appended claims.
The following items are further disclosed with respect to the above description. (1) A probe assembly for simultaneously providing electrical connection between one or more integrated circuits on a semiconductor wafer and circuit test equipment, including: an interposer comprising a dielectric material having two major surfaces; a plurality of protruding contact components on one major surface of the interposer, each corresponding to a test pad on the one or more integrated circuits; a plurality of conductive vias connecting each of the contact components to metallized pads on the second surface of the interposer; a plurality of conductive leads that fan out from the metallized pads to a standardized array of interposer connectors; a soft material underlying the contact components on the first surface and/or the interposer connectors on the second surface of the interposer; a probe card having an array of connectors corresponding to the interposer connector array; and means for mounting the interposer on the probe card. (2) The assembly of item 1, wherein the protruding contact components comprise a noble metal or an oxidation-limiting metal. (3) The assembly of item 1, wherein the protruding contact components are stud protrusions. (4) The assembly of item 1, wherein the protruding contact components are fine wires. (5) The assembly of item 1, wherein the interposer has a coefficient of thermal expansion in the range of 2 to 10 PPM. (6) The assembly of item 1, wherein the interposer includes one or more embedded metal ground plates. (7) The assembly of item 1, wherein the pads and connecting leads on the interposer comprise a first layer of copper and a second layer of a laser-ablatable material. (8) The assembly of item 1, wherein the pads and connecting leads on the interposer are patterned by a combination of laser ablation and chemical etching. (9) The assembly of item 1, wherein the conductive patterns of the leads and pads are generated by software and input to a laser. (10) The assembly of item 1, wherein the chip contact components are spaced closer together than the probe card connectors. (11) The assembly of item 1, wherein the connectors on the second surface of the interposer mate with the array of connectors on the probe card. (12) The assembly of item 1, wherein the connectors on the probe card are arranged in a universal pattern common to a plurality of circuit devices. (13) The assembly of item 1, wherein the means for mounting the interposer on the probe card is a plurality of threaded machine screws. (14) The assembly of item 1, wherein the means for mounting the interposer on the probe card is a cam ring lock mechanism. (15) The assembly of item 1, including a source of ultrasonic energy coupled to the chip contact components. (16) A probe assembly for providing electrical connection between an integrated circuit chip on a semiconductor wafer and circuit test equipment, including: an interposer comprising a dielectric material having two major surfaces and contoured sides; a plurality of protruding contact components disposed on a surface of soft material on one major surface of the interposer, each corresponding to a test pad on the chip; a plurality of conductive vias connecting each of the contact components to metallized pads and a standardized connector array on the second surface of the interposer; a probe card having an array of connectors corresponding to the standardized connector array; and means for mounting the interposer on the probe card. (17) The assembly of item 16, wherein the means for mounting the probe card and the interposer is a press fit. (18) A probe assembly for providing electrical connection between an integrated circuit chip on a semiconductor wafer and circuit test equipment, including: an interposer comprising a dielectric material having two major surfaces and contoured sides; a plurality of protruding contact components disposed on a surface of soft material on one major surface of the interposer, corresponding to the test pads on the chip; a plurality of conductive leads connecting each of the contact components to metallized pads and a standardized connector array on the second surface of the interposer; and a probe card having an array of connectors corresponding to the standardized connector array, wherein the interposer is press fit into the probe card. (19) A test probe assembly for simultaneously providing electrical connection between linear test structures imprinted on one or more integrated circuits on a semiconductor wafer and electrical test equipment, including: an interposer comprising a dielectric material having two major surfaces; a plurality of protruding contact components disposed on a soft material surface on one major surface of the interposer, corresponding to test pads on the one or more integrated circuits; a plurality of conductive vias connecting each of the contact components to metallized pads on the second surface of the interposer; a plurality of leads that fan out from the pads to a standardized array of connectors; a probe card having an array of connectors corresponding to the standardized connector array; and means for mounting the interposer on the probe card. (20) A method of forming a probe assembly for simultaneously providing electrical connection between one or more integrated circuits on a semiconductor wafer and a circuit tester, comprising: providing a dielectric interposer having thermal expansion characteristics similar to silicon and having a plurality of conductive vias at locations corresponding to the spacing of the chip contact pads, the vias extending from the first major surface to the second major surface of the interposer; depositing a layer of highly conductive metal on each major surface; patterning an array of pads corresponding to the chip contact pads on the first surface; patterning an array of pads at the via exit points on the second surface and an array of conductive leads terminating in a standardized pattern; providing chip contact components on each patterned contact pad on the first surface; providing a layer of soft material underlying the chip contact components and/or the probe connectors on the interposer; bonding connector components to the ends of the leads on the second surface; providing a probe card having connectors for mating with the connectors on the interposer; and aligning and mechanically mounting the connectors. (21) The method of item 20, wherein the pattern on the interposer surface is input as software to a computer-controlled laser. (22) The method of item 20, wherein the metal pattern is formed at least partially by laser ablation. (23) The method of item 20, wherein the metal pattern is formed by photolithography and chemical etching. (24) A probe card assembly for simultaneously inspecting one or more integrated circuit chips, including: an interposer 10, which is an electrically insulating material, having a plurality of protruding contact components 21 disposed on a surface of soft material and arranged in a pattern corresponding to the chip pads for making electrical contact with one or more chips of a wafer; a series of vias 35 joining the chip contact components 21 to a lead configuration terminating in a universal arrangement of connectors 22 on the second surface; and a probe card having connectors that connect to the connectors on the interposer. The connectors on the interposer are attached to the connectors on the probe card, providing a vertical probe assembly that can use ultrasonic energy to minimize scrubbing or excessive movement. The universal probe card is specific to the structure of the tester and common to the series of circuits under test.
A PCI bus time-based weighted round robin arbiter has a phase table (202) divided into a plurality of phases. Each of the phases is assigned to one of the ports on the PCI bus. An arbiter state machine (250) is coupled to the phase table and looks at the port assignments for the next several phases, for example, three phases. If the arbiter determines that the next several phases are all assigned to a single port, that port is selected as the next bus master.
CLAIMS 1. A PCI bus time-based weighted round robin arbiter, comprising: a phase table comprising a plurality of phases, each of the phases being assigned to a port on a PCI bus; and an arbiter state machine coupled to the phase table for looking at a plurality of phases at a time, the arbiter state machine selecting the next bus master to use the PCI bus when a predetermined number of phases have been assigned to a single port. 2. The PCI bus time-based weighted round robin arbiter of Claim 1, wherein the arbiter state machine looks ahead at the port assigned to upcoming phases. 3. The PCI bus time-based weighted round robin arbiter of Claim 1 or 2, wherein the predetermined number of phases is 3 or 128. 4. The PCI bus time-based weighted round robin arbiter of any of Claims 1 - 3, wherein the arbiter state machine can terminate a bus transaction in order to preserve the isochrony of data from one of a plurality of devices on the PCI bus. 5. The PCI bus time-based weighted round robin arbiter of any of Claims 1 - 4, wherein each of the phases determines a time-based slot for a bus transaction. 6. In a PCI bus arbiter, arbiter means for guaranteeing a predetermined time for transmitting isochronous data, comprising: means for dividing a time cycle into a predetermined number of phases, each of the phases being assigned to a port on a PCI bus; and an arbiter state machine responsive to the means for dividing for granting the use of the PCI bus to a selected port. 7. The arbiter means of Claim 6, wherein the means for dividing comprises a phase table having the predetermined number of phases. 8. The arbiter means of Claim 7, wherein each of the phases determines a time-based slot for a bus transaction. 9. A method of operating a PCI bus, comprising: providing a time-based arbiter for assigning each device on the bus a predetermined time slot in which to perform bus transactions; and terminating a bus transaction for one device on the bus in order to guarantee isochrony of data being transferred for another one of the devices on the bus.
TIME-BASED WEIGHTED ROUND ROBIN ARBITER This invention relates to a time-based weighted round robin arbiter, and more specifically to an arbiter for a PCI Express-to-PCI bridge which can support isochronous traffic.
BACKGROUND Peripheral Component Interconnect (PCI) is a parallel bus architecture developed in 1992 which has become the predominant local bus for personal computers and similar platforms. The implementation of this technology has come close to its practical limits of performance and cannot easily be scaled up in frequency or down in voltage. A new architecture utilizing point-to-point transmission, having a higher speed, and which is scalable for future improvements, is known as "PCI Express." One advantage of PCI Express is the ability to transfer isochronous data. The New IEEE Standard Dictionary of Electrical and Electronics Terms, fifth edition, defines "isochronous" as the time characteristic of an event or signal recurring at known, periodic time intervals. In terms of the architecture, transmission of isochronous data requires that the bus have a guaranteed minimum bandwidth and maximum latency in order to maintain the isochrony of the data. Video data is isochronous data because the frames of data must arrive by a certain time or the data has no value. A PCI Express-to-PCI bridge allows PCI devices to be connected to a PCI bus in a PCI Express architecture. In a PCI bus architecture, the bus arbiter utilizes round-robin arbitration which is "fair" to all devices on the bus. Once a device on the bus has received a grant to use the bus, it can hold the bus until its transaction is complete or until 4 kilobytes of data have been transferred, so isochrony cannot be guaranteed. FIG. 1 shows a block diagram of a computer system 100 implementing a standard PCI Express-to-PCI bridge 112. The bridge is coupled by lines 108 to the PCI Express fabric (a network of interconnected devices and switches) 106, which is coupled by line 104 to CPU 102. The PCI Express fabric is also coupled via lines 110 to other devices (not shown). The PCI bus 114 is connected to the bridge and to two PCI applications 116, 120, respectively. Each of the applications has request/grant lines 118 and 122, respectively. PCI application 120 is isochronous and is connected via line 124 to an isochronous fabric, such as a device that conforms to the IEEE 1394 standard. Because of the way a PCI architecture operates, interfering traffic from the other PCI application will have equal priority and will interfere with the isochronous transmission of data from the PCI application 120. Accordingly, there is a need for a PCI bus arbiter that can provide for isochronous data transmission even though the PCI bus does not natively support this feature.
SUMMARY A general object of the present invention is to provide a PCI bus arbiter that allows the PCI bus to provide isochronous data transmission. This and other objects and features are provided, in accordance with one aspect of the present invention, by a PCI bus time-based weighted round robin arbiter comprising a phase table comprising a plurality of phases, each of the phases being assigned to a port on a PCI bus. An arbiter state machine is coupled to the phase table for looking at a plurality of phases at a time.
The arbiter state machine selects the next bus master to use the PCI bus when a predetermined number of phases have been assigned to a single port. Another aspect of the invention includes arbiter means in a PCI bus arbiter for guaranteeing a predetermined time for transmitting isochronous data, comprising means for dividing a time cycle into a predetermined number of phases, each of the phases being assigned to a port on a PCI bus. An arbiter state machine is responsive to the means for dividing for granting the use of the PCI bus to a selected port. A further aspect of the invention comprises a method of operating a PCI bus including providing a time-based arbiter for assigning each device on the bus a predetermined time slot in which to perform bus transactions, and terminating a bus transaction in order to guarantee isochrony of data being transferred for one of the devices on the bus.
BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 shows a block diagram of a computer system implementing a standard PCI Express-to-PCI bridge; FIG. 2 shows the PCI bus arbiter of the present invention implemented as a state machine; and FIG. 3 illustrates the operation of the state machine of FIG. 2 in issuing grant signals to devices on the PCI bus.
DETAILED DESCRIPTION OF THE EMBODIMENTS The PCI bus is a synchronous bus architecture in which all data transfers are performed relative to a system clock, such as the signal PCI CLK shown in FIG. 3. Initially, the PCI bus operated at a maximum clock speed of 33 MHz, which was later revised to support operation at 66 MHz. The PCI standard implements a 32-bit multiplexed address and data bus, which allows a reduced pin count to lower the cost and size of PCI components. A PCI bus cycle is initiated by driving an address onto the 32-bit bus during a first clock edge, which is called the address phase. The address phase is signified by the assertion of the FRAME# signal. One or more data phases, in which data is transferred over the bus, begin at the next clock edge. All PCI devices are in parallel on this 32-bit bus. The device which initiates a data transfer is called the initiator or bus master, and the device which receives the data is the target, which is a bus slave. Since all PCI devices which are capable of initiating a data transfer are bus masters, they can take control of the bus to perform a data transfer without requiring the assistance of the CPU. This means that the bus itself must contain a control circuit to resolve conflicts when two or more devices want to transfer data at the same time. This control circuit, called an arbiter, implements an arbitration scheme in order to provide "fair" access to the bus for each device. The object of "fair" arbitration is to provide access to the bus for every device when it needs access, without creating so long a delay that other devices are kept from obtaining access. The information needed to ascertain the maximum latency, the time within which the device is allowed to transfer its data, is contained within a configuration register in each bus master. The actual arbitration is hidden: it occurs while another bus transaction is going on, so that the next bus master can begin its transfer as soon as the ongoing transfer has been completed. FIG. 2 shows a PCI bus arbiter capable of supporting isochronous traffic flow, generally as 200. The arbiter consists of a phase table 202 and a state machine 250.
The phase table 202 contains 128 phases, labeled 0 - 127, each of which contains the identification of the PCI device assigned to that phase. In the implementation shown in FIG. 2, up to sixteen (16) PCI devices are provided for by making each phase table entry 4 bits wide. Table 1 shows an example implementation with five (5) PCI devices and a PCI master state machine which contains the PCI bus arbiter. TABLE 1 In Table 1, each device has its own port number, and all devices except the PCI master state machine have a device number and a grant number. In the PCI arbitration scheme, once the arbiter grants a device access to the bus, the device's grant signal GNT# is asserted. This causes the device to monitor the state of the FRAME# and IRDY# signals, which indicate when the bus is free. If the grant signal is still asserted when the bus is free, the device initiates its transaction. Each phase or time slot in the phase table 202 contains the port ID of the port assigned to that phase. Once the phase table has been populated with the port assigned to each time slot, the arbiter can become operative. After the time-based arbitration signal on line 208 is enabled, the arbiter state machine enters the Idle_arb state 210. In this state, the arbiter state machine looks ahead 3 phases in the phase table by means 206. If the arbiter state machine sees that at least 3 consecutive phases are programmed with the ID of a new bus master, it will select that bus master as the next bus master to use the PCI bus. Once the decision is made to select a new bus master, the arbiter state machine moves along path 214 to the Issue_gnt state 220. If the next 3 phases are not programmed with the same bus master identification, the arbiter state machine follows path 212 and remains in the Idle_arb state 210 until the next 3 phases are programmed with the same bus master identification. In state 220, the arbiter state machine continues to monitor the next 3 phases in the phase table 202. Whenever this look-ahead phase index determination indicates that a new bus master phase is approaching, the arbiter state machine moves from the Issue_gnt state 220 along path 226 to the Release_gnt state 230. In this state, the transition from one bus master to another can occur. In order to make this determination, the state machine looks at a configuration register stored in the bus master that currently controls the bus. An exemplary 16-bit configuration register is shown in Table 2. TABLE 2 In Table 2, bits 1 and 2 determine how the arbiter acts. If bit 1 is set to one, the 128-phase time-based port arbiter (the present invention) is operational. If this bit is set to zero, the arbiter functions as a normal PCI bus arbiter using the "fair" arbitration scheme. If bit 2 is set to zero, the arbiter makes a normal transition from one bus master to another. If bit 2 is set to one, the arbiter enters an "aggressive mode," in which the first bus master is stopped in the middle of a transaction to preserve the isochrony of traffic on the bus, as discussed below. Each bus master on the PCI bus has a configuration register which is a maximum latency timer register for that device. Each bus clock decrements the register by 1. If the arbiter wants to allow another device to access the bus, it removes the GNT# signal from the current bus master.
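As a rough illustration, the three-phase look-ahead described above can be sketched in C as follows. This is a hedged software model of what is actually hardware; the names phase_table, NUM_PHASES, and look_ahead_three are illustrative and do not appear in the patent.

#include <stdint.h>
#include <stdbool.h>

#define NUM_PHASES 128  /* one entry per time slot in the phase table */

/* Each 4-bit entry holds the port ID assigned to that phase;
 * uint8_t is used here for simplicity. */
static uint8_t phase_table[NUM_PHASES];

/* Returns true (and the port ID) when the next three consecutive
 * phases are all assigned to the same port, signalling that the
 * arbiter may select that port as the next bus master. */
static bool look_ahead_three(int current_phase, uint8_t *next_master)
{
    uint8_t p0 = phase_table[(current_phase + 1) % NUM_PHASES];
    uint8_t p1 = phase_table[(current_phase + 2) % NUM_PHASES];
    uint8_t p2 = phase_table[(current_phase + 3) % NUM_PHASES];

    if (p0 == p1 && p1 == p2) {
        *next_master = p0;
        return true;
    }
    return false;  /* remain in Idle_arb until three phases agree */
}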
If the latency register is zero or less when the grant signal is removed, the device has had its minimum guaranteed period of access and must complete the current data cycle and immediately relinquish control of the bus. If the value of the register is positive, however, the device may continue with its transfer until the register value reaches zero, when it must release the bus to the next device. If bit 2 is programmed with a zero, once the arbiter state machine has determined that a new bus master will take control of the bus, the machine proceeds via path 226 to the Release_gnt state 230, in which the grant signal for the current bus master is released. The arbiter state machine stays on path 232 waiting for the current bus master to finish its transaction, then proceeds along path 236 back to the Idle_arb state 210. If, however, bit 2 is programmed with a one, the arbiter state machine also proceeds along path 234 to the PCI bus state machine 240. In this case, the PCI arbiter state machine resets the internal latency timer register of the active bus master, which in turn causes the PCI bus to be released immediately, within 1 or 2 clock cycles. This allows a new bus master to take control of the bus, preserving the isochrony of the data to be transferred by the new bus master, which cannot wait until the present bus master has completed its transaction. Once the action is determined by the arbiter state machine 250, the action is transmitted along path 234 to the PCI bus state machine 240. The PCI bus state machine 240 is a separate state machine from the arbiter state machine. It is connected to the PCI bus and generates the signals on the PCI bus, such as those shown in FIG. 3, in accordance with the PCI bus standard, in order to transfer access to the PCI bus from one bus master to another. Path 234 is not an arbiter state machine 250 path, so control of the arbiter state machine 250 does not pass to the PCI bus state machine 240; control of the arbiter state machine 250 proceeds along path 236 from state 230. Thus, there is no need for a "return path" from state 240, because path 234 merely couples control signals to an independent state machine, the PCI bus state machine 240. It should be noted that although the clock signal is only shown going to states 220 and 240, it is also coupled to the other states, but has been omitted for clarity of illustration. Each segment of the phase table 202 represents a 100 ns time slot; therefore, a 10 MHz clock is needed. In the present implementation, a 200 MHz clock signal was available and is divided down to generate the phase table clock. The remainder of the arbiter state machine and the PCI bus state machine operate at the 33 or 66 MHz PCI bus clock. FIG. 3 shows the operation of the present invention for 3 PCI bus masters, in addition to the arbiter, on a PCI bus. As shown, the grant signals GNT_0, GNT_1, and GNT_2 are sequentially asserted by the PCI bus state machine, by pulling the grant line low, in order to grant devices zero, one, and two permission to use the PCI bus. While the invention has been shown and described with reference to certain embodiments, it is well understood by those skilled in the art that various substitutions, additions and modifications can be made without departing from the scope of the invention.
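The latency-timer and aggressive-mode behavior described above can likewise be sketched in C. Again this is only a software model of hardware behavior under stated assumptions; the structure and function names are illustrative, not from the patent.

#include <stdbool.h>

typedef struct {
    int  latency_timer;   /* maximum latency timer register */
    bool gnt_asserted;    /* state of this master's GNT# line */
} bus_master_t;

/* Each bus clock decrements the latency timer by 1. */
static void bus_clock_tick(bus_master_t *m)
{
    if (m->latency_timer > 0)
        m->latency_timer--;
}

/* Models the arbiter removing GNT# from the current master. In
 * aggressive mode (bit 2 of the configuration register set), the
 * latency timer is reset so the bus is released within 1 or 2
 * clocks; otherwise the master may keep the bus until its timer
 * reaches zero, then must finish the current data cycle and
 * relinquish the bus. */
static void remove_grant(bus_master_t *m, bool aggressive_mode)
{
    m->gnt_asserted = false;
    if (aggressive_mode)
        m->latency_timer = 0;  /* force nearly immediate release */
}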
Systems and methods for reducing the bandwidth needed to read the inputs to a matrix multiply operation may improve system performance. Rather than each thread reading a row of a first input matrix and a column of a second input matrix to produce an element of a product matrix, a column of the first input matrix and a single element of the second input matrix are read to produce a column of partial dot products of the product matrix. Therefore, the number of input matrix elements read in each step of producing a column of the product matrix is reduced from 2N to N+1, where N is the number of elements in a column of the product matrix.
1. A method of performing a set of operations including a propagation operand for a plurality of threads or channels, comprising: obtaining a first value specified by the propagation operand included in the set of operations; providing the first value to a plurality of program instruction execution units; obtaining a set of second values specified by a parallel operand included in the set of operations, wherein each of the second values corresponds to one of the plurality of threads or channels; providing a second value of the set of second values to each of the plurality of program instruction execution units; and performing the set of operations for each of the plurality of threads or channels. 2. The method of claim 1, further comprising determining that a memory operand included in the set of operations is a propagation operand based on a format specified for the set of operations. 3. The method of claim 1, further comprising determining that a memory operand included in the set of operations is a propagation operand based on an address specified for the memory operand. 4. The method of claim 1, further comprising determining, based on a register specified for a source operand, that the source operand included in the set of operations is a propagation operand. 5. The method of claim 1, wherein the first value and the second values are represented in a fixed-point data format. 6. The method of claim 1, wherein the first value and the second values are represented in a floating-point data format. 7. The method of claim 1, wherein the set of operations includes multiply-add operations. 8. The method of claim 1, wherein the set of operations is represented as a single program instruction including the propagation operand, the parallel operand, and a computation for generating a result based on the propagation operand. 9. The method of claim 1, wherein the set of operations is represented as a first load instruction including the propagation operand, and a second program instruction including the parallel operand and a computation specified to produce a result based on the propagation operand. 10. The method of claim 1, wherein the set of operations is represented as a first load instruction including the propagation operand, a second load instruction including the parallel operand, and a third program instruction specifying a computation that produces a result based on the propagation operand. 11. The method of claim 1, wherein the propagation operand specifies an address having a single value for all of the plurality of threads. 12. The method of claim 1, wherein the parallel operand specifies an address having a different value for each of the plurality of threads.
MATRIX MULTIPLICATION WITH REDUCED BANDWIDTH REQUIREMENTS
TECHNICAL FIELD Embodiments of the invention relate generally to performing matrix multiplication using multi-threaded or vector processing, and more specifically to reducing memory bandwidth.
BACKGROUND Matrix-matrix multiplication is an important building block of many calculations in the field of high-performance computing. Each multiply-add operation used to perform matrix-matrix multiplication requires access to two source operands in memory. Therefore, in a multi-threaded processor that simultaneously executes T threads, each thread performing a multiply-add operation, 2T memory operands are needed to supply the operands of the multiplication part of the operation. Similarly, in a vector processor (e.g., a T-channel single instruction multiple data (SIMD) vector processor) that executes T data channels in parallel, each vector multiply-add requires 2T memory operands. In general, providing memory bandwidth for 2T simultaneous accesses becomes increasingly difficult as T increases, and therefore matrix multiplication becomes bandwidth-limited for a sufficiently large T. This limits the overall computing performance of the processing device for matrix multiplication. Therefore, it is desirable to reduce the memory bandwidth required for multiply-add operations in order to improve computing performance for matrix multiplication.
SUMMARY OF THE INVENTION The present invention relates to a new system and method for reducing the memory bandwidth requirements of matrix multiplication on a multi-threaded processor. Memory bandwidth requirements are reduced by performing each step of the multiplication of two matrices in such a way that a group of T execution threads, or T vector channels, shares one of the two source operands of its respective multiply-add operations. This method is realized by including an operand propagation mechanism within the multi-threaded processing device. The propagation mechanism allows the contents of one storage location to be propagated to all T threads in a thread group, or all T channels in a vector, where the value can be used as a source operand of an executing instruction, including one or more of the instructions that make up a multiply-add operation. The mechanism provides a way for software to control this propagation. When the propagation mechanism is used, the memory bandwidth required to perform operations such as multiply-add can be reduced. For each set of simultaneous multiply-add operations, the T execution threads of a thread group access only T + 1 memory locations, as opposed to the 2T memory locations accessed when the conventional method of performing matrix multiplication is used. When memory bandwidth is limited, reducing the memory bandwidth required to obtain operands for matrix multiplication operations can improve matrix multiplication performance. The performance of other memory-bandwidth-limited operations can be improved as well. Various embodiments of the method of the present invention for executing a program instruction for a plurality of threads in a thread group include obtaining a first value specified by a propagation operand included in the program instruction and obtaining a set of second values specified by a parallel operand included in the program instruction, wherein each of the second values corresponds to one of the plurality of threads in the thread group.
The first value is provided to a plurality of program instruction execution units, a second value is provided to each of the plurality of program instruction execution units, and the program instruction is executed for all of the plurality of threads in the thread group. Various embodiments of the method of the present invention for multiplying a first matrix by a first column of a second matrix to generate a first column of a product matrix include multiplying each element of the first column of the first matrix by the first element of the first column of the second matrix to generate a first group of elements corresponding to the first column of the product matrix. The first group of elements corresponding to the first column of the product matrix is stored in a set of registers. Each element of the second column of the first matrix is then multiplied by the second element of the first column of the second matrix to generate a second group of elements corresponding to the first column of the product matrix, each of the stored elements is summed with the corresponding element of the second group to generate a group of partial dot products for the first column of the product matrix, and the group of partial dot products is stored in the set of registers.
BRIEF DESCRIPTION OF THE DRAWINGS For a detailed understanding of the features of the invention stated above, a more specific description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the accompanying drawings. It should be noted, however, that the drawings illustrate only typical embodiments of the invention and therefore should not be considered limiting of its scope, for the invention may admit other equally effective embodiments. FIG. 1A illustrates a conceptual diagram of a matrix A and a matrix B that are multiplied to produce a matrix C according to one or more aspects of the present invention. FIG. 1B illustrates a flowchart of an exemplary method of multiplying matrix A and matrix B to produce matrix C according to one or more aspects of the present invention. FIG. 1C illustrates a conceptual block diagram of a plurality of execution units receiving parallel operands and a propagated operand according to one or more aspects of the present invention. FIG. 2 illustrates a flowchart of an exemplary method of executing an instruction including a propagation operand in accordance with one or more aspects of the present invention.
DETAILED DESCRIPTION In the following description, various specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without one or more of these specific details. In other cases, well-known features have not been described in order to avoid obscuring the present invention. FIG. 1A illustrates a conceptual diagram of a matrix A 101 and a matrix B 102 that are multiplied to produce a matrix C 103 according to one or more aspects of the present invention. Conventionally, the elements of a row of matrix A 101 and a column of matrix B 102 are used to compute a dot product, producing an element of a column of matrix C 103. For example, the elements of row 107 of matrix A 101 and the elements of column 105 of matrix B 102 (for example, elements 131, 132, and 146) are used to generate element 152 in column 104 of matrix C 103.
When a conventional system uses multiple execution threads to generate matrix C 103, where each thread generates one element of matrix C, each thread reads elements from matrix A 101 and elements from matrix B 102 to perform the successive multiply-adds for a column (or row) of matrix C 103. As described previously, in a conventional system in which T threads are processed in parallel, 2T elements are read for each step of the multiply-add sequence. In the present invention, instead of reading a row of matrix A 101 and a column of matrix B 102 to generate an element of matrix C 103, a column of matrix A 101 and a single element of matrix B 102 are read to generate a column of partial dot products of matrix C 103. For example, the elements of column 106 and element 131 of column 105 can be read and multiplied to produce a column of products. The column of products, that is, the product of element 111 and element 131, the product of element 112 and element 131, the product of element 113 and element 131, the product of element 114 and element 131, and so on, is then accumulated into column 104 to update the partial dot products of column 104. Further columns of matrix A 101 and further elements of column 105 of matrix B 102 are used to compute additional product columns. The additional product columns are accumulated sequentially into the partial dot product column until the partial dot product column is complete. Therefore, each thread reads an element from one column of matrix A 101, while a single element of matrix B 102 is read once and shared by all threads to perform the multiply-add. The number of input matrix elements read to produce each partial dot product column of matrix C 103 is thus reduced from 2T to T + 1. Each element read from matrix B 102 is propagated to the T threads to be multiplied by the elements of a column of matrix A 101. FIG. 1B illustrates a flowchart of an exemplary method of multiplying matrix A and matrix B to produce matrix C according to one or more aspects of the present invention. In step 170, the registers or memory locations storing the elements of matrix C 103 are initialized; for example, each element can be initialized to a value of 0. In step 171, each element in the first column of matrix A 101 is multiplied by one element in a column of matrix B 102. For example, a first thread multiplies element 111 by element 131, a second thread multiplies element 112 by element 131, and so on, to generate a column of product elements. In step 172, each product element generated in step 171 is summed with the corresponding element in a column of matrix C 103. For example, the product of elements 111 and 131 is summed with element 151 to accumulate a partial dot product. In step 173, the method determines whether another element exists in the column of matrix B 102. For example, after the partial dot products of column 104 of matrix C 103 have been accumulated using element 131, element 132 will be used, and so on, until the last element in the column, element 146, has been used. If the method determines in step 173 that all elements in the column of matrix B 102 have been used, the method proceeds to step 175. Otherwise, in step 174 the method obtains the next element in the column of matrix B 102 and the next column of matrix A 101, and repeats steps 171, 172, and 173 to accumulate another product column into the partial dot products of column 104 of matrix C 103.
The elements in the column of matrix B 102 need not be used in any particular order, as long as each element is used to produce a product with the corresponding column of matrix A 101. In step 175, the method determines whether another column exists in matrix B 102, and if not, the method proceeds to step 177 and the matrix multiplication operation is complete. Otherwise, in step 176 the method obtains an unused column of matrix B 102 and the first column of matrix A 101, and steps 171, 172, 173, and 174 are repeated to generate another column of matrix C 103.

FIG. 1C illustrates a conceptual block diagram of a plurality of program instruction execution units that each receive a propagation operand according to one or more aspects of the present invention. The plurality of program instruction execution units may be configured to reduce the bandwidth required to obtain the source operands (i.e., the elements of matrix A 101 and matrix B 102) used to produce matrix C 103. Each program instruction execution unit (execution units 180, 181, 182, 183, 184, 185, 186, and 187) is configured to generate at least one element of matrix C 103. Execution units 180, 181, 182, 183, 184, 185, 186, and 187 may be configured to execute program instructions in parallel. For example, each execution unit may process a thread in a group of multiple threads to execute the program instructions of the multiple threads in parallel, for example, in a multi-threaded processor. In another example, each execution unit may process a channel in a group of multiple channels to execute the program instructions of the multiple channels in parallel, for example, in a single-instruction, multiple-data (SIMD) vector processor.

Each execution unit receives a unique parallel operand from parallel operands 190. The elements of matrix A 101 can be the parallel operands. Each execution unit also receives a propagation operand from propagation operand 191; the same propagation operand is output by propagation operand 191 to each execution unit. The elements of matrix B 102 can be the propagation operands. In other embodiments of the present invention, the roles of matrix A 101 and matrix B 102 are reversed: matrix A 101 provides the propagation operand and matrix B 102 provides the parallel operands.

For each set of multiply-add operations performed simultaneously, the T execution units access only T + 1 memory locations, as opposed to the 2T memory locations accessed when the conventional method of performing matrix multiplication is used. When the propagation mechanism is used, the memory bandwidth required to perform operations such as multiply-add can be reduced. Therefore, when processing performance is limited by memory bandwidth, performance can be nearly doubled by using the propagation mechanism. Although the propagation mechanism is described in the context of matrix-matrix multiplication (specifically, the multiply-add operation), other multithreaded operations can also be performed using the propagation mechanism; a minimal sketch of the matrix-multiplication case follows.
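As a concrete illustration of the method of FIG. 1B, the following CUDA-style kernel is a minimal sketch, not part of this disclosure; the kernel name, launch configuration, and row-major layout are assumptions made for illustration. Each thread accumulates one element of a column of matrix C, and the matrix B element read at each step of the loop is identical across all T threads, so only T + 1 distinct input elements are needed per multiply-add step:

    // Minimal sketch (hypothetical names): each thread computes one element
    // of column `col` of C. A is M x K and B is K x N, both assumed row-major.
    __global__ void columnDotProduct(const float* A, const float* B, float* C,
                                     int M, int K, int N, int col)
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;  // parallel operand index
        if (row >= M) return;
        float acc = 0.0f;                 // step 170: initialize the C element
        for (int k = 0; k < K; ++k) {
            float a = A[row * K + k];     // unique per thread (parallel operand)
            float b = B[k * N + col];     // identical for all threads (propagation operand)
            acc += a * b;                 // steps 171-172: multiply and accumulate
        }
        C[row * N + col] = acc;           // the dot products of the column are complete
    }

Because b depends only on the loop index k and not on the thread index, a propagation mechanism can service all T threads with a single read of matrix B at each step, which is the source of the reduction from 2T to T + 1 reads described above.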
Examples of such other operations include minimum, maximum, addition, subtraction, sum of absolute differences, sum of squared differences, multiplication, and division.

Conventional processing systems perform matrix-matrix multiplication by subdividing the operation, possibly at several levels, to effectively utilize the multiple levels of a memory system composed of memory devices with different performance characteristics (e.g., throughput, latency, or the like). The subdivision decomposes the multiplication of large matrices into multiplications of portions of the matrices, referred to as tiles. A processing device coupled to a memory system having at least two levels of different speeds can accelerate matrix multiplication by copying tiles of the two source matrices from the slower level of the memory system to the faster level, multiplying those tiles to obtain a result tile, and copying the result tile back to the appropriate portion of the result matrix stored in the slower level of the memory system.

Tiling techniques for performing matrix multiplication are known to those skilled in the art. The system and method of the present invention can be applied to calculate the elements of each tile of a product matrix. Specifically, the propagation mechanism can be used to calculate the elements of a tile, where matrix A 101, matrix B 102, and matrix C 103 are each a tile of a larger matrix. Similarly, matrix-vector multiplication is simply the special case in which one dimension of a matrix is equal to one.

FIG. 2 illustrates a flowchart of an exemplary method of performing an instruction including a propagation operand in accordance with one or more aspects of the present invention. In step 200, the method receives an instruction including one or more operands for multi-threaded processing. In step 205, the method determines whether the first operand is a propagation operand. Various techniques can be used to specify a particular operand as a propagation operand. One such technique is to define an instruction whose instruction format specifies an operand as a propagation operand. For example, two different load instructions can be defined, one including a parallel operand and the other including a propagation operand.

The code shown in Table 1 represents a set of operations or instructions for the T parallel execution units of a multithreaded or vector processor as shown in FIG. 1C, which can be used to perform the T multiply-add operations of a matrix-matrix multiplication step.

Table 1
LD A, A1 + offsetA
LDB B, A2 + offsetB
FMAD C, A, B, C

The LD instruction includes a parallel operand for T threads or T vector channels, which specifies the memory address A1 + offsetA for each thread or channel, where A1 can be the base address of a matrix tile, matrix, column, or the like, and offsetA may be the offset of a specific column or a part of a column; offsetA can be omitted. The effective address varies with each thread or channel. For example, T address registers A1 (one for each thread or channel) are initialized with a different address for each thread or channel. The T elements stored in the T memory locations specified by the T addresses A1 + offsetA are loaded into register A of each execution unit. Each execution unit, processing one thread or channel, reads a different memory location.
Therefore, the address A1 + offsetA can be varied with a unique thread or channel identifier to specify a different memory location for each thread or channel. For example, the address register A1 in each thread or channel is initialized with a different address that varies with the thread or channel identifier.

The LDB instruction includes a propagation operand specifying a memory address A2 + offsetB, where A2 may be the base address of a matrix tile, a matrix, a column, or the like, and offsetB may be the offset of a particular column or part of a column. The element stored in the memory location specified by A2 + offsetB is loaded into register B of each execution unit. Unlike the LD instruction, for which A1 + offsetA has a different value for each thread or channel, A2 + offsetB has the same value for all threads in a thread group or all channels in a vector. Finally, each execution unit executes an FMAD (floating-point multiply-accumulate) instruction to perform the multiply-add function using registers A, B, and C. In other embodiments of the invention, an IMAD (integer multiply-accumulate) instruction is used to perform the multiply-add function. In further embodiments of the invention, another computation (e.g., addition, subtraction, or a similar computation) may be represented by an instruction that produces a result based on a propagation operand.

In some embodiments of the invention, the functionality provided by the instruction group shown in Table 1 may be implemented using fewer instructions. For example, the LD and LDB instructions can be combined into a single instruction that is dual-issued with an FMAD instruction for parallel execution. In another example, the LD, LDB, and FMAD instructions can be combined to form a wide instruction that is provided to multiple execution units for parallel execution.

Another technique that can be used to specify a particular operand as a propagation operand is to define a special memory address within a propagation memory area. For example, in Table 1, the LD instruction may be used in place of the LDB instruction, where A2 + offsetB corresponds to a memory address in the propagation memory area. When an address in the propagation memory area is specified, only one memory location is read, and the data stored in that one location is propagated to each field of the destination (B).

Another technique that can be used to specify a particular operand as a propagation operand is to define a particular register whose contents are propagated to each execution unit. For example, in Table 1, the LDB instruction would load a single register (e.g., register B) rather than propagating the element stored in the memory location specified by A2 + offsetB to each execution unit. Register B would be specified as a propagation register, and when register B is specified as an operand of an instruction (such as the FMAD instruction in Table 1), the value stored in register B is propagated to each execution unit in order to execute the instruction.

If the method determines in step 205 that the first operand is a propagation operand, then in step 210 the method reads the single value specified by the operand. In step 215, the single value is propagated to each of the execution units. In embodiments of the invention that specify one or more propagation registers, the single value is loaded into a propagation register and then propagated to the execution units. A warp-level software analogue of this propagation is sketched below.
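On current GPUs, a comparable effect can be obtained in software with a warp-shuffle broadcast. The following CUDA fragment is a hypothetical analogue of the Table 1 sequence and is not the instruction set disclosed here; the kernel name and the addressing scheme are illustrative assumptions. Each lane performs a per-lane load (playing the role of LD), a single lane reads the shared element (playing the role of LDB), and the value is propagated to all 32 lanes before the multiply-add (playing the role of FMAD):

    // Hypothetical analogue of Table 1 for one 32-lane warp (T = 32).
    __global__ void broadcastFma(const float* A, const float* B, float* C,
                                 int offsetA, int offsetB)
    {
        int lane = threadIdx.x & 31;                // lane identifier within the warp
        float a = A[offsetA + lane];                // LD: a different address per lane
        float b = (lane == 0) ? B[offsetB] : 0.0f;  // LDB: only one memory location is read
        b = __shfl_sync(0xffffffffu, b, 0);         // propagate lane 0's value to all lanes
        C[offsetA + lane] = fmaf(a, b, C[offsetA + lane]);  // FMAD: C = A * B + C
    }

Here one read of B[offsetB] serves all 32 lanes, so the warp accesses T + 1 = 33 distinct memory locations per multiply-add step instead of 2T = 64.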
Returning to the method of FIG. 2, if the method determines in step 205 that the first operand is not a propagation operand, that is, the first operand is a parallel operand, then in step 220 the method reads the values specified by the operand. Each execution unit, for each thread or channel, can read a different value; that is, the number of values read equals the number of threads or channels executing. In step 225, the read values are output (in parallel) to the execution units.

In step 230, the method determines whether another operand is specified for the instruction, and if so, the method returns to step 205. Otherwise, the method continues executing the instruction to produce a result using the parallel and/or propagated values provided to the execution units. Note that an instruction may represent a single operation, such as a load or a computation, or may represent a combination of operations, such as multiple loads and/or computations.

Those skilled in the art will appreciate that any system configured to perform the method steps of FIG. 1B or FIG. 2, or an equivalent thereof, is within the scope of the present invention. Memory bandwidth requirements are reduced by performing the multiplications of a given step of a matrix multiplication in such a way that a group of T execution threads or channels shares one of the two source operands for their respective multiply-add operations. This method is enabled by including an operand propagation mechanism in a parallel processing device (e.g., a multi-threaded processor or a SIMD vector processor).

The propagation mechanism allows the contents of a storage location to be propagated to all T threads in a thread group (or all T channels in a SIMD vector processor), where the propagated value can be used as a source operand by one or more instructions that perform a matrix operation. Software can control this propagation by specifying a propagation memory area and by using program instructions that include one or more propagation operands. When the propagation mechanism is used, the memory bandwidth required to perform operations such as multiply-add can be reduced, thereby improving performance when memory bandwidth is the limiting factor.

While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope of the present invention is determined by the appended claims. The foregoing description and drawings are therefore to be regarded in an illustrative and not a restrictive sense. The listing of steps in a method claim does not imply that the steps must be performed in any particular order, unless explicitly stated in the claim.

All trademarks are the property of their respective owners. |
A processing apparatus can include a general-purpose parallel processing engine comprising a matrix accelerator including a multi-stage systolic array, where each stage includes multiple processing elements associated with multiple processing channels. The multiple processing elements are configured to receive output sparsity metadata that is independent of input sparsity of input matrix elements and perform processing operations on the input matrix elements based on the output sparsity metadata. |
1. A processing apparatus having reduced systolic array power consumption, the processing apparatus including:
a general-purpose parallel processing engine comprising a matrix accelerator including one or more systolic arrays, at least one of the one or more systolic arrays comprising multiple pipeline stages, each pipeline stage of the multiple pipeline stages including multiple processing elements, the multiple processing elements associated with multiple processing channels, wherein the multiple processing elements are configured to:
receive output sparsity metadata at a first pipeline stage, the output sparsity metadata associated with the multiple processing channels, wherein the output sparsity metadata is independent of input sparsity of input matrix elements; and
perform processing operations on the input matrix elements based on the output sparsity metadata, wherein to perform the processing operations includes to:
bypass multiplication at a first processing element associated with a first processing channel and power gate a portion of the first processing element; and
multiply input elements at a second processing element associated with a second processing channel.
2. The processing apparatus as in claim 1, wherein to power gate the portion of the first processing element includes to power gate a multiplier of the processing element.
3. The processing apparatus as in claim 2, wherein to power gate the portion of the first processing element additionally includes to power gate an adder of the processing element.
4. The processing apparatus as in claim 1, wherein each of the multiple processing elements includes a first source input associated with an accumulator value, a second source input associated with a first matrix, and a third source input associated with a second matrix.
5. The processing apparatus as in claim 4, wherein to bypass multiplication at the first processing element includes to output the accumulator value received at the first source input.
6. The processing apparatus as in claim 1, wherein to perform the processing operations includes to propagate the output sparsity metadata received at the first pipeline stage to a second pipeline stage and process input elements of the multiple processing channels according to the output sparsity metadata.
7. The processing apparatus as in claim 6, wherein the output sparsity metadata includes a bit associated with each of the multiple processing channels.
8. The processing apparatus as in claim 7, wherein the output sparsity metadata additionally includes a bit associated with each of multiple rows of an input matrix.
9. The processing apparatus as in claim 8, wherein, in a first processing cycle, the output sparsity metadata is to indicate to the first processing element to multiply input elements of a second matrix with input elements of a first matrix and, in a second processing cycle, to bypass multiplication operations for the input elements.
10. A method of using sparsity metadata to reduce systolic array power consumption, the method comprising:
fetching an instruction at a processing resource of a graphics processor to perform operations associated with a matrix instruction that specifies metadata for output sparsity;
decoding the instruction into a decoded instruction;
reading operand data for the decoded instruction from a register file of the processing resource, the operand data including matrix elements and the metadata, wherein the metadata is independent of input sparsity of the matrix elements;
executing the decoded instruction via a matrix accelerator including a systolic array of multiple pipeline stages by performing, according to the metadata, multiply-accumulate operations on matrix elements associated with a first channel and bypassing the multiply-accumulate operations on the matrix elements associated with a second channel; and
writing output of the multiply-accumulate operations to the register file.
11. The method as in claim 10, wherein bypassing the multiply-accumulate operations on the matrix elements associated with the second channel includes power gating a multiplier of a processing element associated with the second channel.
12. The method as in claim 11, wherein bypassing the multiply-accumulate operations on the matrix elements associated with the second channel additionally includes power gating an adder of the processing element associated with the second channel.
13. The method as in claim 12, further comprising performing, according to the metadata, the multiply-accumulate operations on matrix elements associated with the first channel and bypassing the multiply-accumulate operations on the matrix elements associated with the second channel at a first pipeline stage of the multiple pipeline stages, and concurrently bypassing the multiply-accumulate operations on the matrix elements associated with the first channel and performing the multiply-accumulate operations on the matrix elements associated with the second channel at a second pipeline stage of the multiple pipeline stages.
14. One or more non-transitory machine-readable media storing data which, when read by one or more machines, cause the one or more machines to fabricate one or more integrated circuits to perform a method as in any one of claims 10-13.
15. A system having reduced systolic array power consumption, the system comprising means to perform a method as in any one of claims 10-13. |
FIELD

This disclosure relates generally to data processing and more particularly to data processing via a matrix accelerator of a parallel or graphics processing unit.

BACKGROUND OF THE DISCLOSURE

Parallel graphics data processing includes systems and methods developed to perform specific operations on graphics data such as, for example, linear interpolation, tessellation, rasterization, texture mapping, depth testing, etc. Traditionally, graphics processors used fixed function computational units to process graphics data. More recently, portions of graphics processors have been made programmable, enabling such processors to support a wider variety of operations for processing vertex and fragment data. Programmable graphics processors have also been adapted to perform general purpose numerical computing applications, such as high-performance computing (HPC), deep learning (e.g., study of artificial neural networks and related machine learning algorithms), and digital signal processing (DSP). These general-purpose numerical computing applications make extensive use of matrix multiplication computations. Accordingly, programmable portions of parallel and graphics data processing units have been adapted to include processing resources and/or functional units that are configured to perform high-throughput matrix operations, including matrix multiply and add operations or dot product operations.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements, and in which:

FIG. 1 is a block diagram illustrating a computer system configured to implement one or more aspects of the embodiments described herein;
FIG. 2A-2D illustrate parallel processor components;
FIG. 3A-3C are block diagrams of graphics multiprocessors and multiprocessor-based GPUs;
FIG. 4A-4F illustrate an exemplary architecture in which a plurality of GPUs is communicatively coupled to a plurality of multi-core processors;
FIG. 5 illustrates a graphics processing pipeline;
FIG. 6 illustrates a machine learning software stack;
FIG. 7 illustrates a general-purpose graphics processing unit;
FIG. 8 illustrates a multi-GPU computing system;
FIG. 9A-9B illustrate layers of exemplary deep neural networks;
FIG. 10 illustrates an exemplary recurrent neural network;
FIG. 11 illustrates training and deployment of a deep neural network;
FIG. 12A is a block diagram illustrating distributed learning;
FIG. 12B is a block diagram illustrating a programmable network interface and data processing unit;
FIG. 13 illustrates an exemplary inferencing system on a chip (SOC) suitable for performing inferencing using a trained model;
FIG. 14 is a block diagram of a processing system;
FIG. 15A-15C illustrate computing systems and graphics processors;
FIG. 16A-16C illustrate block diagrams of additional graphics processor and compute accelerator architectures;
FIG. 17 is a block diagram of a graphics processing engine of a graphics processor;
FIG. 18A-18B illustrate thread execution logic including an array of processing elements employed in a graphics processor core;
FIG. 19 illustrates an additional execution unit;
FIG. 20 is a block diagram illustrating graphics processor instruction formats;
FIG. 21 is a block diagram of an additional graphics processor architecture;
FIG. 22A-22B illustrate a graphics processor command format and command sequence;
FIG. 23 illustrates exemplary graphics software architecture for a data processing system;
FIG. 24A is a block diagram illustrating an IP core development system;
FIG. 24B illustrates a cross-section side view of an integrated circuit package assembly;
FIG. 24C illustrates a package assembly that includes multiple units of hardware logic chiplets connected to a substrate (e.g., base die);
FIG. 24D illustrates a package assembly including interchangeable chiplets;
FIG. 25 is a block diagram illustrating an exemplary system on a chip integrated circuit;
FIG. 26A-26B are block diagrams illustrating exemplary graphics processors for use within an SoC;
FIG. 27 is a block diagram of a data processing system, according to an embodiment;
FIG. 28A-28B illustrate a matrix operation performed by an instruction pipeline, according to an embodiment;
FIG. 29 illustrates a systolic array including multiplier and adder circuits organized in a pipelined fashion;
FIG. 30A-30B illustrate the use of a systolic array that can be configured to execute operations at an arbitrary systolic depth;
FIG. 31 illustrates a two-path matrix multiply accelerator in which each path has a depth of four stages;
FIG. 32 illustrates a four-path matrix multiply accelerator in which each path has a depth of two stages;
FIG. 33 illustrates a scalable sparse matrix multiply accelerator using systolic arrays with feedback inputs;
FIG. 34 shows a scalable sparse matrix multiply accelerator using systolic arrays with feedback inputs and outputs on each stage;
FIG. 35A-35B illustrate the use of output sparsity metadata to disable processing channels of a systolic array;
FIG. 36 illustrates metadata for matrix multiplication operations that include half-precision matrix elements;
FIG. 37 illustrates metadata as depicted in matrix form and as stored within a metadata register;
FIG. 38 illustrates a processing element having structured output sparsity support;
FIG. 39A-39B illustrate snapshots of processing elements at cycle zero and cycle one of instruction execution when output sparsity is enabled;
FIG. 40 is a flow chart of a method performed by a systolic array to reduce power consumption using output sparsity metadata;
FIG. 41 illustrates a method of performing processing operations for a machine learning model using output sparsity;
FIG. 42 is a flow chart of a method of generating output sparsity metadata based on a sparsity percentage; and
FIG. 43 is a block diagram of a computing device including a graphics processor, according to an embodiment.

DETAILED DESCRIPTION

A graphics processing unit (GPU) is communicatively coupled to host/processor cores to accelerate, for example, graphics operations, machine-learning operations, pattern analysis operations, and/or various general-purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or another interconnect (e.g., a high-speed interconnect such as PCIe or NVLink). Alternatively, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor.
The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.

Current parallel graphics data processing includes systems and methods developed to perform specific operations on graphics data such as, for example, linear interpolation, tessellation, rasterization, texture mapping, depth testing, etc. Traditionally, graphics processors used fixed function computational units to process graphics data. However, more recently, portions of graphics processors have been made programmable, enabling such processors to support a wider variety of operations for processing vertex and fragment data.

To further increase performance, graphics processors typically implement processing techniques such as pipelining that attempt to process, in parallel, as much graphics data as possible throughout the different parts of the graphics pipeline. Parallel graphics processors with single instruction, multiple thread (SIMT) architectures are designed to maximize the amount of parallel processing in the graphics pipeline. In a SIMT architecture, groups of parallel threads attempt to execute program instructions synchronously together as often as possible to increase processing efficiency. A general overview of software and hardware for SIMT architectures can be found in Shane Cook, CUDA Programming, Chapter 3, pages 37-51 (2013).

In the following description, numerous specific details are set forth to provide a more thorough understanding. However, it will be apparent to one of skill in the art that the embodiments described herein may be practiced without one or more of these specific details. In other instances, well-known features have not been described to avoid obscuring the details of the present embodiments.

System Overview

FIG. 1 is a block diagram illustrating a computing system 100 configured to implement one or more aspects of the embodiments described herein. The computing system 100 includes a processing subsystem 101 having one or more processor(s) 102 and a system memory 104 communicating via an interconnection path that may include a memory hub 105. The memory hub 105 may be a separate component within a chipset component or may be integrated within the one or more processor(s) 102. The memory hub 105 couples with an I/O subsystem 111 via a communication link 106. The I/O subsystem 111 includes an I/O hub 107 that can enable the computing system 100 to receive input from one or more input device(s) 108. Additionally, the I/O hub 107 can enable a display controller, which may be included in the one or more processor(s) 102, to provide outputs to one or more display device(s) 110A. In one embodiment the one or more display device(s) 110A coupled with the I/O hub 107 can include a local, internal, or embedded display device.

The processing subsystem 101, for example, includes one or more parallel processor(s) 112 coupled to memory hub 105 via a bus or other communication link 113. The communication link 113 may be one of any number of standards-based communication link technologies or protocols, such as, but not limited to, PCI Express, or may be a vendor specific communications interface or communications fabric. The one or more parallel processor(s) 112 may form a computationally focused parallel or vector processing system that can include a large number of processing cores and/or processing clusters, such as a many integrated core (MIC) processor.
For example, the one or more parallel processor(s) 112 form a graphics processing subsystem that can output pixels to one of the one or more display device(s) 110A coupled via the I/O hub 107. The one or more parallel processor(s) 112 can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s) 110B.

Within the I/O subsystem 111, a system storage unit 114 can connect to the I/O hub 107 to provide a storage mechanism for the computing system 100. An I/O switch 116 can be used to provide an interface mechanism to enable connections between the I/O hub 107 and other components, such as a network adapter 118 and/or wireless network adapter 119 that may be integrated into the platform, and various other devices that can be added via one or more add-in device(s) 120. The add-in device(s) 120 may also include, for example, one or more external graphics processor devices, graphics cards, and/or compute accelerators. The network adapter 118 can be an Ethernet adapter or another wired network adapter. The wireless network adapter 119 can include one or more of a Wi-Fi, Bluetooth, near field communication (NFC), or other network device that includes one or more wireless radios.

The computing system 100 can include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and the like, which may also be connected to the I/O hub 107. Communication paths interconnecting the various components in FIG. 1 may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect) based protocols (e.g., PCI-Express), or any other bus or point-to-point communication interfaces and/or protocol(s), such as the NVLink high-speed interconnect, Compute Express Link™ (CXL™) (e.g., CXL.mem), Infinity Fabric (IF), Ethernet (IEEE 802.3), remote direct memory access (RDMA), InfiniBand, Internet Wide Area RDMA Protocol (iWARP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), quick UDP Internet Connections (QUIC), RDMA over Converged Ethernet (RoCE), Intel QuickPath Interconnect (QPI), Intel Ultra Path Interconnect (UPI), Intel On-Chip System Fabric (IOSF), Omnipath, HyperTransport, Advanced Microcontroller Bus Architecture (AMBA) interconnect, OpenCAPI, Gen-Z, Cache Coherent Interconnect for Accelerators (CCIX), 3GPP Long Term Evolution (LTE) (4G), 3GPP 5G, and variations thereof, or wired or wireless interconnect protocols known in the art. In some examples, data can be copied or stored to virtualized storage nodes using a protocol such as non-volatile memory express (NVMe) over Fabrics (NVMe-oF) or NVMe.

The one or more parallel processor(s) 112 may incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitute a graphics processing unit (GPU). Alternatively or additionally, the one or more parallel processor(s) 112 can incorporate circuitry optimized for general purpose processing, while preserving the underlying computational architecture, described in greater detail herein. Components of the computing system 100 may be integrated with one or more other system elements on a single integrated circuit. For example, the one or more parallel processor(s) 112, memory hub 105, processor(s) 102, and I/O hub 107 can be integrated into a system on chip (SoC) integrated circuit.
Alternatively, the components of the computing system 100 can be integrated into a single package to form a system in package (SIP) configuration. In one embodiment at least a portion of the components of the computing system 100 can be integrated into a multi-chip module (MCM), which can be interconnected with other multi-chip modules into a modular computing system.

It will be appreciated that the computing system 100 shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of processor(s) 102, and the number of parallel processor(s) 112, may be modified as desired. For instance, system memory 104 can be connected to the processor(s) 102 directly rather than through a bridge, while other devices communicate with system memory 104 via the memory hub 105 and the processor(s) 102. In other alternative topologies, the parallel processor(s) 112 are connected to the I/O hub 107 or directly to one of the one or more processor(s) 102, rather than to the memory hub 105. In other embodiments, the I/O hub 107 and memory hub 105 may be integrated into a single chip. It is also possible that two or more sets of processor(s) 102 are attached via multiple sockets, which can couple with two or more instances of the parallel processor(s) 112.

Some of the particular components shown herein are optional and may not be included in all implementations of the computing system 100. For example, any number of add-in cards or peripherals may be supported, or some components may be eliminated. Furthermore, some architectures may use different terminology for components similar to those illustrated in FIG. 1. For example, the memory hub 105 may be referred to as a Northbridge in some architectures, while the I/O hub 107 may be referred to as a Southbridge.

FIG. 2A illustrates a parallel processor 200. The parallel processor 200 may be a GPU, GPGPU or the like as described herein. The various components of the parallel processor 200 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or field programmable gate arrays (FPGA). The illustrated parallel processor 200 may be one or more of the parallel processor(s) 112 shown in FIG. 1.

The parallel processor 200 includes a parallel processing unit 202. The parallel processing unit includes an I/O unit 204 that enables communication with other devices, including other instances of the parallel processing unit 202. The I/O unit 204 may be directly connected to other devices. For instance, the I/O unit 204 connects with other devices via the use of a hub or switch interface, such as memory hub 105. The connections between the memory hub 105 and the I/O unit 204 form a communication link 113. Within the parallel processing unit 202, the I/O unit 204 connects with a host interface 206 and a memory crossbar 216, where the host interface 206 receives commands directed to performing processing operations and the memory crossbar 216 receives commands directed to performing memory operations.

When the host interface 206 receives a command buffer via the I/O unit 204, the host interface 206 can direct work operations to perform those commands to a front end 208. In one embodiment the front end 208 couples with a scheduler 210, which is configured to distribute commands or other work items to a processing cluster array 212.
The scheduler 210 ensures that the processing cluster array 212 is properly configured and in a valid state before tasks are distributed to the processing clusters of the processing cluster array 212. The scheduler 210 may be implemented via firmware logic executing on a microcontroller. The microcontroller implemented scheduler 210 is configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on the processing cluster array 212. Preferably, the host software can provide workloads for scheduling on the processing cluster array 212 via one of multiple graphics processing doorbells. In other examples, polling for new workloads or interrupts can be used to identify or indicate availability of work to perform. The workloads can then be automatically distributed across the processing cluster array 212 by the scheduler 210 logic within the scheduler microcontroller.

The processing cluster array 212 can include up to "N" processing clusters (e.g., cluster 214A, cluster 214B, through cluster 214N). Each cluster 214A-214N of the processing cluster array 212 can execute a large number of concurrent threads. The scheduler 210 can allocate work to the clusters 214A-214N of the processing cluster array 212 using various scheduling and/or work distribution algorithms, which may vary depending on the workload arising for each type of program or computation. The scheduling can be handled dynamically by the scheduler 210, or can be assisted in part by compiler logic during compilation of program logic configured for execution by the processing cluster array 212. Optionally, different clusters 214A-214N of the processing cluster array 212 can be allocated for processing different types of programs or for performing different types of computations.

The processing cluster array 212 can be configured to perform various types of parallel processing operations. For example, the processing cluster array 212 is configured to perform general-purpose parallel compute operations. For example, the processing cluster array 212 can include logic to execute processing tasks including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations.

The processing cluster array 212 is configured to perform parallel graphics processing operations. In embodiments in which the parallel processor 200 is configured to perform graphics processing operations, the processing cluster array 212 can include additional logic to support the execution of such graphics processing operations, including, but not limited to, texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. Additionally, the processing cluster array 212 can be configured to execute graphics processing related shader programs such as, but not limited to, vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. The parallel processing unit 202 can transfer data from system memory via the I/O unit 204 for processing.
During processing, the transferred data can be stored to on-chip memory (e.g., parallel processor memory 222), then written back to system memory.

In embodiments in which the parallel processing unit 202 is used to perform graphics processing, the scheduler 210 may be configured to divide the processing workload into approximately equal sized tasks, to better enable distribution of the graphics processing operations to multiple clusters 214A-214N of the processing cluster array 212. In some of these embodiments, portions of the processing cluster array 212 can be configured to perform different types of processing. For example, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. Intermediate data produced by one or more of the clusters 214A-214N may be stored in buffers to allow the intermediate data to be transmitted between clusters 214A-214N for further processing.

During operation, the processing cluster array 212 can receive processing tasks to be executed via the scheduler 210, which receives commands defining processing tasks from front end 208. For graphics processing operations, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how the data is to be processed (e.g., what program is to be executed). The scheduler 210 may be configured to fetch the indices corresponding to the tasks or may receive the indices from the front end 208. The front end 208 can be configured to ensure the processing cluster array 212 is configured to a valid state before the workload specified by incoming command buffers (e.g., batch-buffers, push buffers, etc.) is initiated.

Each of the one or more instances of the parallel processing unit 202 can couple with parallel processor memory 222. The parallel processor memory 222 can be accessed via the memory crossbar 216, which can receive memory requests from the processing cluster array 212 as well as the I/O unit 204. The memory crossbar 216 can access the parallel processor memory 222 via a memory interface 218. The memory interface 218 can include multiple partition units (e.g., partition unit 220A, partition unit 220B, through partition unit 220N) that can each couple to a portion (e.g., memory unit) of parallel processor memory 222. The number of partition units 220A-220N may be configured to be equal to the number of memory units, such that a first partition unit 220A has a corresponding first memory unit 224A, a second partition unit 220B has a corresponding second memory unit 224B, and an Nth partition unit 220N has a corresponding Nth memory unit 224N. In other embodiments, the number of partition units 220A-220N may not be equal to the number of memory devices.

The memory units 224A-224N can include various types of memory devices, including dynamic random-access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. Optionally, the memory units 224A-224N may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM).
Persons skilled in the art will appreciate that the specific implementation of the memory units 224A-224N can vary, and can be selected from one of various conventional designs. Render targets, such as frame buffers or texture maps, may be stored across the memory units 224A-224N, allowing partition units 220A-220N to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory 222. In some embodiments, a local instance of the parallel processor memory 222 may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory.

Optionally, any one of the clusters 214A-214N of the processing cluster array 212 has the ability to process data that will be written to any of the memory units 224A-224N within parallel processor memory 222. The memory crossbar 216 can be configured to transfer the output of each cluster 214A-214N to any partition unit 220A-220N or to another cluster 214A-214N, which can perform additional processing operations on the output. Each cluster 214A-214N can communicate with the memory interface 218 through the memory crossbar 216 to read from or write to various external memory devices. In embodiments that include the memory crossbar 216, the memory crossbar 216 has a connection to the memory interface 218 to communicate with the I/O unit 204, as well as a connection to a local instance of the parallel processor memory 222, enabling the processing units within the different processing clusters 214A-214N to communicate with system memory or other memory that is not local to the parallel processing unit 202. Generally, the memory crossbar 216 may, for example, be able to use virtual channels to separate traffic streams between the clusters 214A-214N and the partition units 220A-220N.

While a single instance of the parallel processing unit 202 is illustrated within the parallel processor 200, any number of instances of the parallel processing unit 202 can be included. For example, multiple instances of the parallel processing unit 202 can be provided on a single add-in card, or multiple add-in cards can be interconnected. For example, the parallel processor 200 can be an add-in device, such as add-in device 120 of FIG. 1, which may be a graphics card such as a discrete graphics card that includes one or more GPUs, one or more memory devices, and device-to-device or network or fabric interfaces. The different instances of the parallel processing unit 202 can be configured to inter-operate even if the different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. Optionally, some instances of the parallel processing unit 202 can include higher precision floating point units relative to other instances. Systems incorporating one or more instances of the parallel processing unit 202 or the parallel processor 200 can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems. An orchestrator can form composite nodes for workload performance using one or more of: disaggregated processor resources, cache resources, memory resources, storage resources, and networking resources.

FIG. 2B is a block diagram of a partition unit 220. The partition unit 220 may be an instance of one of the partition units 220A-220N of FIG. 2A.
As illustrated, the partition unit 220 includes an L2 cache 221, a frame buffer interface 225, and a ROP 226 (raster operations unit). The L2 cache 221 is a read/write cache that is configured to perform load and store operations received from the memory crossbar 216 and ROP 226. Read misses and urgent write-back requests are output by L2 cache 221 to frame buffer interface 225 for processing. Updates can also be sent to the frame buffer via the frame buffer interface 225 for processing. In one embodiment the frame buffer interface 225 interfaces with one of the memory units in parallel processor memory, such as the memory units 224A-224N of FIG. 2A (e.g., within parallel processor memory 222). The partition unit 220 may additionally or alternatively also interface with one of the memory units in parallel processor memory via a memory controller (not shown).

In graphics applications, the ROP 226 is a processing unit that performs raster operations such as stencil, z test, blending, and the like. The ROP 226 then outputs processed graphics data that is stored in graphics memory. In some embodiments the ROP 226 includes or couples with a CODEC 227 that includes compression logic to compress depth or color data that is written to memory or the L2 cache 221 and decompress depth or color data that is read from memory or the L2 cache 221. The compression logic can be lossless compression logic that makes use of one or more of multiple compression algorithms. The type of compression that is performed by the CODEC 227 can vary based on the statistical characteristics of the data to be compressed. For example, in one embodiment, delta color compression is performed on depth and color data on a per-tile basis. In one embodiment the CODEC 227 includes compression and decompression logic that can compress and decompress compute data associated with machine learning operations. The CODEC 227 can, for example, compress sparse matrix data for sparse machine learning operations. The CODEC 227 can also compress sparse matrix data that is encoded in a sparse matrix format (e.g., coordinate list encoding (COO), compressed sparse row (CSR), compressed sparse column (CSC), etc.) to generate compressed and encoded sparse matrix data. The compressed and encoded sparse matrix data can be decompressed and/or decoded before being processed by processing elements, or the processing elements can be configured to consume compressed, encoded, or compressed and encoded data for processing.

The ROP 226 may be included within each processing cluster (e.g., cluster 214A-214N of FIG. 2A) instead of within the partition unit 220. In such an embodiment, read and write requests for pixel data are transmitted over the memory crossbar 216 instead of pixel fragment data. The processed graphics data may be displayed on a display device, such as one of the one or more display device(s) 110A-110B of FIG. 1, routed for further processing by the processor(s) 102, or routed for further processing by one of the processing entities within the parallel processor 200 of FIG. 2A.

FIG. 2C is a block diagram of a processing cluster 214 within a parallel processing unit. For example, the processing cluster is an instance of one of the processing clusters 214A-214N of FIG. 2A. The processing cluster 214 can be configured to execute many threads in parallel, where the term "thread" refers to an instance of a particular program executing on a particular set of input data.
Optionally, single-instruction, multiple-data (SIMD) instruction issue techniques may be used to support parallel execution of a large number of threads without providing multiple independent instruction units. Alternatively, single-instruction, multiple-thread (SIMT) techniques may be used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each one of the processing clusters. Unlike a SIMD execution regime, where all processing engines typically execute identical instructions, SIMT execution allows different threads to more readily follow divergent execution paths through a given thread program. Persons skilled in the art will understand that a SIMD processing regime represents a functional subset of a SIMT processing regime.

Operation of the processing cluster 214 can be controlled via a pipeline manager 232 that distributes processing tasks to SIMT parallel processors. The pipeline manager 232 receives instructions from the scheduler 210 of FIG. 2A and manages execution of those instructions via a graphics multiprocessor 234 and/or a texture unit 236. The illustrated graphics multiprocessor 234 is an exemplary instance of a SIMT parallel processor. However, various types of SIMT parallel processors of differing architectures may be included within the processing cluster 214. One or more instances of the graphics multiprocessor 234 can be included within a processing cluster 214. The graphics multiprocessor 234 can process data, and a data crossbar 240 can be used to distribute the processed data to one of multiple possible destinations, including other shader units. The pipeline manager 232 can facilitate the distribution of processed data by specifying destinations for processed data to be distributed via the data crossbar 240.

Each graphics multiprocessor 234 within the processing cluster 214 can include an identical set of functional execution logic (e.g., arithmetic logic units, load-store units, etc.). The functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete. The functional execution logic supports a variety of operations including integer and floating-point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions. The same functional-unit hardware could be leveraged to perform different operations and any combination of functional units may be present.

The instructions transmitted to the processing cluster 214 constitute a thread. A set of threads executing across the set of parallel processing engines is a thread group. A thread group executes the same program on different input data. Each thread within a thread group can be assigned to a different processing engine within a graphics multiprocessor 234. A thread group may include fewer threads than the number of processing engines within the graphics multiprocessor 234. When a thread group includes fewer threads than the number of processing engines, one or more of the processing engines may be idle during cycles in which that thread group is being processed. A thread group may also include more threads than the number of processing engines within the graphics multiprocessor 234.
When the thread group includes more threads than the number of processing engines within the graphics multiprocessor 234, processing can be performed over consecutive clock cycles. Optionally, multiple thread groups can be executed concurrently on the graphics multiprocessor 234.

The graphics multiprocessor 234 may include an internal cache memory to perform load and store operations. Optionally, the graphics multiprocessor 234 can forego an internal cache and use a cache memory (e.g., level 1 (L1) cache 248) within the processing cluster 214. Each graphics multiprocessor 234 also has access to level 2 (L2) caches within the partition units (e.g., partition units 220A-220N of FIG. 2A) that are shared among all processing clusters 214 and may be used to transfer data between threads. The graphics multiprocessor 234 may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. Any memory external to the parallel processing unit 202 may be used as global memory. In embodiments in which the processing cluster 214 includes multiple instances of the graphics multiprocessor 234, the instances can share common instructions and data, which may be stored in the L1 cache 248.

Each processing cluster 214 may include an MMU 245 (memory management unit) that is configured to map virtual addresses into physical addresses. In other embodiments, one or more instances of the MMU 245 may reside within the memory interface 218 of FIG. 2A. The MMU 245 includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile and optionally a cache line index. The MMU 245 may include address translation lookaside buffers (TLB) or caches that may reside within the graphics multiprocessor 234 or the L1 cache or processing cluster 214. The physical address is processed to distribute surface data access locality to allow efficient request interleaving among partition units. The cache line index may be used to determine whether a request for a cache line is a hit or miss.

In graphics and computing applications, a processing cluster 214 may be configured such that each graphics multiprocessor 234 is coupled to a texture unit 236 for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering the texture data. Texture data is read from an internal texture L1 cache (not shown) or in some embodiments from the L1 cache within graphics multiprocessor 234 and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. Each graphics multiprocessor 234 outputs processed tasks to the data crossbar 240 to provide the processed task to another processing cluster 214 for further processing or to store the processed task in an L2 cache, local parallel processor memory, or system memory via the memory crossbar 216. A preROP 242 (pre-raster operations unit) is configured to receive data from graphics multiprocessor 234 and direct data to ROP units, which may be located with partition units as described herein (e.g., partition units 220A-220N of FIG. 2A). The preROP 242 unit can perform optimizations for color blending, organize pixel color data, and perform address translations.

It will be appreciated that the core architecture described herein is illustrative and that variations and modifications are possible. Any number of processing units, e.g., graphics multiprocessor 234, texture units 236, preROPs 242, etc., may be included within a processing cluster 214.
Further, while only one processing cluster 214 is shown, a parallel processing unit as described herein may include any number of instances of the processing cluster 214. Optionally, each processing cluster 214 can be configured to operate independently of other processing clusters 214 using separate and distinct processing units, L1 caches, L2 caches, etc.

FIG. 2D shows an example of the graphics multiprocessor 234 in which the graphics multiprocessor 234 couples with the pipeline manager 232 of the processing cluster 214. The graphics multiprocessor 234 has an execution pipeline including but not limited to an instruction cache 252, an instruction unit 254, an address mapping unit 256, a register file 258, one or more general purpose graphics processing unit (GPGPU) cores 262, and one or more load/store units 266. The GPGPU cores 262 and load/store units 266 are coupled with cache memory 272 and shared memory 270 via a memory and cache interconnect 268. The graphics multiprocessor 234 may additionally include tensor and/or ray-tracing cores 263 that include hardware logic to accelerate matrix and/or ray-tracing operations.

The instruction cache 252 may receive a stream of instructions to execute from the pipeline manager 232. The instructions are cached in the instruction cache 252 and dispatched for execution by the instruction unit 254. The instruction unit 254 can dispatch instructions as thread groups (e.g., warps), with each thread of the thread group assigned to a different execution unit within GPGPU core 262. An instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. The address mapping unit 256 can be used to translate addresses in the unified address space into a distinct memory address that can be accessed by the load/store units 266.

The register file 258 provides a set of registers for the functional units of the graphics multiprocessor 234. The register file 258 provides temporary storage for operands connected to the data paths of the functional units (e.g., GPGPU cores 262, load/store units 266) of the graphics multiprocessor 234. The register file 258 may be divided between each of the functional units such that each functional unit is allocated a dedicated portion of the register file 258. For example, the register file 258 may be divided between the different warps being executed by the graphics multiprocessor 234.

The GPGPU cores 262 can each include floating point units (FPUs) and/or integer arithmetic logic units (ALUs) that are used to execute instructions of the graphics multiprocessor 234. In some implementations, the GPGPU cores 262 can include hardware logic that may otherwise reside within the tensor and/or ray-tracing cores 263. The GPGPU cores 262 can be similar in architecture or can differ in architecture. For example and in one embodiment, a first portion of the GPGPU cores 262 include a single precision FPU and an integer ALU while a second portion of the GPGPU cores include a double precision FPU. Optionally, the FPUs can implement the IEEE 754-2008 standard for floating point arithmetic or enable variable precision floating point arithmetic. The graphics multiprocessor 234 can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations.
One or more of the GPGPU cores can also include fixed or special function logic.

The GPGPU cores 262 may include SIMD logic capable of performing a single instruction on multiple sets of data. Optionally, GPGPU cores 262 can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. The SIMD instructions for the GPGPU cores can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data (SPMD) or SIMT architectures. Multiple threads of a program configured for the SIMT execution model can be executed via a single SIMD instruction. For example and in one embodiment, eight SIMT threads that perform the same or similar operations can be executed in parallel via a single SIMD8 logic unit.

The memory and cache interconnect 268 is an interconnect network that connects each of the functional units of the graphics multiprocessor 234 to the register file 258 and to the shared memory 270. For example, the memory and cache interconnect 268 is a crossbar interconnect that allows the load/store unit 266 to implement load and store operations between the shared memory 270 and the register file 258. The register file 258 can operate at the same frequency as the GPGPU cores 262, so data transfer between the GPGPU cores 262 and the register file 258 has very low latency. The shared memory 270 can be used to enable communication between threads that execute on the functional units within the graphics multiprocessor 234. The cache memory 272 can be used as a data cache, for example, to cache texture data communicated between the functional units and the texture unit 236. The shared memory 270 can also be used as a program-managed cache. The shared memory 270 and the cache memory 272 can couple with the data crossbar 240 to enable communication with other components of the processing cluster. Threads executing on the GPGPU cores 262 can programmatically store data within the shared memory in addition to the automatically cached data that is stored within the cache memory 272.
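Communication between the threads of a thread group through a program-managed shared memory, as just described for shared memory 270, can be illustrated with a generic CUDA reduction kernel. This is a sketch of the programming pattern, not of the multiprocessor's internal datapath.

```cuda
#include <cuda_runtime.h>

// Threads of one thread block communicate through on-chip shared memory:
// each thread deposits a value, then the block cooperatively folds the
// values to a single sum. Launch with 256 threads per block.
__global__ void blockSum(const float* in, float* out, int n) {
    __shared__ float buf[256];              // program-managed shared memory
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;
    buf[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                        // make all writes visible
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) buf[tid] += buf[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = buf[0]; // one partial sum per block
}
```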
FIG. 3A-3C illustrate additional graphics multiprocessors, according to embodiments. FIG. 3A-3B illustrate graphics multiprocessors 325, 350, which are related to the graphics multiprocessor 234 of FIG. 2C and may be used in place of one of those. Therefore, the disclosure of any features in combination with the graphics multiprocessor 234 herein also discloses a corresponding combination with the graphics multiprocessor(s) 325, 350, but is not limited to such. FIG. 3C illustrates a graphics processing unit (GPU) 380 which includes dedicated sets of graphics processing resources arranged into multi-core groups 365A-365N, which correspond to the graphics multiprocessors 325, 350. The illustrated graphics multiprocessors 325, 350 and the multi-core groups 365A-365N can be streaming multiprocessors (SM) capable of simultaneous execution of a large number of execution threads.

The graphics multiprocessor 325 of FIG. 3A includes multiple additional instances of execution resource units relative to the graphics multiprocessor 234 of FIG. 2D. For example, the graphics multiprocessor 325 can include multiple instances of the instruction unit 332A-332B, register file 334A-334B, and texture unit(s) 344A-344B. The graphics multiprocessor 325 also includes multiple sets of graphics or compute execution units (e.g., GPGPU core 336A-336B, tensor core 337A-337B, ray-tracing core 338A-338B) and multiple sets of load/store units 340A-340B. The execution resource units have a common instruction cache 330, texture and/or data cache memory 342, and shared memory 346.

The various components can communicate via an interconnect fabric 327. The interconnect fabric 327 may include one or more crossbar switches to enable communication between the various components of the graphics multiprocessor 325. The interconnect fabric 327 may be a separate, high-speed network fabric layer upon which each component of the graphics multiprocessor 325 is stacked. The components of the graphics multiprocessor 325 communicate with remote components via the interconnect fabric 327. For example, the cores 336A-336B, 337A-337B, and 338A-338B can each communicate with shared memory 346 via the interconnect fabric 327. The interconnect fabric 327 can arbitrate communication within the graphics multiprocessor 325 to ensure a fair bandwidth allocation between components.

The graphics multiprocessor 350 of FIG. 3B includes multiple sets of execution resources 356A-356D, where each set of execution resources includes multiple instruction units, register files, GPGPU cores, and load/store units, as illustrated in FIG. 2D and FIG. 3A. The execution resources 356A-356D can work in concert with texture unit(s) 360A-360D for texture operations, while sharing an instruction cache 354 and shared memory 353. For example, the execution resources 356A-356D can share an instruction cache 354 and shared memory 353, as well as multiple instances of a texture and/or data cache memory 358A-358B. The various components can communicate via an interconnect fabric 352 similar to the interconnect fabric 327 of FIG. 3A.

Persons skilled in the art will understand that the architectures described in FIG. 1, 2A-2D, and 3A-3B are descriptive and not limiting as to the scope of the present embodiments. Thus, the techniques described herein may be implemented on any properly configured processing unit, including, without limitation, one or more mobile application processors, one or more desktop or server central processing units (CPUs) including multi-core CPUs, one or more parallel processing units, such as the parallel processing unit 202 of FIG. 2A, as well as one or more graphics processors or special purpose processing units, without departure from the scope of the embodiments described herein.

The parallel processor or GPGPU as described herein may be communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general-purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or other interconnect (e.g., a high-speed interconnect such as PCIe, NVLink, or other known protocols, standardized protocols, or proprietary protocols). In other embodiments, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor.
The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.

FIG. 3C illustrates a graphics processing unit (GPU) 380 which includes dedicated sets of graphics processing resources arranged into multi-core groups 365A-365N. While the details of only a single multi-core group 365A are provided, it will be appreciated that the other multi-core groups 365B-365N may be equipped with the same or similar sets of graphics processing resources. Details described with respect to the multi-core groups 365A-365N may also apply to any graphics multiprocessor 234, 325, 350 described herein.

As illustrated, a multi-core group 365A may include a set of graphics cores 370, a set of tensor cores 371, and a set of ray tracing cores 372. A scheduler/dispatcher 368 schedules and dispatches the graphics threads for execution on the various cores 370, 371, 372. A set of register files 369 stores operand values used by the cores 370, 371, 372 when executing the graphics threads. These may include, for example, integer registers for storing integer values, floating point registers for storing floating point values, vector registers for storing packed data elements (integer and/or floating-point data elements) and tile registers for storing tensor/matrix values. The tile registers may be implemented as combined sets of vector registers.

One or more combined level 1 (L1) caches and shared memory units 373 store graphics data such as texture data, vertex data, pixel data, ray data, bounding volume data, etc., locally within each multi-core group 365A. One or more texture units 374 can also be used to perform texturing operations, such as texture mapping and sampling. A Level 2 (L2) cache 375 shared by all or a subset of the multi-core groups 365A-365N stores graphics data and/or instructions for multiple concurrent graphics threads. As illustrated, the L2 cache 375 may be shared across a plurality of multi-core groups 365A-365N. One or more memory controllers 367 couple the GPU 380 to a memory 366 which may be a system memory (e.g., DRAM) and/or a dedicated graphics memory (e.g., GDDR6 memory).

Input/output (I/O) circuitry 363 couples the GPU 380 to one or more I/O devices 362 such as digital signal processors (DSPs), network controllers, or user input devices. An on-chip interconnect may be used to couple the I/O devices 362 to the GPU 380 and memory 366. One or more I/O memory management units (IOMMUs) 364 of the I/O circuitry 363 couple the I/O devices 362 directly to the system memory 366. Optionally, the IOMMU 364 manages multiple sets of page tables to map virtual addresses to physical addresses in system memory 366. The I/O devices 362, CPU(s) 361, and GPU(s) 380 may then share the same virtual address space.

In one implementation of the IOMMU 364, the IOMMU 364 supports virtualization. In this case, it may manage a first set of page tables to map guest/graphics virtual addresses to guest/graphics physical addresses and a second set of page tables to map the guest/graphics physical addresses to system/host physical addresses (e.g., within system memory 366). The base addresses of each of the first and second sets of page tables may be stored in control registers and swapped out on a context switch (e.g., so that the new context is provided with access to the relevant set of page tables).
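The two-level translation managed by the IOMMU 364 under virtualization (guest virtual to guest physical, then guest physical to system physical) can be pictured as two successive table lookups. The flat maps below stand in for the two sets of page tables and are purely illustrative.

```cuda
#include <cstdint>
#include <optional>
#include <unordered_map>

constexpr uint64_t kPageBits = 12;  // assumed 4 KiB page granularity

// First-level tables: guest virtual page -> guest physical page.
std::unordered_map<uint64_t, uint64_t> guestPageTable;
// Second-level tables: guest physical page -> system/host physical page.
std::unordered_map<uint64_t, uint64_t> hostPageTable;

// Nested translation as performed conceptually under IOMMU virtualization.
std::optional<uint64_t> iommuTranslate(uint64_t guestVA) {
    uint64_t offset = guestVA & ((1ull << kPageBits) - 1);
    auto g = guestPageTable.find(guestVA >> kPageBits);
    if (g == guestPageTable.end()) return std::nullopt;   // stage-1 fault
    auto h = hostPageTable.find(g->second);
    if (h == hostPageTable.end()) return std::nullopt;    // stage-2 fault
    return (h->second << kPageBits) | offset;             // system physical address
}
```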
While not illustrated in FIG. 3C, each of the cores 370, 371, 372 and/or multi-core groups 365A-365N may include translation lookaside buffers (TLBs) to cache guest virtual to guest physical translations, guest physical to host physical translations, and guest virtual to host physical translations.

The CPU(s) 361, GPUs 380, and I/O devices 362 may be integrated on a single semiconductor chip and/or chip package. The illustrated memory 366 may be integrated on the same chip or may be coupled to the memory controllers 367 via an off-chip interface. In one implementation, the memory 366 comprises GDDR6 memory which shares the same virtual address space as other physical system-level memories, although the underlying principles described herein are not limited to this specific implementation.

The tensor cores 371 may include a plurality of execution units specifically designed to perform matrix operations, which are the fundamental compute operation used to perform deep learning operations. For example, simultaneous matrix multiplication operations may be used for neural network training and inferencing. The tensor cores 371 may perform matrix processing using a variety of operand precisions including single precision floating-point (e.g., 32 bits), half-precision floating point (e.g., 16 bits), integer words (16 bits), bytes (8 bits), and half-bytes (4 bits). For example, a neural network implementation extracts features of each rendered scene, potentially combining details from multiple frames, to construct a high-quality final image.

In deep learning implementations, parallel matrix multiplication work may be scheduled for execution on the tensor cores 371. The training of neural networks, in particular, requires a significant number of matrix dot product operations. In order to process an inner-product formulation of an N × N × N matrix multiply, the tensor cores 371 may include at least N dot-product processing elements. Before the matrix multiply begins, one entire matrix is loaded into tile registers and at least one column of a second matrix is loaded each cycle for N cycles. Each cycle, there are N dot products that are processed.

Matrix elements may be stored at different precisions depending on the particular implementation, including 16-bit words, 8-bit bytes (e.g., INT8) and 4-bit half-bytes (e.g., INT4). Different precision modes may be specified for the tensor cores 371 to ensure that the most efficient precision is used for different workloads (e.g., such as inferencing workloads which can tolerate quantization to bytes and half-bytes). Supported formats additionally include 64-bit floating point (FP64) and non-IEEE floating point formats such as the bfloat16 format (e.g., Brain floating point), a 16-bit floating point format with one sign bit, eight exponent bits, and eight significand bits, of which seven are explicitly stored. One embodiment includes support for a reduced precision tensor-float format (TF32), which has the range of FP32 (an 8-bit exponent) with the precision of FP16 (a 10-bit mantissa). Reduced precision TF32 operations can be performed on FP32 inputs and produce FP32 outputs at higher performance relative to FP32 and increased precision relative to FP16.
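On hardware that exposes tensor cores through CUDA, mixed-precision tile operations of the kind performed by the tensor cores 371 are programmed via the WMMA API. The kernel below multiplies one 16x16 FP16 tile pair with FP32 accumulation; it is a minimal sketch of the programming model (it assumes compute capability 7.0 or later and a single-warp launch such as <<<1, 32>>>), not a description of this embodiment's circuitry.

```cuda
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes D = A * B + 0 for a single 16x16x16 tile using
// half-precision inputs and single-precision accumulation.
__global__ void wmmaTile(const half* A, const half* B, float* D) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c;

    wmma::fill_fragment(c, 0.0f);         // start from a zero accumulator
    wmma::load_matrix_sync(a, A, 16);     // leading dimension 16
    wmma::load_matrix_sync(b, B, 16);
    wmma::mma_sync(c, a, b, c);           // tensor-core multiply-accumulate
    wmma::store_matrix_sync(D, c, 16, wmma::mem_row_major);
}
```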
In one embodiment the tensor cores 371 support a sparse mode of operation for matrices in which the vast majority of values are zero. The tensor cores 371 include support for sparse input matrices that are encoded in a sparse matrix representation (e.g., coordinate list encoding (COO), compressed sparse row (CSR), compressed sparse column (CSC), etc.). The tensor cores 371 also include support for compressed sparse matrix representations in the event that the sparse matrix representation may itself be further compressed. Compressed, encoded, and/or compressed and encoded matrix data, along with associated compression and/or encoding metadata, can be read by the tensor cores 371 and the non-zero values can be extracted. For example, for a given input matrix A, a non-zero value can be loaded from the compressed and/or encoded representation of at least a portion of matrix A. Based on the location in matrix A for the non-zero value, which may be determined from index or coordinate metadata associated with the non-zero value, a corresponding value in input matrix B may be loaded. Depending on the operation to be performed (e.g., multiply), the load of the value from input matrix B may be bypassed if the corresponding value is a zero value. In one embodiment, the pairings of values for certain operations, such as multiply operations, may be pre-scanned by scheduler logic and only operations between non-zero inputs are scheduled. Depending on the dimensions of matrix A and matrix B and the operation to be performed, output matrix C may be dense or sparse. Where output matrix C is sparse, and depending on the configuration of the tensor cores 371, output matrix C may be output in a compressed format, a sparse encoding, or a compressed sparse encoding.
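The zero-skipping behavior described above can be illustrated with a sparse-times-dense multiply over a CSR-encoded matrix: only stored non-zeros are visited, so operand loads for zero entries never occur. This generic CUDA kernel sketches the technique and is not the tensor cores' internal scheduler.

```cuda
#include <cuda_runtime.h>

// y = A * x where A is in compressed sparse row (CSR) form.
// rowPtr[r]..rowPtr[r+1] delimit the stored non-zeros of row r; colIdx
// gives their columns. Zero entries of A are never stored, so the
// multiply touches only non-zero operand pairs. Launch one thread per row.
__global__ void csrSpMV(const int* rowPtr, const int* colIdx,
                        const float* vals, const float* x,
                        float* y, int numRows) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= numRows) return;
    float acc = 0.0f;
    for (int j = rowPtr[row]; j < rowPtr[row + 1]; ++j)
        acc += vals[j] * x[colIdx[j]];   // load from x only for non-zeros
    y[row] = acc;
}
```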
The ray tracing cores 372 may accelerate ray tracing operations for both real-time ray tracing and non-real-time ray tracing implementations. In particular, the ray tracing cores 372 may include ray traversal/intersection circuitry for performing ray traversal using bounding volume hierarchies (BVHs) and identifying intersections between rays and primitives enclosed within the BVH volumes. The ray tracing cores 372 may also include circuitry for performing depth testing and culling (e.g., using a Z buffer or similar arrangement). In one implementation, the ray tracing cores 372 perform traversal and intersection operations in concert with the image denoising techniques described herein, at least a portion of which may be executed on the tensor cores 371. For example, the tensor cores 371 may implement a deep learning neural network to perform denoising of frames generated by the ray tracing cores 372. However, the CPU(s) 361, graphics cores 370, and/or ray tracing cores 372 may also implement all or a portion of the denoising and/or deep learning algorithms.

In addition, as described above, a distributed approach to denoising may be employed in which the GPU 380 is in a computing device coupled to other computing devices over a network or high-speed interconnect. In this distributed approach, the interconnected computing devices may share neural network learning/training data to improve the speed with which the overall system learns to perform denoising for different types of image frames and/or different graphics applications.

The ray tracing cores 372 may process all BVH traversal and/or ray-primitive intersections, saving the graphics cores 370 from being overloaded with thousands of instructions per ray. For example, each ray tracing core 372 includes a first set of specialized circuitry for performing bounding box tests (e.g., for traversal operations) and/or a second set of specialized circuitry for performing the ray-triangle intersection tests (e.g., intersecting rays which have been traversed). Thus, for example, the multi-core group 365A can simply launch a ray probe, and the ray tracing cores 372 independently perform ray traversal and intersection and return hit data (e.g., a hit, no hit, multiple hits, etc.) to the thread context. The other cores 370, 371 are freed to perform other graphics or compute work while the ray tracing cores 372 perform the traversal and intersection operations.

Optionally, each ray tracing core 372 may include a traversal unit to perform BVH testing operations and/or an intersection unit which performs ray-primitive intersection tests. The intersection unit generates a "hit", "no hit", or "multiple hit" response, which it provides to the appropriate thread. During the traversal and intersection operations, the execution resources of the other cores (e.g., graphics cores 370 and tensor cores 371) are freed to perform other forms of graphics work.

In one optional embodiment described below, a hybrid rasterization/ray tracing approach is used in which work is distributed between the graphics cores 370 and ray tracing cores 372.

The ray tracing cores 372 (and/or other cores 370, 371) may include hardware support for a ray tracing instruction set such as Microsoft's DirectX Ray Tracing (DXR) which includes a DispatchRays command, as well as ray-generation, closest-hit, any-hit, and miss shaders, which enable the assignment of unique sets of shaders and textures for each object. Another ray tracing platform which may be supported by the ray tracing cores 372, graphics cores 370 and tensor cores 371 is Vulkan 1.1.85. Note, however, that the underlying principles described herein are not limited to any particular ray tracing ISA.

In general, the various cores 372, 371, 370 may support a ray tracing instruction set that includes instructions/functions for one or more of ray generation, closest hit, any hit, ray-primitive intersection, per-primitive and hierarchical bounding box construction, miss, visit, and exceptions. More specifically, a preferred embodiment includes ray tracing instructions to perform one or more of the following functions:

Ray Generation - Ray generation instructions may be executed for each pixel, sample, or other user-defined work assignment.

Closest Hit - A closest hit instruction may be executed to locate the closest intersection point of a ray with primitives within a scene.

Any Hit - An any hit instruction identifies multiple intersections between a ray and primitives within a scene, potentially to identify a new closest intersection point.

Intersection - An intersection instruction performs a ray-primitive intersection test and outputs a result.

Per-primitive Bounding box Construction - This instruction builds a bounding box around a given primitive or group of primitives (e.g., when building a new BVH or other acceleration data structure).

Miss - Indicates that a ray misses all geometry within a scene, or a specified region of a scene.

Visit - Indicates the children volumes a ray will traverse.

Exceptions - Includes various types of exception handlers (e.g., invoked for various error conditions).
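The ray-generation and closest-hit roles in the list above can be mimicked in software. The CUDA kernel below generates one ray per entry and performs a brute-force closest-hit search against a sphere list; it is a didactic stand-in, since the actual instructions execute against a BVH on ray tracing core hardware rather than a flat loop.

```cuda
#include <cuda_runtime.h>
#include <math.h>

struct Ray    { float3 o, d; };            // origin and unit direction
struct Sphere { float3 c; float r; };

// Ray-sphere intersection: returns the nearest positive hit distance, or -1.
__device__ float intersect(const Ray& ray, const Sphere& s) {
    float3 oc = make_float3(ray.o.x - s.c.x, ray.o.y - s.c.y, ray.o.z - s.c.z);
    float b = oc.x * ray.d.x + oc.y * ray.d.y + oc.z * ray.d.z;
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - s.r * s.r;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;          // miss
    float t = -b - sqrtf(disc);
    return (t > 0.0f) ? t : -1.0f;
}

// "Ray generation" per entry followed by a "closest hit" search.
__global__ void closestHit(const Ray* rays, const Sphere* prims,
                           int numPrims, int* hitIdx, int numRays) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numRays) return;
    float tMin = 1e30f; int best = -1;      // -1 encodes a miss
    for (int p = 0; p < numPrims; ++p) {
        float t = intersect(rays[i], prims[p]);
        if (t > 0.0f && t < tMin) { tMin = t; best = p; }
    }
    hitIdx[i] = best;                       // hit data returned per ray
}
```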
In one embodiment the ray tracing cores 372 may be adapted to accelerate general-purpose compute operations that can be accelerated using computational techniques that are analogous to ray intersection tests. A compute framework can be provided that enables shader programs to be compiled into low level instructions and/or primitives that perform general-purpose compute operations via the ray tracing cores. Exemplary computational problems that can benefit from compute operations performed on the ray tracing cores 372 include computations involving beam, wave, ray, or particle propagation within a coordinate space. Interactions associated with that propagation can be computed relative to a geometry or mesh within the coordinate space. For example, computations associated with electromagnetic signal propagation through an environment can be accelerated via the use of instructions or primitives that are executed via the ray tracing cores. Diffraction and reflection of the signals by objects in the environment can be computed as direct ray-tracing analogies.

Ray tracing cores 372 can also be used to perform computations that are not directly analogous to ray tracing. For example, mesh projection, mesh refinement, and volume sampling computations can be accelerated using the ray tracing cores 372. Generic coordinate space calculations, such as nearest neighbor calculations, can also be performed. For example, the set of points near a given point can be discovered by defining a bounding box in the coordinate space around the point. BVH and ray probe logic within the ray tracing cores 372 can then be used to determine the set of point intersections within the bounding box. The intersections constitute the origin point and the nearest neighbors to that origin point. Computations that are performed using the ray tracing cores 372 can be performed in parallel with computations performed on the graphics cores 370 and tensor cores 371. A shader compiler can be configured to compile a compute shader or other general-purpose graphics processing program into low level primitives that can be parallelized across the graphics cores 370, tensor cores 371, and ray tracing cores 372.
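The nearest-neighbor formulation above reduces to testing stored points against an axis-aligned bounding box around the query point. The CUDA kernel below sketches that filtering step in brute force; the BVH traversal that the ray tracing cores 372 would contribute is omitted for clarity.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Mark every point that falls inside the axis-aligned box centered on the
// query point q; the survivors are the nearest-neighbor candidates that
// BVH/ray-probe hardware would return. One thread per stored point.
__global__ void pointsInBox(const float3* pts, int n, float3 q,
                            float halfExtent, int* inside) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float3 p = pts[i];
    bool in = fabsf(p.x - q.x) <= halfExtent &&
              fabsf(p.y - q.y) <= halfExtent &&
              fabsf(p.z - q.z) <= halfExtent;
    inside[i] = in ? 1 : 0;
}
```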
Techniques for GPU to Host Processor Interconnection

FIG. 4A illustrates an exemplary architecture in which a plurality of GPUs 410-413, e.g., the parallel processors 200 shown in FIG. 2A, are communicatively coupled to a plurality of multi-core processors 405-406 over high-speed links 440A-440D (e.g., buses, point-to-point interconnects, etc.). The high-speed links 440A-440D may support a communication throughput of 4GB/s, 30GB/s, 80GB/s or higher, depending on the implementation. Various interconnect protocols may be used including, but not limited to, PCIe 4.0 or 5.0 and NVLink 2.0. However, the underlying principles described herein are not limited to any particular communication protocol or throughput.

Two or more of the GPUs 410-413 may be interconnected over high-speed links 442A-442B, which may be implemented using the same or different protocols/links than those used for high-speed links 440A-440D. Similarly, two or more of the multi-core processors 405-406 may be connected over a high-speed link 443, which may be a symmetric multi-processor (SMP) bus operating at 20GB/s, 30GB/s, 120GB/s, or lower or higher speeds. Alternatively, all communication between the various system components shown in FIG. 4A may be accomplished using the same protocols/links (e.g., over a common interconnection fabric). As mentioned, however, the underlying principles described herein are not limited to any particular type of interconnect technology.

Each multi-core processor 405-406 may be communicatively coupled to a processor memory 401-402, via memory interconnects 430A-430B, respectively, and each GPU 410-413 is communicatively coupled to GPU memory 420-423 over GPU memory interconnects 450A-450D, respectively. The memory interconnects 430A-430B and 450A-450D may utilize the same or different memory access technologies. By way of example, and not limitation, the processor memories 401-402 and GPU memories 420-423 may be volatile memories such as dynamic random-access memories (DRAMs) (including stacked DRAMs), Graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR6), or High Bandwidth Memory (HBM) and/or may be non-volatile memories such as 3D XPoint/Optane or Nano-Ram. For example, some portion of the memories may be volatile memory and another portion may be non-volatile memory (e.g., using a two-level memory (2LM) hierarchy). A memory subsystem as described herein may be compatible with a number of memory technologies, such as Double Data Rate versions released by JEDEC (Joint Electron Device Engineering Council).

As described below, although the various processors 405-406 and GPUs 410-413 may be physically coupled to a particular memory 401-402, 420-423, respectively, a unified memory architecture may be implemented in which the same virtual system address space (also referred to as the "effective address" space) is distributed among all of the various physical memories. For example, processor memories 401-402 may each comprise 64GB of the system memory address space and GPU memories 420-423 may each comprise 32GB of the system memory address space (resulting in a total of 256GB addressable memory in this example).

FIG. 4B illustrates additional optional details for an interconnection between a multi-core processor 407 and a graphics acceleration module 446. The graphics acceleration module 446 may include one or more GPU chips integrated on a line card which is coupled to the processor 407 via the high-speed link 440. Alternatively, the graphics acceleration module 446 may be integrated on the same package or chip as the processor 407.

The illustrated processor 407 includes a plurality of cores 460A-460D, each with a translation lookaside buffer 461A-461D and one or more caches 462A-462D. The cores may include various other components for executing instructions and processing data which are not illustrated to avoid obscuring the underlying principles of the components described herein (e.g., instruction fetch units, branch prediction units, decoders, execution units, reorder buffers, etc.). The caches 462A-462D may comprise level 1 (L1) and level 2 (L2) caches. In addition, one or more shared caches 456 may be included in the caching hierarchy and shared by sets of the cores 460A-460D. For example, one embodiment of the processor 407 includes 24 cores, each with its own L1 cache, twelve shared L2 caches, and twelve shared L3 caches. In this embodiment, one L2 cache and one L3 cache are shared by each pair of two adjacent cores. The processor 407 and the graphics accelerator integration module 446 connect with system memory 441, which may include processor memories 401-402.

Coherency is maintained for data and instructions stored in the various caches 462A-462D, 456 and system memory 441 via inter-core communication over a coherence bus 464. For example, each cache may have cache coherency logic/circuitry associated therewith to communicate over the coherence bus 464 in response to detected reads or writes to particular cache lines. In one implementation, a cache snooping protocol is implemented over the coherence bus 464 to snoop cache accesses.
Cache snooping/coherency techniques are well understood by those of skill in the art and will not be described in detail here to avoid obscuring the underlying principles described herein.

A proxy circuit 425 may be provided that communicatively couples the graphics acceleration module 446 to the coherence bus 464, allowing the graphics acceleration module 446 to participate in the cache coherence protocol as a peer of the cores. In particular, an interface 435 provides connectivity to the proxy circuit 425 over high-speed link 440 (e.g., a PCIe bus, NVLink, etc.) and an interface 437 connects the graphics acceleration module 446 to the high-speed link 440.

In one implementation, an accelerator integration circuit 436 provides cache management, memory access, context management, and interrupt management services on behalf of a plurality of graphics processing engines 431, 432, N of the graphics acceleration module 446. The graphics processing engines 431, 432, N may each comprise a separate graphics processing unit (GPU). Alternatively, the graphics processing engines 431, 432, N may comprise different types of graphics processing engines within a GPU such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines. In other words, the graphics acceleration module may be a GPU with a plurality of graphics processing engines 431-432, N or the graphics processing engines 431-432, N may be individual GPUs integrated on a common package, line card, or chip.

The accelerator integration circuit 436 may include a memory management unit (MMU) 439 for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory 441. The MMU 439 may also include a translation lookaside buffer (TLB) (not shown) for caching the virtual/effective to physical/real address translations. In one implementation, a cache 438 stores commands and data for efficient access by the graphics processing engines 431, 432, N. The data stored in cache 438 and graphics memories 433-434, M may be kept coherent with the core caches 462A-462D, 456 and system memory 441. As mentioned, this may be accomplished via proxy circuit 425 which takes part in the cache coherency mechanism on behalf of cache 438 and memories 433-434, M (e.g., sending updates to the cache 438 related to modifications/accesses of cache lines on processor caches 462A-462D, 456 and receiving updates from the cache 438).

A set of registers 445 stores context data for threads executed by the graphics processing engines 431-432, N and a context management circuit 448 manages the thread contexts. For example, the context management circuit 448 may perform save and restore operations to save and restore contexts of the various threads during context switches (e.g., where a first thread is saved and a second thread is restored so that the second thread can be executed by a graphics processing engine). For example, on a context switch, the context management circuit 448 may store current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore the register values when returning to the context.
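The save and restore operations of the context management circuit 448 amount to copying the live register image to a memory region named by a context pointer, and back again on resume. The host-side sketch below uses invented structure and field names and is illustrative only.

```cuda
#include <cstdint>
#include <cstring>

// Hypothetical per-thread context: a register image plus a program counter.
struct ThreadContext {
    uint64_t regs[64];   // register file snapshot (size is an assumption)
    uint64_t pc;
};

// Save current register values to the region named by ctxPtr (analogous to
// the designated region identified by a context pointer).
void saveContext(ThreadContext* ctxPtr, const uint64_t* liveRegs, uint64_t pc) {
    std::memcpy(ctxPtr->regs, liveRegs, sizeof(ctxPtr->regs));
    ctxPtr->pc = pc;
}

// Restore the register values when returning to the context.
void restoreContext(const ThreadContext* ctxPtr, uint64_t* liveRegs, uint64_t* pc) {
    std::memcpy(liveRegs, ctxPtr->regs, sizeof(ctxPtr->regs));
    *pc = ctxPtr->pc;
}
```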
An interrupt management circuit 447, for example, may receive and process interrupts received from system devices.

In one implementation, virtual/effective addresses from a graphics processing engine 431 are translated to real/physical addresses in system memory 441 by the MMU 439. Optionally, the accelerator integration circuit 436 supports multiple (e.g., 4, 8, 16) graphics accelerator modules 446 and/or other accelerator devices. The graphics accelerator module 446 may be dedicated to a single application executed on the processor 407 or may be shared between multiple applications. Optionally, a virtualized graphics execution environment is provided in which the resources of the graphics processing engines 431-432, N are shared with multiple applications, virtual machines (VMs), or containers. The resources may be subdivided into "slices" which are allocated to different VMs and/or applications based on the processing requirements and priorities associated with the VMs and/or applications. VMs and containers can be used interchangeably herein.

A virtual machine (VM) can be software that runs an operating system and one or more applications. A VM can be defined by a specification, configuration files, a virtual disk file, a non-volatile random access memory (NVRAM) setting file, and a log file, and is backed by the physical resources of a host computing platform. A VM can include an operating system (OS) or application environment that is installed on software, which imitates dedicated hardware. The end user has the same experience on a virtual machine as they would have on dedicated hardware. Specialized software, called a hypervisor, emulates the PC client or server's CPU, memory, hard disk, network, and other hardware resources completely, enabling virtual machines to share the resources. The hypervisor can emulate multiple virtual hardware platforms that are isolated from each other, allowing virtual machines to run Linux®, Windows® Server, VMware ESXi, and other operating systems on the same underlying physical host.

A container can be a software package of applications, configurations, and dependencies so that the applications run reliably when moved from one computing environment to another. Containers can share an operating system installed on the server platform and run as isolated processes. A container can be a software package that contains everything the software needs to run, such as system tools, libraries, and settings. Containers are not installed like traditional software programs, which allows them to be isolated from the other software and the operating system itself. The isolated nature of containers provides several benefits. First, the software in a container will run the same in different environments. For example, a container that includes PHP and MySQL can run identically on both a Linux® computer and a Windows® machine. Second, containers provide added security since the software will not affect the host operating system. While an installed application may alter system settings and modify resources, such as the Windows registry, a container can only modify settings within the container.

Thus, the accelerator integration circuit 436 acts as a bridge to the system for the graphics acceleration module 446 and provides address translation and system memory cache services.
In one embodiment, to facilitate the bridging functionality, the accelerator integration circuit 436 may also include shared I/O 497 (e.g., PCIe, USB, or others) and hardware to enable system control of voltage, clocking, performance, thermals, and security. The shared I/O 497 may utilize separate physical connections or may traverse the high-speed link 440. In addition, the accelerator integration circuit 436 may provide virtualization facilities for the host processor to manage virtualization of the graphics processing engines, interrupts, and memory management.

Because hardware resources of the graphics processing engines 431-432, N are mapped explicitly to the real address space seen by the host processor 407, any host processor can address these resources directly using an effective address value. One optional function of the accelerator integration circuit 436 is the physical separation of the graphics processing engines 431-432, N so that they appear to the system as independent units.

One or more graphics memories 433-434, M may be coupled to each of the graphics processing engines 431-432, N, respectively. The graphics memories 433-434, M store instructions and data being processed by each of the graphics processing engines 431-432, N. The graphics memories 433-434, M may be volatile memories such as DRAMs (including stacked DRAMs), GDDR memory (e.g., GDDR5, GDDR6), or HBM, and/or may be non-volatile memories such as 3D XPoint/Optane, Samsung Z-NAND, or Nano-Ram.

To reduce data traffic over the high-speed link 440, biasing techniques may be used to ensure that the data stored in graphics memories 433-434, M is data which will be used most frequently by the graphics processing engines 431-432, N and preferably not used by the cores 460A-460D (at least not frequently). Similarly, the biasing mechanism attempts to keep data needed by the cores (and preferably not the graphics processing engines 431-432, N) within the caches 462A-462D, 456 of the cores and system memory 441.

According to a variant shown in FIG. 4C, the accelerator integration circuit 436 is integrated within the processor 407. The graphics processing engines 431-432, N communicate directly over the high-speed link 440 to the accelerator integration circuit 436 via interface 437 and interface 435 (which, again, may utilize any form of bus or interface protocol). The accelerator integration circuit 436 may perform the same operations as those described with respect to FIG. 4B, but potentially at a higher throughput given its close proximity to the coherence bus 464 and caches 462A-462D, 456.

The embodiments described may support different programming models including a dedicated-process programming model (no graphics acceleration module virtualization) and shared programming models (with virtualization). The latter may include programming models which are controlled by the accelerator integration circuit 436 and programming models which are controlled by the graphics acceleration module 446.

In the embodiments of the dedicated process model, graphics processing engines 431, 432, ... N may be dedicated to a single application or process under a single operating system. The single application can funnel other application requests to the graphics engines 431, 432, ... N, providing virtualization within a VM/partition.

In the shared programming models, the graphics processing engines 431-432, N may be shared by multiple VM/application partitions.
The shared models require a system hypervisor to virtualize the graphics processing engines 431-432, N to allow access by each operating system. For single-partition systems without a hypervisor, the graphics processing engines 431-432, N are owned by the operating system. In both cases, the operating system can virtualize the graphics processing engines 431-432, N to provide access to each process or application.

For the shared programming model, the graphics acceleration module 446 or an individual graphics processing engine 431-432, N selects a process element using a process handle. The process elements may be stored in system memory 441 and be addressable using the effective address to real address translation techniques described herein. The process handle may be an implementation-specific value provided to the host process when registering its context with the graphics processing engine 431-432, N (that is, calling system software to add the process element to the process element linked list). The lower 16 bits of the process handle may be the offset of the process element within the process element linked list.
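Because the lower 16 bits of the handle may carry the element's offset in the linked list, resolving a handle is a mask followed by a short walk. The snippet below sketches this under assumed type names; the real process element layout is implementation-specific.

```cuda
#include <cstdint>

// Hypothetical process-element record; the real layout is implementation-specific.
struct ProcessElement { uint64_t processState; ProcessElement* next; };

// Recover the element from its handle: per the scheme described above,
// the lower 16 bits are the offset within the process element linked list.
ProcessElement* lookupProcessElement(ProcessElement* listHead, uint64_t handle) {
    uint16_t offset = static_cast<uint16_t>(handle & 0xFFFF);
    ProcessElement* e = listHead;
    for (uint16_t i = 0; i < offset && e != nullptr; ++i)
        e = e->next;                       // walk to the handle's offset
    return e;
}
```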
FIG. 4D illustrates an exemplary accelerator integration slice 490. As used herein, a "slice" comprises a specified portion of the processing resources of the accelerator integration circuit 436. Application effective address space 482 within system memory 441 stores process elements 483. The process elements 483 may be stored in response to GPU invocations 481 from applications 480 executed on the processor 407. A process element 483 contains the process state for the corresponding application 480. A work descriptor (WD) 484 contained in the process element 483 can be a single job requested by an application or may contain a pointer to a queue of jobs. In the latter case, the WD 484 is a pointer to the job request queue in the application's address space 482.

The graphics acceleration module 446 and/or the individual graphics processing engines 431-432, N can be shared by all or a subset of the processes in the system. For example, the technologies described herein may include an infrastructure for setting up the process state and sending a WD 484 to a graphics acceleration module 446 to start a job in a virtualized environment.

In one implementation, the dedicated-process programming model is implementation-specific. In this model, a single process owns the graphics acceleration module 446 or an individual graphics processing engine 431. Because the graphics acceleration module 446 is owned by a single process, the hypervisor initializes the accelerator integration circuit 436 for the owning partition and the operating system initializes the accelerator integration circuit 436 for the owning process at the time when the graphics acceleration module 446 is assigned.

In operation, a WD fetch unit 491 in the accelerator integration slice 490 fetches the next WD 484 which includes an indication of the work to be done by one of the graphics processing engines of the graphics acceleration module 446. Data from the WD 484 may be stored in registers 445 and used by the MMU 439, interrupt management circuit 447 and/or context management circuit 448 as illustrated. For example, the MMU 439 may include segment/page walk circuitry for accessing segment/page tables 486 within the OS virtual address space 485. The interrupt management circuit 447 may process interrupt events 492 received from the graphics acceleration module 446. When performing graphics operations, an effective address 493 generated by a graphics processing engine 431-432, N is translated to a real address by the MMU 439.

The same set of registers 445 may be duplicated for each graphics processing engine 431-432, N and/or graphics acceleration module 446 and may be initialized by the hypervisor or operating system. Each of these duplicated registers may be included in an accelerator integration slice 490. In one embodiment, each graphics processing engine 431-432, N may be presented to the hypervisor 496 as a distinct graphics processor device. QoS settings can be configured for clients of a specific graphics processing engine 431-432, N and data isolation between the clients of each engine can be enabled. Exemplary registers that may be initialized by the hypervisor are shown in Table 1.

Table 1 - Hypervisor Initialized Registers
1 Slice Control Register
2 Real Address (RA) Scheduled Processes Area Pointer
3 Authority Mask Override Register
4 Interrupt Vector Table Entry Offset
5 Interrupt Vector Table Entry Limit
6 State Register
7 Logical Partition ID
8 Real Address (RA) Hypervisor Accelerator Utilization Record Pointer
9 Storage Description Register

Exemplary registers that may be initialized by the operating system are shown in Table 2.

Table 2 - Operating System Initialized Registers
1 Process and Thread Identification
2 Effective Address (EA) Context Save/Restore Pointer
3 Virtual Address (VA) Accelerator Utilization Record Pointer
4 Virtual Address (VA) Storage Segment Table Pointer
5 Authority Mask
6 Work Descriptor

Each WD 484 may be specific to a particular graphics acceleration module 446 and/or graphics processing engine 431-432, N. It contains all the information a graphics processing engine 431-432, N requires to do its work, or it can be a pointer to a memory location where the application has set up a command queue of work to be completed.

FIG. 4E illustrates additional optional details of a shared model. It includes a hypervisor real address space 498 in which a process element list 499 is stored. The hypervisor real address space 498 is accessible via a hypervisor 496 which virtualizes the graphics acceleration module engines for the operating system 495.

The shared programming models allow for all or a subset of processes from all or a subset of partitions in the system to use a graphics acceleration module 446. There are two programming models where the graphics acceleration module 446 is shared by multiple processes and partitions: time-sliced shared and graphics directed shared.

In this model, the system hypervisor 496 owns the graphics acceleration module 446 and makes its function available to all operating systems 495. For a graphics acceleration module 446 to support virtualization by the system hypervisor 496, the graphics acceleration module 446 may adhere to the following requirements: 1) An application's job request must be autonomous (that is, the state does not need to be maintained between jobs), or the graphics acceleration module 446 must provide a context save and restore mechanism. 2) An application's job request is guaranteed by the graphics acceleration module 446 to complete in a specified amount of time, including any translation faults, or the graphics acceleration module 446 provides the ability to preempt the processing of the job.
3) The graphics acceleration module 446 must guarantee fairness between processes when operating in the directed shared programming model.

For the shared model, the application 480 may be required to make an operating system 495 system call with a graphics acceleration module 446 type, a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). The graphics acceleration module 446 type describes the targeted acceleration function for the system call. The graphics acceleration module 446 type may be a system-specific value. The WD is formatted specifically for the graphics acceleration module 446 and can be in the form of a graphics acceleration module 446 command, an effective address pointer to a user-defined structure, an effective address pointer to a queue of commands, or any other data structure to describe the work to be done by the graphics acceleration module 446. In one embodiment, the AMR value is the AMR state to use for the current process. The value passed to the operating system is similar to an application setting the AMR. If the accelerator integration circuit 436 and graphics acceleration module 446 implementations do not support a User Authority Mask Override Register (UAMOR), the operating system may apply the current UAMOR value to the AMR value before passing the AMR in the hypervisor call. The hypervisor 496 may optionally apply the current Authority Mask Override Register (AMOR) value before placing the AMR into the process element 483. The CSRP may be one of the registers 445 containing the effective address of an area in the application's address space 482 for the graphics acceleration module 446 to save and restore the context state. This pointer is optional if no state is required to be saved between jobs or when a job is preempted. The context save/restore area may be pinned system memory.

Upon receiving the system call, the operating system 495 may verify that the application 480 has registered and been given the authority to use the graphics acceleration module 446. The operating system 495 then calls the hypervisor 496 with the information shown in Table 3.

Table 3 - OS to Hypervisor Call Parameters
1 A work descriptor (WD)
2 An Authority Mask Register (AMR) value (potentially masked)
3 An effective address (EA) Context Save/Restore Area Pointer (CSRP)
4 A process ID (PID) and optional thread ID (TID)
5 A virtual address (VA) accelerator utilization record pointer (AURP)
6 The virtual address of the storage segment table pointer (SSTP)
7 A logical interrupt service number (LISN)

Upon receiving the hypervisor call, the hypervisor 496 verifies that the operating system 495 has registered and been given the authority to use the graphics acceleration module 446. The hypervisor 496 then puts the process element 483 into the process element linked list for the corresponding graphics acceleration module 446 type.
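The parameter bundle of Table 3 maps naturally onto a plain structure passed with the hypervisor call; the field names and widths below are assumptions chosen to mirror the table, not a defined ABI.

```cuda
#include <cstdint>

// Illustrative packaging of the OS-to-hypervisor call parameters of Table 3.
struct HypervisorCallParams {
    uint64_t workDescriptor;  // WD
    uint64_t amr;             // Authority Mask Register value (potentially masked)
    uint64_t csrp;            // effective-address context save/restore area pointer
    uint32_t pid;             // process ID
    uint32_t tid;             // optional thread ID
    uint64_t aurp;            // virtual-address accelerator utilization record pointer
    uint64_t sstp;            // virtual address of the storage segment table pointer
    uint32_t lisn;            // logical interrupt service number
};
```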
The process element may include the information shown in Table 4.

Table 4 - Process Element Information
1 A work descriptor (WD)
2 An Authority Mask Register (AMR) value (potentially masked)
3 An effective address (EA) Context Save/Restore Area Pointer (CSRP)
4 A process ID (PID) and optional thread ID (TID)
5 A virtual address (VA) accelerator utilization record pointer (AURP)
6 The virtual address of the storage segment table pointer (SSTP)
7 A logical interrupt service number (LISN)
8 Interrupt vector table, derived from the hypervisor call parameters
9 A state register (SR) value
10 A logical partition ID (LPID)
11 A real address (RA) hypervisor accelerator utilization record pointer
12 The Storage Descriptor Register (SDR)

The hypervisor may initialize a plurality of accelerator integration slice 490 registers 445.

As illustrated in FIG. 4F, in one optional implementation a unified memory addressable via a common virtual memory address space used to access the physical processor memories 401-402 and GPU memories 420-423 is employed. In this implementation, operations executed on the GPUs 410-413 utilize the same virtual/effective memory address space to access the processor memories 401-402 and vice versa, thereby simplifying programmability. A first portion of the virtual/effective address space may be allocated to the processor memory 401, a second portion to the second processor memory 402, a third portion to the GPU memory 420, and so on. The entire virtual/effective memory space (sometimes referred to as the effective address space) may thereby be distributed across each of the processor memories 401-402 and GPU memories 420-423, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory.

Bias/coherence management circuitry 494A-494E within one or more of the MMUs 439A-439E may be provided that ensures cache coherence between the caches of the host processors (e.g., 405) and the GPUs 410-413 and implements biasing techniques indicating the physical memories in which certain types of data should be stored. While multiple instances of bias/coherence management circuitry 494A-494E are illustrated in FIG. 4F, the bias/coherence circuitry may be implemented within the MMU of one or more host processors 405 and/or within the accelerator integration circuit 436.

The GPU-attached memory 420-423 may be mapped as part of system memory, and accessed using shared virtual memory (SVM) technology, but without suffering the typical performance drawbacks associated with full system cache coherence. The ability for GPU-attached memory 420-423 to be accessed as system memory without onerous cache coherence overhead provides a beneficial operating environment for GPU offload. This arrangement allows the host processor 405 software to set up operands and access computation results, without the overhead of traditional I/O DMA data copies. Such traditional copies involve driver calls, interrupts and memory mapped I/O (MMIO) accesses that are all inefficient relative to simple memory accesses. At the same time, the ability to access GPU-attached memory 420-423 without cache coherence overheads can be critical to the execution time of an offloaded computation. In cases with substantial streaming write memory traffic, for example, cache coherence overhead can significantly reduce the effective write bandwidth seen by a GPU 410-413. The efficiency of operand setup, the efficiency of results access, and the efficiency of GPU computation all play a role in determining the effectiveness of GPU offload.
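On CUDA systems, a single virtual address space spanning host and GPU memories of the kind shown in FIG. 4F is exposed to applications through managed allocations. The program below is a generic CUDA illustration of one pointer being valid on both sides; it is not the mechanism of the illustrated embodiment.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float* data, int n, float s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= s;               // GPU writes through the shared VA
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;
    // One pointer valid on both host and device: the same virtual address
    // maps to whichever physical memory currently holds the pages.
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // CPU access
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // GPU access, same pointer
    cudaDeviceSynchronize();
    printf("data[0] = %f\n", data[0]);               // CPU reads the GPU result
    cudaFree(data);
    return 0;
}
```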
A selection between GPU bias and host processor bias may be driven by a bias tracker data structure. A bias table may be used, for example, which may be a page-granular structure (i.e., controlled at the granularity of a memory page) that includes 1 or 2 bits per GPU-attached memory page. The bias table may be implemented in a stolen memory range of one or more GPU-attached memories 420-423, with or without a bias cache in the GPU 410-413 (e.g., to cache frequently/recently used entries of the bias table). Alternatively, the entire bias table may be maintained within the GPU.

In one implementation, the bias table entry associated with each access to the GPU-attached memory 420-423 is accessed prior to the actual access to the GPU memory, causing the following operations. First, local requests from the GPU 410-413 that find their page in GPU bias are forwarded directly to a corresponding GPU memory 420-423. Local requests from the GPU that find their page in host bias are forwarded to the processor 405 (e.g., over a high-speed link as discussed above). Optionally, requests from the processor 405 that find the requested page in host processor bias complete the request like a normal memory read. Alternatively, requests directed to a GPU-biased page may be forwarded to the GPU 410-413. The GPU may then transition the page to a host processor bias if it is not currently using the page.

The bias state of a page can be changed either by a software-based mechanism, a hardware-assisted software-based mechanism, or, for a limited set of cases, a purely hardware-based mechanism.

One mechanism for changing the bias state employs an API call (e.g., OpenCL), which, in turn, calls the GPU's device driver which, in turn, sends a message (or enqueues a command descriptor) to the GPU directing it to change the bias state and, for some transitions, perform a cache flushing operation in the host. The cache flushing operation is required for a transition from host processor 405 bias to GPU bias, but is not required for the opposite transition.

Cache coherency may be maintained by temporarily rendering GPU-biased pages uncacheable by the host processor 405. To access these pages, the processor 405 may request access from the GPU 410 which may or may not grant access right away, depending on the implementation. Thus, to reduce communication between the host processor 405 and GPU 410 it is beneficial to ensure that GPU-biased pages are those which are required by the GPU but not the host processor 405 and vice versa.
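A page-granular bias table with 1 or 2 bits per GPU-attached memory page is essentially a packed bit array indexed by page number. The helpers below assume one plausible encoding (2 bits per page, with 0 meaning host bias and 1 meaning GPU bias); the layout is an assumption for illustration, not the structure of any embodiment.

```cuda
#include <cstdint>

// Assumed encoding: 2 bits per GPU-attached memory page, packed 4 pages
// per byte. 0 = host bias, 1 = GPU bias; remaining codes reserved.
constexpr uint64_t kPageBits = 12;   // 4 KiB pages (assumption)

uint8_t biasOf(const uint8_t* biasTable, uint64_t physAddr) {
    uint64_t page = physAddr >> kPageBits;
    uint8_t byte = biasTable[page >> 2];          // 4 two-bit entries per byte
    return (byte >> ((page & 3) * 2)) & 0x3;      // extract this page's bias
}

void setBias(uint8_t* biasTable, uint64_t physAddr, uint8_t bias) {
    uint64_t page = physAddr >> kPageBits;
    unsigned shift = static_cast<unsigned>((page & 3) * 2);
    uint8_t& byte = biasTable[page >> 2];
    byte = static_cast<uint8_t>((byte & ~(0x3u << shift)) | ((bias & 0x3u) << shift));
}
```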
Graphics Processing Pipeline

FIG. 5 illustrates a graphics processing pipeline 500. A graphics multiprocessor, such as the graphics multiprocessor 234 as in FIG. 2D, the graphics multiprocessor 325 of FIG. 3A, or the graphics multiprocessor 350 of FIG. 3B, can implement the illustrated graphics processing pipeline 500. The graphics multiprocessor can be included within the parallel processing subsystems as described herein, such as the parallel processor 200 of FIG. 2A, which may be related to the parallel processor(s) 112 of FIG. 1 and may be used in place of one of those. The various parallel processing systems can implement the graphics processing pipeline 500 via one or more instances of the parallel processing unit (e.g., parallel processing unit 202 of FIG. 2A) as described herein. For example, a shader unit (e.g., graphics multiprocessor 234 of FIG. 2C) may be configured to perform the functions of one or more of a vertex processing unit 504, a tessellation control processing unit 508, a tessellation evaluation processing unit 512, a geometry processing unit 516, and a fragment/pixel processing unit 524. The functions of data assembler 502, primitive assemblers 506, 514, 518, tessellation unit 510, rasterizer 522, and raster operations unit 526 may also be performed by other processing engines within a processing cluster (e.g., processing cluster 214 of FIG. 2A) and a corresponding partition unit (e.g., partition unit 220A-220N of FIG. 2A). The graphics processing pipeline 500 may also be implemented using dedicated processing units for one or more functions. It is also possible that one or more portions of the graphics processing pipeline 500 are performed by parallel processing logic within a general-purpose processor (e.g., CPU). Optionally, one or more portions of the graphics processing pipeline 500 can access on-chip memory (e.g., parallel processor memory 222 as in FIG. 2A) via a memory interface 528, which may be an instance of the memory interface 218 of FIG. 2A. The graphics processing pipeline 500 may also be implemented via a multi-core group 365A as in FIG. 3C.

The data assembler 502 is a processing unit that may collect vertex data for surfaces and primitives. The data assembler 502 then outputs the vertex data, including the vertex attributes, to the vertex processing unit 504. The vertex processing unit 504 is a programmable execution unit that executes vertex shader programs, lighting and transforming vertex data as specified by the vertex shader programs. The vertex processing unit 504 reads data that is stored in cache, local or system memory for use in processing the vertex data and may be programmed to transform the vertex data from an object-based coordinate representation to a world space coordinate space or a normalized device coordinate space.

A first instance of a primitive assembler 506 receives vertex attributes from the vertex processing unit 504. The primitive assembler 506 reads stored vertex attributes as needed and constructs graphics primitives for processing by the tessellation control processing unit 508. The graphics primitives include triangles, line segments, points, patches, and so forth, as supported by various graphics processing application programming interfaces (APIs).

The tessellation control processing unit 508 treats the input vertices as control points for a geometric patch. The control points are transformed from an input representation from the patch (e.g., the patch's bases) to a representation that is suitable for use in surface evaluation by the tessellation evaluation processing unit 512. The tessellation control processing unit 508 can also compute tessellation factors for edges of geometric patches. A tessellation factor applies to a single edge and quantifies a view-dependent level of detail associated with the edge. A tessellation unit 510 is configured to receive the tessellation factors for edges of a patch and to tessellate the patch into multiple geometric primitives such as line, triangle, or quadrilateral primitives, which are transmitted to a tessellation evaluation processing unit 512.
The tessellation evaluation processing unit 512 operates on parameterized coordinates of the subdivided patch to generate a surface representation and vertex attributes for each vertex associated with the geometric primitives.
A second instance of a primitive assembler 514 receives vertex attributes from the tessellation evaluation processing unit 512, reading stored vertex attributes as needed, and constructs graphics primitives for processing by the geometry processing unit 516. The geometry processing unit 516 is a programmable execution unit that executes geometry shader programs to transform graphics primitives received from primitive assembler 514 as specified by the geometry shader programs. The geometry processing unit 516 may be programmed to subdivide the graphics primitives into one or more new graphics primitives and calculate parameters used to rasterize the new graphics primitives.
The geometry processing unit 516 may be able to add or delete elements in the geometry stream. The geometry processing unit 516 outputs the parameters and vertices specifying new graphics primitives to primitive assembler 518. The primitive assembler 518 receives the parameters and vertices from the geometry processing unit 516 and constructs graphics primitives for processing by a viewport scale, cull, and clip unit 520. The geometry processing unit 516 reads data that is stored in parallel processor memory or system memory for use in processing the geometry data. The viewport scale, cull, and clip unit 520 performs clipping, culling, and viewport scaling and outputs processed graphics primitives to a rasterizer 522.
The rasterizer 522 can perform depth culling and other depth-based optimizations. The rasterizer 522 also performs scan conversion on the new graphics primitives to generate fragments and outputs those fragments and associated coverage data to the fragment/pixel processing unit 524. The fragment/pixel processing unit 524 is a programmable execution unit that is configured to execute fragment shader programs or pixel shader programs. The fragment/pixel processing unit 524 transforms fragments or pixels received from rasterizer 522, as specified by the fragment or pixel shader programs. For example, the fragment/pixel processing unit 524 may be programmed to perform operations including, but not limited to, texture mapping, shading, blending, texture correction, and perspective correction to produce shaded fragments or pixels that are output to a raster operations unit 526. The fragment/pixel processing unit 524 can read data that is stored in either the parallel processor memory or the system memory for use when processing the fragment data. Fragment or pixel shader programs may be configured to shade at sample, pixel, tile, or other granularities depending on the sampling rate configured for the processing units.
The raster operations unit 526 is a processing unit that performs raster operations including, but not limited to, stencil, z-test, blending, and the like, and outputs pixel data as processed graphics data to be stored in graphics memory (e.g., parallel processor memory 222 as in FIG. 2A and/or system memory 104 as in FIG. 1), to be displayed on the one or more display device(s) 110A-110B, or for further processing by one of the one or more processor(s) 102 or parallel processor(s) 112. The raster operations unit 526 may be configured to compress z or color data that is written to memory and decompress z or color data that is read from memory.
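The fixed ordering of these stages can be summarized in code. The following Python sketch is illustrative only: each stage is a hypothetical identity placeholder standing in for the numbered hardware unit described above, so the point is the order in which data flows between the units, not any particular implementation.

    # Illustrative sketch of the stage ordering in graphics processing
    # pipeline 500; each stage is a placeholder that returns its input
    # unchanged, so the script runs but performs no real graphics work.
    PIPELINE_500 = [
        ("data_assembler_502",            lambda d: d),
        ("vertex_processing_504",         lambda d: d),
        ("primitive_assembler_506",       lambda d: d),
        ("tessellation_control_508",      lambda d: d),
        ("tessellation_unit_510",         lambda d: d),
        ("tessellation_evaluation_512",   lambda d: d),
        ("primitive_assembler_514",       lambda d: d),
        ("geometry_processing_516",       lambda d: d),
        ("primitive_assembler_518",       lambda d: d),
        ("viewport_scale_cull_clip_520",  lambda d: d),
        ("rasterizer_522",                lambda d: d),
        ("fragment_pixel_processing_524", lambda d: d),
        ("raster_operations_526",         lambda d: d),
    ]

    def run_pipeline(vertex_data):
        data = vertex_data
        for name, stage in PIPELINE_500:
            data = stage(data)  # each unit consumes the previous unit's output
        return data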
Machine Learning Overview
The architecture described above can be applied to perform training and inference operations using machine learning models. Machine learning has been successful at solving many kinds of tasks. The computations that arise when training and using machine learning algorithms (e.g., neural networks) lend themselves naturally to efficient parallel implementations. Accordingly, parallel processors such as general-purpose graphics processing units (GPGPUs) have played a significant role in the practical implementation of deep neural networks. Parallel graphics processors with single instruction, multiple thread (SIMT) architectures are designed to maximize the amount of parallel processing in the graphics pipeline. In an SIMT architecture, groups of parallel threads attempt to execute program instructions synchronously together as often as possible to increase processing efficiency. The efficiency provided by parallel machine learning algorithm implementations allows the use of high capacity networks and enables those networks to be trained on larger datasets.
A machine learning algorithm is an algorithm that can learn based on a set of data. For example, machine learning algorithms can be designed to model high-level abstractions within a data set. For example, image recognition algorithms can be used to determine to which of several categories a given input belongs; regression algorithms can output a numerical value given an input; and pattern recognition algorithms can be used to generate translated text or perform text-to-speech and/or speech recognition.
An exemplary type of machine learning algorithm is a neural network. There are many types of neural networks; a simple type of neural network is a feedforward network. A feedforward network may be implemented as an acyclic graph in which the nodes are arranged in layers. Typically, a feedforward network topology includes an input layer and an output layer that are separated by at least one hidden layer. The hidden layer transforms input received by the input layer into a representation that is useful for generating output in the output layer. The network nodes are fully connected via edges to the nodes in adjacent layers, but there are no edges between nodes within each layer. Data received at the nodes of an input layer of a feedforward network are propagated (i.e., "fed forward") to the nodes of the output layer via an activation function that calculates the states of the nodes of each successive layer in the network based on coefficients ("weights") respectively associated with each of the edges connecting the layers. Depending on the specific model being represented by the algorithm being executed, the output from the neural network algorithm can take various forms.
Before a machine learning algorithm can be used to model a particular problem, the algorithm is trained using a training data set. Training a neural network involves selecting a network topology, using a set of training data representing a problem being modeled by the network, and adjusting the weights until the network model performs with a minimal error for all instances of the training data set.
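As an illustration of the feedforward computation just described, the following Python sketch propagates one input instance through a small network. The 3-4-2 topology, the random weights, and the sigmoid activation are arbitrary choices made for illustration and are not part of any figure.

    import numpy as np

    # Minimal sketch of a feedforward pass: each layer's state is the
    # activation function applied to a weighted sum of the previous
    # layer's outputs.
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    w1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # input layer -> hidden layer
    w2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # hidden layer -> output layer

    x = rng.normal(size=3)              # one input instance
    hidden = sigmoid(w1 @ x + b1)       # "fed forward" through the hidden layer
    output = sigmoid(w2 @ hidden + b2)  # network output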
During a supervised learning training process for a neural network, for example, the output produced by the network in response to the input representing an instance in a training data set is compared to the "correct" labeled output for that instance, an error signal representing the difference between the output and the labeled output is calculated, and the weights associated with the connections are adjusted to minimize that error as the error signal is backward propagated through the layers of the network. The network is considered "trained" when the errors for each of the outputs generated from the instances of the training data set are minimized.
The accuracy of a machine learning algorithm can be affected significantly by the quality of the data set used to train the algorithm. The training process can be computationally intensive and may require a significant amount of time on a conventional general-purpose processor. Accordingly, parallel processing hardware is used to train many types of machine learning algorithms. This is particularly useful for optimizing the training of neural networks, as the computations performed in adjusting the coefficients in neural networks lend themselves naturally to parallel implementations. Specifically, many machine learning algorithms and software applications have been adapted to make use of the parallel processing hardware within general-purpose graphics processing devices.
FIG. 6 is a generalized diagram of a machine learning software stack 600. A machine learning application 602 is any logic that can be configured to train a neural network using a training dataset or to use a trained deep neural network to implement machine intelligence. The machine learning application 602 can include training and inference functionality for a neural network and/or specialized software that can be used to train a neural network before deployment. The machine learning application 602 can implement any type of machine intelligence including, but not limited to, image recognition, mapping and localization, autonomous navigation, speech synthesis, medical imaging, or language translation. Example machine learning applications 602 include, but are not limited to, voice-based virtual assistants, image or facial recognition algorithms, autonomous navigation, and the software tools that are used to train the machine learning models used by the machine learning applications 602.
Hardware acceleration for the machine learning application 602 can be enabled via a machine learning framework 604. The machine learning framework 604 can provide a library of machine learning primitives. Machine learning primitives are basic operations that are commonly performed by machine learning algorithms. Without the machine learning framework 604, developers of machine learning algorithms would be required to create and optimize the main computational logic associated with the machine learning algorithm, then re-optimize the computational logic as new parallel processors are developed. Instead, the machine learning application can be configured to perform the necessary computations using the primitives provided by the machine learning framework 604. Exemplary primitives include tensor convolutions, activation functions, and pooling, which are computational operations that are performed while training a convolutional neural network (CNN).
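Using PyTorch as one example of such a framework, the three primitives named above (a tensor convolution, an activation function, and pooling) can be invoked without any hand-written GPU code. The channel counts, kernel size, and input shape in the sketch below are arbitrary illustrative choices.

    import torch
    import torch.nn as nn

    # Sketch of three common machine learning primitives as exposed by one
    # example framework; the framework dispatches each to accelerated kernels.
    conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
    act  = nn.ReLU()
    pool = nn.MaxPool2d(kernel_size=2)

    x = torch.randn(1, 3, 32, 32)   # batch of one RGB 32x32 image
    y = pool(act(conv(x)))          # -> tensor of shape (1, 16, 16, 16)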
The machine learning framework 604 can also provide primitives to implement basic linear algebra subprograms performed by many machine-learning algorithms, such as matrix and vector operations. Examples of a machine learning framework 604 include, but are not limited to, TensorFlow, TensorRT, PyTorch, MXNet, Caffe, and other high-level machine learning frameworks.
The machine learning framework 604 can process input data received from the machine learning application 602 and generate the appropriate input to a compute framework 606. The compute framework 606 can abstract the underlying instructions provided to the GPGPU driver 608 to enable the machine learning framework 604 to take advantage of hardware acceleration via the GPGPU hardware 610 without requiring the machine learning framework 604 to have intimate knowledge of the architecture of the GPGPU hardware 610. Additionally, the compute framework 606 can enable hardware acceleration for the machine learning framework 604 across a variety of types and generations of the GPGPU hardware 610. Exemplary compute frameworks 606 include the CUDA compute framework and associated machine learning libraries, such as the CUDA Deep Neural Network (cuDNN) library. The machine learning software stack 600 can also include communication libraries or frameworks to facilitate multi-GPU and multi-node compute.
GPGPU Machine Learning Acceleration
FIG. 7 illustrates a general-purpose graphics processing unit 700, which may be the parallel processor 200 of FIG. 2A or the parallel processor(s) 112 of FIG. 1. The general-purpose graphics processing unit (GPGPU) 700 may be configured to provide support for hardware acceleration of primitives provided by a machine learning framework to accelerate the processing of the types of computational workloads associated with training deep neural networks. Additionally, the GPGPU 700 can be linked directly to other instances of the GPGPU to create a multi-GPU cluster to improve training speed for particularly deep neural networks. Primitives are also supported to accelerate inference operations for deployed neural networks.
The GPGPU 700 includes a host interface 702 to enable a connection with a host processor. The host interface 702 may be a PCI Express interface. However, the host interface can also be a vendor-specific communications interface or communications fabric. The GPGPU 700 receives commands from the host processor and uses a global scheduler 704 to distribute execution threads associated with those commands to a set of processing clusters 706A-706H. The processing clusters 706A-706H share a cache memory 708. The cache memory 708 can serve as a higher-level cache for cache memories within the processing clusters 706A-706H. The illustrated processing clusters 706A-706H may correspond with processing clusters 214A-214N as in FIG. 2A.
The GPGPU 700 includes memory 714A-714B coupled with the processing clusters 706A-706H via a set of memory controllers 712A-712B. The memory 714A-714B can include various types of memory devices including dynamic random-access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. The memory 714A-714B may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM).
Each of the processing clusters 706A-706H may include a set of graphics multiprocessors, such as the graphics multiprocessor 234 of FIG. 2D, graphics multiprocessor 325 of FIG. 3A, or graphics multiprocessor 350 of FIG. 3B, or may include a multi-core group 365A-365N as in FIG. 3C.
The graphics multiprocessors of the compute cluster include multiple types of integer and floating-point logic units that can perform computational operations at a range of precisions, including precisions suited for machine learning computations. For example, at least a subset of the floating-point units in each of the processing clusters 706A-706H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of the floating-point units can be configured to perform 64-bit floating point operations.
Multiple instances of the GPGPU 700 can be configured to operate as a compute cluster. The communication mechanism used by the compute cluster for synchronization and data exchange varies across embodiments. For example, the multiple instances of the GPGPU 700 communicate over the host interface 702. In one embodiment, the GPGPU 700 includes an I/O hub 709 that couples the GPGPU 700 with a GPU link 710 that enables a direct connection to other instances of the GPGPU. The GPU link 710 may be coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of the GPGPU 700. Optionally, the GPU link 710 couples with a high-speed interconnect to transmit and receive data to other GPGPUs or parallel processors. The multiple instances of the GPGPU 700 may be located in separate data processing systems and communicate via a network device that is accessible via the host interface 702. The GPU link 710 may be configured to enable a connection to a host processor in addition to or as an alternative to the host interface 702.
While the illustrated configuration of the GPGPU 700 can be configured to train neural networks, an alternate configuration of the GPGPU 700 can be configured for deployment within a high-performance or low-power inferencing platform. In an inferencing configuration, the GPGPU 700 includes fewer of the processing clusters 706A-706H relative to the training configuration. Additionally, memory technology associated with the memory 714A-714B may differ between inferencing and training configurations. In one embodiment, the inferencing configuration of the GPGPU 700 can support inferencing-specific instructions. For example, an inferencing configuration can provide support for one or more 8-bit integer dot product instructions, which are commonly used during inferencing operations for deployed neural networks.
FIG. 8 illustrates a multi-GPU computing system 800. The multi-GPU computing system 800 can include a processor 802 coupled to multiple GPGPUs 806A-806D via a host interface switch 804. The host interface switch 804 may be a PCI Express switch device that couples the processor 802 to a PCI Express bus over which the processor 802 can communicate with the set of GPGPUs 806A-806D. Each of the multiple GPGPUs 806A-806D can be an instance of the GPGPU 700 of FIG. 7. The GPGPUs 806A-806D can interconnect via a set of high-speed point-to-point GPU-to-GPU links 816. The high-speed GPU-to-GPU links can connect to each of the GPGPUs 806A-806D via a dedicated GPU link, such as the GPU link 710 as in FIG. 7. The P2P GPU links 816 enable direct communication between each of the GPGPUs 806A-806D without requiring communication over the host interface bus to which the processor 802 is connected.
With GPU-to-GPU traffic directed to the P2P GPU links, the host interface bus remains available for system memory access or to communicate with other instances of the multi-GPU computing system 800, for example, via one or more network devices. While in FIG. 8 the GPGPUs 806A-806D connect to the processor 802 via the host interface switch 804, the processor 802 may alternatively include direct support for the P2P GPU links 816 and connect directly to the GPGPUs 806A-806D. In one embodiment, the P2P GPU links 816 enable the multi-GPU computing system 800 to operate as a single logical GPU.
Machine Learning Neural Network Implementations
The computing architecture described herein can be configured to perform the types of parallel processing that are particularly suited for training and deploying neural networks for machine learning. A neural network can be generalized as a network of functions having a graph relationship. As is well known in the art, there are a variety of types of neural network implementations used in machine learning. One exemplary type of neural network is the feedforward network, as previously described.
A second exemplary type of neural network is the Convolutional Neural Network (CNN). A CNN is a specialized feedforward neural network for processing data having a known, grid-like topology, such as image data. Accordingly, CNNs are commonly used for computer vision and image recognition applications, but they also may be used for other types of pattern recognition, such as speech and language processing. The nodes in the CNN input layer are organized into a set of "filters" (feature detectors inspired by the receptive fields found in the retina), and the output of each set of filters is propagated to nodes in successive layers of the network. The computations for a CNN include applying the convolution mathematical operation to each filter to produce the output of that filter. Convolution is a specialized kind of mathematical operation performed on two functions to produce a third function that is a modified version of one of the two original functions. In convolutional network terminology, the first function of the convolution can be referred to as the input, while the second function can be referred to as the convolution kernel. The output may be referred to as the feature map. For example, the input to a convolution layer can be a multidimensional array of data that defines the various color components of an input image. The convolution kernel can be a multidimensional array of parameters, where the parameters are adapted by the training process for the neural network.
Recurrent neural networks (RNNs) are a family of feedforward neural networks that include feedback connections between layers. RNNs enable modeling of sequential data by sharing parameter data across different parts of the neural network. The architecture for an RNN includes cycles. The cycles represent the influence of a present value of a variable on its own value at a future time, as at least a portion of the output data from the RNN is used as feedback for processing subsequent input in a sequence. This feature makes RNNs particularly useful for language processing due to the variable nature in which language data can be composed.
The figures described below present exemplary feedforward, CNN, and RNN networks, as well as describe a general process for respectively training and deploying each of those types of networks.
It will be understood that these descriptions are exemplary and non-limiting as to any specific embodiment described herein, and the concepts illustrated can be applied generally to deep neural networks and machine learning techniques in general.
The exemplary neural networks described above can be used to perform deep learning. Deep learning is machine learning using deep neural networks. The deep neural networks used in deep learning are artificial neural networks composed of multiple hidden layers, as opposed to shallow neural networks that include only a single hidden layer. Deeper neural networks are generally more computationally intensive to train. However, the additional hidden layers of the network enable multistep pattern recognition that results in reduced output error relative to shallow machine learning techniques.
Deep neural networks used in deep learning typically include a front-end network to perform feature recognition coupled to a back-end network which represents a mathematical model that can perform operations (e.g., object classification, speech recognition, etc.) based on the feature representation provided to the model. Deep learning enables machine learning to be performed without requiring hand-crafted feature engineering to be performed for the model. Instead, deep neural networks can learn features based on statistical structure or correlation within the input data. The learned features can be provided to a mathematical model that can map detected features to an output. The mathematical model used by the network is generally specialized for the specific task to be performed, and different models will be used to perform different tasks.
Once the neural network is structured, a learning model can be applied to the network to train the network to perform specific tasks. The learning model describes how to adjust the weights within the model to reduce the output error of the network. Backpropagation of errors is a common method used to train neural networks. An input vector is presented to the network for processing. The output of the network is compared to the desired output using a loss function, and an error value is calculated for each of the neurons in the output layer. The error values are then propagated backwards until each neuron has an associated error value which roughly represents its contribution to the original output. The network can then learn from those errors using an algorithm, such as the stochastic gradient descent algorithm, to update the weights of the neural network.
FIG. 9A-9B illustrate an exemplary convolutional neural network. FIG. 9A illustrates various layers within a CNN. As shown in FIG. 9A, an exemplary CNN used to model image processing can receive input 902 describing the red, green, and blue (RGB) components of an input image. The input 902 can be processed by multiple convolutional layers (e.g., convolutional layer 904, convolutional layer 906). The output from the multiple convolutional layers may optionally be processed by a set of fully connected layers 908. Neurons in a fully connected layer have full connections to all activations in the previous layer, as previously described for a feedforward network. The output from the fully connected layers 908 can be used to generate an output result from the network. The activations within the fully connected layers 908 can be computed using matrix multiplication instead of convolution. Not all CNN implementations make use of fully connected layers 908.
For example, in some implementations the convolutional layer 906 can generate output for the CNN.
The convolutional layers are sparsely connected, which differs from the traditional neural network configuration found in the fully connected layers 908. Traditional neural network layers are fully connected, such that every output unit interacts with every input unit. However, the convolutional layers are sparsely connected because the output of the convolution of a field is input (instead of the respective state value of each of the nodes in the field) to the nodes of the subsequent layer, as illustrated. The kernels associated with the convolutional layers perform convolution operations, the output of which is sent to the next layer. The dimensionality reduction performed within the convolutional layers is one aspect that enables the CNN to scale to process large images.
FIG. 9B illustrates exemplary computation stages within a convolutional layer of a CNN. Input to a convolutional layer 912 of a CNN can be processed in three stages of a convolutional layer 914. The three stages can include a convolution stage 916, a detector stage 918, and a pooling stage 920. The convolutional layer 914 can then output data to a successive convolutional layer. The final convolutional layer of the network can generate output feature map data or provide input to a fully connected layer, for example, to generate a classification value for the input to the CNN.
The convolution stage 916 performs several convolutions in parallel to produce a set of linear activations. The convolution stage 916 can include an affine transformation, which is any transformation that can be specified as a linear transformation plus a translation. Affine transformations include rotations, translations, scaling, and combinations of these transformations. The convolution stage computes the output of functions (e.g., neurons) that are connected to specific regions in the input, which can be determined as the local region associated with the neuron. The neurons compute a dot product between the weights of the neurons and the region in the local input to which the neurons are connected. The output from the convolution stage 916 defines a set of linear activations that are processed by successive stages of the convolutional layer 914.
The linear activations can be processed by a detector stage 918. In the detector stage 918, each linear activation is processed by a non-linear activation function. The non-linear activation function increases the nonlinear properties of the overall network without affecting the receptive fields of the convolution layer. Several types of non-linear activation functions may be used. One particular type is the rectified linear unit (ReLU), which uses an activation function defined as f(x) = max(0, x), such that the activation is thresholded at zero.
The pooling stage 920 uses a pooling function that replaces the output of the convolutional layer 906 with a summary statistic of the nearby outputs. The pooling function can be used to introduce translation invariance into the neural network, such that small translations to the input do not change the pooled outputs. Invariance to local translation can be useful in scenarios where the presence of a feature in the input data is more important than the precise location of the feature. Various types of pooling functions can be used during the pooling stage 920, including max pooling, average pooling, and l2-norm pooling.
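A minimal NumPy sketch of these three stages follows, on a single-channel input with an arbitrary 3x3 kernel. Real convolutional layers operate on multi-channel tensors with many kernels in parallel; the shapes here are chosen only for illustration.

    import numpy as np

    def convolve2d(image, kernel):
        # Convolution stage: dot product of the kernel with each local region.
        kh, kw = kernel.shape
        oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    def relu(x):
        # Detector stage: rectified linear unit, f(x) = max(0, x).
        return np.maximum(0.0, x)

    def max_pool(x, size=2):
        # Pooling stage: replace each size-by-size block with its maximum.
        h, w = x.shape[0] // size, x.shape[1] // size
        return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

    rng = np.random.default_rng(0)
    image  = rng.normal(size=(8, 8))
    kernel = rng.normal(size=(3, 3))
    feature_map = max_pool(relu(convolve2d(image, kernel)))  # shape (3, 3)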
Additionally, some CNN implementations do not include a pooling stage. Instead, such implementations substitute an additional convolution stage having an increased stride relative to previous convolution stages.
The output from the convolutional layer 914 can then be processed by the next layer 922. The next layer 922 can be an additional convolutional layer or one of the fully connected layers 908. For example, the first convolutional layer 904 of FIG. 9A can output to the second convolutional layer 906, while the second convolutional layer can output to a first layer of the fully connected layers 908.
FIG. 10 illustrates an exemplary recurrent neural network 1000. In a recurrent neural network (RNN), the previous state of the network influences the output of the current state of the network. RNNs can be built in a variety of ways using a variety of functions. The use of RNNs generally revolves around using mathematical models to predict the future based on a prior sequence of inputs. For example, an RNN may be used to perform statistical language modeling to predict an upcoming word given a previous sequence of words. The illustrated RNN 1000 can be described as having an input layer 1002 that receives an input vector, hidden layers 1004 to implement a recurrent function, a feedback mechanism 1005 to enable a 'memory' of previous states, and an output layer 1006 to output a result. The RNN 1000 operates based on time-steps. The state of the RNN at a given time step is influenced by the previous time step via the feedback mechanism 1005. For a given time step, the state of the hidden layers 1004 is defined by the previous state and the input at the current time step. An initial input (x_1) at a first time step can be processed by the hidden layers 1004. A second input (x_2) can be processed by the hidden layers 1004 using state information that is determined during the processing of the initial input (x_1). A given state can be computed as s_t = f(U·x_t + W·s_{t-1}), where U and W are parameter matrices. The function f is generally a nonlinearity, such as the hyperbolic tangent function (tanh) or a variant of the rectifier function f(x) = max(0, x). However, the specific mathematical function used in the hidden layers 1004 can vary depending on the specific implementation details of the RNN 1000.
In addition to the basic CNN and RNN networks described, acceleration for variations on those networks may be enabled. One example RNN variant is the long short-term memory (LSTM) RNN. LSTM RNNs are capable of learning long-term dependencies that may be necessary for processing longer sequences of language. A variant on the CNN is a convolutional deep belief network, which has a structure similar to a CNN and is trained in a manner similar to a deep belief network. A deep belief network (DBN) is a generative neural network that is composed of multiple layers of stochastic (random) variables. DBNs can be trained layer-by-layer using greedy unsupervised learning. The learned weights of the DBN can then be used to pre-train neural networks by determining an optimal initial set of weights for the neural network. In further embodiments, acceleration for reinforcement learning is enabled. In reinforcement learning, an artificial agent learns by interacting with its environment. The agent is configured to optimize certain objectives to maximize cumulative rewards.
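As a concrete illustration of the basic recurrent update s_t = f(U·x_t + W·s_{t-1}) described above, the following Python sketch steps a hidden state through a short input sequence. The dimensions and random parameters are arbitrary, and tanh is used as the nonlinearity f.

    import numpy as np

    # Sketch of the recurrent state update s_t = f(U·x_t + W·s_{t-1}).
    rng = np.random.default_rng(0)
    input_dim, state_dim = 4, 8
    U = rng.normal(size=(state_dim, input_dim))  # input-to-hidden parameters
    W = rng.normal(size=(state_dim, state_dim)) # hidden-to-hidden (feedback) parameters

    s = np.zeros(state_dim)                     # state before the first time step
    for x_t in rng.normal(size=(5, input_dim)): # a sequence of five input vectors
        s = np.tanh(U @ x_t + W @ s)            # state carries 'memory' of prior inputs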
FIG. 11 illustrates training and deployment of a deep neural network. Once a given network has been structured for a task, the neural network is trained using a training dataset 1102. Various training frameworks 1104 have been developed to enable hardware acceleration of the training process. For example, the machine learning framework 604 of FIG. 6 may be configured as a training framework 1104. The training framework 1104 can hook into an untrained neural network 1106 and enable the untrained neural net to be trained using the parallel processing resources described herein to generate a trained neural network 1108.
To start the training process, the initial weights may be chosen randomly or by pre-training using a deep belief network. The training cycle can then be performed in either a supervised or unsupervised manner.
Supervised learning is a learning method in which training is performed as a mediated operation, such as when the training dataset 1102 includes input paired with the desired output for the input, or where the training dataset includes input having known output and the output of the neural network is manually graded. The network processes the inputs and compares the resulting outputs against a set of expected or desired outputs. Errors are then propagated back through the system. The training framework 1104 can adjust the weights that control the untrained neural network 1106. The training framework 1104 can provide tools to monitor how well the untrained neural network 1106 is converging towards a model suitable for generating correct answers based on known input data. The training process occurs repeatedly as the weights of the network are adjusted to refine the output generated by the neural network. The training process can continue until the neural network reaches a statistically desired accuracy associated with a trained neural net 1108. The trained neural network 1108 can then be deployed to implement any number of machine learning operations to generate an inference result 1114 based on input of new data 1112.
Unsupervised learning is a learning method in which the network attempts to train itself using unlabeled data. Thus, for unsupervised learning the training dataset 1102 will include input data without any associated output data. The untrained neural network 1106 can learn groupings within the unlabeled input and can determine how individual inputs are related to the overall dataset. Unsupervised training can be used to generate a self-organizing map, which is a type of trained neural network 1108 capable of performing operations useful in reducing the dimensionality of data. Unsupervised training can also be used to perform anomaly detection, which allows the identification of data points in an input dataset that deviate from the normal patterns of the data.
Variations on supervised and unsupervised training may also be employed. Semi-supervised learning is a technique in which the training dataset 1102 includes a mix of labeled and unlabeled data of the same distribution. Incremental learning is a variant of supervised learning in which input data is continuously used to further train the model. Incremental learning enables the trained neural network 1108 to adapt to the new data 1112 without forgetting the knowledge instilled within the network during initial training.
Whether supervised or unsupervised, the training process for particularly deep neural networks may be too computationally intensive for a single compute node.
Instead of using a single compute node, a distributed network of computational nodes can be used to accelerate the training process.
FIG. 12A is a block diagram illustrating distributed learning. Distributed learning is a training model that uses multiple distributed computing nodes to perform supervised or unsupervised training of a neural network. The distributed computational nodes can each include one or more host processors and one or more of the general-purpose processing nodes, such as the highly parallel general-purpose graphics processing unit 700 as in FIG. 7. As illustrated, distributed learning can be performed with model parallelism 1202, data parallelism 1204, or a combination of model and data parallelism 1206.
In model parallelism 1202, different computational nodes in a distributed system can perform training computations for different parts of a single network. For example, each layer of a neural network can be trained by a different processing node of the distributed system. The benefits of model parallelism include the ability to scale to particularly large models. Splitting the computations associated with different layers of the neural network enables the training of very large neural networks in which the weights of all layers would not fit into the memory of a single computational node. In some instances, model parallelism can be particularly useful in performing unsupervised training of large neural networks.
In data parallelism 1204, the different nodes of the distributed network have a complete instance of the model, and each node receives a different portion of the data. The results from the different nodes are then combined. While different approaches to data parallelism are possible, data-parallel training approaches all require a technique for combining results and synchronizing the model parameters between each node. Exemplary approaches to combining data include parameter averaging and update-based data parallelism. Parameter averaging trains each node on a subset of the training data and sets the global parameters (e.g., weights, biases) to the average of the parameters from each node. Parameter averaging uses a central parameter server that maintains the parameter data. Update-based data parallelism is similar to parameter averaging, except that instead of transferring parameters from the nodes to the parameter server, the updates to the model are transferred. Additionally, update-based data parallelism can be performed in a decentralized manner, where the updates are compressed and transferred between nodes.
Combined model and data parallelism 1206 can be implemented, for example, in a distributed system in which each computational node includes multiple GPUs. Each node can have a complete instance of the model, with separate GPUs within each node used to train different portions of the model.
Distributed training has increased overhead relative to training on a single machine. However, the parallel processors and GPGPUs described herein can each implement various techniques to reduce the overhead of distributed training, including techniques to enable high-bandwidth GPU-to-GPU data transfer and accelerated remote data synchronization.
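As an illustration of the parameter averaging approach described above, the following Python sketch shows the parameter server step for a single weight matrix. The node count, layer shape, and random stand-in weights are arbitrary; in practice each node's weights would come from local training on its subset of the data.

    import numpy as np

    # Sketch of parameter averaging: each node trains a complete copy of the
    # model, and the parameter server sets the global parameters to the mean
    # of the per-node parameters.
    num_nodes = 4
    node_weights = [np.random.default_rng(i).normal(size=(8, 8))
                    for i in range(num_nodes)]  # stand-ins for locally trained copies

    # Parameter server step: global weights become the per-node average.
    global_weights = np.mean(node_weights, axis=0)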
FIG. 12B is a block diagram illustrating a programmable network interface 1210 and data processing unit. The programmable network interface 1210 is a programmable network engine that can be used to accelerate network-based compute tasks within a distributed environment. The programmable network interface 1210 can couple with a host system via host interface 1270. The programmable network interface 1210 can be used to accelerate network or storage operations for CPUs or GPUs of the host system. The host system can be, for example, a node of a distributed learning system used to perform distributed training, for example, as shown in FIG. 12A. The host system can also be a data center node within a data center.
In one embodiment, access to remote storage containing model data can be accelerated by the programmable network interface 1210. For example, the programmable network interface 1210 can be configured to present remote storage devices as local storage devices to the host system. The programmable network interface 1210 can also accelerate remote direct memory access (RDMA) operations performed between GPUs of the host system and GPUs of remote systems. In one embodiment, the programmable network interface 1210 can enable storage functionality such as, but not limited to, NVMe over Fabrics (NVMe-oF). The programmable network interface 1210 can also accelerate encryption, data integrity, compression, and other operations for remote storage on behalf of the host system, allowing remote storage to approach the latencies of storage devices that are directly attached to the host system.
The programmable network interface 1210 can also perform resource allocation and management on behalf of the host system. Storage security operations can be offloaded to the programmable network interface 1210 and performed in concert with the allocation and management of remote storage resources. Network-based operations to manage access to the remote storage that would otherwise be performed by a processor of the host system can instead be performed by the programmable network interface 1210.
In one embodiment, network and/or data security operations can be offloaded from the host system to the programmable network interface 1210. Data center security policies for a data center node can be handled by the programmable network interface 1210 instead of the processors of the host system. For example, the programmable network interface 1210 can detect and mitigate an attempted network-based attack (e.g., DDoS) on the host system, preventing the attack from compromising the availability of the host system.
The programmable network interface 1210 can include a system on a chip (SoC) 1220 that executes an operating system via multiple processor cores 1222. The processor cores 1222 can include general-purpose processor (e.g., CPU) cores. In one embodiment, the processor cores 1222 can also include one or more GPU cores. The SoC 1220 can execute instructions stored in a memory device 1240. A storage device 1250 can store local operating system data. The storage device 1250 and memory device 1240 can also be used to cache remote data for the host system. Network ports 1260A-1260B enable a connection to a network or fabric and facilitate network access for the SoC 1220 and, via the host interface 1270, for the host system. The programmable network interface 1210 can also include an I/O interface 1275, such as a USB interface. The I/O interface 1275 can be used to couple external devices to the programmable network interface 1210 or as a debug interface. The programmable network interface 1210 also includes a management interface 1230 that enables software on the host device to manage and configure the programmable network interface 1210 and/or SoC 1220.
In one embodiment, the programmable network interface 1210 may also include one or more accelerators or GPUs 1245 to accept offload of parallel compute tasks from the SoC 1220, host system, or remote systems coupled via the network ports 1260A-1260B.
Exemplary Machine Learning Applications
Machine learning can be applied to solve a variety of technological problems, including but not limited to computer vision, autonomous driving and navigation, speech recognition, and language processing. Computer vision has traditionally been one of the most active research areas for machine learning applications. Applications of computer vision range from reproducing human visual abilities, such as recognizing faces, to creating new categories of visual abilities. For example, computer vision applications can be configured to recognize sound waves from the vibrations induced in objects visible in a video. Parallel processor accelerated machine learning enables computer vision applications to be trained using significantly larger training datasets than previously feasible and enables inferencing systems to be deployed using low power parallel processors.
Parallel processor accelerated machine learning has autonomous driving applications including lane and road sign recognition, obstacle avoidance, navigation, and driving control. Accelerated machine learning techniques can be used to train driving models based on datasets that define the appropriate responses to specific training input. The parallel processors described herein can enable rapid training of the increasingly complex neural networks used for autonomous driving solutions and enable the deployment of low power inferencing processors in a mobile platform suitable for integration into autonomous vehicles.
Parallel processor accelerated deep neural networks have enabled machine learning approaches to automatic speech recognition (ASR). ASR includes the creation of a function that computes the most probable linguistic sequence given an input acoustic sequence. Accelerated machine learning using deep neural networks has enabled the replacement of the hidden Markov models (HMMs) and Gaussian mixture models (GMMs) previously used for ASR.
Parallel processor accelerated machine learning can also be used to accelerate natural language processing. Automatic learning procedures can make use of statistical inference algorithms to produce models that are robust to erroneous or unfamiliar input. Exemplary natural language processor applications include automatic machine translation between human languages.
The parallel processing platforms used for machine learning can be divided into training platforms and deployment platforms. Training platforms are generally highly parallel and include optimizations to accelerate multi-GPU single-node training and multi-node, multi-GPU training. Exemplary parallel processors suited for training include the general-purpose graphics processing unit 700 of FIG. 7 and the multi-GPU computing system 800 of FIG. 8. In contrast, deployed machine learning platforms generally include lower power parallel processors suitable for use in products such as cameras, autonomous robots, and autonomous vehicles.
Additionally, machine learning techniques can be applied to accelerate or enhance graphics processing activities. For example, a machine learning model can be trained to recognize output generated by a GPU accelerated application and generate an upscaled version of that output.
Such techniques can be applied to accelerate the generation of high-resolution images for a gaming application. Various other graphics pipeline activities can benefit from the use of machine learning. For example, machine learning models can be trained to perform tessellation operations on geometry data to increase the complexity of geometric models, allowing fine-detailed geometry to be automatically generated from geometry of relatively lower detail.
FIG. 13 illustrates an exemplary inferencing system on a chip (SOC) 1300 suitable for performing inferencing using a trained model. The SOC 1300 can integrate processing components including a media processor 1302, a vision processor 1304, a GPGPU 1306, and a multi-core processor 1308. The GPGPU 1306 may be a GPGPU as described herein, such as the GPGPU 700, and the multi-core processor 1308 may be a multi-core processor described herein, such as the multi-core processors 405-406. The SOC 1300 can additionally include on-chip memory 1305 that can enable a shared on-chip data pool that is accessible by each of the processing components. The processing components can be optimized for low power operation to enable deployment to a variety of machine learning platforms, including autonomous vehicles and autonomous robots. For example, one implementation of the SOC 1300 can be used as a portion of the main control system for an autonomous vehicle. Where the SOC 1300 is configured for use in autonomous vehicles, the SOC is designed and configured for compliance with the relevant functional safety standards of the deployment jurisdiction.
During operation, the media processor 1302 and vision processor 1304 can work in concert to accelerate computer vision operations. The media processor 1302 can enable low latency decode of multiple high-resolution (e.g., 4K, 8K) video streams. The decoded video streams can be written to a buffer in the on-chip memory 1305. The vision processor 1304 can then parse the decoded video and perform preliminary processing operations on the frames of the decoded video in preparation for processing the frames using a trained image recognition model. For example, the vision processor 1304 can accelerate convolution operations for a CNN that is used to perform image recognition on the high-resolution video data, while back-end model computations are performed by the GPGPU 1306.
The multi-core processor 1308 can include control logic to assist with sequencing and synchronization of data transfers and shared memory operations performed by the media processor 1302 and the vision processor 1304. The multi-core processor 1308 can also function as an application processor to execute software applications that can make use of the inferencing compute capability of the GPGPU 1306. For example, at least a portion of the navigation and driving logic can be implemented in software executing on the multi-core processor 1308. Such software can directly issue computational workloads to the GPGPU 1306, or the computational workloads can be issued to the multi-core processor 1308, which can offload at least a portion of those operations to the GPGPU 1306.
The GPGPU 1306 can include compute clusters such as a low power configuration of the processing clusters 706A-706H within the general-purpose graphics processing unit 700. The compute clusters within the GPGPU 1306 can support instructions that are specifically optimized to perform inferencing computations on a trained neural network. For example, the GPGPU 1306 can support instructions to perform low precision computations such as 8-bit and 4-bit integer vector operations.
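As an illustration of this kind of low precision computation, the following Python sketch quantizes weights and activations to 8-bit integers and performs an integer dot product with 32-bit accumulation. The symmetric scaling scheme shown is one common choice for illustration, not a description of any particular instruction or hardware.

    import numpy as np

    # Sketch of an 8-bit integer dot product with higher-precision accumulation.
    rng = np.random.default_rng(0)
    w_fp32 = rng.normal(size=16).astype(np.float32)
    x_fp32 = rng.normal(size=16).astype(np.float32)

    # Symmetric quantization: map the largest magnitude to the int8 range.
    w_scale = np.abs(w_fp32).max() / 127.0
    x_scale = np.abs(x_fp32).max() / 127.0
    w_int8 = np.round(w_fp32 / w_scale).astype(np.int8)
    x_int8 = np.round(x_fp32 / x_scale).astype(np.int8)

    # Integer dot product with 32-bit accumulation, then rescale to float.
    acc = np.dot(w_int8.astype(np.int32), x_int8.astype(np.int32))
    approx = acc * (w_scale * x_scale)  # approximates np.dot(w_fp32, x_fp32)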
Additional System Overview
FIG. 14 is a block diagram of a processing system 1400. The elements of FIG. 14 having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to those, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. System 1400 may be used in a single-processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 1402 or processor cores 1407. The system 1400 may be a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices, such as within Internet-of-things (IoT) devices with wired or wireless connectivity to a local or wide area network.
The system 1400 may be a processing system having components that correspond with those of FIG. 1. For example, in different configurations, processor(s) 1402 or processor core(s) 1407 may correspond with processor(s) 102 of FIG. 1. Graphics processor(s) 1408 may correspond with parallel processor(s) 112 of FIG. 1. External graphics processor 1418 may be one of the add-in device(s) 120 of FIG. 1.
The system 1400 can include, couple with, or be integrated within: a server-based gaming platform; a game console, including a game and media console; a mobile gaming console, a handheld game console, or an online game console. The system 1400 may be part of a mobile phone, smart phone, tablet computing device, or mobile Internet-connected device such as a laptop with low internal storage capacity. Processing system 1400 can also include, couple with, or be integrated within: a wearable device, such as a smart watch wearable device; smart eyewear or clothing enhanced with augmented reality (AR) or virtual reality (VR) features to provide visual, audio, or tactile outputs to supplement real-world visual, audio, or tactile experiences or otherwise provide text, audio, graphics, video, holographic images or video, or tactile feedback; other augmented reality (AR) device; or other virtual reality (VR) device. The processing system 1400 may include or be part of a television or set top box device. The system 1400 can include, couple with, or be integrated within a self-driving vehicle such as a bus, tractor trailer, car, motor or electric power cycle, plane, or glider (or any combination thereof). The self-driving vehicle may use system 1400 to process the environment sensed around the vehicle.
The one or more processors 1402 may include one or more processor cores 1407 to process instructions which, when executed, perform operations for system or user software. At least one of the one or more processor cores 1407 may be configured to process a specific instruction set 1409. The instruction set 1409 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). One or more processor cores 1407 may process a different instruction set 1409, which may include instructions to facilitate the emulation of other instruction sets. Processor core 1407 may also include other processing devices, such as a Digital Signal Processor (DSP).
The processor 1402 may include cache memory 1404.
Depending on the architecture, the processor 1402 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 1402. In some embodiments, the processor 1402 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 1407 using known cache coherency techniques. A register file 1406 can additionally be included in processor 1402 and may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 1402.
The one or more processor(s) 1402 may be coupled with one or more interface bus(es) 1410 to transmit communication signals such as address, data, or control signals between processor 1402 and other components in the system 1400. The interface bus 1410, in one of these embodiments, can be a processor bus, such as a version of the Direct Media Interface (DMI) bus. However, processor busses are not limited to the DMI bus and may include one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express), memory busses, or other types of interface busses. For example, the processor(s) 1402 may include an integrated memory controller 1416 and a platform controller hub 1430. The memory controller 1416 facilitates communication between a memory device and other components of the system 1400, while the platform controller hub (PCH) 1430 provides connections to I/O devices via a local I/O bus.
The memory device 1420 can be a dynamic random-access memory (DRAM) device, a static random-access memory (SRAM) device, a flash memory device, a phase-change memory device, or some other memory device having suitable performance to serve as process memory. The memory device 1420 can, for example, operate as system memory for the system 1400, to store data 1422 and instructions 1421 for use when the one or more processors 1402 execute an application or process. Memory controller 1416 also couples with an optional external graphics processor 1418, which may communicate with the one or more graphics processors 1408 in processors 1402 to perform graphics and media operations. In some embodiments, graphics, media, and/or compute operations may be assisted by an accelerator 1412, which is a coprocessor that can be configured to perform a specialized set of graphics, media, or compute operations. For example, the accelerator 1412 may be a matrix multiplication accelerator used to optimize machine learning or compute operations. The accelerator 1412 can be a ray-tracing accelerator that can be used to perform ray-tracing operations in concert with the graphics processor 1408. In one embodiment, an external accelerator 1419 may be used in place of or in concert with the accelerator 1412.
A display device 1411 may be provided that can connect to the processor(s) 1402. The display device 1411 can be one or more of an internal display device, as in a mobile electronic device or a laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.).
The display device 1411 can be a head mounted display (HMD), such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications.
The platform controller hub 1430 may enable peripherals to connect to memory device 1420 and processor 1402 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 1446, a network controller 1434, a firmware interface 1428, a wireless transceiver 1426, touch sensors 1425, and a data storage device 1424 (e.g., non-volatile memory, volatile memory, hard disk drive, flash memory, NAND, 3D NAND, 3D XPoint/Optane, etc.). The data storage device 1424 can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a Peripheral Component Interconnect bus (e.g., PCI, PCI Express). The touch sensors 1425 can include touch screen sensors, pressure sensors, or fingerprint sensors. The wireless transceiver 1426 can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, 5G, or Long-Term Evolution (LTE) transceiver. The firmware interface 1428 enables communication with system firmware and can be, for example, a unified extensible firmware interface (UEFI). The network controller 1434 can enable a network connection to a wired network. In some embodiments, a high-performance network controller (not shown) couples with the interface bus 1410. The audio controller 1446 may be a multi-channel high definition audio controller. In some of these embodiments, the system 1400 includes an optional legacy I/O controller 1440 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. The platform controller hub 1430 can also connect to one or more Universal Serial Bus (USB) controllers 1442 that connect input devices, such as keyboard and mouse 1443 combinations, a camera 1444, or other USB input devices.
It will be appreciated that the system 1400 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, an instance of the memory controller 1416 and platform controller hub 1430 may be integrated into a discrete external graphics processor, such as the external graphics processor 1418. The platform controller hub 1430 and/or memory controller 1416 may be external to the one or more processor(s) 1402. For example, the system 1400 can include an external memory controller 1416 and platform controller hub 1430, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with the processor(s) 1402.
For example, circuit boards ("sleds") on which components such as CPUs, memory, and other components are placed can be designed for increased thermal performance. Processing components such as the processors may be located on a top side of a sled, while near memory, such as DIMMs, is located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in a rack, thereby enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced.
Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.
A data center can utilize a single network architecture ("fabric") that supports multiple other network architectures, including Ethernet and Omni-Path. The sleds can be coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center may, in use, pool resources, such as memory, accelerators (e.g., GPUs, graphics accelerators, FPGAs, ASICs, neural network and/or artificial intelligence accelerators, etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as-needed basis, enabling the compute resources to access the pooled resources as if they were local.
A power supply or source can provide voltage and/or current to system 1400 or any component or system described herein. In one example, the power supply includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can come from a renewable energy (e.g., solar power) source. In one example, the power source includes a DC power source, such as an external AC to DC converter. A power source or power supply may also include wireless charging hardware to charge via proximity to a charging field. The power source can include an internal battery, alternating current supply, motion-based power supply, solar power supply, or fuel cell source.
FIG. 15A-15C illustrate computing systems and graphics processors. The elements of FIG. 15A-15C having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such.
FIG. 15A is a block diagram of a processor 1500, which may be a variant of one of the processors 1402 and may be used in place of one of those. Therefore, the disclosure of any features in combination with the processor 1500 herein also discloses a corresponding combination with the processor(s) 1402, but is not limited to such. The processor 1500 may have one or more processor cores 1502A-1502N, an integrated memory controller 1514, and an integrated graphics processor 1508. Where an integrated graphics processor 1508 is excluded, the system that includes the processor will include a graphics processor device within a system chipset or coupled via a system bus. Processor 1500 can include additional cores up to and including additional core 1502N, represented by the dashed lined boxes. Each of processor cores 1502A-1502N includes one or more internal cache units 1504A-1504N. In some embodiments each processor core 1502A-1502N also has access to one or more shared cache units 1506. The internal cache units 1504A-1504N and shared cache units 1506 represent a cache memory hierarchy within the processor 1500.
The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 1506 and 1504A-1504N.
The processor 1500 may also include a set of one or more bus controller units 1516 and a system agent core 1510. The one or more bus controller units 1516 manage a set of peripheral buses, such as one or more PCI or PCI express busses. System agent core 1510 provides management functionality for the various processor components. The system agent core 1510 may include one or more integrated memory controllers 1514 to manage access to various external memory devices (not shown).
For example, one or more of the processor cores 1502A-1502N may include support for simultaneous multi-threading. The system agent core 1510 includes components for coordinating and operating cores 1502A-1502N during multi-threaded processing. System agent core 1510 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores 1502A-1502N and graphics processor 1508.
The processor 1500 may additionally include graphics processor 1508 to execute graphics processing operations. In some of these embodiments, the graphics processor 1508 couples with the set of shared cache units 1506 and the system agent core 1510, including the one or more integrated memory controllers 1514. The system agent core 1510 may also include a display controller 1511 to drive graphics processor output to one or more coupled displays. The display controller 1511 may also be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 1508.
A ring-based interconnect unit 1512 may be used to couple the internal components of the processor 1500. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some of these embodiments with a ring-based interconnect 1512, the graphics processor 1508 couples with the ring-based interconnect 1512 via an I/O link 1513.
The exemplary I/O link 1513 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 1518, such as an eDRAM module. Optionally, each of the processor cores 1502A-1502N and graphics processor 1508 can use the embedded memory modules 1518 as a shared Last Level Cache.
The processor cores 1502A-1502N may, for example, be homogenous cores executing the same instruction set architecture. Alternatively, the processor cores 1502A-1502N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 1502A-1502N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. The processor cores 1502A-1502N may be heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption.
As another example, the processor cores 1502A-1502N are heterogeneous in terms of computational capability. Additionally, processor 1500 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.
FIG. 15B is a block diagram of hardware logic of a graphics processor core 1519, according to some embodiments described herein. The graphics processor core 1519, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. The graphics processor core 1519 is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. Each graphics processor core 1519 can include a fixed function block 1530 coupled with multiple sub-cores 1521A-1521F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic.
The fixed function block 1530 may include a geometry/fixed function pipeline 1531 that can be shared by all sub-cores in the graphics processor core 1519, for example, in lower performance and/or lower power graphics processor implementations. The geometry/fixed function pipeline 1531 may include a 3D fixed function pipeline (e.g., 3D pipeline 1612 as in FIG. 16A, described below), a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers (e.g., unified return buffer 1718 in FIG. 17, as described below).
The fixed function block 1530 may also include a graphics SoC interface 1532, a graphics microcontroller 1533, and a media pipeline 1534. The graphics SoC interface 1532 provides an interface between the graphics processor core 1519 and other processor cores within a system on a chip integrated circuit. The graphics microcontroller 1533 is a programmable sub-processor that is configurable to manage various functions of the graphics processor core 1519, including thread dispatch, scheduling, and pre-emption. The media pipeline 1534 (e.g., media pipeline 1616 of FIG. 16A and FIG. 17) includes logic to facilitate the decoding, encoding, preprocessing, and/or post-processing of multimedia data, including image and video data. The media pipeline 1534 implements media operations via requests to compute or sampling logic within the sub-cores 1521A-1521F.
The SoC interface 1532 may enable the graphics processor core 1519 to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, the system RAM, and/or embedded on-chip or on-package DRAM. The SoC interface 1532 can also enable communication with fixed function devices within the SoC, such as camera imaging pipelines, and enables the use of and/or implements global memory atomics that may be shared between the graphics processor core 1519 and CPUs within the SoC. The SoC interface 1532 can also implement power management controls for the graphics processor core 1519 and enable an interface between a clock domain of the graphics processor core 1519 and other clock domains within the SoC. Optionally, the SoC interface 1532 enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor.
The commands and instructions can be dispatched to the media pipeline 1534 when media operations are to be performed, or to a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline 1531, geometry and fixed function pipeline 1537) when graphics processing operations are to be performed.
The graphics microcontroller 1533 can be configured to perform various scheduling and management tasks for the graphics processor core 1519. In one configuration the graphics microcontroller 1533 can, for example, perform graphics and/or compute workload scheduling on the various graphics parallel engines within execution unit (EU) arrays 1522A-1522F, 1524A-1524F within the sub-cores 1521A-1521F. In this workload scheduling, host software executing on a CPU core of an SoC including the graphics processor core 1519 can submit workloads to one of multiple graphics processor doorbells, which invokes a scheduling operation on the appropriate graphics engine. Scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. Optionally, the graphics microcontroller 1533 can also facilitate low-power or idle states for the graphics processor core 1519, providing the graphics processor core 1519 with the ability to save and restore registers within the graphics processor core 1519 across low-power state transitions independently from the operating system and/or graphics driver software on the system.
The graphics processor core 1519 may have more or fewer than the illustrated sub-cores 1521A-1521F, up to N modular sub-cores. For each set of N sub-cores, the graphics processor core 1519 can also include shared function logic 1535, shared and/or cache memory 1536, a geometry/fixed function pipeline 1537, as well as additional fixed function logic 1538 to accelerate various graphics and compute processing operations. The shared function logic 1535 can include logic units associated with the shared function logic 1720 of FIG. 17 (e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each of the N sub-cores within the graphics processor core 1519. The shared and/or cache memory 1536 can be a last-level cache for the set of N sub-cores 1521A-1521F within the graphics processor core 1519 and can also serve as shared memory that is accessible by multiple sub-cores. The geometry/fixed function pipeline 1537 can be included instead of the geometry/fixed function pipeline 1531 within the fixed function block 1530 and can include the same or similar logic units.
The graphics processor core 1519 may include additional fixed function logic 1538 that can include various fixed function acceleration logic for use by the graphics processor core 1519. Optionally, the additional fixed function logic 1538 includes an additional geometry pipeline for use in position only shading. In position-only shading, two geometry pipelines exist: the full geometry pipeline within the geometry/fixed function pipeline 1531, 1537, and a cull pipeline, which is an additional geometry pipeline that may be included within the additional fixed function logic 1538. For example, the cull pipeline may be a trimmed down version of the full geometry pipeline. The full pipeline and the cull pipeline can execute different instances of the same application, each instance having a separate context.
Position only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances. For example, the cull pipeline logic within the additional fixed function logic 1538 can execute position shaders in parallel with the main application and generally generates critical results faster than the full pipeline, as the cull pipeline fetches and shades only the position attribute of the vertices, without performing rasterization and rendering of the pixels to the frame buffer. The cull pipeline can use the generated critical results to compute visibility information for all the triangles without regard to whether those triangles are culled. The full pipeline (which in this instance may be referred to as a replay pipeline) can consume the visibility information to skip the culled triangles to shade only the visible triangles that are finally passed to the rasterization phase.
Optionally, the additional fixed function logic 1538 can also include machine-learning acceleration logic, such as fixed function matrix multiplication logic, for implementations including optimizations for machine learning training or inferencing.
Within each graphics sub-core 1521A-1521F a set of execution resources is included that may be used to perform graphics, media, and compute operations in response to requests by a graphics pipeline, media pipeline, or shader programs. The graphics sub-cores 1521A-1521F include multiple EU arrays 1522A-1522F, 1524A-1524F, thread dispatch and inter-thread communication (TD/IC) logic 1523A-1523F, a 3D (e.g., texture) sampler 1525A-1525F, a media sampler 1526A-1526F, a shader processor 1527A-1527F, and shared local memory (SLM) 1528A-1528F. The EU arrays 1522A-1522F, 1524A-1524F each include multiple execution units, which are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader programs. The TD/IC logic 1523A-1523F performs local thread dispatch and thread control operations for the execution units within a sub-core and facilitates communication between threads executing on the execution units of the sub-core. The 3D sampler 1525A-1525F can read texture or other 3D graphics related data into memory. The 3D sampler can read texture data differently based on a configured sample state and the texture format associated with a given texture. The media sampler 1526A-1526F can perform similar read operations based on the type and format associated with media data. For example, each graphics sub-core 1521A-1521F can alternately include a unified 3D and media sampler. Threads executing on the execution units within each of the sub-cores 1521A-1521F can make use of shared local memory 1528A-1528F within each sub-core, to enable threads executing within a thread group to execute using a common pool of on-chip memory.
FIG. 15C is a block diagram of a general-purpose graphics processing unit (GPGPU) 1570 that can be configured as a graphics processor, e.g., the graphics processor 1508, and/or a compute accelerator, according to embodiments described herein. The GPGPU 1570 can interconnect with host processors (e.g., one or more CPU(s) 1546) and memory 1571, 1572 via one or more system and/or memory busses. Memory 1571 may be system memory that can be shared with the one or more CPU(s) 1546, while memory 1572 is device memory that is dedicated to the GPGPU 1570.
For example, components within the GPGPU 1570 and memory 1572 may be mapped into memory addresses that are accessible to the one or more CPU(s) 1546. Access to memory 1571 and 1572 may be facilitated via a memory controller 1568. The memory controller 1568 may include an internal direct memory access (DMA) controller 1569 or can include logic to perform operations that would otherwise be performed by a DMA controller.
The GPGPU 1570 includes multiple cache memories, including an L2 cache 1553, L1 cache 1554, an instruction cache 1555, and shared memory 1556, at least a portion of which may also be partitioned as a cache memory. The GPGPU 1570 also includes multiple compute units 1560A-1560N. Each compute unit 1560A-1560N includes a set of vector registers 1561, scalar registers 1562, vector logic units 1563, and scalar logic units 1564. The compute units 1560A-1560N can also include local shared memory 1565 and a program counter 1566. The compute units 1560A-1560N can couple with a constant cache 1567, which can be used to store constant data, which is data that will not change during the run of a kernel or shader program that executes on the GPGPU 1570. The constant cache 1567 may be a scalar data cache, and cached data can be fetched directly into the scalar registers 1562.
During operation, the one or more CPU(s) 1546 can write commands into registers or memory in the GPGPU 1570 that has been mapped into an accessible address space. The command processors 1557 can read the commands from registers or memory and determine how those commands will be processed within the GPGPU 1570. A thread dispatcher 1558 can then be used to dispatch threads to the compute units 1560A-1560N to perform those commands. Each compute unit 1560A-1560N can execute threads independently of the other compute units. Additionally, each compute unit 1560A-1560N can be independently configured for conditional computation and can conditionally output the results of computation to memory. The command processors 1557 can interrupt the one or more CPU(s) 1546 when the submitted commands are complete.
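The submission flow just described — the CPU(s) 1546 writing commands into mapped registers or memory, the command processors 1557 consuming them, and the thread dispatcher 1558 fanning threads out to the compute units 1560A-1560N — can be sketched in software. The following C++ toy model is illustrative only; the Command fields, the queue standing in for mapped registers, and the round-robin dispatch are assumptions for exposition, not the hardware's actual formats or policies:

```cpp
#include <cstdint>
#include <iostream>
#include <queue>

// Hypothetical command record; real hardware defines its own packet formats.
struct Command {
    uint32_t opcode;       // e.g., kernel launch, memory copy
    uint32_t num_threads;  // threads to dispatch for this command
};

// Toy model: the host writes commands into a region mapped into its address
// space, the command processor reads them, and a thread dispatcher fans work
// out across the compute units.
class ToyGpgpu {
public:
    void host_submit(const Command& c) { mapped_queue_.push(c); }  // CPU-side write

    void command_processor_step() {  // device-side consumption
        while (!mapped_queue_.empty()) {
            Command c = mapped_queue_.front();
            mapped_queue_.pop();
            dispatch(c);
        }
        // A real command processor would interrupt the CPU here instead.
        std::cout << "all submitted commands complete\n";
    }

private:
    void dispatch(const Command& c) {
        // Round-robin threads across N compute units, as a dispatcher might.
        const int kComputeUnits = 4;
        for (uint32_t t = 0; t < c.num_threads; ++t)
            std::cout << "opcode " << c.opcode << " thread " << t
                      << " -> compute unit " << (t % kComputeUnits) << "\n";
    }
    std::queue<Command> mapped_queue_;  // stands in for mapped registers/memory
};

int main() {
    ToyGpgpu gpu;
    gpu.host_submit({/*opcode=*/1, /*num_threads=*/8});
    gpu.command_processor_step();
}
```

In real hardware the host would typically ring a doorbell register after writing, and completion would be signaled back with an interrupt rather than console output.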
FIG. 16A-16C illustrate block diagrams of additional graphics processor and compute accelerator architectures provided by embodiments described herein, e.g., in accordance with FIG. 15A-15C. The elements of FIG. 16A-16C having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such.
FIG. 16A is a block diagram of a graphics processor 1600, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores, or other semiconductor devices such as, but not limited to, memory devices or network interfaces. The graphics processor 1600 may be a variant of the graphics processor 1508 and may be used in place of the graphics processor 1508. Therefore, the disclosure of any features in combination with the graphics processor 1508 herein also discloses a corresponding combination with the graphics processor 1600, but is not limited to such. The graphics processor may communicate via a memory mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. Graphics processor 1600 may include a memory interface 1614 to access memory. Memory interface 1614 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.
Optionally, graphics processor 1600 also includes a display controller 1602 to drive display output data to a display device 1618. Display controller 1602 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. The display device 1618 can be an internal or external display device. In one embodiment the display device 1618 is a head mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device. Graphics processor 1600 may include a video codec engine 1606 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC and H.265/HEVC, Alliance for Open Media (AOMedia) VP8 and VP9, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG and Motion JPEG (MJPEG).
Graphics processor 1600 may include a block image transfer (BLIT) engine 1603 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, alternatively, 2D graphics operations may be performed using one or more components of graphics processing engine (GPE) 1610. In some embodiments, GPE 1610 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.
GPE 1610 may include a 3D pipeline 1612 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline 1612 includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/Media subsystem 1615. While the 3D pipeline 1612 can be used to perform media operations, an embodiment of GPE 1610 also includes a media pipeline 1616 that is specifically used to perform media operations, such as video post-processing and image enhancement.
Media pipeline 1616 may include fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of, video codec engine 1606. Media pipeline 1616 may additionally include a thread spawning unit to spawn threads for execution on 3D/Media subsystem 1615. The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/Media subsystem 1615.
The 3D/Media subsystem 1615 may include logic for executing threads spawned by 3D pipeline 1612 and media pipeline 1616. The pipelines may send thread execution requests to 3D/Media subsystem 1615, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. The 3D/Media subsystem 1615 may include one or more internal caches for thread instructions and data.
Additionally, the 3D/Media subsystem 1615 may also include shared memory, including registers and addressable memory, to share data between threads and to store output data.
FIG. 16B illustrates a graphics processor 1620, which is a variant of the graphics processor 1600, may be used in place of the graphics processor 1600, and vice versa. Therefore, the disclosure of any features in combination with the graphics processor 1600 herein also discloses a corresponding combination with the graphics processor 1620, but is not limited to such. The graphics processor 1620 has a tiled architecture, according to embodiments described herein. The graphics processor 1620 may include a graphics processing engine cluster 1622 having multiple instances of the graphics processing engine 1610 of FIG. 16A within a graphics engine tile 1610A-1610D. Each graphics engine tile 1610A-1610D can be interconnected via a set of tile interconnects 1623A-1623F. Each graphics engine tile 1610A-1610D can also be connected to a memory module or memory device 1626A-1626D via memory interconnects 1625A-1625D. The memory devices 1626A-1626D can use any graphics memory technology. For example, the memory devices 1626A-1626D may be graphics double data rate (GDDR) memory. The memory devices 1626A-1626D may be high-bandwidth memory (HBM) modules that can be on-die with their respective graphics engine tile 1610A-1610D. The memory devices 1626A-1626D may be stacked memory devices that can be stacked on top of their respective graphics engine tile 1610A-1610D. Each graphics engine tile 1610A-1610D and associated memory 1626A-1626D may reside on separate chiplets, which are bonded to a base die or base substrate, as described in further detail in FIG. 24B-24D.
The graphics processor 1620 may be configured with a non-uniform memory access (NUMA) system in which memory devices 1626A-1626D are coupled with associated graphics engine tiles 1610A-1610D. A given memory device may be accessed by graphics engine tiles other than the tile to which it is directly connected. However, access latency to the memory devices 1626A-1626D may be lowest when accessing a local tile. In one embodiment, a cache coherent NUMA (ccNUMA) system is enabled that uses the tile interconnects 1623A-1623F to enable communication between cache controllers within the graphics engine tiles 1610A-1610D to keep a consistent memory image when more than one cache stores the same memory location.
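To make the NUMA behavior above concrete, the following C++ sketch models a placement policy a driver might apply: accesses to a tile's directly attached memory device are cheapest, so a buffer is placed on the device attached to the tile that touches it most. The tile count, cost constants, and hop-based latency model are invented for illustration and do not reflect measured hardware latencies:

```cpp
#include <array>
#include <cstdio>

// Toy latency model for the NUMA arrangement described above: each graphics
// engine tile has a directly attached memory device; remote devices are
// reachable over the tile interconnects at higher cost. Numbers are
// illustrative only.
constexpr int kTiles = 4;

int access_cost(int requesting_tile, int memory_device) {
    const int kLocal = 1;      // directly attached device
    const int kRemoteHop = 3;  // per-hop penalty over the tile interconnect
    int hops = requesting_tile > memory_device
                   ? requesting_tile - memory_device
                   : memory_device - requesting_tile;
    return hops == 0 ? kLocal : kLocal + hops * kRemoteHop;
}

// Allocation policy a driver might apply: place a buffer on the device
// attached to the tile that will touch it most, minimizing total cost.
int choose_device(const std::array<int, kTiles>& accesses_per_tile) {
    int best = 0, best_cost = -1;
    for (int dev = 0; dev < kTiles; ++dev) {
        int cost = 0;
        for (int t = 0; t < kTiles; ++t)
            cost += accesses_per_tile[t] * access_cost(t, dev);
        if (best_cost < 0 || cost < best_cost) { best_cost = cost; best = dev; }
    }
    return best;
}

int main() {
    std::array<int, kTiles> accesses = {100, 10, 0, 0};  // tile 0 dominates
    std::printf("place buffer on memory device %d\n", choose_device(accesses));
}
```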
The graphics processing engine cluster 1622 can connect with an on-chip or on-package fabric interconnect 1624. In one embodiment the fabric interconnect 1624 includes a network processor, network on a chip (NoC), or another switching processor to enable the fabric interconnect 1624 to act as a packet switched fabric interconnect that switches data packets between components of the graphics processor 1620. The fabric interconnect 1624 can enable communication between graphics engine tiles 1610A-1610D and components such as the video codec engine 1606 and one or more copy engines 1604. The copy engines 1604 can be used to move data out of, into, and between the memory devices 1626A-1626D and memory that is external to the graphics processor 1620 (e.g., system memory). The fabric interconnect 1624 can also be used to interconnect the graphics engine tiles 1610A-1610D. The graphics processor 1620 may optionally include a display controller 1602 to enable a connection with an external display device 1618. The graphics processor may also be configured as a graphics or compute accelerator. In the accelerator configuration, the display controller 1602 and display device 1618 may be omitted.
The graphics processor 1620 can connect to a host system via a host interface 1628. The host interface 1628 can enable communication between the graphics processor 1620, system memory, and/or other system components. The host interface 1628 can be, for example, a PCI express bus or another type of host system interface. For example, the host interface 1628 may be an NVLink or NVSwitch interface. The host interface 1628 and fabric interconnect 1624 can cooperate to enable multiple instances of the graphics processor 1620 to act as a single logical device. Cooperation between the host interface 1628 and fabric interconnect 1624 can also enable the individual graphics engine tiles 1610A-1610D to be presented to the host system as distinct logical graphics devices.
FIG. 16C illustrates a compute accelerator 1630, according to embodiments described herein. The compute accelerator 1630 can include architectural similarities with the graphics processor 1620 of FIG. 16B and is optimized for compute acceleration. A compute engine cluster 1632 can include a set of compute engine tiles 1640A-1640D that include execution logic that is optimized for parallel or vector-based general-purpose compute operations. The compute engine tiles 1640A-1640D may not include fixed function graphics processing logic, although in some embodiments one or more of the compute engine tiles 1640A-1640D can include logic to perform media acceleration. The compute engine tiles 1640A-1640D can connect to memory 1626A-1626D via memory interconnects 1625A-1625D. The memory 1626A-1626D and memory interconnects 1625A-1625D may use similar technology as in the graphics processor 1620, or can be different. The compute engine tiles 1640A-1640D can also be interconnected via a set of tile interconnects 1623A-1623F and may be connected with and/or interconnected by a fabric interconnect 1624. In one embodiment the compute accelerator 1630 includes a large L3 cache 1636 that can be configured as a device-wide cache. The compute accelerator 1630 can also connect to a host processor and memory via a host interface 1628 in a similar manner as the graphics processor 1620 of FIG. 16B.
The compute accelerator 1630 can also include an integrated network interface 1642. In one embodiment the integrated network interface 1642 includes a network processor and controller logic that enables the compute engine cluster 1632 to communicate over a physical layer interconnect 1644 without requiring data to traverse memory of a host system. In one embodiment, one of the compute engine tiles 1640A-1640D is replaced by network processor logic, and data to be transmitted or received via the physical layer interconnect 1644 may be transmitted directly to or from memory 1626A-1626D. Multiple instances of the compute accelerator 1630 may be joined via the physical layer interconnect 1644 into a single logical device. Alternatively, the various compute engine tiles 1640A-1640D may be presented as distinct network accessible compute accelerator devices.
Graphics Processing Engine
FIG. 17 is a block diagram of a graphics processing engine 1710 of a graphics processor in accordance with some embodiments. The graphics processing engine (GPE) 1710 may be a version of the GPE 1610 shown in FIG. 16A, and may also represent a graphics engine tile 1610A-1610D of FIG. 16B.
The elements of FIG. 17 having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. For example, the 3D pipeline 1612 and media pipeline 1616 of FIG. 16A are also illustrated in FIG. 17. The media pipeline 1616 is optional in some embodiments of the GPE 1710 and may not be explicitly included within the GPE 1710. For example, and in at least one embodiment, a separate media and/or image processor is coupled to the GPE 1710.
GPE 1710 may couple with or include a command streamer 1703, which provides a command stream to the 3D pipeline 1612 and/or media pipeline 1616. Alternatively or additionally, the command streamer 1703 may be directly coupled to a unified return buffer 1718. The unified return buffer 1718 may be communicatively coupled to a graphics core array 1714. Optionally, the command streamer 1703 is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. The command streamer 1703 may receive commands from the memory and send the commands to the 3D pipeline 1612 and/or media pipeline 1616. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline 1612 and media pipeline 1616. The ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline 1612 can also include references to data stored in memory, such as but not limited to vertex and geometry data for the 3D pipeline 1612 and/or image data and memory objects for the media pipeline 1616. The 3D pipeline 1612 and media pipeline 1616 process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to the graphics core array 1714.
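The ring buffer from which the command streamer 1703 fetches directives can be modeled as a single-producer/single-consumer queue: software advances a head index as it writes commands, and the streamer advances a tail index as it fetches them. The following minimal C++ sketch uses invented Directive fields and a simple modular ring; it is not the hardware command format:

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <optional>

// Minimal single-producer/single-consumer ring, in the spirit of the ring
// buffer the command streamer drains. Field names and sizes are illustrative.
template <typename T, size_t N>
class RingBuffer {
public:
    bool put(const T& v) {                 // producer: software writes a command
        size_t next = (head_ + 1) % N;
        if (next == tail_) return false;   // full
        buf_[head_] = v;
        head_ = next;
        return true;
    }
    std::optional<T> get() {               // consumer: command streamer fetch
        if (tail_ == head_) return std::nullopt;  // empty
        T v = buf_[tail_];
        tail_ = (tail_ + 1) % N;
        return v;
    }
private:
    std::array<T, N> buf_{};
    size_t head_ = 0, tail_ = 0;
};

struct Directive { uint32_t opcode; uint64_t payload_addr; };

int main() {
    RingBuffer<Directive, 16> ring;
    ring.put({0x3D, 0x1000});   // e.g., a 3D pipeline command referencing vertex data
    ring.put({0x4D, 0x2000});   // e.g., a media pipeline command
    while (auto d = ring.get())                  // streamer drain loop
        std::printf("fetched opcode 0x%x\n", d->opcode);  // route to a pipeline
}
```

A batch command buffer would correspond to a directive whose payload address points at a secondary buffer of commands that the consumer executes before resuming the ring.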
The graphics core array 1714 may include one or more blocks of graphics cores (e.g., graphics core(s) 1715A, graphics core(s) 1715B), each block including one or more graphics cores. Each graphics core includes a set of graphics execution resources that includes general-purpose and graphics specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic.
In various embodiments the 3D pipeline 1612 can include fixed function and programmable logic to process one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing the instructions and dispatching execution threads to the graphics core array 1714. The graphics core array 1714 provides a unified block of execution resources for use in processing these shader programs. Multi-purpose execution logic (e.g., execution units) within the graphics core(s) 1715A-1715B of the graphics core array 1714 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.
The graphics core array 1714 may include execution logic to perform media functions, such as video and/or image processing. The execution units may include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations. The general-purpose logic can perform processing operations in parallel or in conjunction with general-purpose logic within the processor core(s) 1407 of FIG. 14 or cores 1502A-1502N as in FIG. 15A.
Output data generated by threads executing on the graphics core array 1714 can be written to memory in a unified return buffer (URB) 1718. The URB 1718 can store data for multiple threads. The URB 1718 may be used to send data between different threads executing on the graphics core array 1714. The URB 1718 may additionally be used for synchronization between threads on the graphics core array 1714 and fixed function logic within the shared function logic 1720.
Optionally, the graphics core array 1714 may be scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of GPE 1710. The execution resources may be dynamically scalable, such that execution resources may be enabled or disabled as needed.
The graphics core array 1714 couples with shared function logic 1720 that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within the shared function logic 1720 are hardware logic units that provide specialized supplemental functionality to the graphics core array 1714. In various embodiments, shared function logic 1720 includes but is not limited to sampler 1721, math 1722, and inter-thread communication (ITC) 1723 logic. Additionally, one or more cache(s) 1725 within the shared function logic 1720 may be implemented.
A shared function is implemented at least in a case where the demand for a given specialized function is insufficient for inclusion within the graphics core array 1714. Instead, a single instantiation of that specialized function is implemented as a stand-alone entity in the shared function logic 1720 and shared among the execution resources within the graphics core array 1714. The precise set of functions that are shared between the graphics core array 1714 and included within the graphics core array 1714 varies across embodiments. Specific shared functions within the shared function logic 1720 that are used extensively by the graphics core array 1714 may be included within shared function logic 1716 within the graphics core array 1714. Optionally, the shared function logic 1716 within the graphics core array 1714 can include some or all logic within the shared function logic 1720. All logic elements within the shared function logic 1720 may be duplicated within the shared function logic 1716 of the graphics core array 1714. Alternatively, the shared function logic 1720 is excluded in favor of the shared function logic 1716 within the graphics core array 1714.
Execution Units
FIG. 18A-18B illustrate thread execution logic 1800 including an array of processing elements employed in a graphics processor core according to embodiments described herein. The elements of FIG. 18A-18B having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. FIG. 18A-18B illustrate an overview of thread execution logic 1800, which may be representative of hardware logic illustrated with each sub-core 1521A-1521F of FIG. 15B.
FIG. 18A is representative of an execution unit within a general-purpose graphics processor, while FIG. 18B is representative of an execution unit that may be used within a compute accelerator.
As illustrated in FIG. 18A, thread execution logic 1800 may include a shader processor 1802, a thread dispatcher 1804, an instruction cache 1806, a scalable execution unit array including a plurality of graphics execution units 1808A-1808N, a sampler 1810, shared local memory 1811, a data cache 1812, and a data port 1814. Optionally, the scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of graphics execution units 1808A, 1808B, 1808C, 1808D, through 1808N-1 and 1808N) based on the computational requirements of a workload. The included components may be interconnected via an interconnect fabric that links to each of the components. Thread execution logic 1800 may include one or more connections to memory, such as system memory or cache memory, through one or more of the instruction cache 1806, data port 1814, sampler 1810, and graphics execution units 1808A-1808N. Each execution unit (e.g., 1808A) may be a stand-alone programmable general-purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In various embodiments, the array of execution units 1808A-1808N is scalable to include any number of individual execution units.
In some embodiments the graphics execution units 1808A-1808N may be primarily used to execute shader programs. A shader processor 1802 can process the various shader programs and dispatch execution threads associated with the shader programs via a thread dispatcher 1804. The thread dispatcher may include logic to arbitrate thread initiation requests from the graphics and media pipelines and instantiate the requested threads on one or more execution units in the graphics execution units 1808A-1808N. For example, a geometry pipeline can dispatch vertex, tessellation, or geometry shaders to the thread execution logic for processing. Optionally, the thread dispatcher 1804 can also process runtime thread spawning requests from the executing shader programs.
In some embodiments, the graphics execution units 1808A-1808N may support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders), and general-purpose processing (e.g., compute and media shaders). Each of the graphics execution units 1808A-1808N is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment in the face of higher latency memory accesses. Each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread-state. Execution is multi-issue per clock to pipelines capable of integer, single and double precision floating point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations.
While waiting for data from memory or one of the shared functions, dependency logic within the execution units 1808A-1808N causes a waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader, such as vertex shader 2107 illustrated in FIG. 21. Various embodiments can use execution by way of Single Instruction Multiple Thread (SIMT), as an alternative to SIMD or in addition to SIMD. Reference to a SIMD core or operation can apply also to SIMT, or apply to SIMD in combination with SIMT.
Each execution unit in graphics execution units 1808A-1808N operates on arrays of data elements. The number of data elements is the "execution size," or the number of channels for the instruction. An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs), Floating-Point Units (FPUs), or other logic units (e.g., tensor cores, ray tracing cores, etc.) for a particular graphics processor. Additionally, the graphics execution units 1808A-1808N may support integer and floating-point data types.
The execution unit instruction set includes SIMD instructions. The various data elements can be stored as a packed data type in a register, and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.
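The packed-data interpretation described above can be illustrated directly: the same 256 bits of register state can be viewed as four QW, eight DW, sixteen W, or thirty-two B channels, and an instruction at a given execution size operates across all channels of that width at once. A minimal C++ illustration, using memcpy to reinterpret the bytes portably:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// The same 256-bit register contents viewed at different channel widths.
int main() {
    uint8_t reg[32] = {};                       // one 256-bit register
    for (int i = 0; i < 32; ++i) reg[i] = static_cast<uint8_t>(i);

    uint64_t qw[4];  std::memcpy(qw, reg, 32);  //  4 x 64-bit (Quad-Word) channels
    uint32_t dw[8];  std::memcpy(dw, reg, 32);  //  8 x 32-bit (Double Word) channels
    uint16_t w[16];  std::memcpy(w,  reg, 32);  // 16 x 16-bit (Word) channels
    uint8_t  b[32];  std::memcpy(b,  reg, 32);  // 32 x  8-bit (byte) channels

    // An "add" at DW execution size touches all eight 32-bit channels at once.
    for (int ch = 0; ch < 8; ++ch) dw[ch] += 1;
    std::printf("channel 0 as DW: 0x%08x\n", dw[0]);
}
```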
Optionally, one or more execution units can be combined into a fused graphics execution unit 1809A-1809N having thread control logic (1807A-1807N) that is common to the fused EUs. Multiple EUs can be fused into an EU group. Each EU in the fused EU group can be configured to execute a separate SIMD hardware thread. The number of EUs in a fused EU group can vary according to embodiments. Additionally, various SIMD widths can be performed per-EU, including but not limited to SIMD8, SIMD16, and SIMD32. Each fused graphics execution unit 1809A-1809N includes at least two execution units. For example, fused execution unit 1809A includes a first EU 1808A, a second EU 1808B, and thread control logic 1807A that is common to the first EU 1808A and the second EU 1808B. The thread control logic 1807A controls threads executed on the fused graphics execution unit 1809A, allowing each EU within the fused execution units 1809A-1809N to execute using a common instruction pointer register.
One or more internal instruction caches (e.g., 1806) are included in the thread execution logic 1800 to cache thread instructions for the execution units. One or more data caches (e.g., 1812) may be included in the thread execution logic 1800 to cache thread data during thread execution. Threads executing on the execution logic 1800 can also store explicitly managed data in the shared local memory 1811. A sampler 1810 may be included to provide texture sampling for 3D operations and media sampling for media operations. Sampler 1810 may include specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.
During execution, the graphics and media pipelines send thread initiation requests to thread execution logic 1800 via thread spawning and dispatch logic. Once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within the shader processor 1802 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). A pixel shader or fragment shader may calculate the values of the various vertex attributes that are to be interpolated across the rasterized object. The pixel processor logic within the shader processor 1802 may then execute an application programming interface (API)-supplied pixel or fragment shader program. To execute the shader program, the shader processor 1802 dispatches threads to an execution unit (e.g., 1808A) via thread dispatcher 1804. Shader processor 1802 may use texture sampling logic in the sampler 1810 to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.
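The attribute interpolation step mentioned above, in which per-vertex attributes are interpolated across the rasterized object, is commonly performed with barycentric weights. The following scalar C++ sketch shows the arithmetic for a single pixel; actual pixel processor logic would evaluate many pixels per clock across SIMD channels, and the function and type names here are illustrative:

```cpp
#include <array>
#include <cstdio>

// Barycentric interpolation of a per-vertex attribute (e.g., a color channel)
// across a rasterized triangle, as a pixel/fragment shader stage might
// compute it. Plain scalar code stands in for the SIMD hardware.
struct Vec2 { float x, y; };

float interpolate(const std::array<Vec2, 3>& v, const std::array<float, 3>& attr,
                  Vec2 p) {
    // Signed-area based barycentric coordinates of p in triangle v0 v1 v2.
    auto edge = [](Vec2 a, Vec2 b, Vec2 c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    };
    float area = edge(v[0], v[1], v[2]);
    float w0 = edge(v[1], v[2], p) / area;
    float w1 = edge(v[2], v[0], p) / area;
    float w2 = edge(v[0], v[1], p) / area;
    return w0 * attr[0] + w1 * attr[1] + w2 * attr[2];
}

int main() {
    std::array<Vec2, 3> tri = {{{0, 0}, {4, 0}, {0, 4}}};
    std::array<float, 3> red = {1.0f, 0.0f, 0.0f};  // attribute value at each vertex
    // At the centroid the weights are 1/3 each, so the result is 1/3.
    std::printf("red at centroid: %f\n", interpolate(tri, red, {4.0f / 3, 4.0f / 3}));
}
```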
In addition, the data port 1814 may provide a memory access mechanism for the thread execution logic 1800 to output processed data to memory for further processing on a graphics processor output pipeline. The data port 1814 may include or couple to one or more cache memories (e.g., data cache 1812) to cache data for memory access via the data port 1814.
Optionally, the execution logic 1800 can also include a ray tracer 1805 that can provide ray tracing acceleration functionality. The ray tracer 1805 can support a ray tracing instruction set that includes instructions/functions for ray generation. The ray tracing instruction set can be similar to or different from the ray-tracing instruction set supported by the ray tracing cores 372 in FIG. 3C.
FIG. 18B illustrates exemplary internal details of an execution unit 1808. A graphics execution unit 1808 can include an instruction fetch unit 1837, a general register file array (GRF) 1824, an architectural register file array (ARF) 1826, a thread arbiter 1822, a send unit 1830, a branch unit 1832, a set of SIMD floating point units (FPUs) 1834, and optionally a set of dedicated integer SIMD ALUs 1835. The GRF 1824 and ARF 1826 include the set of general register files and architecture register files associated with each simultaneous hardware thread that may be active in the graphics execution unit 1808. Per-thread architectural state may be maintained in the ARF 1826, while data used during thread execution is stored in the GRF 1824. The execution state of each thread, including the instruction pointers for each thread, can be held in thread-specific registers in the ARF 1826.
The graphics execution unit 1808 may have an architecture that is a combination of Simultaneous Multi-Threading (SMT) and fine-grained Interleaved Multi-Threading (IMT). The architecture may have a modular configuration that can be fine-tuned at design time based on a target number of simultaneous threads and number of registers per execution unit, where execution unit resources are divided across logic used to execute multiple simultaneous threads. The number of logical threads that may be executed by the graphics execution unit 1808 is not limited to the number of hardware threads, and multiple logical threads can be assigned to each hardware thread.
Optionally, the graphics execution unit 1808 can co-issue multiple instructions, which may each be different instructions. The thread arbiter 1822 of the graphics execution unit 1808 can dispatch the instructions to one of the send unit 1830, branch unit 1832, or SIMD FPU(s) 1834 for execution. Each execution thread can access 128 general-purpose registers within the GRF 1824, where each register can store 32 bytes, accessible as a SIMD 8-element vector of 32-bit data elements. Each execution unit thread may have access to 4 Kbytes within the GRF 1824, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments. The graphics execution unit 1808 may be partitioned into seven hardware threads that can independently perform computational operations, although the number of threads per execution unit can also vary according to embodiments; for example, up to 16 hardware threads may be supported. In an exemplary embodiment in which seven threads may access 4 Kbytes, the GRF 1824 can store a total of 28 Kbytes. In another exemplary embodiment, where 16 threads may access 4 Kbytes, the GRF 1824 can store a total of 64 Kbytes. The number of threads per execution unit is, however, not limited to those examples and may be more or fewer than the given numbers. Flexible addressing modes can permit registers to be addressed together to build effectively wider registers or to represent strided rectangular block data structures.
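The register file arithmetic above follows directly from the stated figures: 128 registers of 32 bytes is 4 Kbytes per thread, so seven threads require 28 Kbytes of GRF and 16 threads require 64 Kbytes. A small C++ check of that arithmetic:

```cpp
#include <cstdio>

// The GRF sizing arithmetic from the text, made explicit. The values mirror
// the examples given; actual sizes vary by implementation.
int main() {
    constexpr int regs_per_thread = 128;  // general-purpose registers per thread
    constexpr int bytes_per_reg   = 32;   // one SIMD 8-element vector of 32-bit data
    constexpr int kb_per_thread   = regs_per_thread * bytes_per_reg / 1024;  // 4 KB

    for (int threads : {7, 16})
        std::printf("%2d hardware threads -> %d Kbytes of GRF\n",
                    threads, threads * kb_per_thread);  // 28 and 64 Kbytes
}
```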
Additionally or alternatively, memory operations, sampler operations, and other longer-latency system communications may be dispatched via "send" instructions that are executed by the message passing send unit 1830. Branch instructions may be dispatched to a dedicated branch unit 1832 to facilitate SIMD divergence and eventual convergence.
The graphics execution unit 1808 may include one or more SIMD floating point units (FPU(s)) 1834 to perform floating-point operations. The FPU(s) 1834 may also support integer computation. In some instances, the FPU(s) 1834 can SIMD execute up to M number of 32-bit floating-point (or integer) operations, or SIMD execute up to 2M 16-bit integer or 16-bit floating-point operations. Optionally, at least one of the FPU(s) provides extended math capability to support high-throughput transcendental math functions and double precision 64-bit floating-point. A set of 8-bit integer SIMD ALUs 1835 may also be present, and may be specifically optimized to perform operations associated with machine learning computations.
Optionally, arrays of multiple instances of the graphics execution unit 1808 can be instantiated in a graphics sub-core grouping (e.g., a sub-slice). For scalability, product architects can choose the exact number of execution units per sub-core grouping. The execution unit 1808 may execute instructions across a plurality of execution channels. In addition, each thread executed on the graphics execution unit 1808 may be executed on a different channel.
FIG. 19 illustrates a further exemplary execution unit 1900. The elements of FIG. 19 having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. The execution unit 1900 may be a compute-optimized execution unit for use in, for example, a compute engine tile 1640A-1640D as in FIG. 16C, but is not limited as such. The execution unit 1900 may also be used in a graphics engine tile 1610A-1610D as in FIG. 16B. The execution unit 1900 may include a thread control unit 1901, a thread state unit 1902, an instruction fetch/prefetch unit 1903, and an instruction decode unit 1904. The execution unit 1900 may additionally include a register file 1906 that stores registers that can be assigned to hardware threads within the execution unit. The execution unit 1900 may additionally include a send unit 1907 and a branch unit 1908. The send unit 1907 and branch unit 1908 may operate similarly as the send unit 1830 and branch unit 1832 of the graphics execution unit 1808 of FIG. 18B.
The execution unit 1900 can also include a compute unit 1910 that includes multiple different types of functional units. The compute unit 1910 may include an ALU 1911, a systolic array 1912, and a math unit 1913. The ALU 1911 includes an array of arithmetic logic units. The ALU 1911 can be configured to perform 64-bit, 32-bit, and 16-bit integer and floating-point operations across multiple processing lanes and data channels and for multiple hardware and/or software threads. The ALU 1911 can perform integer and floating-point operations simultaneously (e.g., within the same clock cycle).
The systolic array 1912 includes a W wide and D deep network of data processing units that can be used to perform vector or other data-parallel operations in a systolic manner. The systolic array 1912 can be configured to perform various matrix operations, including dot product, outer product, and general matrix-matrix multiplication (GEMM) operations. The systolic array 1912 may support 16-bit floating point operations, as well as 8-bit, 4-bit, 2-bit, and binary integer operations. The systolic array 1912 may be configured to accelerate machine learning operations. The systolic array 1912 can be configured with support for the bfloat16 (brain floating point) 16-bit floating point format or the tensor float 32-bit floating point format (TF32), which have different numbers of mantissa and exponent bits relative to Institute of Electrical and Electronics Engineers (IEEE) 754 formats. FP64 formats can also be supported.
In one embodiment, the systolic array 1912 includes hardware to accelerate sparse matrix operations. Multiplication operations for sparse regions of input data can be bypassed without sacrificing throughput. Block sparsity within input matrices can be detected, and operations having known output values can be bypassed. In one embodiment, the systolic array 1912 includes hardware to enable operations on sparse data having a compressed representation. A compressed representation of a sparse matrix stores non-zero values and metadata that defines the position of the non-zero values within the matrix.
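As a concrete illustration of such a representation, the sketch below shows compressed sparse row (CSR) storage, one of the exemplary formats enumerated next, together with a matrix-vector product that touches only the stored non-zero values — the software analogue of the zero-bypassing hardware described above. The struct layout and names are illustrative:

```cpp
#include <cstdio>
#include <vector>

// Compressed sparse row (CSR): only non-zero values are stored, plus metadata
// (column indices and row offsets) giving their positions in the matrix.
struct Csr {
    std::vector<float> values;   // non-zero values, row-major order
    std::vector<int>   col_idx;  // column of each non-zero
    std::vector<int>   row_ptr;  // values[row_ptr[r] .. row_ptr[r+1]) belong to row r
};

// Sparse matrix-vector product that performs work only for non-zero entries.
std::vector<float> spmv(const Csr& m, const std::vector<float>& x) {
    std::vector<float> y(m.row_ptr.size() - 1, 0.0f);
    for (size_t r = 0; r + 1 < m.row_ptr.size(); ++r)
        for (int k = m.row_ptr[r]; k < m.row_ptr[r + 1]; ++k)
            y[r] += m.values[k] * x[m.col_idx[k]];
    return y;
}

int main() {
    // 2x3 matrix [[5 0 0], [0 0 7]] in CSR form.
    Csr m{{5.0f, 7.0f}, {0, 2}, {0, 1, 2}};
    auto y = spmv(m, {1.0f, 1.0f, 1.0f});
    std::printf("y = [%g, %g]\n", y[0], y[1]);  // [5, 7]
}
```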
Exemplary compressed representations include but are not limited to compressed tensor representations such as compressed sparse row (CSR), compressed sparse column (CSC), and compressed sparse fiber (CSF) representations. Support for compressed representations enables operations to be performed on input in a compressed tensor format without requiring the compressed representation to be decompressed or decoded. In such an embodiment, operations can be performed only on non-zero input values, and the resulting non-zero output values can be mapped into an output matrix. In some embodiments, hardware support is also provided for machine-specific lossless data compression formats that are used when transmitting data within hardware or across system busses. Such data may be retained in a compressed format for sparse input data, and the systolic array 1912 can use the compression metadata for the compressed data to enable operations to be performed on only non-zero values, or to enable blocks of zero data input to be bypassed for multiply operations.
The math unit 1913 can be configured to perform a specific subset of mathematical operations in a more efficient and lower-power manner than the ALU 1911. The math unit 1913 can include math logic found in shared function logic of a graphics processing engine provided by other embodiments described herein, e.g., the math logic 1722 of the shared function logic 1720 of FIG. 17. The math unit 1913 can be configured to perform 32-bit and 64-bit floating point operations.
The thread control unit 1901 includes logic to control the execution of threads within the execution unit. The thread control unit 1901 can include thread arbitration logic to start, stop, and preempt execution of threads within the execution unit 1900. The thread state unit 1902 can be used to store thread state for threads assigned to execute on the execution unit 1900. Storing the thread state within the execution unit 1900 enables the rapid pre-emption of threads when those threads become blocked or idle. The instruction fetch/prefetch unit 1903 can fetch instructions from an instruction cache of higher-level execution logic (e.g., instruction cache 1806 as in FIG. 18A). The instruction fetch/prefetch unit 1903 can also issue prefetch requests for instructions to be loaded into the instruction cache based on an analysis of currently executing threads. The instruction decode unit 1904 can be used to decode instructions to be executed by the compute units. The instruction decode unit 1904 can be used as a secondary decoder to decode complex instructions into constituent micro-operations.
The execution unit 1900 additionally includes a register file 1906 that can be used by hardware threads executing on the execution unit 1900. Registers in the register file 1906 can be divided across the logic used to execute multiple simultaneous threads within the compute unit 1910 of the execution unit 1900. The number of logical threads that may be executed by the graphics execution unit 1900 is not limited to the number of hardware threads, and multiple logical threads can be assigned to each hardware thread. The size of the register file 1906 can vary across embodiments based on the number of supported hardware threads. Register renaming may be used to dynamically allocate registers to hardware threads.
FIG. 20 is a block diagram illustrating graphics processor instruction formats 2000. The graphics processor execution units support an instruction set having instructions in multiple formats.
The solid lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a sub-set of the instructions. In some embodiments, the graphics processor instruction formats 2000 described and illustrated are macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed. Thus, a single instruction may cause hardware to perform multiple micro-operations.
The graphics processor execution units as described herein may natively support instructions in a 128-bit instruction format 2010. A 64-bit compacted instruction format 2030 is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit instruction format 2010 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 2030. The native instructions available in the 64-bit format 2030 vary by embodiment. The instruction is compacted in part using a set of index values in an index field 2013. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format 2010. Other sizes and formats of instruction can be used.
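Purely as an illustrative sketch of this index-based compaction — the table contents and field names below are invented for illustration, not the actual hardware encodings — small index values carried in the compact form can select compaction-table entries from which the wider native form is rebuilt:

    # Hypothetical compaction-table sketch: index values in the 64-bit form
    # select precomputed field groups used to rebuild the 128-bit native form.
    CONTROL_TABLE = [{"exec_size": 8, "predicate": 0},
                     {"exec_size": 16, "predicate": 1}]
    DATATYPE_TABLE = [{"dst_type": "f32", "src_type": "f32"},
                      {"dst_type": "f16", "src_type": "f16"}]

    def reconstruct_native(compact):
        # Start from fields carried directly, then expand each index field
        # through its compaction table to recover the full option set.
        native = {"opcode": compact["opcode"], **compact["operands"]}
        native.update(CONTROL_TABLE[compact["ctrl_index"]])
        native.update(DATATYPE_TABLE[compact["dtype_index"]])
        return native

    compact = {"opcode": 0x40, "ctrl_index": 1, "dtype_index": 0,
               "operands": {"dst": "r2", "src0": "r3", "src1": "r4"}}
    print(reconstruct_native(compact))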
For each format, instruction opcode 2012 defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction, the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. Instruction control field 2014 may enable control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For instructions in the 128-bit instruction format 2010, an exec-size field 2016 limits the number of data channels that will be executed in parallel. An exec-size field 2016 may not be available for use in the 64-bit compact instruction format 2030.
Some execution unit instructions have up to three operands, including two source operands, src0 2020 and src1 2022, and one destination 2018. The execution units may support dual destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 2024), where the instruction opcode 2012 determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.
The 128-bit instruction format 2010 may include an access/address mode field 2026 specifying, for example, whether direct register addressing mode or indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction. The access/address mode field 2026 may further specify an address mode and/or an access mode for the instruction. The access mode may be used to define a data access alignment for the instruction. Access modes including a 16-byte aligned access mode and a 1-byte aligned access mode may be supported, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction may use byte-aligned addressing for source and destination operands, and when in a second mode, the instruction may use 16-byte-aligned addressing for all source and destination operands.
The address mode portion of the access/address mode field 2026 may determine whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used, bits in the instruction directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction.
Instructions may be grouped based on opcode 2012 bit-fields to simplify opcode decode 2040. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. A move and logic opcode group 2042 may include data movement and logic instructions (e.g., move (mov), compare (cmp)). Move and logic group 2042 may share the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group 2044 (e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 2046 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 2048 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math instruction group 2048 performs the arithmetic operations in parallel across data channels. The vector math group 2050 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands. The illustrated opcode decode 2040, in one embodiment, can be used to determine which portion of an execution unit will be used to execute a decoded instruction. For example, some instructions may be designated as systolic instructions that will be performed by a systolic array. Other instructions, such as ray-tracing instructions (not shown), can be routed to a ray-tracing core or ray-tracing logic within a slice or partition of execution logic.
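A hedged sketch of this decode step follows; the grouping mirrors the example bit patterns above, while the routing strings are placeholders rather than actual hardware names:

    # Decode the opcode group from bits 4, 5, and 6 of an 8-bit opcode and
    # pick the execution-unit portion that would service it (illustrative).
    GROUPS = {0b000: "move",           # 0000xxxxb, e.g., mov
              0b001: "logic",          # 0001xxxxb, e.g., cmp
              0b010: "flow control",   # 0010xxxxb (0x20), e.g., jmp
              0b011: "miscellaneous",  # 0011xxxxb (0x30), e.g., wait, send
              0b100: "parallel math",  # 0100xxxxb (0x40), e.g., add, mul
              0b101: "vector math"}    # 0101xxxxb (0x50), e.g., dp4

    def decode_group(opcode):
        return GROUPS[(opcode >> 4) & 0b111]  # bits 4, 5, and 6

    assert decode_group(0x40) == "parallel math"
    assert decode_group(0x50) == "vector math"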
Graphics Pipeline
FIG. 21 is a block diagram of graphics processor 2100, according to another embodiment. The elements of FIG. 21 having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such.
The graphics processor 2100 may include different types of graphics processing pipelines, such as a geometry pipeline 2120, a media pipeline 2130, a display engine 2140, thread execution logic 2150, and a render output pipeline 2170. Graphics processor 2100 may be a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor may be controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor 2100 via a ring interconnect 2102. Ring interconnect 2102 may couple graphics processor 2100 to other processing components, such as other graphics processors or general-purpose processors. Commands from ring interconnect 2102 are interpreted by a command streamer 2103, which supplies instructions to individual components of the geometry pipeline 2120 or the media pipeline 2130.
Command streamer 2103 may direct the operation of a vertex fetcher 2105 that reads vertex data from memory and executes vertex-processing commands provided by command streamer 2103. The vertex fetcher 2105 may provide vertex data to a vertex shader 2107, which performs coordinate space transformation and lighting operations on each vertex. Vertex fetcher 2105 and vertex shader 2107 may execute vertex-processing instructions by dispatching execution threads to execution units 2152A-2152B via a thread dispatcher 2131.
The execution units 2152A-2152B may be an array of vector processors having an instruction set for performing graphics and media operations. The execution units 2152A-2152B may have an attached L1 cache 2151 that is specific for each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.
A geometry pipeline 2120 may include tessellation components to perform hardware-accelerated tessellation of 3D objects. A programmable hull shader 2111 may configure the tessellation operations. A programmable domain shader 2117 may provide back-end evaluation of tessellation output. A tessellator 2113 may operate at the direction of hull shader 2111 and contain special purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to geometry pipeline 2120. In addition, if tessellation is not used, the tessellation components (e.g., hull shader 2111, tessellator 2113, and domain shader 2117) can be bypassed. The tessellation components can operate based on data received from the vertex shader 2107.
Complete geometric objects may be processed by a geometry shader 2119 via one or more threads dispatched to execution units 2152A-2152B, or can proceed directly to the clipper 2129. The geometry shader may operate on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If tessellation is disabled, the geometry shader 2119 receives input from the vertex shader 2107. The geometry shader 2119 may be programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled.
Before rasterization, a clipper 2129 processes vertex data. The clipper 2129 may be a fixed function clipper or a programmable clipper having clipping and geometry shader functions. A rasterizer and depth test component 2173 in the render output pipeline 2170 may dispatch pixel shaders to convert the geometric objects into per pixel representations. The pixel shader logic may be included in thread execution logic 2150.
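The stage ordering and the optional bypasses described above can be summarized in a small sketch; the stage bodies are placeholders that merely record the stages visited, illustrating the control flow rather than any actual driver or hardware interface:

    # Illustrative geometry-pipeline flow with the optional tessellation and
    # geometry-shader stages bypassed when unused.
    def stage(name):
        def run(data):
            return data + [name]  # record the stage for illustration
        return run

    vertex_fetcher, vertex_shader = stage("vertex fetch"), stage("vertex shader")
    hull_shader, tessellator, domain_shader = stage("hull"), stage("tessellate"), stage("domain")
    geometry_shader, clipper, rasterizer = stage("geometry"), stage("clip"), stage("rasterize")

    def run_geometry_pipeline(vertices, use_tessellation, use_geometry_shader):
        data = vertex_shader(vertex_fetcher(vertices))
        if use_tessellation:          # hull/tessellator/domain can be bypassed
            data = domain_shader(tessellator(hull_shader(data)))
        if use_geometry_shader:       # complete objects may skip to the clipper
            data = geometry_shader(data)
        return rasterizer(clipper(data))

    print(run_geometry_pipeline([], use_tessellation=False, use_geometry_shader=True))
    # ['vertex fetch', 'vertex shader', 'geometry', 'clip', 'rasterize']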
Optionally, an application can bypass the rasterizer and depth test component 2173 and access un-rasterized vertex data via a stream out unit 2123.
The graphics processor 2100 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing amongst the major components of the processor. In some embodiments, execution units 2152A-2152B and associated logic units (e.g., L1 cache 2151, sampler 2154, texture cache 2158, etc.) interconnect via a data port 2156 to perform memory access and communicate with render output pipeline components of the processor. The sampler 2154, caches 2151, 2158, and execution units 2152A-2152B may each have separate memory access paths. Optionally, the texture cache 2158 can also be configured as a sampler cache.
The render output pipeline 2170 may contain a rasterizer and depth test component 2173 that converts vertex-based objects into an associated pixel-based representation. The rasterizer logic may include a windower/masker unit to perform fixed function triangle and line rasterization. An associated render cache 2178 and depth cache 2179 are also available in some embodiments. A pixel operations component 2177 performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g., bit block image transfers with blending) are performed by the 2D engine 2141, or substituted at display time by the display controller 2143 using overlay display planes. A shared L3 cache 2175 may be available to all graphics components, allowing the sharing of data without the use of main system memory.
The media pipeline 2130 may include a media engine 2137 and a video front-end 2134. Video front-end 2134 may receive pipeline commands from the command streamer 2103. The media pipeline 2130 may include a separate command streamer. Video front-end 2134 may process media commands before sending the command to the media engine 2137. Media engine 2137 may include thread spawning functionality to spawn threads for dispatch to thread execution logic 2150 via thread dispatcher 2131.
The graphics processor 2100 may include a display engine 2140. This display engine 2140 may be external to processor 2100 and may couple with the graphics processor via the ring interconnect 2102, or some other interconnect bus or fabric. Display engine 2140 may include a 2D engine 2141 and a display controller 2143. Display engine 2140 may contain special purpose logic capable of operating independently of the 3D pipeline. Display controller 2143 may couple with a display device (not shown), which may be a system integrated display device, as in a laptop computer, or an external display device attached via a display device connector.
The geometry pipeline 2120 and media pipeline 2130 may be configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). Driver software for the graphics processor may translate API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. Support may be provided for the Open Graphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute API, all from the Khronos Group. Support may also be provided for the Direct3D library from the Microsoft Corporation. A combination of these libraries may be supported.
Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.
Graphics Pipeline Programming
FIG. 22A is a block diagram illustrating a graphics processor command format 2200 used for programming graphics processing pipelines, such as, for example, the pipelines described herein in conjunction with FIG. 16A, 17, and 21. FIG. 22B is a block diagram illustrating a graphics processor command sequence 2210 according to an embodiment. The solid lined boxes in FIG. 22A illustrate the components that are generally included in a graphics command, while the dashed lines include components that are optional or that are only included in a sub-set of the graphics commands. The exemplary graphics processor command format 2200 of FIG. 22A includes data fields to identify a client 2202, a command operation code (opcode) 2204, and data 2206 for the command. A sub-opcode 2205 and a command size 2208 are also included in some commands.
Client 2202 may specify the client unit of the graphics device that processes the command data. A graphics processor command parser may examine the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. The graphics processor client units may include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit may have a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode 2204 and, if present, sub-opcode 2205 to determine the operation to perform. The client unit performs the command using information in data field 2206. For some commands, an explicit command size 2208 is expected to specify the size of the command. The command parser may automatically determine the size of at least some of the commands based on the command opcode. Commands may be aligned via multiples of a double word. Other command formats can also be used.
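As a sketch of how such a command header might be packed and parsed — the bit positions below are invented for illustration and are not the actual encoding — the client, opcode, sub-opcode, and size fields can share a single double word:

    # Hypothetical header layout: client (3 bits), opcode (6 bits),
    # sub-opcode (7 bits), and command size in double words (16 bits).
    def pack_command(client, opcode, sub_opcode, size_dwords):
        return (client << 29) | (opcode << 23) | (sub_opcode << 16) | size_dwords

    def parse_command(header):
        return {"client": header >> 29,
                "opcode": (header >> 23) & 0x3F,
                "sub_opcode": (header >> 16) & 0x7F,
                "size": header & 0xFFFF}  # explicit size, in double words

    hdr = pack_command(client=3, opcode=0x1A, sub_opcode=0x05, size_dwords=4)
    print(parse_command(hdr))  # {'client': 3, 'opcode': 26, 'sub_opcode': 5, 'size': 4}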
The flow diagram in FIG. 22B illustrates an exemplary graphics processor command sequence 2210. Software or firmware of a data processing system that features an exemplary graphics processor may use a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only and is not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands at least partially concurrently.
The graphics processor command sequence 2210 may begin with a pipeline flush command 2212 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. Optionally, the 3D pipeline 2222 and the media pipeline 2224 may not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked 'dirty' can be flushed to memory. Pipeline flush command 2212 can be used for pipeline synchronization or before placing the graphics processor into a low power state.
A pipeline select command 2213 may be used when a command sequence requires the graphics processor to explicitly switch between pipelines. A pipeline select command 2213 may be required only once within an execution context before issuing pipeline commands unless the context is to issue commands for both pipelines. A pipeline flush command 2212 may be required immediately before a pipeline switch via the pipeline select command 2213.
A pipeline control command 2214 may configure a graphics pipeline for operation and may be used to program the 3D pipeline 2222 and the media pipeline 2224. The pipeline control command 2214 may configure the pipeline state for the active pipeline. The pipeline control command 2214 may be used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.
Commands related to the return buffer state 2216 may be used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. The graphics processor may also use one or more return buffers to store output data and to perform cross thread communication. The return buffer state 2216 may include selecting the size and number of return buffers to use for a set of pipeline operations.
The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination 2220, the command sequence is tailored to the 3D pipeline 2222 beginning with the 3D pipeline state 2230, or to the media pipeline 2224 beginning at the media pipeline state 2240.
The commands to configure the 3D pipeline state 2230 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. The 3D pipeline state 2230 commands may also be able to selectively disable or bypass certain pipeline elements if those elements will not be used.
A 3D primitive 2232 command may be used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 2232 command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 2232 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. The 3D primitive 2232 command may be used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, 3D pipeline 2222 dispatches shader execution threads to graphics processor execution units.
The 3D pipeline 2222 may be triggered via an execute 2234 command or event. A register write may trigger command execution. An execution may be triggered via a 'go' or 'kick' command in the command sequence. Command execution may be triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives.
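Putting the above in order, a host-side sketch of assembling the 3D path of the command sequence might look as follows; the command names are symbolic stand-ins for the commands of FIG. 22B, not real driver calls:

    # Assemble the 3D path of the command sequence: flush, select, control,
    # return buffer state, 3D state, primitives, then the execute trigger.
    def build_3d_sequence(primitives):
        seq = ["PIPELINE_FLUSH",        # 2212: complete pending commands
               "PIPELINE_SELECT(3D)",   # 2213: once per context, unless switching
               "PIPELINE_CONTROL",      # 2214: configure state, clear caches
               "RETURN_BUFFER_STATE",   # 2216: size/number of return buffers
               "3D_PIPELINE_STATE"]     # 2230: vertex buffer/element, depth, etc.
        seq += [f"3D_PRIMITIVE({p})" for p in primitives]  # 2232
        seq.append("EXECUTE")           # 2234: 'go'/'kick' trigger
        return seq

    print(build_3d_sequence(["triangle_list"]))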
Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back end operations may also be included for those operations.
The graphics processor command sequence 2210 may follow the media pipeline 2224 path when performing media operations. In general, the specific use and manner of programming for the media pipeline 2224 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. The media pipeline can also be bypassed, and media decode can be performed in whole or in part using resources provided by one or more general-purpose processing cores. The media pipeline may also include elements for general-purpose graphics processor unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives.
Media pipeline 2224 may be configured in a similar manner as the 3D pipeline 2222. A set of commands to configure the media pipeline state 2240 are dispatched or placed into a command queue before the media object commands 2242. Commands for the media pipeline state 2240 may include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. Commands for the media pipeline state 2240 may also support the use of one or more pointers to "indirect" state elements that contain a batch of state settings.
Media object commands 2242 may supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. In some embodiments, all media pipeline states must be valid before a media object command 2242 is issued. Once the pipeline state is configured and media object commands 2242 are queued, the media pipeline 2224 is triggered via an execute command 2244 or an equivalent execute event (e.g., register write). Output from media pipeline 2224 may then be post processed by operations provided by the 3D pipeline 2222 or the media pipeline 2224. GPGPU operations may be configured and executed in a similar manner as media operations.
Graphics Software Architecture
FIG. 23 illustrates an exemplary graphics software architecture for a data processing system 2300. Such a software architecture may include a 3D graphics application 2310, an operating system 2320, and at least one processor 2330. Processor 2330 may include a graphics processor 2332 and one or more general-purpose processor core(s) 2334. The processor 2330 may be a variant of the processor 1402 or any other of the processors described herein. The processor 2330 may be used in place of the processor 1402 or any other of the processors described herein. Therefore, the disclosure of any features in combination with the processor 1402 or any other of the processors described herein also discloses a corresponding combination with the processor 2330, but is not limited to such.
Moreover, the elements of FIG. 23 having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. The graphics application 2310 and operating system 2320 are each executed in the system memory 2350 of the data processing system.
3D graphics application 2310 may contain one or more shader programs including shader instructions 2312. The shader language instructions may be in a high-level shader language, such as the High-Level Shader Language (HLSL) of Direct3D, the OpenGL Shader Language (GLSL), and so forth. The application may also include executable instructions 2314 in a machine language suitable for execution by the general-purpose processor core 2334. The application may also include graphics objects 2316 defined by vertex data.
The operating system 2320 may be a Microsoft® Windows® operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. The operating system 2320 can support a graphics API 2322 such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, the operating system 2320 uses a front-end shader compiler 2324 to compile any shader instructions 2312 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation, or the application can perform shader pre-compilation. High-level shaders may be compiled into low-level shaders during the compilation of the 3D graphics application 2310. The shader instructions 2312 may be provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API.
User mode graphics driver 2326 may contain a back-end shader compiler 2327 to convert the shader instructions 2312 into a hardware specific representation. When the OpenGL API is in use, shader instructions 2312 in the GLSL high-level language are passed to a user mode graphics driver 2326 for compilation. The user mode graphics driver 2326 may use operating system kernel mode functions 2328 to communicate with a kernel mode graphics driver 2329. The kernel mode graphics driver 2329 may communicate with graphics processor 2332 to dispatch commands and instructions.
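A minimal sketch of the two compilation paths just described, assuming invented helper names (neither the staging nor the function names reflect any real driver API): HLSL is lowered by an OS front-end compiler before reaching the driver back end, while GLSL goes directly to the user mode driver's back-end compiler:

    # Illustrative shader compilation routing per API (hypothetical helpers).
    def front_end_compile(source):
        return f"lowered({source})"    # stand-in for the OS front-end compiler

    def back_end_compile(ir):
        return f"isa({ir})"            # stand-in for the driver back-end compiler

    def compile_shader(source, api):
        if api == "Direct3D":
            return back_end_compile(front_end_compile(source))  # HLSL -> lowered -> ISA
        if api in ("OpenGL", "Vulkan"):
            return back_end_compile(source)  # GLSL, or SPIR intermediate form
        raise ValueError(f"unsupported API: {api}")

    print(compile_shader("ps_main", "Direct3D"))  # isa(lowered(ps_main))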
IP Core Implementations
One or more aspects may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as "IP cores," are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein.
FIG. 24A is a block diagram illustrating an IP core development system 2400 that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system 2400 may be used to generate modular, re-usable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility 2430 can generate a software simulation 2410 of an IP core design in a high-level programming language (e.g., C/C++). The software simulation 2410 can be used to design, test, and verify the behavior of the IP core using a simulation model 2412. The simulation model 2412 may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design 2415 can then be created or synthesized from the simulation model 2412. The RTL design 2415 is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design 2415, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary.
The RTL design 2415 or equivalent may be further synthesized by the design facility into a hardware model 2420, which may be in a hardware description language (HDL), or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a third-party fabrication facility 2465 using non-volatile memory 2440 (e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection 2450 or wireless connection 2460. The fabrication facility 2465 may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.
FIG. 24B illustrates a cross-section side view of an integrated circuit package assembly 2470. The integrated circuit package assembly 2470 illustrates an implementation of one or more processor or accelerator devices as described herein. The package assembly 2470 includes multiple units of hardware logic 2472, 2474 connected to a substrate 2480. The logic 2472, 2474 may be implemented at least partly in configurable logic or fixed-functionality logic hardware, and can include one or more portions of any of the processor core(s), graphics processor(s), or other accelerator devices described herein. Each unit of logic 2472, 2474 can be implemented within a semiconductor die and coupled with the substrate 2480 via an interconnect structure 2473. The interconnect structure 2473 may be configured to route electrical signals between the logic 2472, 2474 and the substrate 2480, and can include interconnects such as, but not limited to, bumps or pillars. The interconnect structure 2473 may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic 2472, 2474. Optionally, the substrate 2480 may be an epoxy-based laminate substrate. The substrate 2480 may also include other suitable types of substrates.
The package assembly 2470 can be connected to other electrical devices via a package interconnect 2483. The package interconnect 2483 may be coupled to a surface of the substrate 2480 to route electrical signals to other electrical devices, such as a motherboard, other chipset, or multi-chip module.
The units of logic 2472, 2474 may be electrically coupled with a bridge 2482 that is configured to route electrical signals between the logic 2472, 2474. The bridge 2482 may be a dense interconnect structure that provides a route for electrical signals. The bridge 2482 may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic 2472, 2474.
Although two units of logic 2472, 2474 and a bridge 2482 are illustrated, embodiments described herein may include more or fewer logic units on one or more dies. The one or more dies may be connected by zero or more bridges, as the bridge 2482 may be excluded when the logic is included on a single die. Alternatively, multiple dies or units of logic can be connected by one or more bridges. Additionally, multiple logic units, dies, and bridges can be connected together in other possible configurations, including three-dimensional configurations.
FIG. 24C illustrates a package assembly 2490 that includes multiple units of hardware logic chiplets connected to a substrate 2480 (e.g., base die). A graphics processing unit, parallel processor, and/or compute accelerator as described herein can be composed from diverse silicon chiplets that are separately manufactured. In this context, a chiplet is an at least partially packaged integrated circuit that includes distinct units of logic that can be assembled with other chiplets into a larger package. A diverse set of chiplets with different IP core logic can be assembled into a single device. Additionally, the chiplets can be integrated into a base die or base chiplet using active interposer technology. The concepts described herein enable the interconnection and communication between the different forms of IP within the GPU. IP cores can be manufactured using different process technologies and composed during manufacturing, which avoids the complexity of converging multiple IPs, especially on a large SoC with several flavors of IPs, to the same manufacturing process. Enabling the use of multiple process technologies improves the time to market and provides a cost-effective way to create multiple product SKUs. Additionally, the disaggregated IPs are more amenable to being power gated independently; components that are not in use on a given workload can be powered off, reducing overall power consumption.
In various embodiments, a package assembly 2490 can include a fewer or greater number of components and chiplets that are interconnected by a fabric 2485 or one or more bridges 2487. The chiplets within the package assembly 2490 may have a 2.5D arrangement using Chip-on-Wafer-on-Substrate stacking in which multiple dies are stacked side-by-side on a silicon interposer that includes through-silicon vias (TSVs) to couple the chiplets with the substrate 2480, which includes electrical connections to the package interconnect 2483.
In one embodiment, the silicon interposer is an active interposer 2489 that includes embedded logic in addition to TSVs.
In such an embodiment, the chiplets within the package assembly 2490 are arranged using 3D face-to-face die stacking on top of the active interposer 2489. The active interposer 2489 can include hardware logic for I/O 2491, cache memory 2492, and other hardware logic 2493, in addition to interconnect fabric 2485 and a silicon bridge 2487. The fabric 2485 enables communication between the various logic chiplets 2472, 2474 and the logic 2491, 2493 within the active interposer 2489. The fabric 2485 may be an NoC interconnect or another form of packet switched fabric that switches data packets between components of the package assembly. For complex assemblies, the fabric 2485 may be a dedicated chiplet that enables communication between the various hardware logic of the package assembly 2490.
Bridge structures 2487 within the active interposer 2489 may be used to facilitate a point-to-point interconnect between, for example, logic or I/O chiplets 2474 and memory chiplets 2475. In some implementations, bridge structures 2487 may also be embedded within the substrate 2480.
The hardware logic chiplets can include special purpose hardware logic chiplets 2472, logic or I/O chiplets 2474, and/or memory chiplets 2475. The hardware logic chiplets 2472 and logic or I/O chiplets 2474 may be implemented at least partly in configurable logic or fixed-functionality logic hardware and can include one or more portions of any of the processor core(s), graphics processor(s), parallel processors, or other accelerator devices described herein. The memory chiplets 2475 can be DRAM (e.g., GDDR, HBM) memory or cache (SRAM) memory. Cache memory 2492 within the active interposer 2489 (or substrate 2480) can act as a global cache for the package assembly 2490, part of a distributed global cache, or as a dedicated cache for the fabric 2485.
Each chiplet can be fabricated as a separate semiconductor die and coupled with a base die that is embedded within or coupled with the substrate 2480. The coupling with the substrate 2480 can be performed via an interconnect structure 2473. The interconnect structure 2473 may be configured to route electrical signals between the various chiplets and logic within the substrate 2480. The interconnect structure 2473 can include interconnects such as, but not limited to, bumps or pillars. In some embodiments, the interconnect structure 2473 may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic, I/O, and memory chiplets. In one embodiment, an additional interconnect structure couples the active interposer 2489 with the substrate 2480.
The substrate 2480 may be an epoxy-based laminate substrate; however, it is not limited to that, and the substrate 2480 may also include other suitable types of substrates. The package assembly 2490 can be connected to other electrical devices via a package interconnect 2483. The package interconnect 2483 may be coupled to a surface of the substrate 2480 to route electrical signals to other electrical devices, such as a motherboard, other chipset, or multi-chip module.
A logic or I/O chiplet 2474 and a memory chiplet 2475 may be electrically coupled via a bridge 2487 that is configured to route electrical signals between the logic or I/O chiplet 2474 and a memory chiplet 2475. The bridge 2487 may be a dense interconnect structure that provides a route for electrical signals.
The bridge 2487 may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic or I/O chiplet 2474 and a memory chiplet 2475. The bridge 2487 may also be referred to as a silicon bridge or an interconnect bridge. For example, the bridge 2487 may be an Embedded Multi-die Interconnect Bridge (EMIB). Alternatively, the bridge 2487 may simply be a direct connection from one chiplet to another chiplet.
FIG. 24D illustrates a package assembly 2494 including interchangeable chiplets 2495, according to an embodiment. The interchangeable chiplets 2495 can be assembled into standardized slots on one or more base chiplets 2496, 2498. The base chiplets 2496, 2498 can be coupled via a bridge interconnect 2497, which can be similar to the other bridge interconnects described herein and may be, for example, an EMIB. Memory chiplets can also be connected to logic or I/O chiplets via a bridge interconnect. I/O and logic chiplets can communicate via an interconnect fabric. The base chiplets can each support one or more slots in a standardized format for one of logic or I/O or memory/cache.
SRAM and power delivery circuits may be fabricated into one or more of the base chiplets 2496, 2498, which can be fabricated using a different process technology relative to the interchangeable chiplets 2495 that are stacked on top of the base chiplets. For example, the base chiplets 2496, 2498 can be fabricated using a larger process technology, while the interchangeable chiplets can be manufactured using a smaller process technology. One or more of the interchangeable chiplets 2495 may be memory (e.g., DRAM) chiplets. Different memory densities can be selected for the package assembly 2494 based on the power and/or performance targeted for the product that uses the package assembly 2494. Additionally, logic chiplets with a different number or type of functional units can be selected at time of assembly based on the power and/or performance targeted for the product. Additionally, chiplets containing IP logic cores of differing types can be inserted into the interchangeable chiplet slots, enabling hybrid processor designs that can mix and match different technology IP blocks.
Exemplary System on a Chip Integrated Circuit
FIG. 25-26B illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores. The elements of FIG. 25-26B having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such.
FIG. 25 is a block diagram illustrating an exemplary system on a chip integrated circuit 2500 that may be fabricated using one or more IP cores. Exemplary integrated circuit 2500 includes one or more application processor(s) 2505 (e.g., CPUs) and at least one graphics processor 2510, which may be a variant of the graphics processor 1408 or 1508, or of any other graphics processor described herein, and may be used in place of any graphics processor described.
Therefore, the disclosure of any features in combination with a graphics processor herein also discloses a corresponding combination with the graphics processor 2510, but is not limited to such. The integrated circuit 2500 may additionally include an image processor 2515 and/or a video processor 2520, any of which may be a modular IP core from the same or multiple different design facilities. Integrated circuit 2500 may include peripheral or bus logic including a USB controller 2525, a UART controller 2530, an SPI/SDIO controller 2535, and an I2S/I2C controller 2540. Additionally, the integrated circuit can include a display device 2545 coupled to one or more of a high-definition multimedia interface (HDMI) controller 2550 and a mobile industry processor interface (MIPI) display interface 2555. Storage may be provided by a flash memory subsystem 2560 including flash memory and a flash memory controller. A memory interface may be provided via a memory controller 2565 for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 2570.
FIG. 26A-26B are block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments described herein. The graphics processors may be variants of the graphics processor 1408, 1508, 2510, or any other graphics processor described herein. The graphics processors may be used in place of the graphics processor 1408, 1508, 2510, or any other of the graphics processors described herein. Therefore, the disclosure of any features in combination with the graphics processor 1408, 1508, 2510, or any other of the graphics processors described herein also discloses a corresponding combination with the graphics processors of FIG. 26A-26B, but is not limited to such. FIG. 26A illustrates an exemplary graphics processor 2610 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. FIG. 26B illustrates an additional exemplary graphics processor 2640 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. Graphics processor 2610 of FIG. 26A is an example of a low power graphics processor core. Graphics processor 2640 of FIG. 26B is an example of a higher performance graphics processor core. For example, each of graphics processor 2610 and graphics processor 2640 can be a variant of the graphics processor 2510 of FIG. 25, as mentioned at the outset of this paragraph.
As shown in FIG. 26A, graphics processor 2610 includes a vertex processor 2605 and one or more fragment processor(s) 2615A-2615N (e.g., 2615A, 2615B, 2615C, 2615D, through 2615N-1, and 2615N). Graphics processor 2610 can execute different shader programs via separate logic, such that the vertex processor 2605 is optimized to execute operations for vertex shader programs, while the one or more fragment processor(s) 2615A-2615N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. The vertex processor 2605 performs the vertex processing stage of the 3D graphics pipeline and generates primitives and vertex data. The fragment processor(s) 2615A-2615N use the primitive and vertex data generated by the vertex processor 2605 to produce a framebuffer that is displayed on a display device.
The fragment processor(s) 2615A-2615N may be optimized to execute fragment shader programs as provided for in the OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in the Direct3D API.
Graphics processor 2610 additionally includes one or more memory management units (MMUs) 2620A-2620B, cache(s) 2625A-2625B, and circuit interconnect(s) 2630A-2630B. The one or more MMU(s) 2620A-2620B provide for virtual-to-physical address mapping for the graphics processor 2610, including for the vertex processor 2605 and/or fragment processor(s) 2615A-2615N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in the one or more cache(s) 2625A-2625B. The one or more MMU(s) 2620A-2620B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s) 2505, image processor 2515, and/or video processor 2520 of FIG. 25, such that each processor 2505-2520 can participate in a shared or unified virtual memory system. Components of graphics processor 2610 may correspond with components of other graphics processors described herein. The one or more MMU(s) 2620A-2620B may correspond with MMU 245 of FIG. 2C. Vertex processor 2605 and fragment processor 2615A-2615N may correspond with graphics multiprocessor 234. The one or more circuit interconnect(s) 2630A-2630B enable graphics processor 2610 to interface with other IP cores within the SoC, either via an internal bus of the SoC or via a direct connection, according to embodiments. The one or more circuit interconnect(s) 2630A-2630B may correspond with the data crossbar 240 of FIG. 2C. Further correspondence may be found between analogous components of the graphics processor 2610 and the various graphics processor architectures described herein.
As shown in FIG. 26B, graphics processor 2640 includes the one or more MMU(s) 2620A-2620B, cache(s) 2625A-2625B, and circuit interconnect(s) 2630A-2630B of the graphics processor 2610 of FIG. 26A. Graphics processor 2640 includes one or more shader cores 2655A-2655N (e.g., 2655A, 2655B, 2655C, 2655D, 2655E, 2655F, through 2655N-1, and 2655N), which provide for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. The exact number of shader cores present can vary among embodiments and implementations. Additionally, graphics processor 2640 includes an inter-core task manager 2645, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores 2655A-2655N, and a tiling unit 2658 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches. Shader cores 2655A-2655N may correspond with, for example, graphics multiprocessor 234 as in FIG. 2D, or graphics multiprocessors 325, 350 of FIG. 3A and 3B respectively, or multi-core group 365A of FIG. 3C.
Tensor Acceleration Logic for Graphics and Machine Learning Workloads
FIG. 27 is a block diagram of a data processing system 2700, according to an embodiment. The data processing system 2700 is a heterogeneous processing system having a processor 2702, unified memory 2710, and a GPGPU 2720 including machine learning acceleration logic.
The processor 2702 and the GPGPU 2720 can be any of the processors and GPGPU/parallel processors as described herein. For example, with additional reference to FIG. 1, processor 2702 can be a variant of and/or share an architecture with a processor of the illustrated one or more processor(s) 102, and the GPGPU 2720 can be a variant of and/or share an architecture with a parallel processor of the illustrated one or more parallel processor(s) 112. With additional reference to FIG. 14, processor 2702 can be a variant of and/or share an architecture with one of the illustrated processor(s) 1402, and the GPGPU 2720 can be a variant of and/or share an architecture with one of the illustrated graphics processor(s) 1408.
The processor 2702 can execute instructions for a compiler 2715 stored in system memory 2712. The compiler 2715 executes on the processor 2702 to compile source code 2714A into compiled code 2714B. The compiled code 2714B can include instructions that may be executed by the processor 2702 and/or instructions that may be executed by the GPGPU 2720. Compilation of instructions to be executed by the GPGPU can be facilitated using shader or compute program compilers, such as shader compiler 2327 and/or shader compiler 2324 as in FIG. 23. During compilation, the compiler 2715 can perform operations to insert metadata, including hints as to the level of data parallelism present in the compiled code 2714B and/or hints regarding the data locality associated with threads to be dispatched based on the compiled code 2714B. The compiler 2715 can include the information necessary to perform such operations, or the operations can be performed with the assistance of a runtime library 2716. The runtime library 2716 can also assist the compiler 2715 in the compilation of the source code 2714A and can also include instructions that are linked at runtime with the compiled code 2714B to facilitate execution of the compiled instructions on the GPGPU 2720. The compiler 2715 can also facilitate register allocation for variables via a register allocator (RA) and generate load and store instructions to move data for variables between memory and the register assigned to the variable.
The unified memory 2710 represents a unified address space that may be accessed by the processor 2702 and the GPGPU 2720. The unified memory can include system memory 2712 as well as GPGPU memory 2718. The GPGPU memory 2718 is memory within an address space of the GPGPU 2720 and can include some or all of system memory 2712. In one embodiment the GPGPU memory 2718 can also include at least a portion of any memory dedicated for use exclusively by the GPGPU 2720. In one embodiment, compiled code 2714B stored in system memory 2712 can be mapped into GPGPU memory 2718 for access by the GPGPU 2720.
The GPGPU 2720 includes multiple compute blocks 2724A-2724N, which can include one or more of a variety of processing resources described herein. The processing resources can be or include a variety of different computational resources such as, for example, execution units, compute units, streaming multiprocessors, graphics multiprocessors, or multi-core groups. In one embodiment the GPGPU 2720 additionally includes a tensor accelerator 2723 (e.g., matrix accelerator), which can include one or more special function compute units that are designed to accelerate a subset of matrix operations (e.g., dot product, etc.). The tensor accelerator 2723 may also be referred to as a matrix accelerator or tensor core.
In one embodiment, logic components within the tensor accelerator 2723 may be distributed across the processing resources of the multiple compute blocks 2724A-2724N.
The GPGPU 2720 can also include a set of resources that can be shared by the compute blocks 2724A-2724N and the tensor accelerator 2723, including but not limited to a set of registers 2725, a power and performance module 2726, and a cache 2727. In one embodiment the registers 2725 include directly and indirectly accessible registers, where the indirectly accessible registers are optimized for use by the tensor accelerator 2723. The power and performance module 2726 can be configured to adjust power delivery and clock frequencies for the compute blocks 2724A-2724N and to power gate idle components within the compute blocks 2724A-2724N. In various embodiments the cache 2727 can include an instruction cache and/or a lower-level data cache.
The GPGPU 2720 can additionally include an L3 data cache 2730, which can be used to cache data accessed from the unified memory 2710 by the tensor accelerator 2723 and/or the compute elements within the compute blocks 2724A-2724N. In one embodiment the L3 data cache 2730 includes shared local memory 2732 that can be shared by the compute elements within the compute blocks 2724A-2724N and the tensor accelerator 2723.
In one embodiment the GPGPU 2720 includes instruction handling logic, such as a fetch and decode unit 2721 and a scheduler controller 2722. The fetch and decode unit 2721 includes a fetch unit and a decode unit to fetch and decode instructions for execution by one or more of the compute blocks 2724A-2724N or the tensor accelerator 2723. The instructions can be scheduled to the appropriate functional unit within the compute blocks 2724A-2724N or the tensor accelerator 2723 via the scheduler controller 2722. In one embodiment the scheduler controller 2722 is an ASIC configurable to perform advanced scheduling operations. In one embodiment the scheduler controller 2722 is a micro-controller or a low energy-per-instruction processing core capable of executing scheduler instructions loaded from a firmware module.
In one embodiment some functions to be performed by the compute blocks 2724A-2724N can be directly scheduled to or offloaded to the tensor accelerator 2723. In various embodiments the tensor accelerator 2723 includes processing element logic configured to efficiently perform matrix compute operations, such as multiply and add operations and dot product operations used by 3D graphics or compute shader programs. In one embodiment the tensor accelerator 2723 can be configured to accelerate operations used by machine learning frameworks. In one embodiment the tensor accelerator 2723 is an application specific integrated circuit explicitly configured to perform a specific set of parallel matrix multiplication and/or addition operations. In one embodiment the tensor accelerator 2723 is a field programmable gate array (FPGA) that provides fixed function logic that can be updated between workloads. In one embodiment, the set of compute operations that can be performed by the tensor accelerator 2723 may be limited relative to the operations that can be performed by the compute blocks 2724A-2724N. However, the tensor accelerator 2723 can perform parallel tensor operations at a significantly higher throughput relative to the compute blocks 2724A-2724N.
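As a rough sketch of this scheduling split (the operation names and the availability check are invented for illustration, not the scheduler controller's actual interface), matrix work goes to the tensor accelerator when it is supported there, and everything else stays on the general-purpose compute blocks:

    # Illustrative offload decision: tensor ops to the accelerator, the rest
    # to the compute blocks (names are hypothetical).
    TENSOR_OPS = {"dot_product", "matrix_multiply", "fused_multiply_add"}

    def schedule(op, tensor_accelerator_available=True):
        if tensor_accelerator_available and op in TENSOR_OPS:
            return "tensor_accelerator"  # narrower op set, higher throughput
        return "compute_block"           # broader, general-purpose op set

    print(schedule("matrix_multiply"))   # tensor_accelerator
    print(schedule("texture_sample"))    # compute_block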
FIG. 28A-28B illustrate a matrix operation 2805 performed by an instruction pipeline 2800, according to embodiments. FIG. 28A illustrates the instruction pipeline 2800 when configured with a systolic array 2808 within the tensor accelerator 2723. FIG. 28B illustrates the instruction pipeline when configured with an execution unit 1900 that includes a systolic array 1912.
As shown in FIG. 28A, the instruction pipeline 2800 can be configured to perform a matrix operation 2805, such as, but not limited to, a dot product operation. The dot product of two vectors is a scalar value that is equal to the sum of the products of corresponding components of the vectors. The dot product can be calculated as shown in equation (1) below.

$\vec{a} \cdot \vec{b} = \sum_{i=1}^{n} a_i b_i = a_1 b_1 + \cdots + a_n b_n$   (1)

The dot product can be used in a convolution operation for a convolutional neural network (CNN). While 2D convolution is illustrated, N-dimensional convolution can be performed on an N-dimensional volume using N-dimensional filters. A receptive field tile 2802 highlights a portion of an input volume in an input volume buffer 2804. The input volume buffer can be stored in memory 2830. A dot product matrix operation 2805 can be performed between the data within the receptive field tile 2802 and a convolutional filter to generate a data point within output buffer 2806, which can also be stored in memory 2830. The memory 2830 can be any of the memory described herein, including system memory 2712, GPGPU memory 2718, or one or more cache memories 2727, 2730 as in FIG. 27.
The combination of the data points within the output buffer 2806 represents an activation map generated by the convolution operation. Each point within the activation map is generated by sliding the receptive field tile across the input volume buffer 2804. The activation map data can be input to an activation function to determine an output activation value. In one embodiment, convolution of the input volume buffer 2804 can be defined within a framework as a high-level matrix operation 2805. The high-level matrix operations can be performed via primitive operations, such as a basic linear algebra subprogram (BLAS) operation. The primitive operations can be accelerated via hardware instructions executed by the instruction pipeline 2800.
The instruction pipeline 2800 used to accelerate hardware instructions can include the instruction fetch and decode unit 2721, which can fetch and decode hardware instructions, and the scheduler controller 2722, which can schedule decoded instructions to one or more processing resources within the compute blocks 2724A-2724N and/or the tensor accelerator 2723. In one embodiment, a hardware instruction can be scheduled to the compute blocks 2724A-2724N and offloaded to the tensor accelerator 2723. The one or more hardware instructions and associated data to perform the matrix operation 2805 can be stored in the memory 2830. Output of the hardware instruction can also be stored in the memory 2830.
In one embodiment, the tensor accelerator 2723 can execute one or more hardware instructions to perform the matrix operation 2805 using a systolic array 2808 of processing elements. The systolic array 2808 includes a combination of programmable and fixed function hardware that is configurable to perform matrix-matrix and matrix-vector dot product operations, as well as other operations, such as matrix-matrix and matrix-vector fused multiply-add operations.
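For reference, the sliding-window convolution described above reduces to a dot product per output point; a plain (and deliberately unoptimized) scalar sketch for a 2D, single-channel case with 'valid' padding:

    # Each output activation is the dot product of a receptive field tile
    # with the filter, computed by sliding the tile across the input.
    def conv2d(input2d, kernel):
        kh, kw = len(kernel), len(kernel[0])
        out_h = len(input2d) - kh + 1
        out_w = len(input2d[0]) - kw + 1
        output = [[0.0] * out_w for _ in range(out_h)]
        for y in range(out_h):
            for x in range(out_w):
                output[y][x] = sum(input2d[y + i][x + j] * kernel[i][j]
                                   for i in range(kh) for j in range(kw))
        return output

    image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    kernel = [[1, 0], [0, 1]]     # sums each 2x2 tile's diagonal
    print(conv2d(image, kernel))  # [[6.0, 8.0], [12.0, 14.0]]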
In various embodiments, as an alternative or in addition to the tensor accelerator 2723, matrix acceleration logic can also be included within the processing resources of the compute blocks 2724A-2724N. For example, as shown in FIG. 28B, in one embodiment each compute block (e.g., compute block 2724N) includes an array of execution units 1900A-1900N. In one embodiment, each execution unit in the array of execution units 1900A-1900N can include systolic arrays 1912A-1912N. In one embodiment, one or more of a subset of the execution units is configured with a systolic array. The number of systolic arrays and the throughput of the available systolic arrays can vary based on the power and performance targets for a device. The scheduler controller 2722 can schedule systolic matrix operations (dot products, fused multiply-adds, etc.) to available systolic arrays 1912A-1912N within the execution units 1900A-1900N of the various compute blocks 2724A-2724N.

While in one embodiment each of the compute blocks 2724A-2724N includes an array of execution units 1900A-1900N, in another embodiment the compute blocks 2724A-2724N share an architecture with the processing clusters 214A-214N of the processing cluster array in FIG. 2A. In such an embodiment, the compute blocks 2724A-2724N include multiple graphics multiprocessors 234 as in FIG. 2C, which include internal components as illustrated in FIG. 2D. Thus, the graphics multiprocessors within the compute blocks can include a load/store unit 266, GPGPU cores 262, and tensor/RT cores 263. In one embodiment the compute blocks 2724A-2724N can include multi-core groups 365A-365N of the GPU 380 of FIG. 3C and include multiple sets of GFX cores 370, tensor cores 371, and ray tracing cores 372. In such an embodiment, the scheduler controller 2722 can schedule instructions to perform matrix operations to the tensor/RT cores 263 and/or tensor cores 371 within the compute blocks 2724A-2724N. Accelerated matrix operations include dot product operations, matrix multiply operations, and/or fused multiply-add operations, which can be performed on integer or floating-point matrix elements at various levels of precision. Additionally, in one embodiment the compute blocks 2724A-2724N can include a variant of the compute units 1560A-1560N of FIG. 15C, where such variants include matrix acceleration logic as described herein (e.g., systolic array, tensor core, systolic tensor core) that can execute integer or floating-point matrix acceleration instructions.

FIG. 29 illustrates a systolic array 2900 including multiplier and adder circuits organized in a pipelined fashion. In one embodiment, systolic array 2900 is representative of the physical pipeline stages included in the systolic array 1912 and includes the capabilities described in relation to systolic array 1912, including support for sparse and block sparse operations, and may additionally be configured to support structured sparsity within a vector of elements or across a set of channels. Inputs 2912A-2912H for the first input matrix are represented by the data elements contained in the inputs labeled Src1 and Src1+1 through Src1+7. Inputs 2910A-2910H correspond to the second input matrix and are labeled as Src2. Inputs 2902A-2902B, which may include initial accumulator values, can be provided as Src0. An array of processing elements makes up the physical pipeline stages 2911A-2911H of the systolic array 2900. Matrix-matrix or matrix-vector operations, including fused multiply-add and/or dot product operations, can be performed at each pipeline stage 2911A-2911H during each clock cycle.
On each cycle, every pipeline stage can receive a new Src2 input, which can be used by the processing elements of the pipeline stage to compute a value using either the new Src1 input or an older Src1 input that was previously read, although during initial startup it may take several cycles before all of the pipeline stages 2911A-2911H become active as the initial set of computed values propagates through the stages.

Input 2902A can provide a Src0 value to the processing elements of pipeline stage 2911A, for use as an initial accumulator value. Alternatively, input 2902B can provide the Src0 value to be added to the values computed by pipeline stage 2911H of the systolic array, which enables partial-pass operation for systolic array 2900 using the lower stages of the array while the unused upper stages are power gated. During operation, the data elements of a selected channel of the Src2 input are broadcast across all channels of the processing elements of the pipeline stages 2911A-2911H, where each channel represents a vector of multiple elements. The number of elements per channel can vary based on the size of the elements. The processing elements of a stage then perform operations using the selected Src2 channel and all channels of a given Src1 input. A Src2 input operates with eight Src1 inputs (e.g., one Src1 input per stage). The data elements of a channel of the Src2 input are broadcast across all channels of the processing elements 2911A-2911H. The processing elements then operate on the Src2 channel with all channels of a Src1 input. In a first clock cycle, a Src1 input is operated with data elements of the first channel of Src2. In the next cycle, a second Src1 (labeled as Src1+1) operates with the data elements of the second channel of Src2. This sequence repeats across the eight stages of the pipeline. Each stage adds its operation to the output of the previous stage. Across the pipeline stages, multiple Src2 inputs are operated in a pipelined fashion. As successive channels of a first Src2 input are pushed through the pipeline stages, a new Src2 input can be provided at the first stage.

Output 2922 from the final stage is labeled as Dst. Where d = the systolic depth and e = the number of data elements per channel, the output of a channel is described by equation (2) below:

$\mathrm{Dst}_i = \mathrm{Src0}_i + \sum_{j=0}^{d} \sum_{k=0}^{e} \left( \mathrm{Src1}{+}j\,[\text{element } k \text{ of channel } i] \right) \cdot \left( \mathrm{Src2}\,[\text{element } k \text{ of channel } j] \right)$ (2)

As shown in equation (2), each channel can include multiple data elements on which operations are performed in parallel. In one embodiment, each channel represents a four-element data vector, although a different number of elements can be configured for each channel. In one embodiment, the number of data elements within a channel can vary based on the size of each data element. Dot products can be performed using, for example, four-element vectors with 8-bit data types per element, two-element vectors with 16-bit data types, eight-element vectors with 4-bit data types (e.g., INT4), or 16-element vectors with 2-bit data types (e.g., INT2). The number of channels can be automatically adjusted depending on the datatype of Src1 and Src2. An instruction can also specify a required systolic depth to be used for the instruction.
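A direct, if unoptimized, reading of equation (2) can be expressed in Python. The sketch below assumes zero-based loop bounds for the systolic depth d and the e elements per channel, and a nested-list data layout (indexed by stage, channel, and element) chosen purely for illustration.

def systolic_channel_output(src0_i, src1_rows, src2, channel_i, d, e):
    # src1_rows[j][i][k]: element k of channel i of the Src1+j input.
    # src2[j][k]: element k of channel j of the broadcast Src2 input.
    acc = src0_i  # initial accumulator value from Src0
    for j in range(d):          # one Src1 input per pipeline stage
        for k in range(e):      # elements within a channel
            acc += src1_rows[j][channel_i][k] * src2[j][k]
    return acc

# Two stages (d=2), two elements per channel (e=2), one output channel:
src1 = [[[1, 2]], [[3, 4]]]
src2 = [[5, 6], [7, 8]]
print(systolic_channel_output(0, src1, src2, channel_i=0, d=2, e=2))
# 1*5 + 2*6 + 3*7 + 4*8 = 70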
In one embodiment the processing elements 2911A-2911H may read inputs 2910A-2910H, 2912A-2912H directly from the general-purpose register file. In one embodiment systolic array 2900 includes logic to read inputs 2910A-2910H, 2912A-2912H from the general-purpose register file and store input data in registers, buffers, or memory that is internal to the systolic array. Internal logic can then feed the input data elements to the processing elements 2911A-2911H for processing. Output 2922 can be written to internal registers or memory of the systolic array 2900 and/or written directly to the general-purpose register file.

FIG. 30A-30B illustrate the use of a systolic array 3000 that can be configured to execute operations at an arbitrary systolic depth. In the illustrated example, the systolic array 3000 has a physical depth of four, which corresponds to four physical pipeline stages. The systolic array can be configured to operate using an arbitrary number of logical stages, including four, eight, twelve, or sixteen logical stages, or other numbers of logical stages that are not divisible by the number of physical stages, using partial-pass operations as in FIG. 31, described below. FIG. 30A shows the array receiving Src0 inputs from an external source and processing the first four stages with Src1 and Src2 inputs. The output of this array is fed back into the second step shown in FIG. 30B. FIG. 30B shows that the next four stages are calculated using the loopback data that includes the already processed values and the Src1 and Src2 inputs.

As shown in FIG. 30A, systolic array 3000 can accept input 2902, as Src0 input, which is read (3002) via data selector 3004. Data selector 3004 selects between the input 2902 and loopback input 3006. Processing elements 2911A-2911D can process inputs 2910A-2910D and 2912A-2912D in a similar manner as systolic array 2900. If four stages are sufficient to complete an operation, pipeline stage 2911D can write (3022) output 2922 to a specified Dst register or memory via data selector 3024. Where further stages are required, data selector 3024 can write loopback output 3026, which is provided as loopback input 3006 to the processing elements of pipeline stage 2911A.

As shown in FIG. 30B, in one embodiment, loopback input 3006 can be further processed by processing elements 2911A-2911D. Loopback input 3006 includes the already processed values. In one embodiment, loopback input 3006 can also include inputs 2910E-2910H and inputs 2912E-2912H, which can be pre-fetched while processing the first four stages. Data selector 3004 selects loopback input 3006 for input by pipeline stage 2911A. Processing elements of the pipeline stages 2911A-2911D can then process inputs 2910E-2910H and 2912E-2912H. Data selector 3024 can then write (3022) the eighth-stage result as output 2922 to the specified Dst register.

In one embodiment, the systolic array 3000 is modified to exclude the loopback output 3026 and loopback input 3006 and instead include intermediate storage 3025, as shown in FIG. 30A-30B. The intermediate storage 3025 may be a memory device or register that is internal to the systolic array 3000 or may be a register in a register file that is external to the systolic array 3000. During the operations shown in FIG. 30A, output from pipeline stage 2911D can be stored in the intermediate storage 3025 instead of being output by loopback output 3026 and read by loopback input 3006 before the operations shown in FIG. 30B. During the operations shown in FIG. 30B, output from pipeline stage 2911D can be added to the data stored in the intermediate storage 3025 and written to output 2922.
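The loopback (or intermediate-storage) scheme of FIG. 30A-30B amounts to accumulating across repeated passes of the physical stages. A minimal Python sketch follows, assuming a four-stage physical array and one multiply-accumulate per stage; this models only the accumulation pattern, not the pipelining.

PHYSICAL_DEPTH = 4

def physical_pass(acc, src1_block, src2_block):
    # One pass through the four physical pipeline stages.
    for j in range(PHYSICAL_DEPTH):
        acc += src1_block[j] * src2_block[j]
    return acc

def logical_depth_op(src0, src1, src2, logical_depth=8):
    acc = src0
    for start in range(0, logical_depth, PHYSICAL_DEPTH):
        # Loopback: the output of stage four feeds stage one of the
        # next pass (or is parked in intermediate storage between passes).
        acc = physical_pass(acc, src1[start:start + PHYSICAL_DEPTH],
                            src2[start:start + PHYSICAL_DEPTH])
    return acc

print(logical_depth_op(0, list(range(8)), [1] * 8))  # 0+1+...+7 = 28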
The systolic array 3000 can also be configured to perform multi-pass operations using at least one partial pass, as described below, to enable logical depths that are not divisible by the physical depth of the array.

Scalable Matrix Multiply Accelerator with Feedback Inputs

A second embodiment enables increased throughput using simultaneous instructions executed on parallel units. Several instances or paths of the multiply accelerator are run in parallel. These instances can share Src1, or they can have independent Src1 inputs. Each path will have its own Src2 and Src0 inputs. A version showing two paths with a depth of four stages is shown in FIG. 31. Alternatively, a version using four paths with a depth of two stages is shown in FIG. 32.

FIG. 31 illustrates a two-path matrix multiply accelerator 3100 in which each path has a depth of four stages. The two-path matrix multiply accelerator 3100 includes input logic 3102A-3102B for Src0 inputs, input buffers 3111A-3111B to store data elements received from input logic 3110A-3110B, and input buffers 3113A-3113B to store data elements received from shared input logic 3112 for Src1. Each stage includes a pair of processing elements, which may operate in parallel. Stage one includes processing elements 3131A-3131B, stage two includes processing elements 3132A-3132B, stage three includes processing elements 3133A-3133B, and stage four includes processing elements 3134A-3134B. Hardware logic of each of the processing elements 3131A-3131B, 3132A-3132B, 3133A-3133B, 3134A-3134B can be the same as or similar to the hardware logic of processing elements of systolic array 2900 or systolic array 3000, and may be manufactured with the same process technology or a more advanced process technology. The processing elements of the two-path matrix multiply accelerator 3100 may also operate at a higher frequency relative to implementations of systolic array 2900.

Feedback may be implemented using data selectors that are the same as or similar to data selectors 3004, 3024. Depending on the configuration of the read logic, input data can be pre-fetched into the input buffer in advance or read from registers or a cache within the two-path matrix multiply accelerator 3100 one or more cycles before input into the processing elements 3131A-3131B. Processing elements 3134A-3134B of stage four can feed back into the corresponding processing elements 3131A-3131B of stage one. Dynamic logical depth may be enabled in multiples of four. After a configured number of logical stages, results may be written by output logic 3122A-3122B to a specified destination.
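The two-path organization can be sketched as two independent accumulations over a shared Src1 operand. The Python below is illustrative only; a real implementation would pipeline the stages and use the feedback path rather than looping, and all names are invented for this example.

def run_paths(shared_src1, per_path_src0, per_path_src2):
    # Each path has its own Src0 and Src2; Src1 is shared across paths,
    # so two instructions can execute in parallel on the same operand.
    results = []
    for src0, src2 in zip(per_path_src0, per_path_src2):
        acc = src0
        for a, b in zip(shared_src1, src2):  # four stages per path
            acc += a * b
        results.append(acc)
    return results

print(run_paths([1, 2, 3, 4], [0, 10], [[1, 1, 1, 1], [2, 2, 2, 2]]))
# path 0: 10 ; path 1: 30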
FIG. 32 illustrates a four-path matrix multiply accelerator 3200 in which each path has a depth of two stages. Four-path matrix multiply accelerator 3200 includes the same number of processing elements as two-path matrix multiply accelerator 3100, with the processing elements configured with twice as many paths, but each path is half as deep. Four-path matrix multiply accelerator 3200 includes input logic 3202A-3202D for Src0, input buffers 3211A-3211D to store input elements read by input logic 3210A-3210D for Src2, and input buffers 3213A-3213D to store input elements read by shared input logic 3212 for Src1. Processing elements 3231A-3231D enable parallel processing for stage 1. Processing elements 3232A-3232D enable parallel processing for stage 2. Stage 2 of each path can feed back into stage 1 or write results via output logic 3222A-3222D to a specified destination. Processing elements 3231A-3231D, 3232A-3232D may include hardware logic similar to that of processing elements 3131A-3131B, 3132A-3132B, 3133A-3133B, 3134A-3134B and can implement loopback functionality using similar hardware logic.

The advantages of a two-path matrix multiply accelerator 3100 or a four-path matrix multiply accelerator 3200 include scalability, software compatibility, and throughput. The modular architecture of these accelerators enables more efficient scaling relative to an 8-deep systolic array. Different configurations of a matrix multiply accelerator can be tailored for different product needs or use cases without redesign. Additionally, the same software model can be used independent of the hardware implementation. Algorithms designed for an instruction intended to be executed by a systolic pipeline of eight stages can be used in an implementation using a matrix multiply accelerator of four stages. Hardware will use feedback to simulate a pipeline of eight stages in a way that is transparent to the software. Multiple paths can be used in a design requiring high DPAS instruction throughput. Implementations with a greater number of paths can be coupled with higher bandwidth input logic and output logic. In one embodiment, the two-path matrix multiply accelerator 3100 and the four-path matrix multiply accelerator 3200 are configured to bypass inputs with block sparsity at a greater efficiency and/or finer granularity than possible with an 8-deep systolic array.

Sparse Multiplications on the Scalable Matrix Multiply Accelerator

A third embodiment facilitates increased instruction throughput when processing data with irregular sparsity. Elements of Src1 and Src2 inputs can be individually selected via input multiplexer logic and processing can be performed using only non-zero values.

FIG. 33 illustrates a scalable sparse matrix multiply accelerator 3300 using systolic arrays with feedback inputs. Scalable sparse matrix multiply accelerator 3300 can include processing elements 3231A-3231D as in four-path matrix multiply accelerator 3200, or any other processing elements described herein. Processing elements 3231A-3231B at the beginning of each path include input logic for Src0. Each stage of each path of scalable sparse matrix multiply accelerator 3300 can receive any element of an independent or shared Src1 via input selectors 3312A-3312D. Each stage of each path can also receive any element of a Src2. Independent Src2 inputs are provided via separate input element selectors (e.g., Src2A via input selector 3310A and input selector 3311A, Src2B via input selector 3310B and input selector 3311B). The separate Src2 inputs enable the separate paths to compute different instructions. Separate output logic 3322A-3322B is present for each path to enable output for the different instructions.

FIG. 34 shows a scalable sparse matrix multiply accelerator 3400 using systolic arrays with feedback inputs and outputs on each stage. Scalable sparse matrix multiply accelerator 3400 includes hardware logic similar to that of scalable sparse matrix multiply accelerator 3300, along with additional input and output logic to enable Src0 elements to be provided to each stage of each path and to provide separate outputs for each stage of each path.
In addition to input selectors 3310A and 3311A to select Src2A elements for the first path and input selectors 3310B and 3311B to select Src2B input for the second path, an input splitter 3403A-3403B is added for each path for Src0 input. Each input splitter 3403A-3403B can include a demultiplexer or similar hardware logic to enable Src0 input elements that are read by input logic 3402A-3402B to be sent to each stage. Input selectors 3312A-3312D are also included to enable Src1 input to be selected by each stage of each path. In addition to output logic 3322A-3322B from the second stage of each path (processing elements 3431C-3431D), additional output logic 3422A-3422B is provided to enable output from the first stage of each path (processing elements 3431A-3431B). The processing elements 3431A-3431D may be otherwise similar to other processing elements described herein.

During operation, scalable sparse matrix multiply accelerator 3400 is configurable to accept groups of only one element. Given Src2 input {B0, 0, B2, B3, 0, 0, 0, 0}, two groups ([B0,B2], [B3,0]) are made for the non-zero elements on Src2 for the third embodiment (e.g., scalable sparse matrix multiply accelerator 3300), with the second group including a zero padding. The optimizations shown in FIG. 34 enable the groups to be formed as [B0,B2], [B3]. B0 and B2 will be assigned to the first and second stages of a path (e.g., either a first set including processing element 3431A and processing element 3431C or a second set including processing element 3431B and processing element 3431D). After the feedback, B3 will be assigned to the first stage of that path. As the first stage of a path can provide output (e.g., via either output logic 3422A or 3422B), there is no need to consume the second stage of the path (either of processing element 3431C or processing element 3431D). Moreover, the next Src2 input accepted for that path can start from the second stage, so a group of two elements will be assigned to the second and first stages, respectively. Src0 for processing the new Src2 input can be assigned to the second stage of the path (e.g., via either input splitter 3403A or 3403B).

In addition to the hardware logic of scalable sparse matrix multiply accelerator 3300 illustrated in FIG. 33 and scalable sparse matrix multiply accelerator 3400 illustrated in FIG. 34, some embodiments additionally include input and output hardware memory buffers. Input memory buffers can be used to store and have ready groups of Src0 and Src2 inputs, which reduces the need for high bandwidth input logic. The output buffer allows Dst outputs generated in the same cycle to be steadily written to memory at a slower rate, which reduces the need for high bandwidth output logic.

Additionally, some embodiments include a bypass for inputs in which all elements are zero. The bypass allows a direct write of Src0 by the output logic without passing through the systolic array. This bypass is used in concert with a data dependency strategy to prevent read-after-write (RAW) risks among instructions that can damage the integrity of the data.
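The grouping scheme described above can be sketched as follows, assuming two stages per path. With padding (the FIG. 33 behavior) the trailing group is filled with a zero; with per-stage outputs (the FIG. 34 behavior) a single-element group can be issued directly. This is an illustrative Python model, not the hardware's selector logic.

def group_nonzero(src2, group_size=2, pad_partial=False):
    nonzero = [v for v in src2 if v != 0]
    groups = [nonzero[i:i + group_size]
              for i in range(0, len(nonzero), group_size)]
    if pad_partial and groups and len(groups[-1]) < group_size:
        groups[-1] = groups[-1] + [0] * (group_size - len(groups[-1]))
    return groups

src2 = [7, 0, 3, 5, 0, 0, 0, 0]  # stands in for {B0, 0, B2, B3, 0, 0, 0, 0}
print(group_nonzero(src2, pad_partial=True))   # [[7, 3], [5, 0]] (FIG. 33 style)
print(group_nonzero(src2, pad_partial=False))  # [[7, 3], [5]]    (FIG. 34 style)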
Leveraging Output Sparsity Support in Systolic Arrays to Reduce Power Consumption

Embodiments described herein adapt systolic arrays with support for input sparsity to configure those arrays to support structured output sparsity. Output sparsity is a technique to bypass multiply-accumulate operations belonging to an output, or similarly, to a full row-column multiplication in a matrix multiplication operation, without regard to the sparsity of the input data. For output sparsity, a set of metadata bits arrives alongside the data to be multiplied. The metadata bits indicate the outputs that are to be masked. The operations on the masked outputs are bypassed by the systolic array. For example, if a model contains 1 million neurons, it may be possible to attempt to force 10% of those neurons to be zero. The neurons are switched off and the model is trained. If the result is reasonable, it may be possible to not use those neurons. However, it is not possible to change the model at this point. Instead, the inputs to some of the neurons are changed without changing the structure of the neural network. This technique is useful for accelerating the computation of the Backward by Weight (BWD_W) pass, which is one of the three components of deep learning (DL) training workloads, the others being the Backward by Data (BWD_D) pass and the forward (FWD) pass.

In one embodiment, the set of neurons that will be forced to zero is pre-determined. In another embodiment, the percentage of neurons that will be zero is determined and each neuron may have a probability of being disabled. While in some implementations it is possible to re-use those processing elements that will be zero to perform other operations, embodiments described herein instead focus on reducing power consumption of the systolic array by not performing calculations for disabled neurons.

From the perspective of a processing resource of a graphics processor (e.g., execution unit 1808A-1808N of execution logic 1800 and/or execution unit 1900; graphics multiprocessor 234; compute unit 1506A-1506N), a group of metadata bits can indicate whether the computation of an output will be skipped. In one embodiment, the metadata indicates channels of the systolic array that will be disabled during a computation, thus the results computed at that channel will be forced to zero. In one embodiment, where the computation at a channel is forced to zero, the Src0 input for that channel may be passed to the output from the channel. In one embodiment, initial accumulator values provided by the Src0 input are added to data output from the final stage of the systolic array instead of propagating through the array. In such an embodiment, when computation at a channel is forced to zero, no data is output by the processing element associated with the channel and the output from that channel may be the value assigned to the initial accumulator value (Src0) for the channel. Alternatively, the systolic array can be configured to force the output for the channel to be zero, without regard to the initial accumulator value for the channel.
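The per-channel bypass semantics can be summarized in a short Python sketch. It models two of the conventions described above: passing the Src0 accumulator through for a disabled channel, or forcing the disabled channel's output to zero. The names and the mode argument are assumptions made for illustration.

def channel_outputs(metadata, src0, products, bypass_mode="pass_src0"):
    # metadata: 1 = channel active, 0 = channel bypassed (power gated).
    outputs = []
    for ch, active in enumerate(metadata):
        if active:
            outputs.append(src0[ch] + products[ch])
        elif bypass_mode == "pass_src0":
            outputs.append(src0[ch])   # forward the initial accumulator
        else:
            outputs.append(0)          # force the channel output to zero
    return outputs

print(channel_outputs([1, 0, 1], [10, 20, 30], [5, 6, 7]))          # [15, 20, 37]
print(channel_outputs([1, 0, 1], [10, 20, 30], [5, 6, 7], "zero"))  # [15, 0, 37]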
FIG. 35A-35B illustrate the use of output sparsity metadata to disable processing channels of a systolic array. Output sparsity metadata can be generated by a programmer or neural network training library to adjust the neuron sparsity of an existing model during training. The metadata can be streamed to the processing elements and indicates which channels (e.g., processing elements) will not participate in an operation. If the metadata in the metadata register indicates that a channel will be bypassed, the processing element associated with that channel will be disabled.

As shown in FIG. 35A, an eight-stage systolic array having eight channels, such as systolic array 2900 of FIG. 29, can be configured with processing elements that can be disabled according to output sparsity metadata 3512. For a given instruction, metadata 3512 can indicate to bypass one or more channels 3502A-3502H when operations for that instruction are processed by physical pipeline stages 2911A-2911H of the array. The metadata 3512 can include one bit per channel, where each bit indicates whether a channel is active (ON) or disabled (OFF). The metadata bits will propagate through the stages along with operations for an instruction. Those channels are bypassed and disabled during operations for that instruction.

As shown in FIG. 35B, while metadata for an instruction propagates through the array with the instruction, the array can simultaneously execute multiple instructions. As the various instructions may be in various stages across the physical pipeline stages 2911A-2911H of the array, during a given cycle of execution, different channels may be disabled at different stages. For example, pipeline stage 2911A can have a first set of disabled channels 3511A-3511B (Channels [1, 2]), pipeline stage 2911B can have a second set of disabled channels 3512A-3512B (Channels [5, 6]), pipeline stage 2911C can have a third set of disabled channels 3513A-3513B (Channels [2, 6]), pipeline stage 2911D can have a fourth set of disabled channels 3514A-3514B (Channels [2, 5]), pipeline stage 2911E can have a fifth set of disabled channels 3515A-3515B (Channels [2, 7]), pipeline stage 2911F can have a sixth set of disabled channels 3516A-3516C (Channels [5, 6, 7]), and pipeline stage 2911G can have a seventh set of disabled channels 3517A-3517D (Channels [5, 6, 7]), while all channels of pipeline stage 2911H are enabled. The set of disabled channels will then shift down to the next stage each cycle.

FIG. 36 illustrates metadata for matrix multiply operations that include half-precision matrix elements. One embodiment provides a matrix accelerator including a systolic array 3600 having sixteen channels and eight stages, with each stage configurable to perform multiplies on two pairs of matrix elements. The systolic array 3600 is illustrated as having loaded a column of elements associated with matrix 3602, which may be loaded as Matrix B (Src1) into a set of registers and read into the systolic array 3600 as input for a matrix multiply or dot product instruction. A matrix operation is performed using the column of elements associated with Matrix B and row data from matrix 3604, which may be loaded into registers and read as Matrix A (e.g., Src2) for the matrix multiply operation. Each cell in matrix 3604 is a half-float element, where each register stores 32 elements. Metadata 3606 indicates in strikethrough which outputs will be skipped, while the other outputs will be generated.

In one embodiment, for the same output row some outputs are skipped and some outputs are not skipped, due to randomness associated with the output sparsity. No restriction is placed on the sparsity of the rows of outputs. For a column of outputs, for each sequential group of four outputs, two outputs are to be skipped and two outputs are not skipped, which is a structured sparsity restriction (4:2) that is placed on the operation of output sparsity in the systolic array. In other embodiments, other structured sparsity restrictions (2:1; 8:4; 16:8) may be used to constrain the sparsity of output generated by the systolic array.
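A check for the 4:2 restriction on a column of output metadata might look as follows. This is an illustrative sketch, with 1 meaning an output is generated and 0 meaning it is skipped; the group and keep parameters generalize to the other restrictions mentioned above.

def satisfies_structured_sparsity(column_mask, group=4, keep=2):
    # Each full sequential group of `group` outputs must keep exactly
    # `keep` of them (e.g., group=4, keep=2 for the 4:2 restriction).
    for i in range(0, len(column_mask) - group + 1, group):
        if sum(column_mask[i:i + group]) != keep:
            return False
    return True

print(satisfies_structured_sparsity([1, 0, 1, 0, 0, 1, 1, 0]))  # True
print(satisfies_structured_sparsity([1, 1, 1, 0, 0, 1, 1, 0]))  # False: three kept in first group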
FIG. 37 illustrates metadata 3700 as depicted in matrix form 3702 and as stored within a metadata register 3704. The matrix form 3702 of the metadata illustrates that the metadata includes one metadata bit per channel per row. For a systolic array having 16 channels, 16 bits are used for each row. For a given channel, the metadata bits for the successive rows for that channel are offset by the number of channels. For example, for channel 0, metadata for the topmost row (e.g., row zero if counting from zero, row one if counting from one) is stored at bit 0 of the metadata register 3704, while the metadata for the next row is stored at bit 16. For a 16-channel systolic array, a 512-bit metadata register can hold metadata for 32 rows. When evaluating output sparsity metadata for the first three rows of channel 0, bits 0, 16, and 32 are evaluated within the metadata register, while bits 1, 17, and 33 are evaluated for channel 1, and so forth.

FIG. 38 illustrates a processing element 3800 having structured output sparsity support. The processing element 3800 represents a processing element of stage 0, channel 0. A systolic array will include as many instances of the processing element 3800 as needed to support the target number of channels and associated physical pipeline stages of the array. In one embodiment, the processing element 3800 includes a pair of multipliers 3804A-3804B and an adder 3806 to perform a sub-operation of a matrix multiply and/or dot product instruction. An input data line provides an input 3810 for Src0 and an input 3811 for Src1. Input 3812 is provided for Src2. A selector 3802 is configurable to send the output of the multipliers 3804A-3804B to the adder 3806 to be added to the value received via the input 3810 for Src0, or in one embodiment, to pass through the value received via input 3810. When the passthrough is enabled, the multiply-add circuitry can be fully or partially disabled and power gated. For example, the multipliers 3804A-3804B can be disabled and power gated, or both the multipliers 3804A-3804B and the adder 3806 for the processing element 3800 can be disabled. Metadata 3805 used to enable or disable operations at the processing element is propagated through to the processing element of the channel in the next pipeline stage and is used to enable or disable the channel when processing matrix elements for that channel in the next pipeline stage.

FIG. 39A-39B illustrate snapshots 3900, 3910 of processing elements at cycle zero and cycle one of instruction execution when output sparsity is enabled. Processing elements 3800AA-3800HA of channel 0 and processing elements 3800AB-3800HB of channel 1 are shown. A similar pattern is repeated for other supported channels of the systolic array. In various embodiments, various numbers of channels can be supported by systolic arrays as described herein, including but not limited to eight, sixteen, or thirty-two channels. Each channel can perform operations on multiple sets of elements. In some embodiments, the number of elements per channel can vary based on the size of the elements. For example, in one embodiment a processing element for a channel can process four pairs of 8-bit integers (INT8), two pairs of 16-bit floating point elements (e.g., FP16, BF16), or one pair of 32-bit floating point elements (FP32). In one embodiment, a processing element for a channel can process eight pairs of INT8 elements, four pairs of FP16 or BF16 elements, two pairs of FP32 elements, or a pair of 64-bit floating point (FP64) elements. Additionally, while an eight-stage array is illustrated in FIG. 39A-39B, a systolic array can include any number of physical pipeline stages and can use feedback inputs to support a larger number of logical pipeline stages than physical pipeline stages.
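Before stepping through the snapshots, the per-element behavior of FIG. 38 and the stage-to-stage metadata hand-off of FIG. 39A-39B can be sketched together in Python. The class below is an illustrative model, not the circuit: the metadata bit stands in for selector 3802, and a disabled element simply forwards its Src0 accumulator while its multiply-add logic would be power gated.

class ProcessingElement:
    def __init__(self):
        self.metadata_bit = 1  # 1 = active (ON), 0 = bypassed (OFF)

    def compute(self, src0, src1_pair, src2_pair):
        if self.metadata_bit == 0:
            # Bypass: multipliers/adder power gated; pass Src0 through.
            return src0
        # Active: a pair of multipliers feeding one adder (FIG. 38).
        return src0 + src1_pair[0] * src2_pair[0] + src1_pair[1] * src2_pair[1]

def propagate_metadata(stages, new_bit):
    # Each cycle, a metadata bit moves with its instruction to the next
    # stage while the first stage receives new metadata.
    for s in range(len(stages) - 1, 0, -1):
        stages[s].metadata_bit = stages[s - 1].metadata_bit
    stages[0].metadata_bit = new_bit

stages = [ProcessingElement() for _ in range(8)]
propagate_metadata(stages, 0)  # cycle 0: a bypass bit enters stage 0
propagate_metadata(stages, 1)  # cycle 1: the bypass bit is now at stage 1
print([pe.metadata_bit for pe in stages[:3]])  # [1, 0, 1]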
Metadata input 3805AA-3805HA is associated respectively with processing elements 3800AA-3800HA of channel 0. Metadata input 3805AB-3805HB is associated respectively with processing elements 3800AB-3800HB of channel 1. As shown in FIG. 39A, snapshot 3900 shows that during cycle zero, an instruction can be executed with associated metadata {0, 1} for channel 0 and channel 1. Such metadata indicates that channel 0 will be bypassed and that channel 1 will be active. When channel 0 is bypassed, the multipliers and adders are disabled and the Src0 input is passed through to the output for the channel. As shown in FIG. 39B, snapshot 3910 shows that the metadata used for the processing element 3800AA and processing element 3800AB for channel 0 and channel 1 of stage 0 is propagated to processing element 3800BA and processing element 3800BB of stage 1. Processing is performed based on the propagated metadata, which is received at the processing elements via metadata input 3805BA for processing element 3800BA and metadata input 3805BB for processing element 3800BB. Processing element 3800AA and processing element 3800AB for channel 0 and channel 1 of stage 0 will receive new metadata at metadata input 3805AA and metadata input 3805AB, which is used to process the next instruction.

FIG. 40 illustrates a method 4000 performed by a systolic array to reduce power consumption using output sparsity metadata. Method 4000 can be performed by a processing resource including a systolic array that is configured to support output sparsity, in which matrix operations are selectively bypassed. Output sparsity can be enabled for a matrix accelerator via a systolic array that includes multiple instances of processing element 3800 as in FIG. 38.

According to method 4000, a processing resource of a graphics processor described herein (e.g., execution unit 1808A-1808N of execution logic 1800 and/or execution unit 1900; graphics multiprocessor 234; compute unit 1506A-1506N) can fetch an instruction at a processing resource to perform operations associated with a matrix instruction (e.g., multiply-accumulate, dot product) with support for output sparsity (4002). The processing resource can then decode the instruction into a decoded instruction (4004). Fetch and decode operations can be performed using circuitry such as a fetch and decode unit 2721 of data processing system 2700 shown in FIG. 27. The processing resource can then read operand data for the decoded instruction from a register file of the processing resource (4006). The operand data includes elements from multiple matrices and metadata to specify the output sparsity pattern.

The processing resource can then configure a matrix accelerator of the processing resource to disable channels at various physical pipeline stages based on the metadata (4008). The matrix accelerator of the processing resource can include a systolic array as described above. Method 4000 can also be performed by processing resources that include tensor/RT cores 263 and/or tensor cores 371 as described herein, which may be configured to include a systolic array having support for output sparsity using the techniques described herein.
The processing resource can then execute the decoded instruction via a matrix accelerator by performing multiply-accumulate (e.g., dot product) operations using active channels while power gating disabled channels (4010). The processing resource can then write output of the dot product operations to the register file (4012).

FIG. 41 illustrates a method 4100 of performing processing operations for a machine learning model using output sparsity. The method 4100 can be performed by a processing system that includes a machine learning framework that includes logic to tune the training of weights for a neural network via a dot product operation with support for output sparsity. The machine learning framework can be a machine learning framework 604 as in FIG. 6 that is used to accelerate the training of machine learning models. Exemplary machine learning frameworks include but are not limited to TensorFlow and MXNet.

In one embodiment, the machine learning framework can be used to determine an output sparsity pattern to apply during training of the neural network (4102). The machine learning framework, or associated logic, can be used to generate metadata to process the weights of the neural network according to the determined sparsity pattern (4104). The machine learning framework, or associated logic, can request a compute framework to perform multiply-accumulate operations with output sparsity on matrix elements selected via metadata (4106). The metadata can indicate which operations to perform and which operations to bypass. Via the requested compute operations, the machine learning framework can request the matrix accelerator to generate weight updates for the neural network according to output sparsity operations (4108).

FIG. 42 illustrates a method 4200 of generating output sparsity metadata based on a sparsity percentage. The method 4200 can be performed by a processing system that includes a machine learning framework that includes logic to tune the training of weights for a machine learning model via a dot product or multiply-accumulate instruction with support for output sparsity, as with method 4100 of FIG. 41.

In one embodiment, the machine learning framework can receive an output sparsity percentage to apply while training a neural network (4202). The output sparsity percentage can be provided by a programmer while fine-tuning the training process for the machine learning model. The machine learning framework can then determine an output sparsity mode for the neural network (4204). The output sparsity mode can be determined based on settings or configurations that are provided to the machine learning model or can be determined automatically by the machine learning model. In one embodiment, the output sparsity mode can be determined to be random sparsity (4205, "Random") or structured sparsity (4205, "Structured"). When random sparsity is configured, the machine learning framework can generate output sparsity metadata having random sparsity (4206). When structured sparsity is configured, the machine learning model can generate output sparsity metadata having structured sparsity (4208). When random sparsity is enabled, each neuron of the machine learning model will have a probability of being bypassed according to the sparsity percentage selected for the machine learning model. When structured sparsity is enabled, metadata is also generated to bypass neurons according to the selected sparsity percentage, with the sparsity additionally being constrained to a sparsity pattern (e.g., 2:1; 4:2; 8:4; 16:8). The selected sparsity pattern can conform to a sparsity pattern for which explicit support is provided by matrix accelerator hardware that will be used to train the machine learning model.
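The random and structured branches of method 4200 can be sketched as two metadata generators. This is an illustrative Python model under the assumption that the metadata is a flat list of bypass bits (0 = bypass, 1 = generate); the framework-level plumbing is omitted.

import random

def random_sparsity_metadata(num_outputs, sparsity_pct):
    # Each output is independently bypassed with the given probability.
    return [0 if random.random() < sparsity_pct else 1
            for _ in range(num_outputs)]

def structured_sparsity_metadata(num_outputs, group=4, keep=2):
    # Each group of `group` outputs keeps exactly `keep` (e.g., 4:2),
    # matching a pattern the matrix accelerator explicitly supports.
    metadata = []
    for _ in range(num_outputs // group):
        bits = [1] * keep + [0] * (group - keep)
        random.shuffle(bits)  # which outputs in the group are kept is random
        metadata.extend(bits)
    return metadata

print(random_sparsity_metadata(8, 0.5))      # e.g., [1, 0, 0, 1, 1, 1, 0, 1]
print(structured_sparsity_metadata(8))       # e.g., [1, 0, 0, 1, 0, 1, 1, 0]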
Additional Exemplary Computing Device

FIG. 43 is a block diagram of a computing device 4300 including a graphics processor 4304, according to an embodiment. Versions of the computing device 4300 may be or be included within a communication device such as a set-top box (e.g., Internet-based cable television set-top boxes, etc.), global positioning system (GPS)-based devices, etc. The computing device 4300 may also be or be included within mobile computing devices such as cellular phones, smartphones, personal digital assistants (PDAs), tablet computers, laptop computers, e-readers, smart televisions, television platforms, wearable devices (e.g., glasses, watches, bracelets, smartcards, jewelry, clothing items, etc.), media players, etc. For example, in one embodiment, the computing device 4300 includes a mobile computing device employing an integrated circuit ("IC"), such as a system on a chip ("SoC" or "SOC"), integrating various hardware and/or software components of computing device 4300 on a single chip. The computing device 4300 can be a computing device including components illustrated in the data processing system 2700 of FIG. 27.

The computing device 4300 includes a graphics processor 4304. The graphics processor 4304 represents any graphics processor described herein. In one embodiment, the graphics processor 4304 includes a cache 4314, which can be a single cache or divided into multiple segments of cache memory, including but not limited to any number of L1, L2, L3, or L4 caches, render caches, depth caches, sampler caches, and/or shader unit caches. In one embodiment the cache 4314 may be a last level cache that is shared with the application processor 4306.

In one embodiment the graphics processor 4304 includes a graphics microcontroller 4315 that implements control and scheduling logic for the graphics processor. The control and scheduling logic can be firmware executed by the graphics microcontroller 4315. The firmware may be loaded at boot by the graphics driver logic 4322. The firmware may also be programmed to an electronically erasable programmable read only memory or loaded from a flash memory device within the graphics microcontroller 4315. The firmware may enable a GPU OS 4316 that includes device management logic 4317, driver logic 4318, and a scheduler 4319. The GPU OS 4316 may also include a graphics memory manager 4320 that can supplement or replace the graphics memory manager 4321 within the graphics driver logic 4322.

The graphics processor 4304 also includes a GPGPU engine 4344 that includes one or more graphics engine(s), graphics processor cores, and other graphics execution resources as described herein. Such graphics execution resources can be presented in forms including but not limited to execution units, shader engines, fragment processors, vertex processors, streaming multiprocessors, graphics processor clusters, or any collection of computing resources suitable for the processing of graphics resources or image resources or performing general purpose computational operations in a heterogeneous processor.
The processing resources of the GPGPU engine 4344 can be included within multiple tiles of hardware logic connected to a substrate, as illustrated in FIG. 24B-24D. The GPGPU engine 4344 can include GPU tiles 4345 that include graphics processing and execution resources, caches, samplers, etc. The GPU tiles 4345 may also include local volatile memory or can be coupled with one or more memory tiles, such as memory tiles 1626A-1626D as in FIG. 16B-16C.

The GPGPU engine 4344 can also include one or more special tiles 4346 that include, for example, a non-volatile memory tile 4356, a network processor tile 4357, and/or a general-purpose compute tile 4358. The GPGPU engine 4344 also includes a matrix multiply accelerator 4360. The general-purpose compute tile 4358 may also include logic to accelerate matrix multiplication operations. The non-volatile memory tile 4356 can include non-volatile memory cells and controller logic. The controller logic of the non-volatile memory tile 4356 may be managed by one of device management logic 4317 or driver logic 4318. The network processor tile 4357 can include network processing resources that are coupled to a physical interface within the input/output (I/O) sources 4310 of the computing device 4300. The network processor tile 4357 may be managed by one or more of device management logic 4317 or driver logic 4318.

The matrix multiply accelerator 4360 is a modular scalable sparse matrix multiply accelerator as described herein. The matrix multiply accelerator 4360 can include multiple processing paths, with each processing path including multiple pipeline stages. Each processing path can execute a separate instruction. In various embodiments, the matrix multiply accelerator 4360 can have architectural features of any one or more of the matrix multiply accelerators described herein. For example, in one embodiment, the matrix multiply accelerator 4360 is a systolic array 3000 that is configurable to operate with a number of logical stages that is a multiple of four (e.g., four, eight, twelve, sixteen, etc.). In one embodiment the matrix multiply accelerator 4360 includes one or more instances of a two-path matrix multiply accelerator 3100 with a four-stage pipeline or a four-path matrix multiply accelerator 3200 with a two-stage pipeline. In one embodiment the matrix multiply accelerator 4360 includes processing elements configured as the scalable sparse matrix multiply accelerators described herein. The matrix multiply accelerator 4360 can be configured to operate only on non-zero values of at least one input matrix. Operations on entire columns or submatrices can be bypassed where block sparsity is present. The matrix multiply accelerator 4360 can also include any logic based on any combination of these embodiments, and particularly can include logic to enable support for random sparsity, structured sparsity, and output sparsity, according to embodiments described herein.

As illustrated, in one embodiment, and in addition to the graphics processor 4304, the computing device 4300 may further include any number and type of hardware components and/or software components, including, but not limited to, an application processor 4306, memory 4308, and input/output (I/O) sources 4310. The application processor 4306 can interact with a hardware graphics pipeline to share graphics pipeline functionality. Processed data is stored in a buffer in the hardware graphics pipeline and state information is stored in memory 4308.
The resulting data can be transferred to a display controller for output via a display device. The display device may be of various types, such as Cathode Ray Tube (CRT), Thin Film Transistor (TFT), Liquid Crystal Display (LCD), Organic Light Emitting Diode (OLED) array, etc., and may be configured to display information to a user via a graphical user interface.

The application processor 4306 can include one or more processors, such as processor(s) 102 of FIG. 1, and may be the central processing unit (CPU) that is used at least in part to execute an operating system (OS) 4302 for the computing device 4300. The OS 4302 can serve as an interface between hardware and/or physical resources of the computing device 4300 and one or more users. The OS 4302 can include driver logic for various hardware devices in the computing device 4300. The driver logic can include graphics driver logic 4322, which can include the user mode graphics driver 2326 and/or kernel mode graphics driver 2329 of FIG. 23. The graphics driver logic can include a graphics memory manager 4321 to manage a virtual memory address space for the graphics processor 4304. The graphics memory manager 4321 can facilitate a unified virtual address space that may be accessed by the application processor 4306 and the graphics processor 4304.

It is contemplated that in some embodiments the graphics processor 4304 may exist as part of the application processor 4306 (such as part of a physical CPU package), in which case at least a portion of the memory 4308 may be shared by the application processor 4306 and graphics processor 4304, although at least a portion of the memory 4308 may be exclusive to the graphics processor 4304, or the graphics processor 4304 may have a separate store of memory. The memory 4308 may comprise a pre-allocated region of a buffer (e.g., framebuffer); however, it should be understood by one of ordinary skill in the art that the embodiments are not so limited, and that any memory accessible to the lower graphics pipeline may be used. The memory 4308 may include various forms of random-access memory (RAM) (e.g., SDRAM, SRAM, etc.) comprising an application that makes use of the graphics processor 4304 to render a desktop or 3D graphics scene. A memory controller hub, such as memory controller 1416 of FIG. 14, may access data in the memory 4308 and forward it to the graphics processor 4304 for graphics pipeline processing. The memory 4308 may be made available to other components within the computing device 4300. For example, any data (e.g., input graphics data) received from various I/O sources 4310 of the computing device 4300 can be temporarily queued into memory 4308 prior to its being operated upon by one or more processor(s) (e.g., application processor 4306) in the implementation of a software program or application. Similarly, data that a software program determines should be sent from the computing device 4300 to an outside entity through one of the computing system interfaces, or stored into an internal storage element, is often temporarily queued in memory 4308 prior to its being transmitted or stored.

The I/O sources 4310 can include devices such as touchscreens, touch panels, touch pads, virtual or regular keyboards, virtual or regular mice, ports, connectors, network devices, or the like, and can attach via a platform controller hub 1430 as referenced in FIG. 14.
Additionally, the I/O sources 4310 may include one or more I/O devices that are implemented for transferring data to and/or from the computing device 4300 (e.g., a networking adapter) or for large-scale non-volatile storage within the computing device 4300 (e.g., SSD/HDD). User input devices, including alphanumeric and other keys, may be used to communicate information and command selections to the graphics processor 4304. Another type of user input device is cursor control, such as a mouse, a trackball, a touchscreen, a touchpad, or cursor direction keys to communicate direction information and command selections to the GPU and to control cursor movement on the display device. Camera and microphone arrays of the computing device 4300 may be employed to observe gestures, record audio and video, and to receive and transmit visual and audio commands.

The I/O sources 4310 can include one or more network interfaces. The network interfaces may include associated network processing logic and/or be coupled with the network processor tile 4357. The one or more network interfaces can provide access to a LAN, a wide area network (WAN), a metropolitan area network (MAN), a personal area network (PAN), Bluetooth, a cloud network, a cellular or mobile network (e.g., 3rd Generation (3G), 4th Generation (4G), 5th Generation (5G), etc.), an intranet, the Internet, etc. Network interface(s) may include, for example, a wireless network interface having one or more antenna(e). Network interface(s) may also include, for example, a wired network interface to communicate with remote devices via network cable, which may be, for example, an Ethernet cable, a coaxial cable, a fiber optic cable, a serial cable, or a parallel cable.

Network interface(s) may provide access to a LAN, for example, by conforming to IEEE 802.11 standards, and/or the wireless network interface may provide access to a personal area network, for example, by conforming to Bluetooth standards. Other wireless network interfaces and/or protocols, including previous and subsequent versions of the standards, may also be supported. In addition to, or instead of, communication via the wireless LAN standards, network interface(s) may provide wireless communication using, for example, Time Division, Multiple Access (TDMA) protocols, Global Systems for Mobile Communications (GSM) protocols, Code Division, Multiple Access (CDMA) protocols, and/or any other type of wireless communications protocols.

It is to be appreciated that a lesser or more equipped system than the example described above may be preferred for certain implementations. Therefore, the configuration of the computing devices described herein may vary from implementation to implementation depending upon numerous factors, such as price constraints, performance requirements, technological improvements, or other circumstances.
Examples include (without limitation) a mobile device, a personal digital assistant, a mobile computing device, a smartphone, a cellular telephone, a handset, a one-way pager, a two-way pager, a messaging device, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a handheld computer, a tablet computer, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, processor-based systems, consumer electronics, programmable consumer electronics, television, digital television, set top box, wireless access point, base station, subscriber station, mobile subscriber center, radio network controller, router, hub, gateway, bridge, switch, machine, or combinations thereof.

One embodiment provides a processing apparatus that can include a general-purpose parallel processing engine comprising a matrix accelerator including a multi-stage systolic array, where each stage includes multiple processing elements associated with multiple processing channels. The multiple processing elements are configured to receive output sparsity metadata that is independent of input sparsity of input matrix elements and perform processing operations on the input matrix elements based on the output sparsity metadata. To perform the processing operations, the multiple processing elements can receive output sparsity metadata at a first pipeline stage and perform processing operations on the input matrix elements based on the output sparsity metadata at the first stage. To perform the processing operations includes to bypass multiplication at a first processing element associated with a first processing channel and power gate a portion of the first processing element, and multiply input elements at a second processing element associated with a second processing channel. Power gating the portion of the first processing element includes to power gate a multiplier of the processing element and/or an adder of the processing element. Each of the multiple processing elements includes a first source input associated with an accumulator value, a second source input associated with a first matrix, and a third source input associated with a second matrix.

In one embodiment, to bypass multiplication at the first processing element includes to output the accumulator value received at the first source input. In another embodiment, no data is output by the first processing element. In yet another embodiment, a zero value is output by the processing element. The processing elements can propagate the output sparsity metadata received at the first pipeline stage to a second pipeline stage and process input elements of the multiple processing channels according to the output sparsity metadata. The output sparsity metadata can include a bit associated with each of the multiple processing channels for each of multiple rows of an input matrix.
In one embodiment, in a first processing cycle, the output sparsity metadata indicates to the first processing element to multiply input elements of a second matrix with input elements of a first matrix and, in a second processing cycle, to bypass multiplication operations for the input elements.

One embodiment provides a method comprising fetching an instruction at a processing resource of a graphics processor to perform operations associated with a matrix instruction that specifies metadata for output sparsity, decoding the instruction into a decoded instruction, reading operand data for the decoded instruction from a register file of the processing resource, the operand data including matrix elements and the metadata, wherein the metadata is independent of input sparsity of the matrix elements, executing the decoded instruction via a matrix accelerator including a systolic array of multiple pipeline stages by performing, according to the metadata, multiply-accumulate operations on matrix elements associated with a first channel and bypassing the multiply-accumulate operations on the matrix elements associated with a second channel, and writing output of the multiply-accumulate operations to the register file. In one embodiment, bypassing the multiply-accumulate operations on the matrix elements associated with the second channel includes power gating a multiplier of a processing element associated with the second channel and/or an adder of the processing element associated with the second channel. In a further embodiment, according to the metadata, the multiply-accumulate operations on matrix elements associated with the first channel are performed and the multiply-accumulate operations on the matrix elements associated with the second channel are bypassed at a first pipeline stage of the multiple pipeline stages. Concurrently, a second stage bypasses the multiply-accumulate operations on the matrix elements associated with the first channel and performs the multiply-accumulate operations on the matrix elements associated with the second channel at a second pipeline stage of the multiple pipeline stages. One embodiment provides a system and/or apparatus to perform a method as described above.

One embodiment provides a system comprising a memory device and a graphics processor coupled to the memory device, the graphics processor comprising a general-purpose parallel processing engine. The general-purpose parallel processing engine includes a matrix accelerator including one or more systolic arrays, at least one of the one or more systolic arrays comprising multiple pipeline stages, each pipeline stage of the multiple pipeline stages including multiple processing elements, the multiple processing elements associated with multiple processing channels. The multiple processing elements are configured to receive output sparsity metadata at a first pipeline stage, the output sparsity metadata associated with the multiple processing channels, wherein the output sparsity metadata is independent of input sparsity of input matrix elements, and perform processing operations on the input matrix elements based on the output sparsity metadata.
To perform the processing operations includes to bypass multiplication at a first processing element associated with a first processing channel, power gate a portion of the first processing element, and multiply input elements at a second processing element associated with a second processing channel. In a further embodiment, to power gate the portion of the first processing element includes to power gate one or more of a multiplier of the processing element and an adder of the processing element. Each of the multiple processing elements can include a first source input associated with an accumulator value, a second source input associated with a first matrix, and a third source input associated with a second matrix. In one embodiment, bypassing multiplication at the first processing element includes outputting the accumulator value received at the first source input. In one embodiment, to perform the processing operations includes to propagate the output sparsity metadata received at the first pipeline stage to a second pipeline stage and process input elements of the multiple processing channels according to the output sparsity metadata. In one embodiment, the output sparsity metadata includes a bit associated with each processing channel. The output sparsity metadata can include a bit associated with a row of an input matrix and, in a first processing cycle, the output sparsity metadata can indicate to the first processing element to multiply input elements of a second matrix with input elements of a first matrix and, in a second processing cycle, to bypass multiplication operations for the input elements. The foregoing description and drawings are to be regarded in an illustrative rather than a restrictive sense. Persons skilled in the art will understand that various modifications and changes may be made to the embodiments described herein without departing from the broader spirit and scope of the features set forth in the appended claims.
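The staged, per-channel gating described above can be modeled briefly in software. Below is a minimal Python sketch of one pipeline stage applying per-channel output sparsity bits; all names are hypothetical, the bypass variant shown is the accumulator pass-through described in one embodiment, and power gating is modeled simply as skipping the multiply, which hardware would instead realize by gating the multiplier and/or adder.

```python
# Behavioral sketch of output-sparsity gating in one systolic pipeline
# stage. Illustrative only: names are hypothetical, and hardware gates
# power to the multiplier/adder rather than branching in software.

def stage_process(acc, a_row, b_row, sparsity_bits):
    """Process one row of inputs across the stage's processing channels.

    acc           -- accumulator values, one per channel (first source input)
    a_row, b_row  -- input elements of the two matrices (second and third
                     source inputs)
    sparsity_bits -- one output sparsity bit per channel for this row;
                     independent of whether the inputs contain zeros
    """
    out = []
    for ch, bit in enumerate(sparsity_bits):
        if bit:
            # Channel enabled: ordinary multiply-accumulate.
            out.append(acc[ch] + a_row[ch] * b_row[ch])
        else:
            # Channel gated: bypass the multiply and pass the accumulator
            # through (outputting zero, or nothing, are the other bypass
            # variants described above).
            out.append(acc[ch])
    return out

# Two processing cycles over four channels; channel 0 is enabled in the
# first cycle and gated in the second, mirroring per-cycle metadata bits.
acc = [0.0, 0.0, 0.0, 0.0]
acc = stage_process(acc, [1, 2, 3, 4], [5, 6, 7, 8], [1, 0, 1, 0])
acc = stage_process(acc, [1, 1, 1, 1], [9, 9, 9, 9], [0, 1, 0, 1])
print(acc)  # [5.0, 9.0, 21.0, 9.0]
```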
Some embodiments include methods of forming patterns in which a block copolymer-containing composition is formed over a substrate, and is then patterned to form a first mask. The block copolymer of the composition is subsequently induced into forming a repeating pattern within the first mask. Portions of the repeating pattern are then removed to form a second mask from the first mask. The patterning of the block copolymer-containing composition may utilize photolithography. Alternatively, the substrate may have regions which wet differently relative to one another with respect to the block copolymer-containing composition, and the patterning of the first mask may utilize such differences in wetting in forming the first mask. |
CLAIMS I/we claim: 1. A method of forming a pattern, comprising: depositing a block copolymer-comprising composition over a substrate; patterning the composition to form a first mask from the composition; and inducing assembly of the block copolymer to form a second mask from the first mask. 2. The method of claim 1 wherein the patterning comprises photolithography. 3. The method of claim 1 wherein the substrate comprises an upper surface having some regions that are more wettable by the composition than other regions, and wherein the patterning comprises beading of the composition induced by differences in wettability across the substrate upper surface. 4. A method of forming a pattern, comprising: depositing a radiation-sensitive composition over a substrate, the radiation-sensitive composition comprising block copolymer containing one or more leaving groups that are released through radiation-induced cleavage; photolithographically patterning the radiation-sensitive composition to form a first patterned mask from the radiation-sensitive composition; and inducing assembly of the block copolymer to form a second patterned mask within the first patterned mask. 5. The method of claim 4 wherein the block copolymer is a diblock copolymer. 6. The method of claim 4 wherein the radiation-sensitive composition comprises the block copolymer dispersed in a photoresist. 7. The method of claim 4 wherein the radiation-sensitive composition consists of the block copolymer. 8. A method of forming a pattern, comprising: depositing a radiation-sensitive composition over a substrate, the radiation-sensitive composition comprising block copolymer containing one or more leaving groups that are released through radiation-induced cleavage; exposing the radiation-sensitive composition to patterned electromagnetic radiation followed by developer to remove a first portion of the radiation-sensitive composition while leaving a second portion of the radiation-sensitive composition in a first pattern induced by the patterned electromagnetic radiation; and inducing assembly of the block copolymer to form a second pattern from the radiation-sensitive composition. 9. The method of claim 8 wherein the block copolymer is a diblock copolymer. 10. The method of claim 8 wherein the block copolymer comprises at least one of poly{4-[(tert-butoxycarbonyl)oxy]styrene} and cycloolefin-polymethacrylate. 11. A method of forming a pattern, comprising: depositing a material over a substrate, the material comprising diblock copolymer and being photolithographically patternable; photolithographically patterning the material to form a first pattern; inducing assembly of the diblock copolymer to form alternating first and second segments within the patterned material; selectively removing one of the first and second segments relative to the other of the first and second segments to form a second pattern superimposed on the first pattern; and utilizing the second pattern to define locations of integrated circuit components within the substrate. 12. The method of claim 11 wherein the photolithography includes: exposure to patterned actinic radiation; thermal treatment of the material at a temperature of less than or equal to about a glass transition temperature of the material after the exposure to the actinic radiation; and exposure to a developer after the thermal treatment. 13.
The method of claim 12 wherein the thermal treatment is a first thermal treatment, and wherein the inducing assembly of the diblock copolymer comprises a second thermal treatment; said second thermal treatment being at a temperature of greater than about the glass transition temperature of the material. 14. The method of claim 11 wherein the integrated circuit components are part of one or more of a DRAM array, a NAND array, and a cross-point memory array. 15. A method of forming a pattern, comprising: depositing a material over a substrate, the substrate having a plurality of first regions and a plurality of second regions, with the first regions being more wettable to the material than the second regions; the material comprising diblock copolymer and being patterned into a first pattern by the difference in wettability relative to the first and second regions of the substrate; inducing assembly of the diblock copolymer to form alternating first and second segments within the patterned material; selectively removing at least some of one of the first and second segments relative to at least some of the other of the first and second segments to form a second pattern superimposed on the first pattern; and utilizing the second pattern to define locations of integrated circuit components within the substrate. 16. The method of claim 15 wherein the diblock copolymer is PS-b-P2VP, PS-b-PEO, or PS-b-PDMS. 17. The method of claim 16 wherein said first regions consist of silicon or doped silicon, and wherein said second regions comprise silicon-containing regions that have been treated with one or more fluoroalkyl silanes, and/or with one or more silicones, and/or with other dewetting agents. 18. The method of claim 16 wherein the second regions are formed by treating portions of the substrate with one or more fluoroalkyl silanes and/or with one or more silicones. 19. The method of claim 15 wherein the integrated circuit components are part of one or more of a DRAM array, a NAND array, and a cross-point memory array.
METHODS OF UTILIZING BLOCK COPOLYMER TO FORM PATTERNS TECHNICAL FIELD [0001] Methods of utilizing block copolymer to form patterns. BACKGROUND [0001] A continuing goal of semiconductor processing is to increase integration density. This goal of increasing circuit density permeates through fabrication of all types of circuitry, including memory, logic and sensors. Significant improvement in integrated circuit density may be achieved by reducing the size of individual structures in layouts in which there is a large number of repeating units, such as with integrated memory. The individual structures of integrated memory may be comprised by memory-storage units. Example memory-storage units are NAND unit cells, dynamic random access memory (DRAM) unit cells, and cross-point memory unit cells. [0002] Photolithography is a conventional method utilized for fabrication of integrated components. Photolithography utilizes light to pattern a photosensitive material. The photolithographically-patterned photosensitive material may then be utilized as a mask for patterning underlying materials to form integrated circuit components. [0003] If only photolithography is utilized to pattern integrated circuit components, integrated circuit density cannot increase beyond a threshold dictated by the minimum feature size obtainable utilizing the photolithography. The minimum feature size may be dictated by, for example, a wavelength utilized during the photolithography. [0004] Several methods have been developed which can be utilized in combination with photolithography to push the minimum attainable feature size to smaller dimensions than may be achieved with photolithography alone. Among such methods is a procedure comprising utilization of a block copolymer to form a pattern within photolithographically-patterned features. The pattern created with the block copolymer may be at higher density than is achievable with photolithographic patterning, and thus may be utilized to create higher integrated circuit densities than are achievable with photolithography alone. [0005] Although the utilization of block copolymers shows promise for increasing integrated circuit density, there are technical obstacles to overcome before block copolymers are adopted for wide-scale use in semiconductor device fabrication. [0006] It would be desirable to develop new methods of forming patterns with block copolymers which enable repeating patterns to be formed to high density. It would be further desirable for such methods to be readily applicable for semiconductor device fabrication. BRIEF DESCRIPTION OF THE DRAWINGS [0007] FIGS. 1-5 illustrate a portion of a semiconductor construction at various process stages of an example embodiment. [0008] FIGS. 6-8 illustrate the portion of the semiconductor construction of FIG. 2 at various process stages subsequent to FIG. 2 in accordance with another example embodiment. [0009] FIGS. 9-14 illustrate a portion of a semiconductor construction at various process stages of another example embodiment. [0010] FIG. 15 illustrates a portion of a semiconductor construction at a process stage subsequent to that of FIG. 2 in accordance with another example embodiment. [0011] FIG. 16 illustrates a portion of a semiconductor construction at a process stage subsequent to that of FIG. 11 in accordance with another example embodiment.
DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS [0012] Some embodiments include methods in which material is provided over a substrate and patterned into a first masking pattern. Subsequently, the material is treated to form repeating segments within the material, and then one or more of the segments is selectively removed to form a second masking pattern superimposed within the first masking pattern. Example embodiments are described with reference to FIGS. 1-16. [0013] Referring to FIG. 1, a portion of a semiconductor construction 10 is illustrated. The construction 10 includes a semiconductor substrate 12 and a material 18 formed over the substrate. [0014] Substrate 12 comprises a base 14, and a material 16 supported over the base. [0015] The terms "semiconductive substrate" and "semiconductor substrate" mean any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductive substrates described above. [0016] Base 14 may correspond to a semiconductor material, and in some embodiments may correspond to a monocrystalline silicon wafer. [0017] Material 16 represents a material which is to be patterned during fabrication of integrated circuitry. Material 16 may be an electrically insulative material (for instance, may comprise one or more of silicon nitride, silicon dioxide, etc.), an electrically conductive material (for instance, may comprise one or more of various metals, metal-containing compositions, conductively-doped semiconductor material, etc.) or a semiconductive material (for instance, silicon, germanium, etc.). Although only the single material 16 is shown supported by base 14, in some embodiments multiple materials may be supported by the base. For instance, if it is desired to form NAND unit cells over base 14, there may be a plurality of gate materials stacked over the base; with such gate materials ultimately being simultaneously patterned to form a plurality of gate constructions supported by the base. As another example, if it is desired to form cross-point memory, there may be a plurality of materials stacked over base 14; with such materials ultimately being simultaneously patterned to form a plurality of lines extending across the base. As yet another example, if it is desired to form DRAM, there may be a plurality of materials stacked over base 14; with such materials ultimately being simultaneously patterned to form a plurality of wordlines and/or bitlines extending across the base. [0018] Material 18 is radiation-sensitive so that it may be patterned by photolithographic methodology, and comprises block copolymer. In some embodiments, material 18 may comprise a blend of block copolymer and conventional photoresist. In other embodiments, material 18 may comprise, consist essentially of, or consist of a material which includes both the self-assembling properties of block copolymers and the photosensitivity of photoresist materials. Material 18 may have one or more "leaving groups", which are radiation-releasable and/or releasable after interaction with additional species that are radiation-releasable (e.g., photo-acids).
Such leaving groups may be referred to as leaving groups that may be released through radiation-induced cleavage. [0019] Copolymers are polymers derived from two or more different monomeric species. Block copolymers contain two or more homopolymer subunits linked by covalent bonds. The union of the homopolymer subunits may utilize an intermediate non-repeating linkage, known as a junction block. The term "block copolymer" may be generic for any heterogeneous material that can micro-phase separate to form domains on sub-lithographic-length scales. Block copolymers may be, for example, organic, organo-metallic, or organo-Si. Block copolymers with two distinct blocks may be referred to as diblock copolymers. Block copolymers may be identified by the number of distinct homopolymer subunits contained therein. For example, block copolymers containing only two distinct homopolymer subunits may be referred to as diblock copolymers, and block copolymers containing only three distinct homopolymer subunits may be referred to as triblock copolymers. [0020] Example block copolymers that may be utilized in applications in which the copolymer is dispersed in conventional photoresist are polystyrene-b-poly(2-vinylpyridine) (PS-b-P2VP); polystyrene-b-poly(ethylene-alt-propylene); polystyrene-b-poly(methylmethacrylate) (PS-b-PMMA); polystyrene-block-poly(ethylene oxide) (PS-b-PEO); and polystyrene-b-poly(dimethyl-siloxane) (PS-b-PDMS). The "b" utilized in each of the above chemical formulas indicates a block linkage. [0021] Example block copolymers that may be utilized in applications in which the copolymer is utilized as a radiation-sensitive compound are copolymers analogous to PS-b-PMMA, and comprising modified subunits that contain leaving groups that may be released through radiation-induced cleavage; with such molecules being base soluble after cleavage of the leaving groups in some embodiments. The modified subunits may be the polystyrene subunit alone, the methylmethacrylate subunit alone, or both the polystyrene subunit and the methylmethacrylate subunit. [0022] If the polystyrene subunit is modified, such modification may utilize poly{4-[(tert-butoxycarbonyl)oxy]styrene} in place of the polystyrene, with the tert-butoxyl group being a leaving group that may be released through radiation-induced cleavage; and if the methylmethacrylate subunit is modified, such modification may utilize cycloolefin-polymethacrylate in place of methylmethacrylate, with the cycloolefin being a group that may be released through radiation-induced cleavage. [0023] Other example block polymers are PS-b-PSmodified-b-PMMA and PS-b-PMMA-b-PMMAmodified; where PSmodified and PMMAmodified are derivatives of polystyrene and poly(methylmethacrylate), respectively. [0024] Material 18 may be deposited over material 16 utilizing any suitable methodology, including, for example, spin-on methodologies. Material 18 may be treated with a so-called "soft bake" after deposition. In some embodiments, the soft bake may be at a temperature that is near or below the glass transition temperature (Tg) of material 18. In some embodiments, the soft bake may be at a temperature of from about 110°C to about 120°C, while material 18 has a Tg of from about 140°C to about 150°C. The soft bake may be utilized to remove solvent that was present in material 18 as a carrier during deposition of material 18. [0025] Material 18 may be photolithographically patterned, and FIG.
2 shows construction 10 at a processing stage after photolithographic patterning of material 18. The patterning has formed material 18 into a patterned mask 19. Patterned mask 19 includes a plurality of masking features 20, 22 and 24, which are spaced from one another by intervening gaps 26 and 28. The patterned mask of FIG. 2 (i.e., the mask formed by photolithographic patterning of material 18) may be referred to as a first patterned mask to distinguish it from other masks formed in subsequent processing (discussed below). [0026] The photolithographic patterning of material 18 comprises exposure of some regions of material 18 to electromagnetic radiation (i.e., actinic radiation), while leaving other regions unexposed; followed by utilization of a developer solution to remove either the exposed or unexposed regions, and to leave the non-removed regions as the patterned mask. [0027] The exposure of some regions of material 18 to electromagnetic radiation may be considered to be exposure of material 18 to patterned electromagnetic radiation. The patterned electromagnetic radiation may be of any suitable wavelength, and may, for example, be 365 nanometer wavelength radiation, 248 nanometer wavelength radiation, 193 nanometer wavelength radiation, extreme ultraviolet (EUV) radiation, etc. [0028] In some embodiments, material 18 may receive a thermal treatment after the exposure to the electromagnetic radiation, and prior to the utilization of the developer. Such thermal treatment may be referred to as a "post exposure bake", and may be utilized to enhance migration of chemicals (for instance acid) within chemically-amplified photoresist. The post exposure bake may be conducted at a temperature of less than or equal to a glass transition temperature of material 18 (with the glass transition temperature being a temperature of at least about 100°C and less than or equal to about 150°C in some embodiments); and in some embodiments may be conducted at a temperature of from about 90°C to about 120°C. [0029] In embodiments in which material 18 comprises block copolymer dispersed in conventional photoresist, the conventional photoresist may be a chemically-amplified resist. If the addition of the block copolymer influences a rate of chemical amplification, the concentration of amplifying chemical and/or the duration of a post-exposure bake may be adjusted to compensate for such influence. For instance, the chemistry or the concentration of a photoacid generator (PAG) may be adjusted. [0030] In embodiments in which material 18 comprises block copolymer modified to have leaving groups that may be released through radiation-induced cleavage, such block copolymer may be utilized in combination with chemical amplifiers (such as, for example, PAGs). In such embodiments, the duration and temperature of the post-exposure bake and/or photoacid generator chemistry, and/or photoacid quench chemistry, may be adjusted to obtain desired amplification of the effect of the electromagnetic radiation exposure. [0031] In embodiments in which the block copolymer comprises poly{4-[(tert-butoxycarbonyl)oxy]styrene} and cycloolefin-polymethacrylate (or similar blocks), the exposure to radiation may convert the subunits of the block copolymer to polyhydroxystyrene (PHOST) and polyacrylic acid (PAA) or similar subunits that may be developed and selectively removed, or left, relative to the subunits in unexposed regions. In some embodiments, such conversion may be chemically amplified with a post exposure bake.
The specific chemistry described herein is an example chemistry, and other embodiments may utilize other chemistries to achieve similar results. [0032] The exposure to the electromagnetic radiation, and the post-exposure bake (in embodiments utilizing a post-exposure bake), cause some portions of material 18 to be modified relative to other portions. The developer mentioned previously is then utilized to selectively remove either the modified portions or the unmodified portions. The developer may be a conventional developer suitable for selectively dissolving either the modified or unmodified portions, and may, for example, comprise an aqueous solution of tetramethylammonium hydroxide (TMAH). In embodiments comprising blends of block copolymer and photoresist, the block copolymer in exposed regions may be "developable" by the action of a photoacid generator, and/or the developer may be configured for selectively dissolving the block copolymer in exposed regions without extracting significant amounts of block copolymer from the unexposed regions. [0033] An upper surface of material 16 is uncovered within gaps 26 and 28. In some embodiments, the uncovered upper surface of material 16 may be coated, grafted and/or functionalized to change properties of the upper surface so that it becomes less wettable relative to material 18. Such treatment can impede material 18 from accumulating across gaps 26 and 28 in subsequent processing (discussed below). In some embodiments, the amount of material 18, size of gaps 26 and 28, and parameters of the subsequent processing may be adjusted so that material 18 does not disperse entirely across gaps 26 and 28 regardless of whether or not the upper surface of material 16 is treated. It is noted, however, that there may alternatively be some embodiments in which it is desired for material 18 to extend entirely across gaps 26 and 28 after the subsequent processing. [0034] Referring to FIG. 3, material 18 is subjected to conditions that induce self-assembly of the block copolymer to form features 32 and 34 from the block copolymer. The block copolymer may be a diblock copolymer, and in such embodiments may be generically represented as A-B, where the "A" represents one of the homopolymer subunits, the "B" represents the other of the homopolymer subunits, and the hyphen represents a covalent bond. A pattern resulting from self-assembly of diblock copolymer may be designated by the shorthand A-B:B-A:A-B:B-A; where the hyphen represents covalent interactions, and the colon represents non-covalent interactions. Thus, features 32 may comprise the A subunits of the block copolymer, and features 34 may comprise the B subunits of the block copolymer, or vice versa. The features 32 and 34 differ from one another relative to the wetting of air and substrate interfaces, and this leads to the self-assembly of the features 32 and 34 into the pattern shown in FIG. 3. [0035] In some embodiments, the features 32 may include other components in addition to one of the subunits of the block copolymer. For instance, in embodiments in which material 18 (FIG. 2) comprises the block copolymer in a mixture with other substances, the features 32 may include such other substances as well as including one of the subunits of the block copolymer.
[0036] In some embodiments, features 32 may be considered to alternate with features 34 along a cross-section through masking blocks 20, 22 and 24; and in such embodiments the features 32 and 34 along such cross-section may be considered to comprise alternating first and second segments formed from the block copolymer. [0037] The features 34 may be considered to correspond to a second patterned mask 35 that is formed from the first patterned mask 19. Also, a pattern of the features 34 may be referred to as a second pattern. Such second pattern may be considered to be within the first pattern corresponding to the pattern of features 20, 22 and 24, or to be superimposed on the first pattern corresponding to the pattern of features 20, 22 and 24. [0038] FIG. 3 illustrates one of many configurations that may result from self-assembly of block copolymer. FIG. 15 shows another configuration that may result from self-assembly of the block copolymer. In the embodiments of FIGS. 3 and 15, features 34 are cylinders extending into and out of the page relative to the cross-sectional views of the figures. In other embodiments the features may be lamellae, micelles, or surface-perpendicular cylinders. [0039] The conditions utilized to induce self-assembly of the copolymer may be thermal conditions, and may utilize a temperature greater than about the glass transition temperature of material 18 (such temperature may be from greater than 150°C to less than or equal to about 250°C in some embodiments). In another embodiment, self-assembly may be induced during a solvent anneal step, where the material is exposed to the partial pressure of an appropriate solvent vapor. [0040] Referring to FIG. 3, the blocks 20, 22 and 24 of the first patterned mask 19 are shown to have undergone reflow during exposure to the conditions utilized to induce self-assembly of the copolymer. Such reflow has changed the shape of blocks 20, 22 and 24 so that the individual blocks have now spread, and become dome-shaped. The spreading of the blocks has reduced the size of gaps 26 and 28 relative to the initial size present at the processing stage of FIG. 2. The amount of spreading of the individual blocks may be influenced by numerous factors, which may include one or more of the following: the composition of the blocks, the initial volume of the blocks, the initial shape of the blocks, the temperature of a treatment utilized to induce self-assembly of the block copolymer, the duration of such treatment, the type of solvent utilized if a solvent anneal is utilized to induce the self-assembly, and a drive to minimize a total area of an air interface. Additionally, the amount of spreading of individual blocks may be influenced by a composition of the surface of material 16, and specifically by the contact angle of material 18 relative to the surface of material 16. In some embodiments, at least some of the surface of material 16 exposed within gaps 26 and 28 may be treated to render the surface non-wettable by material 18 (FIG. 2) so that the material 18 beads from the surface and does not extend entirely across gaps 26 and 28. Such treatment of the surface of material 16 may include, for example, exposure of the surface to one or more fluoroalkyl silanes and/or silicones; and may be conducted before or after formation of blocks 20, 22 and 24 over material 16. In another embodiment, the gaps 26 and 28 are closed as material 18 (FIG.
2) reflows during the self-assembly anneal to form the second mask 35, and the features 34 are then formed to be uniformly periodic across the entire surface of material 16. [0041] The formation of features 34 may be referred to as grapho-epitaxial alignment, and may form the features 34 to a pitch that is substantially smaller than a minimum pitch achievable by photolithographic exposure. For instance, features 34 may be formed to a pitch that is less than or equal to one-half of the minimum pitch achievable by the photolithographic process utilized to form the blocks 20, 22 and 24 of FIG. 2. [0042] Referring to FIG. 4, most of the features 32 (FIG. 3) are selectively removed relative to features 34, to leave features 34 of patterned mask 35 remaining over material 16. Some of the features 32 remain beneath features 34 in the shown embodiment due to anisotropy of the etch utilized to remove features 32. One method of selectively removing the shown portions of features 32 relative to features 34 is to first selectively modify features 34 relative to features 32 by oxidizing or metalizing the features 34 (i.e., incorporating oxygen or metal into features 34), and to subsequently remove portions of features 32 by ashing with O2 plasma. If the embodiment of FIG. 15 were utilized instead of that of FIG. 3, a punch-through etch may be conducted to remove at least part of the outer skin (which is one of the features 34 in the FIG. 15 embodiment) and thereby expose features 32 for subsequent removal. [0043] Referring to FIG. 5, the patterned mask 35 may be utilized to fabricate a pattern within material 16. In some embodiments, material 16 may be representative of one or more materials utilized for fabrication of memory architecture (e.g., NAND, DRAM and/or cross-point memory). In such embodiments, the transfer of a pattern into material 16 may represent patterning of one or more materials into structures of memory architecture. In such embodiments, the features 34 may be used to define locations of integrated circuit components within substrate 12. For instance, patterning of material 16 may represent patterning of one or more gate materials of NAND unit cells; may represent patterning of a plurality of lines of cross-point memory cells; and/or may represent patterning of wordlines and/or bitlines of DRAM. [0044] In some embodiments, features 32 and 34 of FIG. 5 may be removed in subsequent processing; and in other embodiments, features 32 and 34 may be left to become incorporated into an integrated circuit construction. [0045] FIG. 6 shows construction 10 at a processing stage subsequent to that of FIG. 2 in accordance with an embodiment in which the self-assembly of block copolymer has formed lamellae rather than cylinders. Accordingly, the material 18 of FIG. 2 has assembled into alternating segments of features 32 and 34. The features 32 and 34 may correspond to the A subunit of a diblock copolymer, and to the B subunit of the diblock copolymer, respectively. The construction of FIG. 6 may be induced by any suitable method, including, for example, changing the volume fractions of the A and B subunits relative to the volume fractions that would form the construction of FIG. 3. The shown lamellae may form if the surfaces of material 16 that are covered by blocks 20, 22 and 24 are neutral relative to wettability by features 32 and 34 (i.e., if features 32 and 34 both wet the surfaces to a comparable amount), and if features 32 are preferentially formed along an air interface relative to features 34.
[0046] If composition 18 of FIG. 2 consists of diblock material, then the structure of FIG. 6 may result from induction of self-assembly of the diblock copolymer. In other words, masking blocks 20, 22 and 24 are converted into structures in which only repeating segments formed from the self-assembly are present after the self-assembly. In other embodiments, in which material 18 comprises diblock copolymer in a mixture with other substances, the blocks 20, 22 and 24 at the processing stage of FIG. 6 may comprise other components in addition to the segments formed from self-assembly of the diblock copolymer. [0047] The blocks 20, 22 and 24 at the processing stage of FIG. 6 are illustrated to be less dome-shaped and less spread than analogous blocks at the processing stage of FIG. 3. Such difference between FIGS. 3 and 6 is provided to illustrate that the amount of spreading of blocks 20, 22 and 24 that occurs during inducement of self-assembly of block copolymer may be adjusted by adjusting one or more of the parameters discussed above with reference to FIG. 3. [0048] In subsequent processing, one of the two types of features 32 and 34 of FIG. 6 may be selectively removed relative to the other. If features 34 are to be selectively removed, there can be an etch partially into features 32 to expose features 34 for subsequent removal. [0049] FIG. 7 shows construction 10 after the features 32 have been selectively removed relative to the features 34. The remaining features 34 form a patterned mask of upwardly projecting structures over material 16. [0050] FIG. 8 shows construction 10 after the pattern of the patterned mask of FIG. 7 has been transferred into material 16 with one or more suitable etches. [0051] FIGS. 1-8 illustrate embodiments in which photolithographic processing is utilized to form a first pattern within a photosensitive material comprising block copolymer, and then self-assembly of the block copolymer is utilized to form a second pattern superimposed on the first pattern. Other methods besides photolithography may be utilized to form the first pattern. For instance, FIGS. 9-12 illustrate an example process in which differences of wettability of a substrate surface are utilized to induce the first pattern in a material comprising block copolymer. [0052] Referring to FIG. 9, a portion of a semiconductor construction 50 is illustrated. The construction 50 comprises a substrate 52. In some embodiments, substrate 52 may be a semiconductor substrate. In an example embodiment, substrate 52 may be a monocrystalline silicon wafer. [0053] Substrate 52 comprises an upper surface 53. In some embodiments, the upper surface 53 may initially consist of silicon or doped silicon. A plurality of regions 54 are illustrated, with such regions corresponding to segments of upper surface 53 that have been changed in composition relative to the remainder of upper surface 53. Such change in composition will alter wettability of a block copolymer-containing material that is to be subsequently formed over substrate 52. If upper surface 53 consists of silicon or doped silicon, the treatment of the upper surface may comprise subjecting the upper surface to one or more fluoroalkyl silanes and/or silanols. For instance, regions 54 may correspond to portions of the upper surface that have been exposed to perfluoroalkyl silane. Photoresist may be utilized to protect portions of surface 53 which are not to be altered during the treatment that is utilized to form the alterations that lead to regions 54.
[0054] In some embodiments, the regions of upper surface 53 that have not been treated may be referred to as first regions 51 of the upper surface, and the treated regions 54 may be referred to as second regions of the upper surface. [0055] Referring to FIGS. 10 and 11, material 60 is deposited over substrate 52 (FIG. 10); and the material then redistributes to not be over the regions 54 of the surface of substrate 52, and to accumulate (or bead) over the regions 51 of the surface (FIG. 11). [0056] Material 60 comprises block copolymer, and in some embodiments consists of block copolymer dispersed in a carrier solvent. The block copolymer may comprise any suitable block copolymer, and in some embodiments is a diblock copolymer consisting of either polystyrene-block-vinylpyridine or polystyrene-block-ethylene oxide. The block copolymer disperses from regions 54 along the upper surface of substrate 52, and beads over regions 51 along such upper surface, due to the differences in wettability of material 60 relative to regions 51 and 54. Specifically, regions 51 may be configured to be wettable by material 60, while regions 54 are configured to be non-wettable, and such may cause material 60 to accumulate over regions 51 while dispersing from over regions 54. [0057] The material 60 of FIG. 11 forms a patterned mask 62 having masking features 64, 66 and 68, which are spaced from one another by gaps 63 over the regions 54. The pattern of the masking features 64, 66 and 68 of the patterned mask 62 may be referred to as a first pattern. [0058] Material 60 may be deposited by any suitable method, including, for example, spin-casting. [0059] In some embodiments, the patterned mask 62 may be subjected to a low-temperature bake (i.e., a bake at a temperature of less than about the glass transition temperature of material 60, which may be less than or equal to 150°C in some embodiments) to remove carrier solvent; and in other embodiments such bake may be omitted. In some embodiments, the low-temperature bake may induce the dewetting from regions 54. [0060] Referring to FIG. 12, patterned mask 62 is subjected to conditions which induce self-assembly of the block copolymer therein. The self-assembly within the block copolymer is shown converting material 60 into features 70 and 72. Features 70 are in the form of cylinders extending into and out of the page relative to the cross-sectional view of FIG. 12. Feature 72 is over and between the features 70. In some embodiments, features 70 may be considered to alternate with features 72 along a cross-section through masking blocks 64, 66 and 68; and in such embodiments the features 70 and 72 along such cross-section may be considered to comprise alternating first and second segments formed from the block copolymer. In some embodiments, one or both of solvent treatment and thermal treatment (e.g., baking) may induce the self-assembly of FIG. 12 and the dewetting of FIG. 11 simultaneously. [0061] The features 70 may be considered to correspond to a second patterned mask 74 that is formed from the first patterned mask 62. Also, a pattern of the features 70 may be referred to as a second pattern, and such second pattern may be considered to be superimposed on the first pattern corresponding to the pattern of features 64, 66 and 68. [0062] Although features 70 are cylinders in the shown embodiment, in other embodiments the features may have other shapes. For instance, in some embodiments the features may be lamellar.
In embodiments in which the features are lamellar, the construction of FIG. 12 may comprise alternating segments of features 70 and 72 analogous to the alternating segments 32 and 34 shown in FIG. 6. FIG. 12 illustrates one of many configurations that may result from self-assembly of block copolymer. FIG. 16 shows another configuration that may result from self-assembly of the block copolymer. [0063] Referring to FIG. 13, most of the features 72 are selectively removed relative to features 70, to leave features 70 of patterned mask 74 remaining over substrate 52. In the shown embodiment, portions of the features 72 remain under the features 70 due to anisotropy of the etch utilized to remove features 72. One method of selectively removing the shown portions of features 72 relative to features 70 is to first selectively modify features 70 relative to features 72 by oxidizing or metalizing the features 70 (i.e., incorporating oxygen or metal into features 70), and to subsequently remove portions of features 72 by ashing with O2 plasma. [0064] Referring to FIG. 14, the patterned mask 74 has been utilized to fabricate a pattern within substrate 52. In some embodiments, substrate 52 may be utilized for fabrication of memory architecture (e.g., NAND, DRAM and/or cross-point memory). In such embodiments, the transfer of a pattern into substrate 52 may represent patterning of one or more materials into structures of memory architecture. In some embodiments, the features 70 may be used to define locations of integrated circuit components within substrate 52. For instance, patterning of the substrate may represent patterning of one or more gate materials of NAND unit cells; may represent patterning of a plurality of lines of cross-point memory cells; and/or may represent patterning of wordlines and/or bitlines of DRAM. [0065] In some embodiments, features 70 and/or modified regions 54 may be removed in subsequent processing; and in other embodiments, features 70 and/or modified regions 54 may be left to become incorporated into an integrated circuit construction. [0066] The embodiments specifically shown are example embodiments, and the invention includes other embodiments which are not specifically shown. For instance, the example embodiments shown in FIGS. 1-16 induce self-assembly of block copolymer to form masking features that extend horizontally across a substrate surface (specifically, the features 34 of FIG. 4, the features 34 of FIG. 7, and the features 70 of FIG. 13). In other embodiments, not shown, self-assembly of block copolymer may form structures that extend vertically (i.e., project primarily upwardly) relative to a substrate surface. Such vertical structures may be utilized for various applications in semiconductor fabrication, including, for example, fabrication of contact openings to wiring or other electrically conductive structures.
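Because the density gain described above reduces to simple pitch arithmetic, a short worked example may help. The Python sketch below uses assumed, illustrative numbers only (an 80 nm minimum photolithographic pitch and a 25 nm copolymer period); neither value comes from this document.

```python
# Illustrative pitch arithmetic for grapho-epitaxial alignment. The
# copolymer's natural period sets the pitch of the second pattern formed
# inside each photolithographically defined feature. Values are assumed.

photo_pitch_nm = 80.0       # assumed minimum photolithographic pitch
copolymer_period_nm = 25.0  # assumed natural period of the diblock copolymer

multiplication = photo_pitch_nm / copolymer_period_nm
print(f"second-pattern pitch: {copolymer_period_nm:.0f} nm")
print(f"density gain over photolithography alone: {multiplication:.1f}x")

# A second-pattern pitch at or below half the photolithographic minimum
# (here 25 nm <= 40 nm) corresponds to the "less than or equal to
# one-half of the minimum pitch" case described in the text.
```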
An improved semiconductor device fabrication method comprises insertion of a semiconductor wafer into a high-pressure heated chamber and deposition of a low melting-point aluminum material into a contact hole or via and over an insulating layer overlying a substrate of the wafer. The wafer is heated up to the melting point of the aluminum material and the chamber is pressurized to force the aluminum material into the contact holes or vias and eliminate voids present therein. A second layer of material, comprising a different metal or alloy, which is used as a dopant source, is deposited over an outer surface of the deposited aluminum material layer and allowed to diffuse into the aluminum material layer in order to form a homogeneous aluminum alloy within the contact hole or via. A semiconductor device structure made according to the method is also disclosed.
1. A method of filling contact holes formed in an insulating layer overlying a substrate of a semiconductor device, comprising: depositing an aluminum material on an outer surface of the insulating layer and over the contact holes; wherein the aluminum material exhibits a first stress migration property and a first electromigration property; applying pressure to the aluminum material to substantially fill the contact holes therewith; depositing a different metal material on the aluminum material; and diffusing the different metal material into the aluminum material to form a substantially homogeneous alloyed material layer having a second stress migration property and a second electromigration property. 2. The method of filling contact holes of claim 1, wherein depositing an aluminum material comprises physical vapor deposition of the aluminum material. 3. The method of filling contact holes of claim 1, wherein diffusing the different metal material into the aluminum material to form a substantially homogeneous alloyed material layer comprises heating the aluminum material by irradiating the aluminum material with argon plasma. 4. The method of filling contact holes of claim 1, wherein diffusing the different metal material into the aluminum material to form a substantially homogeneous alloyed material layer comprises simultaneously heating the aluminum material with a heater and irradiating the aluminum material with argon plasma. 5. The method of filling contact holes of claim 1, wherein applying pressure comprises introducing the semiconductor device into a high pressure chamber and pressurizing the high pressure chamber. 6. The method of filling contact holes of claim 5, further comprising maintaining a temperature within the high pressure chamber at about 400[deg.] C. 7. The method of filling contact holes of claim 5, wherein the high pressure chamber is pressurized to more than 500 atm. 8. The method of filling contact holes of claim 1, wherein depositing the different metal material comprises physical vapor deposition of the different metal material. 9. The method of filling contact holes of claim 1, wherein depositing the different metal material comprises vacuum evaporation deposition of the different metal material. 10. The method of filling contact holes of claim 1, further comprising selecting the different metal material to comprise a metal alloy. 11. The method of filling contact holes of claim 1, further comprising selecting the different metal material to comprise a substantially pure metal. 12. The method of filling contact holes of claim 11, further comprising selecting the substantially pure metal to comprise copper. 13. The method of filling contact holes of claim 12, wherein the copper is deposited on the aluminum material through an electroless plating process. 14. The method of filling contact holes of claim 11, further comprising selecting the substantially pure metal to comprise nickel. 15. The method of filling contact holes of claim 14, wherein the nickel is deposited on the aluminum material through an electroless plating process. 16. The method of filling contact holes of claim 1, wherein diffusing the different metal material comprises annealing the different metal material and the aluminum material to form the substantially homogeneous aluminum alloy material. 17.
The method of filling contact holes of claim 1, wherein diffusing the different metal material comprises heating the different metal material sufficiently to diffuse the different metal material into the aluminum material. 18. A method of filling contact holes formed in an insulating layer overlying a substrate of a semiconductor device, comprising: depositing an aluminum material on an outer surface of the insulating layer and over the contact holes; wherein the aluminum material exhibits a first melting point; applying pressure to the aluminum material to substantially fill the contact holes therewith; depositing a different metal material on the aluminum material; and diffusing the different metal material into the aluminum material to form a substantially homogeneous alloyed material layer having a second melting point. 19. The method of filling contact holes of claim 18, further comprising selecting the second melting point of the substantially homogeneous alloyed material layer to be greater than the first melting point of the aluminum material. 20. The method of filling contact holes of claim 18, wherein depositing an aluminum material comprises physical vapor deposition of the aluminum material. 21. The method of filling contact holes of claim 18, wherein diffusing the different metal material into the aluminum material to form a substantially homogeneous alloyed material layer comprises heating the aluminum material by irradiating the aluminum material with argon plasma. 22. The method of filling contact holes of claim 18, wherein diffusing the different metal material into the aluminum material to form a substantially homogeneous alloyed material layer comprises simultaneously heating the aluminum material with a heater and irradiating the aluminum material with argon plasma. 23. The method of filling contact holes of claim 18, wherein applying pressure comprises introducing the semiconductor device into a high pressure chamber and pressurizing the high pressure chamber. 24. The method of filling contact holes of claim 23, further comprising maintaining a temperature within the high pressure chamber at about 400[deg.] C. 25. The method of filling contact holes of claim 23, wherein the high pressure chamber is pressurized to more than 500 atm. 26. The method of filling contact holes of claim 18, wherein depositing the different metal material comprises physical vapor deposition of the different metal material. 27. The method of filling contact holes of claim 18, wherein depositing the different metal material comprises vacuum evaporation deposition of the different metal material. 28. The method of filling contact holes of claim 18, further comprising selecting the different metal material to comprise a metal alloy. 29. The method of filling contact holes of claim 18, further comprising selecting the different metal material to comprise a substantially pure metal. 30. The method of filling contact holes of claim 29, further comprising selecting the substantially pure metal to comprise copper. 31. The method of filling contact holes of claim 30, wherein the copper is deposited on the aluminum material through an electroless plating process. 32. The method of filling contact holes of claim 29, further comprising selecting the substantially pure metal to comprise nickel. 33. The method of filling contact holes of claim 32, wherein the nickel is deposited on the aluminum material through an electroless plating process. 34.
The method of filling contact holes of claim 18, wherein diffusing the different metal material comprises annealing the different metal material and the aluminum material to form the substantially homogeneous aluminum alloy material. 35. The method of filling contact holes of claim 18, wherein diffusing the different metal material comprises heating the different metal material sufficiently to diffuse the different metal material into the aluminum material.
CROSS REFERENCE TO RELATED APPLICATION This application is a divisional of application Ser. No. 09/146,719, filed Sep. 3, 1998, now U.S. Pat. No. 6,124,205, issued Sep. 26, 2000. BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to semiconductor devices and, more particularly, to a low temperature method of filling contact holes or vias with a low melting-point aluminum material and subsequently depositing a second layer dopant for diffusion into the aluminum-filled contact hole or via to form an alloy therein. 2. State of the Art As semiconductor device dimensions shrink, both gap-fill and planarity of the dielectric films become increasingly important. These challenging gap-fill requirements have stimulated a search for new processes and materials. Many of these devices, such as advanced ultra-large scale integrated (ULSI) devices, utilize elaborate, multi-level metallization schemes to enhance performance and achieve functional integration. As these device dimensions shrink, intra-lead capacitance becomes a major limiting factor in determining the total interconnect capacitance. Use of multi-level metal structures incorporating low dielectric constant materials is therefore necessary to limit the impact of capacitance on power, cross-talk, and RC delay of dense, deep sub-half micron interconnects. Due to the ease of its integration, aluminum is a preferred metallization material, offering low contact/via resistances, fewer overall process steps, and improved electromigration performance. While aluminum reflow has been used for filling contacts and vias having widths equal to or smaller than 0.5 [mu]m, aluminum reflow processes have not been widely accepted due to the higher deposition temperatures required in comparison to filling processes employing metals or alloys having lower melting-point temperatures than aluminum materials. Additionally, aluminum reflow processes are usually ineffective in completely filling contacts and vias having high aspect ratios, that is, contacts and vias having a high ratio of length or depth of a hole or via in relation to the preplated diameter of the contact or via. Various methods of spreading aluminum or other conductive film on the principal surface to fill the contact holes are already in practical use. These methods include a high temperature sputter method, a bias sputter method, and a reflow after sputter method. A major disadvantage of these conventional aluminum reflow processes is the sensitivity of reflow to surface conditions, hole profile and the type of substrate material. For example, conventional hot sputter deposition and/or reflow processes rely on the diffusive mobility of the atoms. Reflow characteristics are adversely affected by higher contact/via aspect ratios and the typical protrusion of sputtered barrier layers at the hole entrance, making consistent global filling difficult to achieve. Other detriments to complete filling include the presence of spin-on dielectrics and the associated out-gassing from the vias during the reflow process. Global filling is of particular concern for sub-half micron applications since a feasible aluminum reflow technology must be capable of achieving at least an equivalent yield and reliability as compared to conventional technologies, such as a tungsten plug process. To alleviate some of these problems, a high pressure (>700 atm) forced fill Al-plug process has been used for sub-half micron contact and via hole filling.
This process typically consists of a bake, soft sputter etch, barrier deposition and aluminum plug formation. The aluminum hole filling is achieved via a two-step process. As shown in FIGS. 1 and 2 (representing a section or segment of a semiconductor wafer 30), metal is applied to insulating layer 24 (typically comprising a dielectric such as SiO, boron nitride, and silicon nitride deposited over a substrate 20) through a conventional sputter deposition technique at about 400[deg.] C. Prior to the deposition of aluminum, holes or vias 25 are created (e.g., by etching) in insulating layer 24. The deposited aluminum fills or bridges the mouth of each hole 25 with metal alloy layer 22. However, due to the high aspect ratio of the formed hole and the inherent surface tension of metal alloy layer 22, void 26 usually forms inside each hole below the filled or bridged mouth. The wafer is then transferred under vacuum to a so-called FORCE FILL(TM) Module, shown schematically in FIG. 7, consisting of a high-pressure chamber 80 with two radiant heaters 82 for controlling the temperature of wafer 84. Outlet port 88 is connected to a vacuum source and controls pressurization and removal of gases from high-pressure chamber 80, a wafer-receiving area. Inlet port 86 is connected to a pressurized source of gas, such as argon, for pressure regulation within high-pressure chamber 80 and introduction of a precursor for plasma formation. The deposited aluminum is then forced into the holes by pressurizing the chamber, usually to about 760 atm, with argon while maintaining the temperature at about 400[deg.] C. As a result of the forced external pressure (represented by arrows 27 in FIG. 2), the aluminum bridge over hole or via 25 is deformed or extruded inwardly to accomplish complete hole filling, as shown in FIG. 2. For purposes of the forced fill process, use of a low melting-point aluminum alloy (e.g., alloys of aluminum containing between about 10% and about 60% copper), which flows at reduced temperatures, is preferred over pure aluminum or high melting-point aluminum alloys, such as alloys containing 98% aluminum and 2% copper. As a consequence, because lower temperatures can be used for effective hole filling, the respective wafer or substrate containing the hole undergoes less thermal stress, which decreases the potential for damage to the structures, and ultimately the complete devices, being formed on and in the semiconductor material. On the other hand, high melting-point aluminum alloys, such as the Al-Cu alloy referenced above, possess superior electromigration and stress migration properties in comparison to low melting-point aluminum alloys and would thus be favored for use in contact/via fill processes if the disadvantages thereof could be reduced or eliminated. Thus, it would be advantageous to provide an aluminum plug fill process which could be carried out at reduced temperatures and which also affords the superior electromigration and stress migration properties inherent in high melting-point aluminum alloys. SUMMARY OF THE INVENTION The present invention is directed to an improved method for filling contact holes or vias of semiconductor devices and the resulting structures. The improved method begins with insertion of the semiconductor wafer or other substrate of semiconductive material, having one or more contact holes or vias formed in an insulating layer overlying a wafer substrate, into a high-pressure heated chamber.
A low-melting point base layer of aluminum material is then deposited over the insulating layer and into the contact holes or vias. During the deposition step, the wafer is heated up to the melting point of the aluminum material to reflow the same into the contact hole or via. Once deposition is completed and while maintaining the elevated temperature, the chamber is pressurized to force the aluminum material into the contact holes or vias and thus eliminate voids present therein under the aluminum material base layer. A second layer of material, comprising a metal or alloy to be used as a dopant source, is then deposited over a top surface of the deposited aluminum material base layer and allowed to diffuse into the aluminum material base layer in order to form a substantially homogeneous aluminum alloy within the contact hole or via. The newly formed homogeneous aluminum alloy possesses the desirable characteristics of the previously-mentioned high melting-point aluminum alloys, but without the associated difficulties and disadvantages of depositing such alloys in their preformed state. Formation of the homogeneous aluminum alloy within the contact holes or vias of the wafer thus improves the strength, stress migration, and electromigration properties of the contacts or vias in a viable, economical manner easily applied to existing fabrication methodologies.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
While the specification concludes with claims particularly pointing out and distinctly claiming that which is regarded as the present invention, the advantages of this invention can be more readily ascertained from the following description of the invention when read in conjunction with the accompanying drawings, in which:
FIG. 1 is a cross-sectional view of a portion of an integrated circuit structure created through conventional sputter deposition of an aluminum alloy over a via or contact;
FIG. 2 is a cross-sectional view of the integrated circuit structure of FIG. 1 illustrating a high pressure forced fill process applied subsequent to the deposition step of FIG. 1;
FIG. 3 is a cross-sectional view of a portion of an integrated circuit made in accordance with the present invention after high pressure forced fill of a contact hole or via with the aluminum material base layer;
FIG. 4 is a cross-sectional view of the integrated circuit structure of FIG. 3 after deposition of a diffusion layer over the aluminum material base layer;
FIG. 5 is a cross-sectional view of the integrated circuit structure of FIG. 3 after the diffusion layer has diffused into the underlying aluminum layer to form an alloy of the two materials;
FIG. 6 is a cross-sectional view of a portion of a multilevel wiring structure made in accordance with the principles of the present invention; and
FIG. 7 is a schematic representation of a FORCE FILL(TM) Module used to carry out the present invention.
DETAILED DESCRIPTION OF THE INVENTION
Referring to FIG. 3, a cross-sectional view of a portion of a wafer or integrated circuit segment 30 is depicted. For purposes of this application, the term "wafer" or "integrated circuit" includes not only traditional wafers, but other substrates of semiconductor materials formed in different manners, and specifically contemplates silicon-on-insulator (SOI) structures, silicon-on-ceramic structures, and layers of other semiconductive materials such as gallium arsenide and indium phosphide. For purposes of simplicity, elements common to FIGS.
1 and 2 will hereinafter be numbered identically in subsequent figures. The wafer 30 includes a semiconductive substrate layer 32 and an interlayer isolation or insulation layer 33. A contact hole or via 37 is defined by sidewall 34, extending from a principal or top surface 36 of insulation layer 33, to a bottom wall 35 that is defined by an exposed surface portion of the substrate layer 32. Contact hole 37 is representative of a plurality of contact holes or vias formed in wafer 30 and associated with the same or other circuit structures.
The hole filling process of the invention is suitable for, although not limited to, sub-half micron contact and via hole filling. The method can be applied in the fabrication of a variety of semiconductor devices and ULSI circuits, such as dynamic random access memories (DRAMs), static random access memories (SRAMs), flash memory processors, and application-specific integrated circuits (ASICs). While the diameter of contact hole 37 in most of these devices is typically less than or equal to 0.5 [mu]m, the process can be extended to any diameter at which substantially complete yield of contact filling is achievable. Where multiple-level metal formation is desired, such as in DRAM generation, contacts and vias with varying diameters can be patterned after interlevel dielectric deposition and planarization.
Generally, the hole filling process is initiated by performing the forced fill process, previously described in conjunction with FIG. 2, with a low melting-point aluminum alloy base layer 38 being deposited on top surface 36 of insulation layer 33, as shown in FIG. 3. Low melting-point aluminum alloys suitable for use in the hole-filling step of the present invention include any aluminum alloy having a lower melting point than the alloys typically used in hole filling processes, such as an aluminum alloy containing 98% aluminum and 2% copper, which has a melting point of about 650[deg.] C. Alternatively, low melting-point aluminum alloy base layer 38 can be selectively deposited over the contact hole 37 areas and not over top surface 36 of insulation layer 33. This selective deposition step can be facilitated through the use of a masking step or any other method known in the art for selective deposition of materials.
The aluminum layer used to fill the top of each contact hole 37 may be deposited through conventional sputter deposition techniques (also known as physical vapor deposition (PVD)). In this preferred method, a solid slab of a low melting-point aluminum alloy is electrically grounded within a vacuum chamber to form a "target." A gas, typically argon, is introduced into the chamber and is ionized to a positive charge, thus forming a plasma. The positively charged argon atoms are attracted to the grounded target and accelerate toward it, eventually striking the target and causing aluminum atoms to scatter into the vacuum chamber. The sputtered aluminum atoms or molecules scatter in the chamber, with some coming to rest on wafer 30. Once the initial aluminum alloy layer is deposited, the plasma continues to contact and heat aluminum alloy base layer 38, thus facilitating reflow of aluminum alloy base layer 38 into the contact holes 37. Advantageously, heat produced in the aluminum alloy base layer 38 due to argon ion plasma irradiation dissipates through the wafer 30 towards a wafer support structure (not shown) of the PVD chamber.
The dissipation of heat keeps wafer 30 at a sufficiently low temperature to prevent an adverse chemical reaction or thermal stress from taking place between aluminum alloy base layer 38 and both insulation layer 33 and substrate layer 32 of wafer 30.
The sputter deposition technique is preferably conducted at a temperature of about 400[deg.] C. Radiant heaters 82 (FIG. 7), contained within the high-pressure chamber 80 (FIG. 7), can be used to subsequently heat aluminum alloy base layer 38 to a sufficiently high temperature to cause the aluminum alloy base layer 38 to reflow into contact hole 37. Alternatively, it is possible to heat the aluminum alloy base layer 38 for reflow while simultaneously irradiating it with the plasma, especially when a reduction in the argon ion and plasma energy is desired.
A principal feature of the sputtering process is that the "target" material is deposited on the substrate 32 over insulation layer 33 without chemical or compositional change, such as is seen in the process of chemical vapor deposition (CVD). Deposition of aluminum through sputtering, as opposed to a CVD process, eliminates the need for deposition of TiN, which is required prior to CVD to ensure consistent nucleation of CVD-deposited aluminum. Another advantage of sputtering over CVD is the conservation of target material composition.
Adhesion of the sputtered film to the top surface 36 of the insulation layer 33 is also improved in comparison to evaporation processes (such as electron-beam evaporation and inductive heating evaporation). The higher energy of the arriving aluminum atoms provides better adhesion, and the plasma environment (i.e., the ionized argon gas) inside the chamber has a "scrubbing" action on principal surface 36 and on the surfaces within contact hole 37 that cleans these surfaces and thus enhances adhesion.
Various sputtering methods can be used in the method of the invention, such as diode sputtering using direct current, diode sputtering using radio frequency, triode sputtering, or magnetron sputtering. Sputter deposition of aluminum according to such processes bridges the top of each contact hole 37 and at least a portion of top surface 36 of the insulation layer 33 with aluminum, usually leaving an underlying void 26 inside hole 37, as previously described and shown in FIG. 1. High aspect ratio contacts and vias (i.e., contacts and vias having a high ratio of length or depth of a hole or via in relation to the preplated diameter of the contact or via) are particularly prone to incomplete filling of the hole 37.
According to the principles of the present invention, it is possible to thoroughly fill contact hole 37 with a low melting-point aluminum alloy base layer 38, even where contact hole 37 has a high aspect ratio, while maintaining semiconductor substrate 32 at an appreciably low temperature, such as 400[deg.] C. This low temperature process advantageously prevents impurities, usually emanating from insulation layer 33, from being taken into aluminum alloy base layer 38, giving aluminum alloy base layer 38 a substantially flat or planar surface which facilitates its working into and alignment with the wirings and surrounding structures. Furthermore, the low temperature process decreases the attendant thermal stress typically seen between substrate 32, insulation layer 33, and aluminum alloy base layer 38 when using high temperature reflow processes.
Removal of the void inside contact hole 37 (already removed in FIG.
3) is accomplished through a forced fill process, as described above. However, because low melting-point aluminum alloys are used in place of the aluminum alloys traditionally used in the forced fill process (e.g., an aluminum alloy containing 98% Al and 2% Cu, pure Al, or metals and alloys having a melting point greater than that of pure Al), operating pressures and temperatures may be reduced below conventional levels while still achieving complete hole filling. Alternatively, due to the lower melting point of the selected aluminum alloys, complete hole filling can be accomplished more rapidly when applying conventional operating pressures and temperatures.
As shown in FIG. 4, following the deposition and forced fill steps, a second diffusion layer 40 of metal or alloy is deposited onto an exposed or outer surface 39 of the aluminum alloy base layer 38. Suitable alloys for use as second diffusion layer 40 include alloys of aluminum containing from about 10% to about 60% copper, from about 10% to about 70% silver, greater than about 20% zinc, and greater than about 30% tin. In one preferred embodiment, substantially pure copper is used as the diffusion or dopant source and forms the second diffusion layer 40. Alternatively, an Al-Cu alloy can be used as a copper diffusion source. Suitable elements for use as a diffusion or dopant source include any metal or alloy which can be made to diffuse into the underlying aluminum alloy base layer 38 and form a homogeneous aluminum alloy having desired electromigration and stress migration properties applicable for ULSI devices. Preferred alloys for use as second diffusion layer 40 include alloys of aluminum containing copper, silver, zinc, and tin. Preferred metals for use as second diffusion layer 40 include copper, silver, zinc, tin, and magnesium.
Where aluminum alloy base layer 38 is selectively deposited over the contact hole 37 areas and not over top surface 36 of insulation layer 33, as previously described in the alternative embodiment, second diffusion layer 40 of metal or alloy is selectively deposited onto exposed or outer surface 39 of the aluminum alloy base layer 38. This selective deposition step can be facilitated through the use of a masking step or any other method known in the art for selective deposition of materials.
The metals and alloys forming second diffusion layer 40 can be deposited through any suitable deposition technique. One preferred deposition technique involves the deposition of copper by an electroless process. Traditional electroless copper plating processes, wherein an alkaline chelated copper reducing solution deposits a thin copper layer (usually 20 to 100 [mu]m) on surfaces, can be employed in the instant process. Generally, the electroless plating process is initiated by combining a source of copper, such as copper sulfate (CuSO4), with a reducing agent (preferably formaldehyde) to reduce the copper cations to elemental copper (i.e., Cu²⁺ + 2e⁻ → Cu⁰). Sodium hydroxide is simultaneously combined to maintain the pH between about 11.5 and 12.5 in order to optimize aldehyde reduction. Complexers, such as EDTA and tartrates, hold the copper cations in solution at the high pH. In such a manner, metals such as copper and nickel can be deposited on underlying aluminum alloy base layer 38 to form second diffusion layer 40.
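The plating-bath window just described (a CuSO4 copper source, a formaldehyde reducer, sodium hydroxide holding the pH between about 11.5 and 12.5, and an EDTA or tartrate complexer) lends itself to a simple parameter check. The following is a minimal Python sketch of such a check; the function and parameter names are illustrative assumptions and not part of the process description.

```python
# Minimal sketch of a bath-parameter check for the electroless copper step
# described above. Only the pH window (about 11.5-12.5) and the bath
# constituents come from the description; everything else is illustrative.

def electroless_bath_ok(ph: float, has_copper_source: bool,
                        has_reducer: bool, has_complexer: bool) -> bool:
    """Return True if the plating bath matches the described operating window."""
    # Sodium hydroxide holds the pH between about 11.5 and 12.5 to
    # optimize aldehyde reduction (formaldehyde: Cu2+ + 2e- -> Cu0).
    in_ph_window = 11.5 <= ph <= 12.5
    # CuSO4 supplies the copper; EDTA or tartrates keep the copper
    # cations in solution at the high pH.
    return in_ph_window and has_copper_source and has_reducer and has_complexer

if __name__ == "__main__":
    print(electroless_bath_ok(12.0, True, True, True))   # True: within window
    print(electroless_bath_ok(10.8, True, True, True))   # False: pH too low
```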
Those skilled in the art will recognize and apply the process steps, specific operating conditions, and process controls required to carry out electroless plating of second diffusion layer 40 according to the principles of this invention.
Vacuum evaporation is another technique which can be used for the deposition of metals on aluminum alloy base layer 38. Vacuum evaporation takes place inside an evacuated chamber, where a metal is heated to a liquid state so that its atoms or molecules evaporate into the surrounding atmosphere within the chamber. Any known and suitable evaporation method (e.g., filament, electron beam, and flash hot plate evaporation) can be used to evaporate the metals, which will eventually form second diffusion layer 40, in the vacuum system. Vacuum evaporation is preferably performed with pure metals, as alloys are difficult to deposit by this method due to the different evaporation rates at specific temperatures for each element comprising the alloy, which would lead to deposition of a second diffusion layer 40 having a different composition than the source alloy material.
Another preferred deposition technique involves PVD or sputter deposition, as described above with respect to the deposition of aluminum alloy base layer 38. In contrast to the sputter deposition of aluminum alloy base layer 38, the target can comprise any suitable or desirable metal (except aluminum) or alloy which makes an effective diffusion or dopant source (e.g., Cu or AlCu). As previously discussed, various sputtering methods can be used, such as diode sputtering using direct current, diode sputtering using radio frequency, triode sputtering, or magnetron sputtering.
Sputter deposition is particularly well suited when depositing an alloy as second diffusion layer 40, since sputter deposition does not rely on evaporation of materials having different evaporation rates. For example, in sputtering, an aluminum and 2% copper target material yields a substantially unchanged aluminum and 2% copper alloy second diffusion layer 40 over aluminum alloy base layer 38.
As shown in FIG. 5, once the second diffusion layer 40 is deposited onto the aluminum alloy base layer 38, the second layer element(s) diffuse into the base layer and form a substantially homogeneous aluminum alloy layer 50. The second layer element(s) 42, constituting the material of the dopant source, are uniformly distributed throughout the aluminum alloy base layer 38 by subjecting wafer 30 to elevated temperatures (preferably 400-500[deg.] C.), thus forming new alloy layer 50 over insulation layer 33 and within the contact hole 37. An annealing step can be added to improve dopant distribution and further diffuse the second layer element(s) 42 into the aluminum alloy base layer 38.
In another preferred embodiment of the present invention, a second insulation layer 78 can be deposited on homogeneous aluminum alloy layer 50 to create a multilevel wiring structure 70, as shown in FIG. 6. A third insulation layer 72 can be deposited between the second insulation layer 78 and the homogeneous aluminum alloy layer 50 to provide insulation between wiring structures being formed. Once second insulation layer 78 is deposited, the aforementioned steps (previously described in conjunction with FIGS. 3 through 5) are repeated to form a structure comprising second homogeneous aluminum alloy layer 74 which fills second hole 76 formed within second insulation layer 78.
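As a rough guide to the diffusion step described above in conjunction with FIG. 5, the characteristic diffusion depth can be estimated from the textbook relations D = D0 * exp(-Ea / (k * T)) and L ~ 2 * sqrt(D * t). The Python sketch below is illustrative only; the pre-exponential factor D0 and activation energy Ea are hypothetical placeholder values, since the description fixes only the anneal window of about 400-500[deg.] C.

```python
# Back-of-the-envelope sketch of the FIG. 5 diffusion step, using the
# textbook relations D = D0 * exp(-Ea / (k * T)) and L ~ 2 * sqrt(D * t).
# D0 and Ea are illustrative placeholders for a dopant diffusing in the
# aluminum base layer; they are not taken from the description.

import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def diffusion_length_um(d0_cm2_s: float, ea_ev: float,
                        temp_c: float, time_s: float) -> float:
    """Characteristic diffusion length, in micrometers."""
    temp_k = temp_c + 273.15
    d = d0_cm2_s * math.exp(-ea_ev / (K_BOLTZMANN_EV * temp_k))  # cm^2/s
    return 2.0 * math.sqrt(d * time_s) * 1.0e4  # cm -> um

if __name__ == "__main__":
    # Hypothetical values: D0 = 0.15 cm^2/s, Ea = 1.3 eV, 30 min at 450 C.
    print(f"{diffusion_length_um(0.15, 1.3, 450.0, 1800.0):.2f} um")
```

Under these placeholder values the estimate comes out to several micrometers, ample for homogenizing a sub-half micron plug; actual depths depend on the dopant chosen.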
In carrying out reflow of second homogeneous aluminum alloy layer 74 into second hole 76 formed in the second insulation layer 78, attention should be directed to avoiding any disturbance, such as reflow, of the previously-formed homogeneous aluminum alloy layer 50 in underlying hole 37. Due to the relatively higher melting point of homogeneous aluminum alloy layer 50 as compared to the low melting-point aluminum material initially being deposited within second hole 76, use of irradiation, either alone or in combination with heating of the second insulating layer by the heater to a temperature slightly above the melting point of the low melting-point aluminum material, is effective in preventing such reflow of existing hole fill materials.
While the hole fill method of the present invention has been described in terms of various preferred embodiments, it is understood that other methods could be adopted by one skilled in the art. For example, various deposition techniques, such as ion deposition, could be employed to deposit the aluminum alloy or second (dopant) layers. Where plasma-dependent deposition is employed, various inert gases could be used for generation of ion plasmas. Where alloys are deposited through PVD techniques, a single target consisting of an alloy can be used, or individual targets, each containing one of the individual metals which comprise the alloy, can be used to deposit the selected alloy in the desired constituent ratios. Accordingly, it is understood that the scope of the invention is not to be limited except as otherwise set forth in the claims. |
Apparatuses, methods and storage medium for providing access from outside a multicore processor System on Chip (SoC) are disclosed herein. In embodiments, an SoC may include a memory to store a plurality of embedded values correspondingly associated with a plurality of architecturally identical cores. Each embedded value may indicate a default voltage for a respective one of the plurality of architecturally identical cores. In embodiments, an apparatus may include one or more processors, devices, and/or circuitry to provide access from outside the multicore processor SoC to individually configure voltages of the plurality of architecturally identical cores to values that are different than the values of the default voltages. Other embodiments may be described and/or claimed. |
Claims
What is claimed is:
1. An apparatus with per-core voltage adjustability, the apparatus comprising:
a multicore processor SoC (system-on-chip) including:
a plurality of architecturally identical cores, wherein a first core of the plurality of architecturally identical cores has a first physical characteristic and a second core of the plurality of architecturally identical cores has a second physical characteristic that is different than the first physical characteristic; and
a memory to store a plurality of embedded values correspondingly associated with the plurality of architecturally identical cores, each embedded value to indicate a default voltage for a respective one of the plurality of architecturally identical cores; and
a component to provide access from outside the multicore processor SoC to individually set voltages of the plurality of architecturally identical cores to values that are different than the values of the default voltages.
2. The apparatus of claim 1, wherein the multicore processor SoC further includes:
a voltage regulator coupled to the plurality of architecturally identical cores, the voltage regulator to provide the individually set voltages to the plurality of architecturally identical cores.
3. The apparatus of claim 1, further comprising an external voltage regulator coupled to the multicore processor SoC.
4. The apparatus of claim 3, wherein the component includes a plurality of pins each corresponding to a respective core of the plurality of architecturally identical cores.
5. The apparatus of claim 1, wherein the multicore processor SoC further includes:
a power control unit (PCU); and
memory having instructions stored thereon that, in response to execution by the PCU, cause the PCU to perform operations, to:
recognize information of a signal received by the multicore processor SoC via the component; and
control a voltage regulator of the multicore processor SoC based on the information of the recognition to cause the voltage regulator to provide a first voltage to a first core of the plurality of architecturally identical cores responsive to the recognition of the information, wherein a magnitude of the first voltage is different than a magnitude of a corresponding one of the default voltages.
6. The apparatus of claim 5, wherein the information of the recognition includes at least one of an actual voltage value corresponding to a respective one of the cores, an offset from a respective one of the default voltages, or a base voltage value and an offset from the base voltage value.
7. The apparatus of claim 1, further comprising a module to facilitate a user to access the component to individually set voltages of the plurality of architecturally identical cores to values that are different than the values of the default voltages.
8. The apparatus of claim 7, wherein the module is to message the component in response to inputs from the user.
9. The apparatus of any of claims 1-8, wherein the component includes a control register.
10. The apparatus of any of claims 1-8, further comprising a Basic Input Output System (BIOS) to facilitate a user to access the component to individually set voltages of the plurality of architecturally identical cores to values that are different than the values of the default voltages.
11.
An apparatus for programming a processor voltage on a per-core basis, the apparatus comprising:
means for recognizing information of a signal received by a multicore processor SoC (system on chip), wherein a first core of the multicore processor SoC has a first physical characteristic and a second core of the multicore processor SoC has a second physical characteristic that is different than the first physical characteristic; and
means for controlling a voltage regulator of the multicore processor SoC based on the information of the recognition to cause the voltage regulator to selectively provide a first voltage to a first core of the plurality of cores.
12. The apparatus of claim 11, further comprising means for selectively providing a second voltage to a second core of the plurality of cores, wherein said selective providing of the second voltage to the second core is contemporaneous with the selective providing of the first voltage to the first core.
13. The apparatus of claim 12, wherein the second voltage is a default voltage corresponding to said second core.
14. The apparatus of any of claims 11-13, wherein:
the information of the recognition includes at least one of an actual voltage value corresponding to a respective one of the cores, an offset from a default voltage corresponding to the first core, or a base voltage value and an offset from the base voltage value; and
the first voltage corresponds to at least one of the actual voltage value, the offset from the respective one of the default voltages, or the offset from the base voltage value.
15. A method to program a processor voltage on a per-core basis, the method comprising:
accessing a plurality of embedded values correspondingly associated with a plurality of cores of a multicore processor, each embedded value indicating a default voltage for a respective one of the plurality of cores;
at a first time, providing core voltages to the plurality of cores, the core voltages corresponding to the default voltages;
responsive to receiving an override selection, identifying a new core voltage that is different than the provided core voltages; and
at a second time that is later than the first time, providing the new core voltage to one of the cores of the plurality of cores.
16. The method of claim 15, further comprising:
at the second time, providing a different new core voltage to a different one of the cores of the plurality of cores.
17. The method of claim 15, wherein the override selection includes an overvolt command.
18. The method of claim 15, wherein the override selection includes an undervolt command.
19. The method of claim 15, wherein the override selection indicates a voltage range, and wherein the method further comprises:
selecting the new core voltage from the voltage range.
20. The method of any of claims 15-19, further comprising:
at the second time, providing a core voltage of the core voltages to a different one of the cores of the plurality of cores.
21.
An apparatus to program a processor voltage on a per-core basis, the apparatus comprising:
means for accessing a plurality of embedded values correspondingly associated with a plurality of cores of a multicore processor, each embedded value indicating a default voltage for a respective one of the plurality of cores;
means for providing core voltages to the plurality of cores at a first time, the core voltages corresponding to the default voltages;
means for identifying a new core voltage that is different than the provided core voltages responsive to receiving an override selection; and
means for providing the new core voltage to one of the cores of the plurality of cores at a second time that is later than the first time.
22. The apparatus of claim 21, further comprising:
means for providing a different new core voltage to a different one of the cores of the plurality of cores at the second time.
23. The apparatus of claim 21, wherein the override selection includes an overvolt command or an undervolt command.
24. The apparatus of claim 21, wherein the override selection indicates a voltage range, and wherein the apparatus further comprises:
means for selecting the new core voltage from the voltage range.
25. The apparatus of any of claims 21-24, further comprising:
means for providing a core voltage of the core voltages to a different one of the cores of the plurality of cores at the second time. |
PROVIDING ACCESS FROM OUTSIDE A MULTICORE PROCESSOR SoC TO INDIVIDUALLY CONFIGURE VOLTAGES
Related Application
This application claims priority to U.S. Patent Application 15/007,021, entitled "PROVIDING ACCESS FROM OUTSIDE A MULTICORE PROCESSOR SoC TO INDIVIDUALLY CONFIGURE VOLTAGES," filed January 26, 2016.
Technical Field
The present disclosure relates to multicore processors, for example multicore processors of a System on Chip (SoC) architecture, and more specifically relates to access from outside a multicore processor to individually configure voltages of the processor cores.
Background
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
In order to manage manufacturing variation during fabrication of multicore processors while maintaining quality and reliability, conservative guard bands are employed during testing, and devices are "binned" or classified based on their speed and power characteristics. Conventional speed binning treats multicore processors as single-core devices by assigning a single rated speed and minimum operating voltage for the processor as a whole. The rated speed and minimum voltage typically reflect the speed of the slowest core and the minimum voltage of the core having the poorest minimum voltage.
Brief Description of the Drawings
Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
FIG. 1 illustrates an example system having access from outside a multicore processor to individually configure voltages of the processor cores, according to various embodiments.
FIG. 2 illustrates example operations that may be performed by the system of FIG. 1, according to various embodiments.
FIG. 3 illustrates an example of a system that may include a Processor Control Unit (PCU) and an integrated voltage regulator, according to various embodiments.
FIG. 4 illustrates an example process that may be used in the example system of FIG. 3, according to various embodiments.
FIG. 5 illustrates an example computing device that may employ the apparatuses and/or methods described herein, according to various embodiments.
Detailed Description
Apparatuses, methods and storage medium associated with computing that includes providing access from outside a multicore processor SoC to individually configure voltages of the processor cores are disclosed herein. In embodiments, an apparatus may include one or more processors, devices, and/or circuitry to provide access from outside the multicore processor SoC to individually configure voltages of the plurality of architecturally identical cores to values that are different than the values of the default voltages.
In the following detailed description, reference is made to the accompanying drawings which form a part hereof, wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments that may be practiced.
It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.
Aspects of the disclosure are disclosed in the accompanying description. Alternate embodiments of the present disclosure and their equivalents may be devised without departing from the spirit or scope of the present disclosure. It should be noted that like elements disclosed below are indicated by like reference numbers in the drawings.
Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments.
For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).
The description may use the phrases "in an embodiment," or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.
As used herein, the term "circuitry" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
In known multicore processor SoCs, for a variety of reasons, such as manufacturing variations, all processor cores (hereinafter, simply "cores") are not created equal. While the cores may be logically identical (e.g., architecturally identical), the cores may have different physical characteristics. One core of a multicore processor SoC may be capable of operating at a higher performance level, e.g., higher frequency or lower voltage, than another core of the multicore processor SoC.
In some processors, at manufacturing time, different voltages, which may be called "manufacturing fused voltages," may be associated with the different cores, reflective of their different operating capability. The manufacturing fused voltages may configure a fixed maximum voltage for each core. The manufacturing fused voltages may be stored in read only memory of the multicore processor.
Performance modification tools, sometimes referred to as "overclocking" tools, may provide voltage configurability. In known performance modification tools, an operating system interface or a Basic Input Output System (BIOS) interface may communicate with a Processor Control Unit (PCU) to enable a system administrator to select, by a signal to the PCU, an operating voltage that is different than at least some of the fused voltages of the multicore processor.
For instance, an example six-core multicore processor that is capable of operating at 1.55V (to achieve a higher frequency and/or performance level) may include a read only memory storing the voltages 1.3V, 1.35V, 1.55V, 1.35V, 1.35V, and 1.3V for core 0, core 1, core 2, core 3, core 4, and core 5, respectively. A system administrator may, accepting the associated tradeoffs, configure the multicore processor for 1.55V operation with known performance modification tools, to attempt to operate the cores at voltages 1.55V, 1.55V, 1.55V, 1.55V, 1.55V, and 1.55V for core 0, core 1, core 2, core 3, core 4, and core 5, respectively. If the computing system using the multicore processor appears to be stable at the administered multicore processor setting (e.g., 1.55V in the above example), certain tradeoffs are still realized (despite the apparent stability). These tradeoffs may include accelerated reliability degradation and thermal considerations. The higher the voltage, the shorter the life of a given processor core will be. All processor cores are subjected to accelerated degradation because they all run at the voltage required by the "weakest" core. The administered multicore processor setting will also result in higher processor package temperatures. This could result in throttling (lower performance) and/or may require a more aggressive, and thus more expensive, cooling solution.
Various embodiments disclosed herein enable the ability to externally configure the voltage of each core of a multicore processor with an SoC architecture, which may be used to improve performance, power efficiency, reliability, or the like, or combinations thereof. In an example, a system may support user/system/software programmability of processor core voltage for each processor core. In an example, the system may support individual processor core voltages above a respective fixed maximum voltage of the manufacturing fused voltages of the multicore processor.
Various embodiments may include an externally configurable interface for per-core voltage settings (e.g., access from outside the multicore processor SoC to individually configure voltages to values that are different than the values of the manufacturing fused voltages). In an example, a processor interface, accessible to BIOS or other software (e.g., an operating system, driver, application, etc.), may enable configuration of voltage on a per individual core basis. In an example, the interface may utilize register(s), such as control register(s) (e.g., an MSR (model specific register)), a messaging interface, such as an OC (overclocking) Mailbox, or the like, or combinations thereof. The interface may provide more than one format in which external configuration may be managed, e.g., an actual voltage value per processor core, an offset from a default such as the manufacturing fused voltages, a single voltage value and an offset from the single voltage value, or the like, or combinations thereof.
Various embodiments may include an automated system, which may be internal to the processor. In an example, the automated system may include a PCU to scale and/or select voltage values for individual processor cores based on information received from the externally configurable interface, such as based on an externally requested frequency (e.g., a user-selected frequency or a frequency selected by an external system such as an application of an operating system associated with the multicore processor).
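Returning to the six-core example above, the following minimal Python sketch contrasts the conventional uniform override (every core raised to the voltage required by the target core) with a per-core override. The data structures and function names are illustrative assumptions and do not represent an actual processor interface.

```python
# Sketch of the six-core example: per-core manufacturing fused defaults
# versus a uniform 1.55V override. Illustrative only; not a real interface.

FUSED_DEFAULTS_V = [1.30, 1.35, 1.55, 1.35, 1.35, 1.30]  # cores 0..5

def uniform_override(defaults, volts):
    """Conventional tools: one voltage for every core ("weakest"-core rule)."""
    return [volts] * len(defaults)

def per_core_override(defaults, overrides):
    """Per-core configurability: override only the selected cores."""
    return [overrides.get(core, v) for core, v in enumerate(defaults)]

if __name__ == "__main__":
    legacy = uniform_override(FUSED_DEFAULTS_V, 1.55)
    percore = per_core_override(FUSED_DEFAULTS_V, {2: 1.55})
    print(legacy)                              # every core stressed at 1.55V
    print(percore)                             # core 2 at 1.55V, others at defaults
    print(sum(legacy) / 6, sum(percore) / 6)   # average voltage comparison
```

Note how the per-core override leaves the average core voltage, and hence the heat and reliability degradation discussed above, lower than the uniform override.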
The automated system may receive an external configuration input including and/or indicating a request for individualized voltage mode. The request may be from BIOS, other software (such as an operating system), a register (such as a control register), a messaging interface, or the like, or combinations thereof. The PCU may generate an estimate of how to scale an operation characteristic, e.g., scaling up voltage, and then apply the PCU generated estimate responsive to receiving the request.
Various embodiments may include a multicore processor, e.g., a multicore processor of an SoC architecture, of a computing system. The multicore processor may include an interface exposed to the computing system, the exposed interface to accept and implement voltage values requested by the computing system. The computing system may include software code (of BIOS, an application, an operating system, etc.) to communicate with the exposed interface to program the multicore processor with the voltage values.
Various embodiments may improve performance and/or reliability. The increased performance may be associated with a lower individual voltage for at least one core, which may create less heat. The lower heat may result in a higher "overclock" to another core, which may provide improved computing performance. The lower heat may result in improved performance of automated turbo operation systems. The improved reliability may be based on reduction of the average voltage per core.
FIG. 1 illustrates an example system having access from outside a multicore processor to individually configure voltages, according to various embodiments.
The system 100 may include a multicore processor 21, e.g., an SoC multicore processor. The multicore processor 21 may include a plurality of cores 99, e.g., a plurality of architecturally identical cores. A first core of the plurality of cores 99 may have a first physical characteristic, and a second core of the plurality 99 may have a second physical characteristic that is different than the first physical characteristic. The different physical characteristics may be related to process, heat, or temperature variations during manufacturing, the different relative positions of the cores of the plurality 99 in the multicore processor 21, or the like, or combinations thereof.
The multicore processor 21 may include a memory 23 storing a plurality of embedded values 22 correspondingly associated with the plurality of cores 99, each embedded value to indicate a default voltage for a respective one of the plurality of cores 99.
A component 51 may provide access from outside the multicore processor 21 to individually set/program voltages of the plurality of cores 99 to values that are different than the embedded values 22. The component 51 may receive an external selection 15 to individually override at least one of the embedded values 22, e.g., to override at least a subset of the values 22 and/or to override one of the values 22 differently than another one of the values. The external selection 15 may be from BIOS or other software (an application, an operating system) of a computing system in which the multicore processor 21 resides. In various embodiments, the component 51 may include a register-based interface, a message-based interface, an instruction-based interface, pins, or the like, or combinations thereof. In an example, the multicore processor 21 may utilize a discrete voltage regulator that provides voltage to the multicore processor 21.
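Before turning to the discrete voltage regulator arrangement, it may help to sketch how a request received through the component 51, in any of the formats described earlier (an actual voltage value per core, an offset from a default, or a single base value plus an offset), might be resolved to a target core voltage. The Python sketch below is illustrative; the format tags and field names are hypothetical assumptions, not an actual mailbox or register layout.

```python
# Sketch of resolving an override request in the three formats described
# above. The "format" tags and field names are hypothetical placeholders.

def resolve_voltage(request: dict, fused_default: float) -> float:
    """Return the target core voltage implied by an override request."""
    fmt = request["format"]
    if fmt == "absolute":              # an actual voltage value per core
        return request["volts"]
    if fmt == "offset_from_default":   # an offset from the fused default
        return fused_default + request["offset"]
    if fmt == "base_plus_offset":      # a single base value plus an offset
        return request["base"] + request["offset"]
    raise ValueError(f"unknown request format: {fmt}")

if __name__ == "__main__":
    print(resolve_voltage({"format": "absolute", "volts": 1.40}, 1.25))
    print(resolve_voltage({"format": "offset_from_default", "offset": 0.05}, 1.25))
    print(resolve_voltage({"format": "base_plus_offset",
                           "base": 1.30, "offset": 0.10}, 1.25))
```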
The discrete voltage regulator may be an external voltage regulator. In such a case, the component 51 may include pins (e.g., input pins) that couple the multicore processor 21 to each voltage regulator of multiple individual voltage regulators or multiple individual voltage regulator components of the external voltage regulator. The external voltage regulator may be controlled by circuitry (e.g., the PCU) of the multicore processor 21 to provide, over the pins, voltages at least one of which is different than the manufacturing fused voltages. The component 51 may also include the controlling circuitry, such as a PCU and code, such as microcode and/or pcode, to determine a voltage regulator setting for the external voltage regulator.
FIG. 2 illustrates example operations that may be performed by the system of FIG. 1, according to various embodiments.
In block 201, the system 100 may access a plurality of embedded values correspondingly associated with a plurality of cores of the multicore processor 21. The access may occur at a first time.
In block 202, the system 100 may provide the core voltages to the plurality of cores. The core voltages may correspond to the default voltages. The system 100 may provide the core voltages by a control processor to read the plurality of embedded values and signal an integrated voltage regulator, such as a Fully Integrated Voltage Regulator (FIVR), to provide the core voltages. In another example, the system 100 may provide the core voltages by a component, such as BIOS, to allow an external voltage regulator to provide the core voltages.
In block 203, the system 100 may identify a new core voltage that is different than a corresponding one of the provided core voltages responsive to receiving an override selection. The override selection may be a manually determined override selection (by a system administrator), or an otherwise determined override selection, for instance by software such as a tuning application, BIOS, an operating system, or the like, or combinations thereof.
In block 204, the system 100 may provide the new core voltage to a corresponding one of the cores of the plurality of cores. The system 100 may provide the new core voltage at a second time that is later than the first time.
FIG. 3 illustrates an example of a system that may include a PCU 253 and an integrated voltage regulator (FIVR 255), according to various embodiments. In the system 200, the multicore processor 21 may include code 254 (e.g., microcode and/or pcode) of a PCU 253 to control the FIVR 255 of the multicore processor 21.
The code 254 may recognize an external selection 215 to individually override fused voltage(s) corresponding to the cores 99. In an example, the code 254 may utilize a protocol of a messaging interface or other interface to recognize the external selection 215.
The code 254 may control the FIVR 255 to cause the FIVR 255 to provide Vcores 257 to the cores 99 responsive to receiving the external selection 215. For example, if the FIVR 255 is already providing a voltage corresponding to a respective one of a plurality of embedded values 22 of the memory 23, the FIVR 255 may change at least one of the provided voltages based on the control from the PCU 253. The respective core(s) of the cores 99 may receive the new core voltage that is different than the previously provided voltage.
FIG. 4 illustrates an example process that may be used in the example system of FIG.
3, according to various embodiments.
In block 301, the system 200 may be powered on (or cold reset). In block 302, the PCU 253 may read the fuse values 22 for the core voltages from the memory 23 and may configure the cores accordingly.
In block 303, the system 200 may be operating via BIOS/EFI (extensible firmware interface) or an operating system. In block 304, a software automation or a system administrator may determine a new voltage for at least one individual core. The code 254 may recognize a request for the new voltage using the protocol.
In block 305, the system 200 may override the corresponding fused voltage(s), e.g., the PCU 253 may control the FIVR 255 to cause the new core voltage(s) to be provided to the appropriate one(s) of the cores 99. In block 306, the system 200 may output an indication to BIOS, an application, and/or the operating system, that the modification setting (e.g., a tuning or overclocking setting for the new core voltage) was granted, e.g., granted by the PCU 253. The process may be partially repeated (the control processor of the multicore processor may poll for a next request), as shown by the return arrow, for a next new voltage (which may also be determined by a software automation or a system administrator).
In an example, a six-core multicore processor that is capable of operating two of its cores at higher frequencies with 1.40V may include a read only memory storing the voltages 1.3V, 1.35V, 1.25V, 1.35V, 1.35V, and 1.25V for core 0, core 1, core 2, core 3, core 4, and core 5, respectively. A system administrator may, accepting the associated tradeoffs, configure these two cores for 1.40V operation, to attempt to operate the cores at voltages 1.3V, 1.35V, 1.40V, 1.35V, 1.35V, and 1.40V for core 0, core 1, core 2, core 3, core 4, and core 5, respectively. As such, cores 2 and 5 may be configured to a higher frequency without the need to change the voltages of the other cores.
FIG. 5 illustrates an example computing device that may employ the apparatuses and/or methods described herein, according to various embodiments.
Example computing device 500 may employ the apparatuses and/or methods described herein, in accordance with various embodiments. As shown, computing device 500 may include a number of components, such as one or more processor(s) 504 (one shown) and at least one communication chip 506.
In various embodiments, the one or more processor(s) 504 each may include one or more processor cores. At least one of the one or more processor(s) 504 may be a multicore processor SoC of FIG. 1 or 3. In various embodiments, the at least one communication chip 506 may be physically and electrically coupled to the one or more processor(s) 504. In further implementations, the communication chip 506 may be part of the one or more processor(s) 504. In various embodiments, computing device 500 may include a printed circuit board (PCB) 502. For these embodiments, the one or more processor(s) 504 and communication chip 506 may be disposed thereon. In alternate embodiments, the various components may be coupled without the employment of PCB 502.
Depending on its applications, computing device 500 may include other components that may or may not be physically and electrically coupled to the PCB 502.
These other components include, but are not limited to, a memory controller (not shown), volatile memory (e.g., dynamic random access memory (DRAM) 520), nonvolatile memory such as read only memory (ROM) 524, flash memory 522, an I/O controller (not shown), a digital signal processor (not shown), a crypto processor (not shown), a graphics processor 530, one or more antennas 528, a display (not shown), a touch screen display 532, a touch screen controller 546, a battery 536, an audio codec (not shown), a video codec (not shown), a global positioning system (GPS) device 540, a compass 542, an accelerometer (not shown), a gyroscope (not shown), a speaker 550, a camera 552, and a mass storage device (such as a hard disk drive, a solid state drive, a compact disk (CD), or a digital versatile disk (DVD)) (not shown), and so forth.
In some embodiments, the one or more processor(s) 504, flash memory 522, and/or a storage device (not shown) may include associated firmware (not shown) storing programming instructions configured to enable computing device 500, in response to execution of the programming instructions by one or more processor(s) 504, to practice all or selected aspects of the methods described herein. For example, the programming instructions may implement the control processor, e.g., the PCU, described earlier with references to the respective ones of Figures 1-4. In various embodiments, these aspects may additionally or alternatively be implemented using hardware separate from the one or more processor(s) 504, flash memory 522, or the storage device. For example, the alternate hardware may include the earlier described control processor equipped with code, e.g., microcode and/or pcode, to perform the operations earlier described with references to the respective ones of Figures 1-4.
The communication chips 506 may enable wired and/or wireless communications for the transfer of data to and from the computing device 500. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 506 may implement any of a number of wireless standards or protocols, including but not limited to IEEE 802.20, Long Term Evolution (LTE), LTE Advanced (LTE-A), General Packet Radio Service (GPRS), Evolution Data Optimized (Ev-DO), Evolved High Speed Packet Access (HSPA+), Evolved High Speed Downlink Packet Access (HSDPA+), Evolved High Speed Uplink Packet Access (HSUPA+), Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 500 may include a plurality of communication chips 506.
For instance, a first communication chip 506 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 506 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.
In various implementations, the computing device 500 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a computing tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit (e.g., a gaming console or automotive entertainment unit), a digital camera, an appliance, a portable music player, or a digital video recorder. In further implementations, the computing device 500 may be any other electronic device that processes data.
Any combination of one or more computer usable or computer readable medium(s) may be utilized. The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, transmission media such as those supporting the Internet or an intranet, or a magnetic storage device. Note that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer usable program code may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Examples
Example 1 is an apparatus with per-core voltage adjustability. The apparatus may include a multicore processor SoC (system-on-chip) including: a plurality of architecturally identical cores, wherein a first core of the plurality of architecturally identical cores has a first physical characteristic and a second core of the plurality of architecturally identical cores has a second physical characteristic that is different than the first physical characteristic; and a memory to store a plurality of embedded values correspondingly associated with the plurality of architecturally identical cores, each embedded value to indicate a default voltage for a respective one of the plurality of architecturally identical cores; and a component to provide access from outside the multicore processor SoC to individually set voltages of the plurality of architecturally identical cores to values that are different than the values of the default voltages.
Example 2 includes the subject matter of example 1, and the multicore processor SoC further includes: a voltage regulator coupled to the plurality of architecturally identical cores, the voltage regulator to provide the individually set voltages to the plurality of architecturally identical cores.
Example 3 includes the subject matter of any of examples 1-2, and further comprises an external voltage regulator coupled to the multicore processor SoC.
Example 4 includes the subject matter of any of examples 1-3, and the component includes a plurality of pins each corresponding to a respective core of the plurality of architecturally identical cores.
Example 5 includes the subject matter of any of examples 1-4, and the multicore processor SoC further includes: a power control unit (PCU); and memory having instructions stored thereon that, in response to execution by the PCU, cause the PCU to perform operations, to: recognize information of a signal received by the multicore processor SoC via the component; and control a voltage regulator of the multicore processor SoC based on the information of the recognition to cause the voltage regulator to provide a first voltage to a first core of the plurality of architecturally identical cores responsive to the recognition of the information, wherein a magnitude of the first voltage is different than a magnitude of a corresponding one of the default voltages.
Example 6 includes the subject matter of any of examples 1-5, and the information of the recognition includes at least one of an actual voltage value corresponding to a respective one of the cores, an offset from a respective one of the default voltages, or a base voltage value and an offset from the base voltage value.
Example 7 includes the subject matter of any of examples 1-6, and further comprises a module to facilitate a user to access the component to individually set voltages of the plurality of architecturally identical cores to values that are different than the values of the default voltages.
Example 8 includes the subject matter of any of examples 1-7, and the module is to message the component in response to inputs from the user.
Example 9 includes the subject matter of any of examples 1-8, wherein the component includes a control register.
Example 10 includes the
subject matter of any of examples 1-9, and further comprises a Basic Input Output System (BIOS) to facilitate a user to access the component to individually set voltages of the plurality of architecturally identical cores to values that are different than the values of the default voltages. Example 11 is a computer-readable medium having instructions of per-core processor programming stored thereon that, in response to execution by a processing device, cause the processing device to perform operations, to: recognize information of a signal received by a multicore processor SoC (system on chip), wherein a first core of the multicore processor SoC has a first physical characteristic and a second core of the multicore processor SoC has a second physical characteristic that is different than the first physical characteristic; and control a voltage regulator of the multicore processor SoC based on the information of the recognition to cause the voltage regulator to selectively provide a first voltage to a first core of the plurality of cores. Example 12 includes the subject matter of example 11, wherein the operations are further to selectively provide a second voltage to a second core of the plurality of cores, said selective providing of the second voltage to the second core to be contemporaneous with the selective providing of the first voltage to the first core. Example 13 includes the subject matter of any of examples 11-12, wherein the second voltage is a default voltage corresponding to said second core. Example 14 includes the subject matter of any of examples 11-13, wherein: the information of the recognition includes at least one of an actual voltage value corresponding to a respective one of the cores, an offset from a default voltage corresponding to the first core, or a base voltage value and an offset from the base voltage value; and the first voltage corresponds to at least one of the actual voltage value, the offset from the respective one of the default voltages, or the offset from the base voltage value. Example 15 is a method for programming a processor voltage on a per-core basis. The method may include accessing a plurality of embedded values correspondingly associated with a plurality of cores of a multicore processor, each embedded value indicating a default voltage for a respective one of the plurality of cores; at a first time, providing core voltages to the plurality of cores, the core voltages corresponding to the default voltages; responsive to receiving an override selection, identifying a new core voltage that is different than the provided core voltages; and at a second time that is later than the first time, providing the new core voltage to one of the cores of the plurality of cores. Example 16 includes the subject matter of example 15, and at the second time, providing a core voltage of the core voltages to a different one of the cores of the plurality of cores. Example 17 includes the subject matter of any of examples 15-16, and at the second time, providing a different new core voltage to a different one of the cores of the plurality of cores. Example 18 includes the subject matter of any of examples 15-17, and the override selection includes an overvolt command. Example 19 includes the subject matter of any of examples 15-17, and the override selection includes an undervolt command. Example 20 includes the subject matter of any of examples 15-19, and the override selection indicates a voltage range, and wherein
the method further comprises: selecting the new core voltage from the voltage range. Example 21 is an apparatus to program a processor voltage on a per-core basis. The apparatus may include means for accessing a plurality of embedded values correspondingly associated with a plurality of cores of a multicore processor, each embedded value indicating a default voltage for a respective one of the plurality of cores; means for providing core voltages to the plurality of cores at a first time, the core voltages corresponding to the default voltages; means for identifying a new core voltage that is different than the provided core voltages responsive to receiving an override selection; and means for providing the new core voltage to one of the cores of the plurality of cores at a second time that is later than the first time. Example 22 includes the subject matter of example 21, and further comprises means for providing a core voltage of the core voltages to a different one of the cores of the plurality of cores at the second time. Example 23 includes the subject matter of any of examples 21-22, and further comprises means for providing a different new core voltage to a different one of the cores of the plurality of cores at the second time. Example 24 includes the subject matter of any of examples 21-23, and the override selection includes an overvolt command. Example 25 includes the subject matter of any of examples 21-23, and the override selection includes an undervolt command. Example 26 includes the subject matter of any of examples 21-25, and the override selection indicates a voltage range, and the apparatus may include means for selecting the new core voltage from the voltage range.
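To make the override flow of examples 15-20 concrete, the following is a minimal, hedged Python sketch. All names (PCUModel, apply_override, the 0.05 V offset) are invented for illustration and are not taken from the disclosure:

```python
# Minimal, hypothetical model of per-core voltage overrides (examples 15-20).
# All names here are illustrative; the disclosure does not define this API.

class PCUModel:
    """Models a power control unit with per-core default voltages."""

    def __init__(self, default_voltages):
        # Embedded values: one default voltage (in volts) per core.
        self.defaults = list(default_voltages)
        self.current = list(default_voltages)  # voltages provided at the "first time"

    def apply_override(self, core, command, offset=0.05, voltage_range=None):
        """Apply an overvolt/undervolt override to one core (the "second time")."""
        base = self.defaults[core]
        if voltage_range is not None:
            # Example 20: the override indicates a range; select a voltage from it.
            low, high = voltage_range
            new_voltage = min(max(base, low), high)
        elif command == "overvolt":
            new_voltage = base + offset   # example 18
        elif command == "undervolt":
            new_voltage = base - offset   # example 19
        else:
            raise ValueError(f"unknown override command: {command}")
        self.current[core] = new_voltage
        return new_voltage


pcu = PCUModel([1.00, 1.02, 0.98, 1.01])  # identical cores, distinct defaults
pcu.apply_override(core=2, command="overvolt")
print(pcu.current)  # core 2 now runs above its embedded default
```

The point of the sketch is only the two-phase shape of the method: defaults are provided first, and an override selection later replaces one core's voltage.
|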
An accelerator device includes a first processing unit to access a structure of a graph dataset, and a second processing unit coupled with the first processing unit to perform computations based on data values in the graph dataset. |
CLAIMS. What is claimed is: 1. An accelerator device, comprising: a first processing unit configured to access a structure of a graph dataset; and a second processing unit coupled with the first processing unit and configured to perform computations based on data values in the graph dataset. 2. The accelerator device of claim 1, wherein: the first processing unit comprises one or more circuits configured for: adding and removing vertices in the graph dataset, and adding and removing edges in the graph dataset, and the second processing unit comprises a central processing unit configured to execute program code for performing the computations. 3. The accelerator device of claim 1, wherein: the first processing unit comprises a central processing unit configured to execute program code for adding and removing vertices in the graph dataset, and adding and removing edges in the graph dataset; and
the second processing unit comprises one or more circuits configured for performing the computations. 4. The accelerator device of claim 3, wherein: the second processing unit comprises a systolic array of functional units. 5. The accelerator device of claim 3, wherein: each functional unit of the second processing unit comprises zero detection circuitry configured for skipping one or more zero-operand computations. 6. The accelerator device of claim 1, further comprising: a gather unit configured to obtain a first portion of the graph dataset from a plurality of memory devices; and a scatter unit configured to send a second portion of the graph dataset to the plurality of memory devices. 7. The accelerator device of claim 1, further comprising: a format shuffle unit configured to convert a portion of the graph dataset from a first representation to a second representation. 8. A method, comprising:
storing a graph dataset in a set of memory devices; in a first processing unit of an accelerator device, accessing a structure of the graph dataset; and in a second processing unit of the accelerator device, performing computations based on data values in the graph dataset. 9. The method of claim 8, wherein: accessing the structure of the graph dataset comprises modifying the structure by, in a set of functional units of the first processing unit, adding or removing one or more vertices in the graph dataset, and adding or removing one or more edges in the graph dataset; and performing the computations comprises, in the second processing unit, executing program code for performing one or more arithmetic operations on the data values. 10. The method of claim 8, wherein: performing the computations comprises, in a set of functional units in the second processing unit, performing one or more arithmetic operations on the data values; and accessing the structure of the graph dataset comprises, in the first processing unit in the accelerator device, executing program code for adding or
removing one or more vertices in the graph dataset, and adding or removing one or more edges in the graph dataset. 11. The method of claim 8, further comprising: in a format shuffle unit in the accelerator device, converting a portion of the graph dataset from a first representation to a second representation. 12. The method of claim 8, further comprising: obtaining a first portion of the graph dataset from a plurality of memory devices; and sending a second portion of the graph dataset to the plurality of memory devices. 13. A computing system, comprising: a plurality of memory devices each storing a portion of a graph dataset; and a plurality of accelerator devices coupled with the plurality of memory devices, wherein each accelerator device of the plurality of accelerator devices is configured to: access a structure of the graph dataset, and perform computations based on data values in the graph dataset. 14. The computing system of claim 13, wherein each accelerator device of the plurality of accelerator devices further comprises: a vertex processing unit configured for modifying a structure of the graph dataset, and a throughput processing unit configured for performing one or more arithmetic operations on data values of the graph dataset. 15. The computing system of claim 13, wherein: for a first accelerator device of the plurality of accelerator devices and a second accelerator device of the plurality of accelerator devices having different hardware configurations, the first accelerator device is configured for performing a same function set as the second accelerator device, and the function set is accessible via a common programming interface in both of the first accelerator device and the second accelerator device. 16. The computing system of claim 13, wherein: a first set of functional units for performing a function on the graph dataset in a first accelerator device of the plurality of accelerator devices has a greater throughput capability than a second set of functional units for performing the function in a second accelerator device of the plurality of accelerator devices. 17. The computing system of claim 13, wherein: a first accelerator device of the plurality of accelerator devices is configured to perform a function on the graph dataset by executing program code in a processing unit, and a second accelerator device of the plurality of accelerator devices is configured to perform the function in one or more circuits. 18. The computing system of claim 13, wherein: each accelerator device is configured to operate on the portion of the graph dataset stored in a local memory device of the plurality of memory devices, wherein the local memory device is closer to the accelerator device than any other memory device of the plurality of memory devices. 19. The computing system of claim 13, wherein: each accelerator device of the plurality of accelerator devices further comprises a systolic array of functional units configured to perform the computations, and each of the functional units comprises zero detection circuitry configured for skipping one or more zero-operand computations. 20. The computing system of claim 13, wherein each accelerator device of the plurality of accelerator devices further comprises:
a gather unit configured to obtain a first portion of the graph dataset from the plurality of memory devices; and a scatter unit configured to send a second portion of the graph dataset to the plurality of memory devices. |
FLEXIBLE, SCALABLE GRAPH-PROCESSING ACCELERATOR. CROSS-REFERENCE TO RELATED APPLICATIONS: [0001] This application claims priority to U.S. Provisional Application No. 63/188,175, filed on May 13, 2021, which is incorporated by reference herein in its entirety. BACKGROUND: [0002] A graph is a data structure that has nodes, or vertices, that are connected to other nodes by edges. Each node and/or edge may also be associated with additional data values. Graph analytics is a popular application domain because many machine learning, data mining, and scientific computations can be modeled as graph-structured computation. For example, large graph datasets can be used for representing relationships between people in a social network, modeling interactions between different molecules for drug synthesis, generating recommendations, etc. [0003] One dimension affecting the performance and cost of graph analytics is the size of the graph dataset. Very large graph datasets are often distributed over multiple memory devices, and the computations associated with such large graph datasets are performed by multiple computing nodes in a system. However, scaling a graph computing system in this manner can
result in problems such as performance bottlenecks (e.g., due to increased communication latency) and lack of flexibility and uniformity.
BRIEF DESCRIPTION OF THE DRAWINGS: [0004] The present disclosure is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings. [0005] Figure 1 illustrates an embodiment of a computing system implementing graph processing accelerator devices. [0006] Figure 2 illustrates an embodiment of a graph processing accelerator device. [0007] Figure 3 illustrates graph processing accelerator devices deployed in a computing system, according to an embodiment. [0008] Figure 4A illustrates a systolic array of functional units in a throughput processing unit, according to an embodiment. [0009] Figure 4B illustrates a systolic array of functional units in a throughput processing unit, according to an embodiment. [0010] Figure 4C illustrates a process of performing matrix multiplication in a throughput processing unit, according to an embodiment. [0011] Figure 5 illustrates a process of processing graph data in a computing system implementing graph processing accelerator devices, according to an embodiment.
DETAILED DESCRIPTION: [0012] The following description sets forth numerous specific details such as examples of specific systems, components, methods, and so forth, in order to provide a good understanding of the embodiments. It will be apparent to one skilled in the art, however, that at least some embodiments may be practiced without these specific details. In other instances, well-known components or methods are not described in detail or are presented in a simple block diagram format in order to avoid unnecessarily obscuring the embodiments. Thus, the specific details set forth are merely exemplary. Particular implementations may vary from these exemplary details and still be contemplated to be within the scope of the embodiments. [0013] Graph processing and graph analytics are an extremely popular application space in modern data centers, encompassing a wide variety of applications including social network analysis, recommendation systems, drug synthesis, etc. Providing a scalable hardware solution that is simple to program can facilitate deployment of these applications on a variety of computing platforms. Graph processing is compute and memory intensive and can benefit from purpose-built hardware accelerators; however, existing approaches are associated with a high degree of non-recurring engineering (NRE), since hardware solutions tailored to address specific processing bottlenecks result in different software and programming interfaces.
[0014] In addition, processing of larger graphs means that the graph data is physically located over a wider area in the computing system, as compared to smaller graphs. The effectiveness of accelerator hardware may be impacted when the accelerator hardware is located farther away from the data; however, placement of accelerator functionality in different parts of the system can also lead to differences in hardware (and consequently, the software/programming interface) in order for the accelerator to be optimized for operating in a particular location. [0015] In one embodiment, a graph-processing accelerator architecture processes graph data as close as possible to where the graph data being processed resides in memory, and can be located at a variety of different locations in the overall computing system, including but not limited to central processing unit (CPU)-attached, network-attached, memory-attached, and storage-attached locations. The accelerator architecture is scalable, allowing for different performance levels when instantiated in different parts of the system; however, the programming interface for accessing the accelerator's functions remains constant regardless of the specific microarchitecture, so that writing software for the accelerator is significantly easier and more scalable. In one embodiment, a graph processing accelerator includes a single-instruction multiple data (SIMD) or systolic-array-based throughput processing unit (or vector processing unit) to perform matrix
arithmetic, a vertex processing unit for manipulating the structure (i.e., nodes and edges) of the graph data, a format shuffle unit to convert sparse matrices between different sparse representations, a programmable gather/scatter unit, and a general-purpose CPU. [0016] Figure 1 illustrates an embodiment of a computing system 100 which includes the graph-processing accelerators as described above. In general, the computing system 100 is embodied as any of a number of different types of devices, including but not limited to a laptop or desktop computer, mobile phone, server, datacenter, etc. The computing system 100 includes a number of components 102-108 that can communicate with each other through an interconnect 101. In computing system 100, each of the components 102-108 is capable of communicating with any of the other components 102-108 either directly through the interconnect 101, or via one or more of the other components 102-108. In one embodiment, the components 102-108 in computing system 100 are contained within a single physical casing, such as a laptop or desktop chassis, or a mobile phone casing. In alternative embodiments, some of the components of computing system 100 are embodied as peripheral devices such that the entire computing system 100 does not reside within a single physical casing. [0017] The computing system 100 may also include user interface devices for receiving information from or providing information to a user.
Specifically, the computing system 100 may include an input device 102, such as a keyboard, mouse, touch-screen, or other device for receiving information from the user. The computing system 100 displays information to the user via a display 105, such as a monitor, light-emitting diode (LED) display, liquid crystal display, or other output device. [0018] Computing system 100 additionally includes a network adapter 107 for transmitting and receiving data over a wired or wireless network. Computing system 100 also includes one or more peripheral devices 108. The peripheral devices 108 include mass storage devices, location detection devices, sensors, input devices, or other types of devices that can be used by the computing system 100. [0019] Computing system 100 includes one or more processing unit(s) 104 that can receive and execute instructions 106a that are stored in the main memory 106 or in other memory devices (e.g., memory local to one or more of the processing unit(s) 104). As referenced herein, processing unit(s) 104 represents one or more processor "pipelines", and could include central processing unit (CPU) pipelines, graphics processing unit (GPU) pipelines, or other computing engines. Main memory 106 is part of a memory subsystem of the computing system 100 that includes memory devices used by the computing system 100, such as random-access memory (RAM) modules, read-only memory (ROM) modules, hard disks, and other non-transitory
computer-readable media. In addition to the main memory 106, the memory subsystem also includes cache memories, such as L2 or L3 caches, and/or registers. Such cache memory and registers are present in the processing unit(s) 104 or on other components of the computing system 100. [0020] Figure 2 illustrates an embodiment of a graph processing accelerator device 200 that is deployed in the computing system 100. The accelerator device 200 can be deployed at multiple levels of the system hierarchy, with the same functionality exposed at every location in the system hierarchy, thus simplifying the software interface and the programming of the accelerator. The circuit components in the accelerator device 200 include processing units 201-204, local memory 210, and memory interface modules 205-209. The processing units 201-203 contain specialized hardware functional units (e.g., in a systolic array arrangement) for performing specific graph processing tasks. In one embodiment, computations performed in the functional units are carried out in circuitry without execution of any program code. [0021] The throughput processing unit 201 (or vector processing unit) is a processing unit that performs computations based on the data values in graph datasets, such as arithmetic and linear algebra operations. In one embodiment, the throughput processing unit 201 performs linear algebra primitive functions for tensor processing including, but not limited to:
• Matrix-matrix multiplications
• Vector-matrix multiplications
• Matrix-vector multiplications
• Element-wise multiplication of vectors and matrices
• Element-wise addition of vectors and matrices
• Selecting/extracting a subgraph in the form of a tensor or other formats
• Assigning a value of a subgraph in the form of a tensor
• Applying a function to a subgraph in the form of a tensor
• Reducing a matrix to a vector or a vector to an element
• Transposing a matrix
• Calculating a Kronecker product or other, similar, outer product between two matrices
In one embodiment, the throughput processing unit 201 is implemented using a single instruction, multiple data (SIMD) architecture, or one or more systolic arrays of functional units for performing the above primitive functions. [0022] The vertex processing unit 202 contains functional units for accessing and/or modifying a structure of a graph dataset (i.e., the nodes/vertices, edges, and properties or metadata associated with nodes and/or edges). In one embodiment, the vertex processing unit 202 supports graph manipulation primitive functions including, but not limited to:
• Adding/removing vertices
• Query-based filtering, partitioning by a given strategy
• Grouping together edges
• Combining edges
• Grouping together vertices
• Combining vertices
• Collecting neighboring node identifiers (IDs)
• Collecting connected components
• Reversing (transposing) a graph
• Collecting a subgraph from a graph based on a filter
• Masking subnodes in a graph based on a filter
• Node value aggregation
[0023] Additionally, embodiments may support primitive operations on an underlying matrix representation of the graph including, but not limited to, the following:
• Resizing the matrix
• Clearing the matrix
• Removing or extracting elements
• Setting the value of an element
[0024] The accelerator device 200 also includes a format shuffle unit 203 that includes computational hardware for converting sparse matrices or other data structure types between different sparse representations. The format
shuffle unit 203 is capable of converting at least a portion of the graph dataset from a first representation to a second representation. In one embodiment, the format shuffle unit 203 supports the following conversion operations (a minimal sketch of one such conversion follows paragraph [0025]):
• Conversion between various sparse tensor representation formats such as compressed sparse row (CSR), compressed sparse column (CSC), coordinate list (COO), hierarchical coordinate (HiCOO), ELLPACK (ELL), and sliced ELLPACK (SELL-C-sigma).
• Conversion between sparse tensor representations and a dense representation and vice-versa.
• Conversion between various graph representations such as adjacency lists, adjacency matrices, and edge lists.
• Conversion between compressed representations and uncompressed representations.
[0025] The above processing components 201-203 include specialized hardware for performing their respective tasks. For a given accelerator device that includes a particular processing unit type, the tasks for which the included processing unit is optimized are performed primarily by the specialized hardware in that processing unit. For example, an accelerator device that includes a vertex processing unit 202 primarily performs graph data structure manipulation in the vertex processing unit 202 rather than in other components.
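To make the conversions of paragraph [0024] concrete, here is a hedged Python sketch of one of them: building a compressed sparse row (CSR) representation from a dense matrix. This is an illustrative software model only, not the hardware of the format shuffle unit 203, and the function name and return format are assumptions of this sketch.

```python
def dense_to_csr(dense):
    """Convert a dense 2-D matrix (list of lists) to CSR arrays.

    CSR stores only nonzero values plus two index arrays:
    - values:  the nonzero entries, row by row
    - col_idx: the column of each nonzero entry
    - row_ptr: for each row, the offset into `values` where that row starts
    """
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for col, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(col)
        row_ptr.append(len(values))  # end of this row, start of the next
    return values, col_idx, row_ptr


# A sparse adjacency-style matrix and its CSR form:
m = [[0, 2, 0],
     [0, 0, 0],
     [5, 0, 7]]
print(dense_to_csr(m))  # ([2, 5, 7], [1, 0, 2], [0, 1, 1, 3])
```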
[0026] In one embodiment of a computing system 100, each graph processing accelerator in the system 100 supports the same base set of primitive functions; however, a given accelerator device in the system 100 need not include all of the processing unit types 201-203. Tasks for which the given accelerator device 200 does not include specialized hardware are instead performed by the CPU 204 (e.g., an x86 CPU). For example, a graph processing accelerator device that does not include a specialized vertex processing unit 202 performs graph data structure manipulation in the CPU 204. However, at least some tasks will be performed by whichever of the specialized processing units 201-203 the accelerator device 200 does include, using hardware optimized for those tasks. [0027] In an example computing system 100, each graph processing accelerator device 200 in the system 100 includes a CPU 204 and at least one of the processing units 201-203. Each of the accelerator devices 200 supports at least the same base set of graph processing primitive functions, implemented in the specialized hardware of the processing units 201-203 or in the general CPU 204 (when the specialized hardware for the function is not included). Various embodiments of the accelerator 200 can support fewer or more primitive functions than those listed above. [0028] The memory interface portion of the accelerator device 200 includes one or more of: programmable gather 205 and scatter 206 units, input/output
module 207, and compression 209 and decompression 208 units. The gather unit 205 is capable of retrieving data from a sparse range of memory locations, and the scatter unit 206 is capable of scattering (i.e., storing data) over a sparse range of memory locations. The gather unit 205 obtains a portion of a graph dataset from multiple memory locations (e.g., different memory devices) in the system 100 via the I/O module 207. The data can be received at the I/O module 207 in compressed form, and is decompressed in the decompression unit 208. The gather unit 205 stores the decompressed graph data in the local memory 210 where it can be accessed by the processing units 201-204 for their computations. [0029] The scatter unit 206 sends a portion of the graph dataset to be stored in one or more remote memory devices in the system 100. In one embodiment, the scatter unit 206 obtains data (e.g., data resulting from computations performed by the processing units 201-204) to be stored in the remote memory devices from the local memory 210. The data can be compressed in the compression unit 209 and then transmitted via the I/O module 207 to the destination memory devices via the interconnect 101. [0030] Figure 3 illustrates graph processing accelerator devices at different locations in a computing system 100, according to an embodiment. Figure 3 additionally illustrates multiple processing units 301-303 (corresponding to processing unit(s) 104), multiple memory devices 304 and 305 (corresponding
to memory 106), storage device 306 (i.e., one of the peripheral device(s) 108), and graph processing accelerator devices 314-318 in various locations in the computing system 100. Each of the accelerator devices 314-318 is implemented by a device such as the accelerator device 200, but may include all or a subset of the processing units 201-203 and all or a subset of components 205-209. The set of graph processing accelerator devices 314-318 includes a processor-attached accelerator 318, memory-attached accelerators 314 and 315, a network-attached accelerator 316, and a storage-attached accelerator 317. [0031] Each of the accelerator devices 314-318 includes a gather unit 205 and a scatter unit 206. In the computing system 100, a graph dataset is stored across multiple memory devices, including memory devices 304 and 305 and other memory devices not illustrated, which each store a portion of the complete graph dataset. The gather unit 205 in an accelerator device obtains a portion of the graph dataset from one or more of the memory devices via the interconnect 101 so that the portion can be processed in the accelerator device. When the processing of the graph data is complete, the scatter unit 206 sends the processed portion of the graph data via the interconnect 101 to be stored in the memory devices. In one embodiment, each accelerator device 314-318 operates on graph data that is located closest to it. For example, the accelerator device 314 operates primarily on the portion of the graph dataset
that is stored in its local memory device 304, since the accelerator device 314 is closer to the memory device 304 than to any other memory device (e.g., memory device 305) in the system 100. In other words, a majority of the computations performed in the accelerator device 314 are on the graph data stored in memory device 304 rather than any other memory device. [0032] Some or all of the accelerator devices 314-318 have components (e.g., vector processing unit, vertex processing unit, format shuffle, etc.) with different throughput and/or bandwidth capabilities that are optimized depending on factors such as the location of the accelerator device, the proximity of the accelerator device to certain other devices or components, the application being run, etc. In one embodiment, each of the accelerator devices is capable of performing the same set of functions (e.g., the previously described primitive graph processing functions), which are accessible via the same common software/programming interface regardless of the differing individual hardware configurations of the accelerators 314-318. [0033] Scalability of the accelerator devices 314-318 is achieved by scaling the individual components to optimize for certain parts of the graph application. This can be accomplished by increasing the size or capacity of one or more of the processing units 201-203 by, for example, including a larger number of functional units, memory, or other hardware resources in the processing unit. Accordingly, different accelerator devices in the system 100
can have different performance capabilities for the same functions. For example, one of the processing units 201-203 for performing a particular function on the graph dataset in a first accelerator device may have a greater number of functional units and/or other hardware resources, and therefore has a greater throughput capability (i.e., can process more data in a given time) than the corresponding processing unit having a smaller set of functional units and fewer hardware resources for performing the same function in a second accelerator device. [0034] In some embodiments, a particular function in one accelerator device is performed by executing program code in its CPU 204, while in another accelerator device, the same function is performed in one or more hardware functional units. For example, a vertex processing unit 202 in a first accelerator device includes hardware functional units for adding and removing vertices and edges in the graph dataset, while the same first accelerator device lacks a throughput processing unit 201 and performs computations on graph data values (e.g., arithmetic and linear algebra) by executing program code in the CPU 204 for performing the computations. In contrast, a second accelerator device in the same computing system 100 lacks a vertex processing unit 202 and executes program code in its CPU 204 for adding and removing vertices and edges in the graph dataset, while the same second accelerator device has a throughput processing unit 201 that includes an array of
hardware functional units for performing the arithmetic, linear algebra, and other computations. Thus these accelerator devices support the same functions, though the functions are performed in different hardware and with different performance characteristics. [0035] In one embodiment, the performance capabilities of the accelerator devices 314-318 are optimized depending on their locations in the system 100. For example, when specific data is requested from a long-term storage device (e.g., from an accelerator 317 residing close to the storage device 306), it is more likely that more parallel memory requests will be processed and more query filters will be performed in order to make sure that the right set of data is collected before being sent to another part of the system. As such, the vertex processing unit 202 of the local accelerator device 317 and the gather 205 and scatter 206 units are sized up, with additional functional units and processing capacity, while the throughput processing unit 201 is sized down or eliminated, with its functionality implemented by software running on the CPU 204. Since the accelerator 317 is close to and has greater access to one portion of the graph data while having less access to other portions of the graph data, it is more difficult for the accelerator 317 to access an overall view of the graph data which would be used for performing linear algebra computations. Instead, its primary role would be gathering data.
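The common-interface behavior of paragraphs [0026] and [0034] can be sketched in software. In the hedged Python model below, all class and method names are invented for illustration (the disclosure defines no such API); two devices expose identical primitives while one dispatches to a specialized unit and the other falls back to CPU program code:

```python
# Hypothetical model of the common programming interface of paragraphs
# [0026]/[0034]: same primitive set everywhere, different backing hardware.

class AcceleratorDevice:
    """One accelerator; specialized units are optional, the CPU is not."""

    def __init__(self, name, has_vertex_unit, has_throughput_unit):
        self.name = name
        self.has_vertex_unit = has_vertex_unit
        self.has_throughput_unit = has_throughput_unit

    def add_vertex(self, graph, v):
        # Same call on every device; the backing hardware differs.
        backend = "vertex unit 202" if self.has_vertex_unit else "CPU 204 code"
        graph.setdefault(v, set())
        return f"{self.name}: add_vertex via {backend}"

    def matvec(self, matrix, vector):
        backend = "throughput unit 201" if self.has_throughput_unit else "CPU 204 code"
        result = [sum(a * b for a, b in zip(row, vector)) for row in matrix]
        return result, f"{self.name}: matvec via {backend}"


# Two devices with opposite hardware configurations, one software interface.
storage_side = AcceleratorDevice("accel-317", has_vertex_unit=True, has_throughput_unit=False)
compute_side = AcceleratorDevice("accel-318", has_vertex_unit=False, has_throughput_unit=True)
for dev in (storage_side, compute_side):
    print(dev.matvec([[1, 2], [3, 4]], [1, 1])[1])
```

The design point being illustrated is that caller code is identical on both devices; only the (simulated) backend differs, mirroring the hardware/CPU split of paragraph [0034].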
[0036] In another example, a graph processing accelerator 318 instantiated close to a main compute device, processing unit 301, is not exposed to a relatively large amount of data, but is primarily occupied with computations (e.g., linear algebra, arithmetic, etc.) for the application being run. Accordingly, the accelerator device 318 has a larger and more capable throughput processing unit 201, a smaller vertex processing unit 202 (since the graph will not be modified as much), and smaller gather/scatter units 205/206. [0037] Figure 4A illustrates a throughput processing unit 201 including a systolic array of functional units, according to an embodiment. The throughput processing unit 201 accelerates arithmetic and linear algebra operations for the graph dataset. In particular, the throughput processing unit 201 includes specialized hardware for accelerating such computations for sparse applications, in which there are many zeroes in the graph dataset. As illustrated in Figure 4A, the throughput processing unit 201 includes a systolic array 401 of functional units for performing the computations. The systolic array 401 is an array of 4-wide multiply-add functional units. The systolic array 401 receives two sets of values, the A values 404 and the B values 405, and multiplies each of the A values 404 with each of the B values 405. Alternative embodiments can include fewer or more functional units, with different widths and/or functions. For sparse computations, each of the
functional units also includes zero detection circuitry for skipping one or more zero-operand computations, which have zero as one of their operands. [0038] For example, when the multiply unit 402 receives two inputs, A and B, to produce a product C, optimizations can be performed to elide the computations of any zero-valued products. In this case, rather than providing two scalar values A and B, two possible pairs of operands are provided to the multiplier, (A1, B1) and (A2, B2). Either the "1" operands (i.e., A1 and B1) or the "2" operands (i.e., A2 and B2) are actually multiplied together, depending on which pair consists of two non-zero values. Additional buffering is provided for the minority case where both the "1" set and "2" set of operands have two non-zero values. For a sparse data set, likely no more than one of the pairs will consist of two nonzero values, so the results for both pairs likely can be determined using the zero detection circuitry 403 and a single multiplier unit 402. In this case, the zero detection circuitry 403 selects the set of operands that are both nonzero values to be multiplied together, and the product of the other set is zero. If each set of operands has at least one zero operand, then both products are zero. If each set of operands has two nonzero operands, then one set is multiplied in a first cycle, and the second set is buffered during the first cycle and multiplied in a subsequent cycle. (A minimal software sketch of this zero-skip selection follows paragraph [0046] below.) [0039] Figure 4B illustrates an embodiment of a systolic array for performing matrix multiplication operations, in the context of a neural
network computation, on a set of weights (stored in the A-Matrix buffer 411) and activations (stored in the B-Matrix buffer 412), with the results stored in the C-Matrix buffer 413. Each of the multiply/accumulate units (MAC4) in the systolic array 414 receives four inputs from the A-Matrix 411 and four inputs from the B-Matrix 412; thus, the array 414 of MAC4 units receives a total of eight inputs from the A-Matrix 411 and eight inputs from the B-Matrix 412, or eight pairs of operands. In one embodiment, zero detection logic is incorporated into the MAC4 units to apply the approach illustrated in Figure 4A. For multiplying sufficiently sparse data, no more than four of the eight pairs of operands will include two nonzero values in most cases. Thus, the four MAC4 units are usually sufficient to perform the computations for the eight pairs of operands in a single cycle. The MAC4 units compute the products for the nonzero pairs of operands, and the multiply results for pairs that have at least one zero operand are set to zero. [0040] Figure 4C illustrates a flow chart showing a process 420 for performing matrix multiplication in an embodiment of the graph processing accelerator 200. The matrix multiplication process 420 is performed in the components of a throughput processing unit 201 as illustrated in Figures 2 and 4B. At block 421, a sparse read/gather operation is performed by the gather unit 205 to retrieve graph data from one or more memory devices in the system 100. At block 423, the gathered data is stored in the A-Matrix 411
and B-Matrix 412 buffers. The matrix multiplication is carried out in the systolic array 414 of MAC4 units, as provided at block 425, and the results are buffered in the C-Matrix buffer 413 at block 427. At block 429, if the computation is not yet complete, then the results in the C-Matrix buffer 413 are used as inputs to the MAC4 units in one or more subsequent iterations, to be multiplied with new incoming data, as the process 420 returns to block 425. When the computation is complete, the process 420 continues from block 429 to block 431. At block 431, the final data (i.e., the computation result) is stored back into memory by the scatter unit 206. [0041] Figure 5 illustrates a graph processing process 500 for performing operations on a graph dataset, according to an embodiment. The process 500 is performed by components in the computing system 100, including the memory 106 and the graph processing accelerators 314-318. [0042] At block 501, the data in a graph dataset is stored across multiple memory devices (e.g., memory devices 304 and 305) in the computing system 100. The graph dataset defines nodes (or vertices) and edges, along with relationships between the nodes and edges. The graph dataset also includes data values associated with the nodes and/or edges, and can be stored in an uncompressed format or a compressed format such as CSR, CSC, ELLPACK, etc. At block 503, one of the processing units 301-303 requests that an operation be performed on the graph data. The processing unit executes
program code that specifies the operation and the graph data on which the operation is to be performed according to the common programming interface for the graph processing accelerator devices 314-318. The request is transmitted via the interconnect 101 to one or more of the accelerator devices 314-318. [0043] Blocks 505-515 are performed in one or more of the accelerator devices 314-318, represented generally by accelerator device 200. In particular, the operations of blocks 505-515 are performed by components such as the processing units 201-204, gather and scatter units 205-206, etc. [0044] At block 505, the accelerator device 200 receives the request to perform an operation on the graph data stored in the memory devices 106. The request is received from the interconnect 101 by the I/O module 207. At block 507, the gather unit 205 responds to the request by reading the graph data on which the requested operation will be performed. The gather unit 205 requests the data from the memory devices 106, and the data is transmitted from memory 106 via the interconnect 101 and is received at the I/O module 207 in compressed form. The data is decompressed in the decompression unit 208 and the gather unit 205 stores the data in the local memory 210 where it can be accessed by the processing units 201-204. The graph data represents structural features of the graph (e.g., nodes/vertices, edges, etc.) and data values associated with the structural features.
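A hedged software model of the gather path of block 507 (receive compressed data, decompress it, and stage it in local memory 210 for the processing units) might look like the following Python sketch. The zlib codec and all names here are stand-ins, since the disclosure does not specify a compression scheme:

```python
import zlib

# Illustrative stand-in for block 507: the gather unit pulls compressed
# graph data from remote memory devices, decompresses it, and stages it
# in local memory where the processing units can reach it.

def gather(remote_devices, local_memory):
    for device_id, compressed in remote_devices.items():
        raw = zlib.decompress(compressed)        # decompression unit 208 (stand-in codec)
        local_memory[device_id] = raw.decode()   # staged in local memory 210

# Remote devices hold compressed portions of the graph dataset.
remote = {
    "mem-304": zlib.compress(b"edges: 0-1, 1-2"),
    "mem-305": zlib.compress(b"edges: 2-3, 3-0"),
}
local = {}
gather(remote, local)
print(local)  # both portions now decompressed and locally accessible
```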
[0045] The gathered data is processed according to the request, as provided in one or more of blocks 508-513. Depending on the requested operation or operations, the process 500 may include some or all of the blocks 508-513. If the requested operation involves modification of the graph structure (e.g., addition or removal of a node or edge, etc.), then the structure of the graph dataset is modified as provided at block 509. Depending on the hardware configuration of the accelerator device in which the operation is performed, the modification of the graph structure is performed in a set of functional units in the vertex processing unit 202 or, if the accelerator device does not include a vertex processing unit 202, the operation is performed in the CPU 204, which executes program code for performing the modification. The modified graph data is stored in the local memory 210. [0046] If the requested operation involves computations based on data values in the graph dataset, such as arithmetic, linear algebra, or other calculations, then the computations are performed as provided at block 511. Depending on the hardware configuration of the accelerator device, the computations are performed in a set of functional units in the throughput processing unit 201 or, if the accelerator device does not include a throughput processing unit 201, the computations are performed in the CPU 204, which executes program code for performing the computations. As an example, blocks 423-429 in Figure 4C correspond to block 511 in the process 500.
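The zero-skip operand selection of paragraph [0038], which underlies the block 511 computations, can be modeled in a few lines of Python. This is a hedged behavioral sketch of the zero detection circuitry 403 and multiplier unit 402, not a hardware description; the function name and cycle-counting convention are invented for illustration:

```python
# Behavioral sketch of paragraph [0038]: two operand pairs share one
# multiplier. Pairs with a zero operand yield zero for free; only a pair
# with two nonzero operands needs the multiplier, and if both pairs need
# it, the second pair is buffered into the next cycle.

def zero_skip_multiply(pair1, pair2):
    """Return ((product1, product2), cycles_used) for the two pairs."""
    products = []
    nonzero_pairs = []
    for a, b in (pair1, pair2):
        if a == 0 or b == 0:
            products.append(0)            # zero detection: no multiplier needed
        else:
            products.append(None)         # placeholder until the multiplier runs
            nonzero_pairs.append((a, b))
    cycles = max(1, len(nonzero_pairs))   # 0 or 1 nonzero pairs fit in one cycle
    results = iter(a * b for a, b in nonzero_pairs)
    products = [next(results) if p is None else p for p in products]
    return tuple(products), cycles

print(zero_skip_multiply((3, 0), (2, 5)))  # ((0, 10), 1): one multiply, one cycle
print(zero_skip_multiply((3, 4), (2, 5)))  # ((12, 10), 2): second pair buffered
```

As in paragraph [0038], zero detection makes most sparse products free, so a single multiplier usually suffices for two operand pairs.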
[0047] If the requested operation involves converting the graph data from one tensor representation format (e.g., CSR, CSC, COO, etc.) to another tensor representation format, then the conversion is performed as provided at block 513. Depending on the hardware configuration of the accelerator device, the conversion is performed in a set of functional units in the format shuffle unit 203 or, if the accelerator device does not include a format shuffle unit 203, the conversion is performed in the CPU 204, which executes program code for performing the conversion. [0048] Once the requested operation is completed in one or more of blocks 509-513, the modified graph data is stored in the local memory 210. The process 500 continues at block 515, at which the scatter unit 206 sends the modified graph data to be stored in the memory devices 106. The scatter unit 206 obtains the modified graph data from the local memory 210 and the graph data is compressed in the compression unit 209. The compressed version of the data is sent by the I/O module 207 to the memory devices 106 via the interconnect 101. The process 500 repeats for each operation that is requested on the graph data. The accelerator devices 314-318 in the system 100 thus facilitate processing of the graph dataset while providing a unified software programming interface for accessing the supported accelerator functions and maintaining a high degree of scalability and flexibility.
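Putting blocks 505-515 together, the request-handling flow can be sketched as the short Python dispatcher below. The operation names and the dict-of-sets graph are placeholders invented for this sketch; only the shape of the flow (gather, dispatch by operation type, scatter) is taken from the text:

```python
# Illustrative end-to-end flow of process 500 (blocks 505-515), using a
# plain dict-of-sets graph as a stand-in for the real representations.

def handle_request(graph, op, **kwargs):
    # Block 505/507: request received; graph assumed already gathered locally.
    if op == "add_edge":                      # block 509: structure modification
        u, v = kwargs["u"], kwargs["v"]
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set()).add(u)
    elif op == "degree_sum":                  # block 511: data-value computation
        return sum(len(nbrs) for nbrs in graph.values())
    elif op == "to_edge_list":                # block 513: format conversion
        return sorted((u, v) for u, nbrs in graph.items() for v in nbrs if u < v)
    else:
        raise ValueError(f"unsupported operation: {op}")
    # Block 515 would scatter the modified graph back to the memory devices.
    return graph

g = {}
handle_request(g, "add_edge", u=0, v=1)
handle_request(g, "add_edge", u=1, v=2)
print(handle_request(g, "degree_sum"))    # 4
print(handle_request(g, "to_edge_list"))  # [(0, 1), (1, 2)]
```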
[0049] As used herein, the term "coupled to" may mean coupled directly or indirectly through one or more intervening components. Any of the signals provided over various buses described herein may be time multiplexed with other signals and provided over one or more common buses. Additionally, the interconnection between circuit components or blocks may be shown as buses or as single signal lines. Each of the buses may alternatively be one or more single signal lines and each of the single signal lines may alternatively be buses. [0050] Certain embodiments may be implemented as a computer program product that may include instructions stored on a non-transitory computer-readable medium. These instructions may be used to program a general-purpose or special-purpose processor to perform the described operations. A computer-readable medium includes any mechanism for storing or transmitting information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The non-transitory computer-readable storage medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read-only memory (ROM); random-access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or another type of medium suitable for storing electronic instructions.
[0051] Additionally, some embodiments may be practiced in distributed computing environments where the computer-readable medium is stored on and/or executed by more than one computer system. In addition, the information transferred between computer systems may either be pulled or pushed across the transmission medium connecting the computer systems. [0052] Generally, a data structure representing the computing system 100 and/or portions thereof carried on the computer-readable storage medium may be a database or other data structure which can be read by a program and used, directly or indirectly, to fabricate the hardware including the computing system 100. For example, the data structure may be a behavioral-level description or register-transfer level (RTL) description of the hardware functionality in a high level design language (HDL) such as Verilog or VHDL. The description may be read by a synthesis tool which may synthesize the description to produce a netlist including a list of gates from a synthesis library. The netlist includes a set of gates which also represent the functionality of the hardware including the computing system 100. The netlist may then be placed and routed to produce a data set describing geometric shapes to be applied to masks. The masks may then be used in various semiconductor fabrication steps to produce a semiconductor circuit or circuits corresponding to the computing system 100. Alternatively, the database on the computer-readable storage medium may be the netlist (with or without
the synthesis library) or the data set, as desired, or Graphic Data System (GDSII) data. [0053] Although the operations of the method(s) herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In another embodiment, instructions or sub-operations of distinct operations may be performed in an intermittent and/or alternating manner. [0054] In the foregoing specification, the embodiments have been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader scope of the embodiments as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
Embodiments of the present disclosure may relate to a memory controller that may include a memory interface and a logic circuitry component coupled with the memory interface. In some embodiments, the logic circuitry component is to program one or more NAND cells of a multi-level NAND memory array via the memory interface with a first set of data in a first pass, determine a first temperature of the multi-level NAND memory array in association with the first pass, determine a second temperature of the multi-level NAND memory array, determine a temperature difference between the second temperature and the first temperature, and perform one or more operations based at least in part on a result of the determination of the temperature difference. Other embodiments may be described and/or claimed. |
Claims. What is claimed is: 1. A memory controller comprising: a memory interface; and a logic circuitry component coupled with the memory interface, wherein the logic circuitry component is to: program one or more NAND cells of a multi-level NAND memory array via the memory interface with a first set of data in a first pass; determine a first temperature of the multi-level NAND memory array in association with the first pass; determine a second temperature of the multi-level NAND memory array; determine a temperature difference between the second temperature and the first temperature; and perform one or more operations based at least in part on a result of the determination of the temperature difference. 2. The memory controller of claim 1, wherein the one or more operations include one or more of: program the one or more NAND cells with a second set of data in a second pass, in response to the temperature difference being less than or equal to a predefined threshold value; and send a temperature difference exceeded flag to a host controller, facilitate an external data read of the one or more NAND cells, facilitate data correction associated with the one or more NAND cells, or facilitate recovery of data encoded by the one or more NAND cells, in response to the temperature difference being greater than the predefined threshold value. 3. The memory controller of any one of claims 1-2, wherein the logic circuitry component is to store the first temperature in a flag byte associated with a page address. 4. The memory controller of any one of claims 1-2, wherein the logic circuitry component is to program the one or more NAND cells with a second set of data in a second pass, in response to the temperature difference being less than or equal to the predefined threshold value. 5. The memory controller of claim 4, wherein: the one or more NAND cells are quad-level cells; the first set of data includes a first page of data and a second page of data; the second set of data includes a third page of data and a fourth page of data; the first pass includes programming each of the one or more NAND cells into one of four levels based at least in part on the first set of data; and the second pass includes programming each of the one or more NAND cells into one of sixteen levels based at least in part on the first and second sets of data. 6. The memory controller of claim 4, wherein: the one or more NAND cells are quad-level cells; the first set of data includes a first page of data, a second page of data, and a third page of data; the second set of data includes a fourth page of data; the first pass includes programming each of the one or more NAND cells into one of eight levels based at least in part on the first set of data; and the second pass includes programming the one or more NAND cells into one of sixteen levels based at least in part on the first and second sets of data. 7.
The memory controller of claim 4, wherein the predefined threshold value is a first predefined threshold value, the temperature difference is a first temperature difference, the second temperature is associated with the second pass, and the logic circuitry component is also to: determine a third temperature of the multi-level NAND memory array; determine whether a second temperature difference between the third temperature and the second temperature is less than or equal to a second predefined threshold value; and program the one or more NAND cells with a third set of data in a third pass, in response to the second temperature difference being less than or equal to the second predefined threshold value. 8. The memory controller of claim 7, wherein: the one or more NAND cells are quad-level cells; the first set of data includes a first page of data; the second set of data includes a second and a third page of data; the third set of data includes a fourth page of data; the first pass includes programming each of the one or more NAND cells into one of two levels based at least in part on the first set of data; the second pass includes programming each of the one or more NAND cells into one of eight levels based at least in part on the first and second sets of data; and the third pass includes programming each of the one or more NAND cells into one of sixteen levels based at least in part on the first, second, and third sets of data. 9. The memory controller of any one of claims 1-2, wherein the logic circuitry component is to send a temperature difference exceeded flag to a host controller in response to the temperature difference being greater than the predefined threshold value. 10. The memory controller of any one of claims 1-2, wherein the logic circuitry component is to determine the second temperature and determine whether the temperature difference between the second temperature and the first temperature is less than or equal to the predefined threshold value in response to a temperature check command received from a host. 11. The memory controller of any one of claims 1-2, wherein the logic circuitry component includes a processor. 12. A data storage apparatus comprising: a multi-level NAND memory array including one or more NAND cells associated with a word line; a memory controller coupled with the multi-level NAND memory array, wherein the memory controller is to: program the one or more NAND cells with a first set of data in a first pass; determine a first temperature of the multi-level NAND memory array in association with the first pass; determine a second temperature of the multi-level NAND memory array; determine whether a temperature difference between the second temperature and the first temperature is less than or equal to a predefined threshold value; and program the one or more NAND cells with a second set of data in a second pass, in response to the temperature difference being less than or equal to the predefined threshold value. 13. The apparatus of claim 12, further including a temperature sensor, wherein the memory controller is to determine the first and second temperatures based at least in part on temperatures sensed by the temperature sensor. 14. The apparatus of any one of claims 12-13, wherein the memory controller is further to store the first temperature in a flag byte associated with a page address. 15.
The apparatus of any one of claims 12-13, wherein: the one or more NAND cells are quad-level cells; the first set of data includes a first page of data and a second page of data; the second set of data includes a third page of data and a fourth page of data; the first pass includes programming each of the one or more NAND cells into one of four levels based at least in part on the first set of data; and the second pass includes programming each of the one or more NAND cells into one of sixteen levels based at least in part on the first and second sets of data. 16. The apparatus of any one of claims 12-13, wherein the predefined threshold value is a first predefined threshold value, the temperature difference is a first temperature difference, the second temperature is associated with the second pass, and the memory controller is also to: determine a third temperature of the multi-level NAND memory array; determine whether a second temperature difference between the third temperature and the second temperature is less than or equal to a second predefined threshold value; and perform one or more operations based at least in part on a result of a determination of the second temperature difference. 17. The apparatus of claim 12, further including a host controller communicatively coupled with the memory controller, wherein the host controller is to send the first set of data to the memory controller. 18. The apparatus of claim 17, wherein the memory controller is to send a temperature difference exceeded flag to the host controller in response to the temperature difference being greater than the predefined threshold value. 19. The apparatus of claim 18, wherein the host controller is to perform an external data read to error-correct the first set of data in response to the temperature difference exceeded flag. 20. The apparatus of any one of claims 17-19, wherein: the host controller is to send a temperature check command to the memory controller; and the memory controller is to determine whether the temperature difference between the second temperature and the first temperature is less than or equal to the predefined threshold value in response to the temperature check command. 21. The apparatus of any one of claims 17-19, wherein the apparatus is a solid-state drive (SSD) and the host controller is an SSD controller. 22. A method comprising: receiving a first set of data from a host controller; programming, with a memory controller, one or more NAND cells associated with a word line of a multi-level NAND memory array with the first set of data in a first pass; determining, by the memory controller, a first temperature of the multi-level NAND memory array in association with the first pass; determining, by the memory controller, a second temperature of the multi-level NAND memory array; determining, by the memory controller, whether a temperature difference between the second temperature and the first temperature is less than or equal to a predefined threshold value; and performing one of, by the memory controller: programming the one or more NAND cells with a second set of data in a second pass, in response to the temperature difference being less than or equal to the predefined threshold value; or sending a temperature difference exceeded flag to the host controller, in response to the temperature difference being greater than the predefined threshold value. 23.
23. The method of claim 22, wherein the method includes storing the first temperature in a flag byte associated with a page address, and wherein determining whether the temperature difference between the second temperature and the first temperature is less than or equal to the predefined threshold value includes reading the flag byte associated with the page address to obtain the stored first temperature.
24. The method of any one of claims 22-23, wherein the multi-level NAND memory array is a triple level cell (TLC) array, a quad level cell (QLC) array, or a multi-level cell (MLC) array.
25. The method of any one of claims 22-23, wherein the host controller is a solid-state drive (SSD) controller.
DATA STORAGE DEVICE WITH OPERATION BASED ON TEMPERATURE DIFFERENCE

Related Application

This application claims priority to U.S. Application 15/838,202, entitled "DATA STORAGE DEVICE WITH OPERATION BASED ON TEMPERATURE DIFFERENCE," filed December 11, 2017.

Field

Embodiments of the present disclosure relate generally to the technical field of computing, and more particularly to NAND memory programming.

Background

The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.

Semiconductor memory may be classified as non-volatile memory or volatile memory. A non-volatile memory, e.g., NAND flash memory, may store and retain information even when the non-volatile memory is not connected to a power source. NAND flash memory, or simply NAND memory, or a NAND memory system, may be included in a storage device to store data. Bits may be stored into cells, or memory cells, of a NAND memory, which may be made of floating-gate transistors. Multi-level NAND memory may store multiple bits of data per cell, and may include triple level cells (TLC) that store three bits of data per cell, quad level cells (QLC) that store four bits of data per cell, and other types of cells such as multi-level cells (MLC) that store two bits of data per cell. TLC and QLC NAND are typically programmed with more than one pass. In a NAND device, several non-idealities may result in an increased raw bit error rate (RBER). One of these non-idealities is the temperature dependence of the NAND cells (e.g., when cells are read at a temperature different from the temperature at which they were programmed, their threshold voltages may appear lower or higher than they would if the cells were read at the programming temperature). As an example, an internal pre-read of eight threshold voltage (VT) states during a third pass of a 2-8-16 technique may occur at different temperature conditions than when the states were programmed during the second pass. This may cause a high RBER and potential misplacements of sixteen VT states, which may lead to fatal errors that may be uncorrectable by an external error correcting code (ECC) engine on system platforms.

Brief Description of the Drawings

Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements.
Embodiments are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings.

Figure 1 illustrates an example electronic system that includes a memory controller to program a multi-level NAND memory array of a NAND memory system using temperature checks, in accordance with various embodiments.

Figure 2 is a schematic representation of threshold voltage distributions of quad-level cells for multi-pass programming techniques and associated temperature readings, in accordance with various embodiments.

Figure 3 illustrates a flow diagram of a technique for programming memory cells, in accordance with various embodiments.

Figure 4 illustrates a flow diagram of another technique for programming memory cells, in accordance with various embodiments.

Figure 5 illustrates a flow diagram illustrating different options to program a third pass of a 2-8-16 QLC programming technique, in accordance with various embodiments.

Figure 6 is a block diagram that schematically illustrates a computing device, in accordance with various embodiments.

Figure 7 illustrates an example storage medium with instructions configured to enable an apparatus to practice various aspects of the present disclosure, in accordance with various embodiments.

Detailed Description

Embodiments of the present disclosure may relate to a memory controller that may include a memory interface and a logic circuitry component coupled with the memory interface. In some embodiments, the logic circuitry component may program one or more NAND cells of a multi-level NAND memory array via the memory interface with a first set of data in a first pass, determine a first temperature of the multi-level NAND memory array in association with the first pass, determine a second temperature of the multi-level NAND memory array, and determine a temperature difference between the second temperature and the first temperature. In various embodiments, the memory controller may perform one or more operations based at least in part on a result of the determination of the temperature difference. In some embodiments, the operations may include programming the one or more NAND cells with a second set of data in a second pass, in response to the temperature difference being less than or equal to a predefined threshold value. In some embodiments, the operations may include sending a temperature difference exceeded flag to a host controller, facilitating an external data read of the one or more NAND cells, facilitating data correction associated with the one or more NAND cells, or facilitating recovery of data encoded by the one or more NAND cells, in response to the temperature difference being greater than the predefined threshold value.
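By way of illustration only, the following minimal Python sketch models the temperature-gated second pass described above. All names, the threshold value, and the simulated sensor are assumptions for exposition and do not represent an actual controller interface or firmware.

    TEMP_DIFF_THRESHOLD_C = 15.0   # assumed predefined threshold value, in degrees Celsius

    class SimController:
        """Stand-in for a memory controller with an on-die temperature sensor."""
        def __init__(self, sensor):
            self.sensor = sensor     # callable returning the current die temperature
            self.flag_bytes = {}     # per-address temperature records (the "flag bytes")
            self.array = {}          # simulated cell contents, keyed by address

        def program_first_pass(self, addr, data):
            self.array[addr] = [data]                 # coarse placement (first pass)
            self.flag_bytes[addr] = self.sensor()     # record T1 with the first pass

        def program_second_pass(self, addr, data):
            t1 = self.flag_bytes[addr]                # T1, read back from the flag byte
            t2 = self.sensor()                        # T2, at internal-read time
            if abs(t2 - t1) > TEMP_DIFF_THRESHOLD_C:
                return "temperature_difference_exceeded"   # host must correct externally
            self.array[addr].append(data)             # fine placement (second pass)
            return "pass"

    temps = iter([40.0, 48.0])                  # scripted sensor readings for the demo
    ctrl = SimController(lambda: next(temps))
    ctrl.program_first_pass(0x100, "LP/UP/XP")
    print(ctrl.program_second_pass(0x100, "TP"))    # |48.0 - 40.0| <= 15.0 -> "pass"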
In some embodiments, a NAND memory system may include a multi-level NAND memory array and a memory controller to control the operations, e.g., read, write (program), and erase, of the multi-level NAND memory array. In various embodiments, the memory controller may program the multi-level NAND memory array based at least in part on a temperature check. A multi-level NAND memory array may include multiple cells organized into pages, blocks, and planes on a die, and the multi-level NAND memory array may include multiple dies. The smallest unit of operations for a multi-level NAND memory array may be referred to as a page. A page of data may be programmed into or read from a multi-level NAND memory array.

In some embodiments, a NAND memory system may be a storage device coupled to an external computing device to store data generated by the computing device. Additionally or alternatively, a NAND memory system may be a part of a computing system to store data generated by a processor of the computing system. Sometimes, data may be programmed into the multi-level NAND memory array by the computing system or the computing device in two or more passes, to minimize the effect of coupling from the neighboring cells.

In the description to follow, reference is made to the accompanying drawings, which form a part hereof wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.

Operations of various methods may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiments. Various additional operations may be performed and/or described operations may be omitted, split, or combined in additional embodiments.

For the purposes of the present disclosure, the phrase "A or B" and "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).

The description may use the phrases "in an embodiment," or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.

As used hereinafter, including the claims, the term "module" or "routine" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.

Where the disclosure recites "a" or "a first" element or the equivalent thereof, such disclosure includes one or more such elements, neither requiring nor excluding two or more such elements. Further, ordinal indicators (e.g., first, second, or third) for identified elements are used to distinguish between the elements, and do not indicate or imply a required or limited number of such elements, nor do they indicate a particular position or order of such elements unless otherwise specifically stated.

The terms "coupled with" and "coupled to" and the like may be used herein. "Coupled" may mean one or more of the following. "Coupled" may mean that two or more elements are in direct physical or electrical contact.
However, "coupled" may also mean that two or more elements indirectly contact each other, but yet still cooperate or interact with each other, and may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. By way of example and not limitation, "coupled" may mean two or more elements or devices are coupled by electrical connections on a printed circuit board such as a motherboard, for example. By way of example and not limitation, "coupled" may mean two or more elements/devices cooperate and/or interact through one or more network linkages such as wired and/or wireless networks. By way of example and not limitation, a computing apparatus may include two or more computing devices "coupled" on a motherboard or by one or more network linkages.

As used herein, the term "circuitry" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group), and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable hardware components that provide the described functionality. As used herein, "computer-implemented method" may refer to any method executed by one or more processors, a computer system having one or more processors, a mobile device such as a smartphone (which may include one or more processors), a tablet, a laptop computer, a set-top box, a gaming console, and so forth.

Figure 1 illustrates an example electronic system 100 that includes a memory controller 111 to program multiple pages of data into a multi-level NAND memory array 121, in accordance with various embodiments. For clarity, features of the electronic system 100 may be described below in accordance with some embodiments that may include a memory controller to program multiple pages of data into a multi-level NAND memory array of a NAND memory system based at least in part on a temperature check. However, it should be understood that there may be more or fewer components included in the electronic system 100 in various embodiments. Further, it should be understood that one or more of the devices and/or components within the electronic system 100 may include additional and/or varying features from the description below.

In embodiments, the electronic system 100 may include a NAND memory system 101 coupled to a host 103 by an interconnect 145 through an interface 133 on the host 103 and an interface 113 on the NAND memory system 101. The host 103 may include a host controller 131, where the host controller 131 may generate a first page of data 132, a second page of data 134, a third page of data 136, a fourth page of data 138, and one or more program commands 135 to program the first page of data 132, the second page of data 134, the third page of data 136, and the fourth page of data 138 into the multi-level NAND memory array 121 within the NAND memory system 101. The first page of data 132, the second page of data 134, the third page of data 136, and the fourth page of data 138 may be stored in a buffer 137 in some embodiments.
Although the multi-level NAND memory array 121 is shown to include quad-level cells (QLC), it should be understood that various embodiments may include other types of NAND memory, such as TLC NAND that stores three bits of data per cell, or MLC NAND that stores two bits of data per cell.

In embodiments, the NAND memory system 101 may include the multi-level NAND memory array 121, the memory controller 111, and the interface 113, coupled with each other. In various embodiments, the memory controller 111 may include a memory interface 119 coupled with the NAND memory array 121. In some embodiments, the NAND memory system 101 may include a buffer 117 that may be within the memory controller 111. In some embodiments, the memory controller 111 may receive the first page of data 132, the second page of data 134, the third page of data 136, and the fourth page of data 138, and store them as a first page of data 112, a second page of data 114, a third page of data 116, and a fourth page of data 118, respectively, in the buffer 117. In various embodiments, the memory controller 111 may receive the multiple pages (e.g., pages 132, 134, 136, 138) of data in separate communications from the host 103, or may receive some or all of the multiple pages of data in a single communication from the host 103. In some embodiments, the memory controller 111 may receive the one or more program commands 135 and may store the received one or more program commands 135 as one or more program commands 115.

In some embodiments, the multi-level NAND memory array 121 may be formed by multiple cells arranged in an array. The multi-level NAND memory array 121 may include a word line 123, a word line 125, a bit line 127, and a bit line 129. In some embodiments, the bit line 127 and the bit line 129 may represent multiple bit lines. There may be multiple pages, e.g., a first page 142, a second page 144, a third page 146, and a fourth page 148, associated with the word line 123 and the bit line 127, including cells formed by the word line 123 and the bit line 127. Similarly, a page 152, a page 154, a page 156, and a page 158 may be associated with the word line 123 and the bit line 129; a page 162, a page 164, a page 166, and a page 168 may be associated with the word line 125 and the bit line 127; and a page 172, a page 174, a page 176, and a page 178 may be associated with the word line 125 and the bit line 129.

The first page 142, the second page 144, the third page 146, and the fourth page 148 may be represented by a same group of cells associated with the same word line, e.g., the word line 123. For example, a cell 143 may store multiple bits, e.g., four bits. The first bit of the cell 143 may be contained in the first page 142, the second bit of the cell 143 may be contained in the second page 144, the third bit of the cell 143 may be contained in the third page 146, and the fourth bit of the cell 143 may be contained in the fourth page 148 in various embodiments. In some embodiments, all of the cells belonging to one word line may be included in one page, so that the first page 142 may extend throughout the word line 123. In some other embodiments, cells associated with one word line may be divided into multiple pages, e.g., cells of the word line 123 may be included in the page 142 and the page 152 separately.
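By way of illustration only, the Python sketch below shows one simplified way four logical pages can share a single group of QLC cells, with bit i of each cell's level belonging to page i. A direct binary mapping is assumed for exposition; production NAND parts typically use Gray-coded level maps, which this sketch does not attempt to reproduce.

    def pages_from_levels(levels):
        """Split a list of 16-level QLC cell states into four per-page bit lists."""
        pages = {"LP": [], "UP": [], "XP": [], "TP": []}
        for level in levels:
            assert 0 <= level <= 15
            pages["LP"].append((level >> 0) & 1)   # bit 0 -> lower page
            pages["UP"].append((level >> 1) & 1)   # bit 1 -> upper page
            pages["XP"].append((level >> 2) & 1)   # bit 2 -> extra page
            pages["TP"].append((level >> 3) & 1)   # bit 3 -> top page
        return pages

    print(pages_from_levels([0, 5, 15]))
    # {'LP': [0, 1, 1], 'UP': [0, 0, 1], 'XP': [0, 1, 1], 'TP': [0, 0, 1]}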
In various embodiments, the memory controller 111 may program the first page of data 112, the second page of data 114, the third page of data 116, and the fourth page of data 118 in multiple passes into pages of the multi-level NAND memory array 121. In some embodiments, the first page 142 may be a lower page (LP), the second page 144 may be an upper page (UP), the third page 146 may be an extra page (XP), and the fourth page 148 may be a top page (TP).

In various embodiments, the NAND memory system 101 may include a temperature sensor 190. In some embodiments, the temperature sensor 190 may be or include a temperature sensor circuit. In some embodiments, the temperature sensor 190 may be on the same chip as the multi-level NAND memory array 121. In various embodiments, one or more flag bytes 181 may be associated with the first page 142, one or more flag bytes 182 may be associated with the second page 144, one or more flag bytes 183 may be associated with the third page 146, and one or more flag bytes 184 may be associated with the fourth page 148. Similarly, one or more flag bytes 185 may be associated with the page 152, one or more flag bytes 186 may be associated with the page 154, one or more flag bytes 187 may be associated with the page 156, and one or more flag bytes 188 may be associated with the page 158. In similar fashion, one or more additional flag bytes, not shown for clarity, may be associated with each of the pages 162, 164, 166, 168, 172, 174, 176, and 178.

In some embodiments, the temperature sensor 190 may sense a first temperature associated with a first programming pass, and the memory controller 111 may store the first temperature in one or more of the flag bytes. In some embodiments, the first temperature may be sensed at the same time as the first programming pass, or may be sensed within a predetermined period of time before or after the first programming pass. In various embodiments, the temperature sensor 190 may sense a second temperature before the memory controller 111 programs the NAND memory in a second programming pass. In some embodiments, the second temperature may be sensed at a time associated with an internal read of data programmed during the first programming pass.

In various embodiments, the memory controller 111 may determine whether a difference between the first temperature and the second temperature exceeds a predetermined maximum temperature difference. If the maximum temperature difference is not exceeded, the memory controller 111 may proceed to program the NAND memory in a second programming pass. If the maximum temperature difference is exceeded, the memory controller 111 may send a maximum temperature difference exceeded flag to the host 103. In various embodiments, the host 103 may include an ECC engine 196 that may be directed by the host controller 131 to perform error correction operations in response to receiving the maximum temperature difference exceeded flag.

In various embodiments, the memory controller 111 may include a logic circuitry component 198 coupled with the memory interface 119. In some embodiments, the logic circuitry component 198 may be to program one or more NAND cells (e.g., including cell 143) of the multi-level NAND memory array 121 via the memory interface 119 with a first set of data in a first pass. In various embodiments, the logic circuitry component 198 may be to determine a first temperature of the multi-level NAND memory array 121 in association with the first pass.
In embodiments, the logic circuitry component 198 may store the first temperature in a flag byte (e.g., flag byte 181) associated with a page (e.g., first page 142). In some embodiments, the logic circuitry component 198 may be to determine a second temperature of the multi-level NAND memory array 121 and determine a temperature difference between the second temperature and the first temperature.

In various embodiments, the logic circuitry component 198 may be to perform one or more operations based at least in part on a result of the determination of the temperature difference. In some embodiments, the one or more operations may include programming the one or more NAND cells with a second set of data in a second pass, in response to the temperature difference being less than or equal to a predefined threshold value. In some embodiments, the one or more operations may include one or more of sending a temperature difference exceeded flag to the host controller 131, facilitating data correction associated with the one or more NAND cells, or facilitating recovery of data encoded by the one or more NAND cells, in response to the temperature difference being greater than the predefined threshold value.

In some embodiments, the logic circuitry component 198 may program the one or more NAND cells with an 8-16 technique, where the first set of data may include a first page of data, a second page of data, and a third page of data (e.g., LP, UP, and XP); and the second set of data may include a fourth page of data (e.g., TP). In embodiments, programming with an 8-16 technique may include programming each of the one or more NAND cells into one of eight levels based at least in part on the first set of data, and programming the one or more NAND cells into one of sixteen levels based at least in part on the first and second sets of data.

In some embodiments, the logic circuitry component 198 may program the one or more NAND cells with a 2-8-16 technique, where the first set of data may include a first page of data (e.g., LP), the second set of data may include a second and a third page of data (e.g., UP and XP), and the third set of data may include a fourth page of data (e.g., TP). In embodiments, programming with a 2-8-16 technique may include programming each of the one or more NAND cells into one of two levels based at least in part on the first set of data, programming each of the one or more NAND cells into one of eight levels based at least in part on the first and second sets of data, and programming each of the one or more NAND cells into one of sixteen levels based at least in part on the first, second, and third sets of data.

In various embodiments, for a 2-8-16 technique, the predefined threshold value may be a first predefined threshold value, the temperature difference may be a first temperature difference, and the second temperature may be associated with the second pass. In embodiments, the logic circuitry component 198 may be to determine a third temperature of the multi-level NAND memory array, determine whether a second temperature difference between the third temperature and the second temperature is less than or equal to a second predefined threshold value, and program the one or more NAND cells with the third set of data in a third pass, in response to the second temperature difference being less than or equal to the second predefined threshold value. In embodiments, the logic circuitry component 198 may send a temperature difference exceeded flag to the host controller 131 in response to the second temperature difference being greater than the second predefined threshold value.
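By way of illustration only, the following Python sketch models the two temperature checks of such a three-pass 2-8-16 flow. The function names, the scripted sensor readings, and both threshold values are assumptions for exposition, not an actual firmware interface.

    FIRST_THRESHOLD_C = 15.0    # assumed gate between the first and second pass
    SECOND_THRESHOLD_C = 10.0   # assumed gate between the second and third pass

    class Sim:
        """Stand-in controller with scripted sensor readings and one flag byte."""
        def __init__(self, temps):
            self.temps = iter(temps)
            self.flag = None            # stands in for a stored flag-byte temperature
            self.passes = []
        def sense(self):
            return next(self.temps)
        def program(self, data, levels):
            self.passes.append((data, levels))

    def three_pass_2_8_16(ctrl, lp, up_xp, tp):
        ctrl.program(lp, levels=2)             # first pass: 2 levels from LP
        ctrl.flag = ctrl.sense()               # T1, stored with the page
        t2 = ctrl.sense()                      # sensed at internal read of pass-1 data
        if abs(t2 - ctrl.flag) > FIRST_THRESHOLD_C:
            return "temperature_difference_exceeded_after_pass_1"
        ctrl.program(up_xp, levels=8)          # second pass: 8 levels from LP+UP+XP
        ctrl.flag = t2                         # T2 becomes the next reference
        t3 = ctrl.sense()                      # sensed at internal read of pass-2 data
        if abs(t3 - ctrl.flag) > SECOND_THRESHOLD_C:
            return "temperature_difference_exceeded_after_pass_2"
        ctrl.program(tp, levels=16)            # third pass: 16 levels
        return "complete"

    print(three_pass_2_8_16(Sim([40.0, 47.0, 52.0]), "LP", "UP+XP", "TP"))  # complete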
In some embodiments, for a three-pass technique such as a 2-8-16 technique, the first temperature check before the second pass data is programmed may compare a first temperature, sensed at the time of programming data in the first pass, to a second temperature, sensed when the data from the first pass is internally read. In some embodiments, if the second temperature is sensed within a predetermined time period of programming the second pass data, the second temperature may be stored in one or more flag bytes associated with one or more page addresses corresponding to the second pass data. A third temperature may be sensed when the data from the second pass is internally read, followed by a temperature check that compares the third temperature to the second temperature before programming third pass data. In some embodiments, if the second temperature is not sensed within the predetermined time period of programming the second pass data, an additional temperature may be sensed when programming the second pass data, and that additional temperature is then stored in one or more flag bytes for later comparison with the third temperature mentioned above, instead of comparing the second temperature to the third temperature.

In embodiments, the electronic system 100 may be a system on chip (SOC), integrating the host 103 and the NAND memory system 101, together with other components, e.g., cache, random access memory (RAM), peripheral functions, or other functions, onto one chip. In some embodiments, the NAND memory system 101 may be a storage device, and the host 103 may be an external computing device coupled to the NAND memory system 101. Alternatively, the electronic system 100 may be a computing system and the host controller 131 may be a processor of the computing system, coupled to the memory controller 111 with or without the interface 113 and the interface 133, in some embodiments. The electronic system 100 may be for various applications such as wireless communication, digital signal processing, security, and other applications, in various embodiments.

In embodiments, the host 103 may be a computing system, a storage system, or any other system that may program multiple pages of data into a multi-level NAND memory array. In some examples, the host 103 may be implemented by a personal computer (e.g., a desktop computer, a laptop computer, etc.). However, the host 103 may be implemented by any other hardware and/or software. For example, the host 103 may be a smartphone, a television, a set top box, a printer, a home automation system, etc. In embodiments, the host 103 may be any type of computing system capable of programming data into the NAND memory system 101. In some embodiments, the host 103 may be a storage system, e.g., a solid-state drive (SSD) system, and the host controller 131 may be an SSD controller. When the host 103 is an SSD system, the host 103 may be coupled to another computing system, where data, e.g., the first page of data 132, the second page of data 134, the third page of data 136, and the fourth page of data 138, may be generated by the other computing system or by the host 103.

In embodiments, the host 103 may include the interface 133 that communicates with the interface 113 of the NAND memory system 101 using the interconnect 145.
In embodiments, the interface 113 of the NAND memory system 101 may receive the first page of data 132, the second page of data 134, the third page of data 136, and the fourth page of data 138 to be stored in the buffer 117. In embodiments, any other type of communication interconnect or link may additionally or alternatively be used for the interconnect 145, the interface 133, and/or the interface 113, such as, for example, a Parallel Advanced Technology Attachment (PATA) interconnect developed by the American National Standards Institute (ANSI) as standard no. X3.221-1994, a Serial Advanced Technology Attachment (SATA) interconnect developed by the Serial ATA International Organization, a Small Computer System Interface (SCSI) interconnect, a Serial-Attached SCSI (SAS) interconnect developed by the T10 group of the InterNational Committee for Information Technology Standards (INCITS), a Peripheral Component Interconnect Express (PCIe) interconnect developed by the PCI Special Interest Group (PCI-SIG) as the PCI Express Base Specification, or a Non-Volatile Memory Express (NVMe) interconnect, etc.

In embodiments, the memory controller 111, the logic circuitry component 198, and/or the host controller 131 may be implemented by or include a hardware processor, e.g., a silicon based processor, such as a microcontroller, a 16-bit processor, a 32-bit processor, a 64-bit processor, a single core processor, a multi-core processor, a digital signal processor, an embedded processor, or any other processor. In addition, any other type of circuitry may additionally or alternatively be used such as, for example, an analog or digital circuit(s), a logic circuit, a programmable processor(s), an application specific integrated circuit(s) (ASIC(s)), a programmable logic device(s) (PLD(s)), and/or field programmable logic device(s) (FPLD(s)).

In some embodiments, the buffer 117 and/or the buffer 137 may be implemented as an application specific integrated circuit (ASIC). However, any other approach to implementing a buffer may additionally or alternatively be used. For example, the buffer 117 and/or the buffer 137 may be implemented in a memory die.

Figure 2 is a schematic representation of threshold voltage distributions 200 of QLC cells (e.g., cell 143) for multi-pass programming techniques and associated temperature readings, in accordance with various embodiments. In some embodiments, the threshold voltage distributions 200 may include a first threshold voltage distribution 202 associated with a 4-16 multi-pass programming technique, a second threshold voltage distribution 204 associated with an 8-16 multi-pass programming technique, and/or a third threshold voltage distribution 206 associated with a 2-8-16 multi-pass programming technique. In some embodiments, some or all of the multi-pass programming techniques performed with respect to the threshold voltage distributions 200 may be practiced by components shown and/or described with respect to the electronic system 100 of Figure 1, the computing device 600 of Figure 6, or some other component described with respect to Figure 1 and/or Figures 6-7.

Programming of multi-level per cell NAND components such as MLC, TLC, or QLC may be performed in multiple passes to minimize interference from neighboring word lines (WLs). In some embodiments, programming of QLC cells may be performed in two passes according to a 4-16 programming technique, illustrated with respect to the first threshold voltage distribution 202.
In a first pass 210, two pages of data may be provided, and the cells of the corresponding WL may be programmed into one of the four levels that encode two bits of information according to the two pages of data. In a second pass 212, two more pages of data may be provided, the two pages of data programmed in the first pass may be internally read, and the cells of the corresponding WL may be programmed into one of the sixteen levels that encode four bits of information.

Alternatively, programming of QLC cells may be performed in two passes according to an 8-16 programming technique, illustrated with respect to the second threshold voltage distribution 204. In a first pass 214, three pages of data may be provided, and the cells of the corresponding WL may be programmed into one of the eight levels that encode three bits of information according to the three pages of data. In a second pass 216, one more page of data may be provided, the three pages of data that were programmed in the first pass may be internally read, and the cells of the corresponding WL may be programmed into one of the sixteen levels that encode four bits of information.

In another alternative, programming of QLC cells may be performed in three passes according to a 2-8-16 technique, illustrated with respect to the third threshold voltage distribution 206. In a first pass 218, one page of data may be provided and may be used to program the cells of the corresponding WL into one of the two levels that encode one bit of information. In a second pass 220, two more pages of data may be provided, the page of data programmed in the first pass may be internally read, and the cells may be programmed into one of the eight levels that encode three bits of information. In a third pass 222, one more page of data may be provided, the three pages of data from the second pass may be internally read, and the cells may be programmed into one of the sixteen levels that encode four bits of information.

Generally, the successful placement of cells in each pass may depend on correctly reading the data programmed in the previous pass. Any error made in internally reading the data from an earlier pass may lead to programming cells into an incorrect level in a subsequent pass. The requirements set forth with respect to the allowable RBER for final placement of the cells may determine the allowable RBER for internal read operations performed on the data programmed in earlier passes. As an example according to some embodiments, if an acceptable RBER for final placement is 5e-3, the allowable RBER for internal reading of the data from an earlier pass may be 5e-4.

In normal read operations, such as an external read command issued by an SSD controller (e.g., host controller 131), the data is typically fed into an error-correcting engine (e.g., ECC engine 196) such as a low-density parity-check (LDPC) engine. In embodiments, the engine can determine whether the data is correctable. If the data is correctable, the error-correcting engine corrects the data. If not, the SSD controller may re-attempt reading the data according to a series of data recovery procedures. Such data recovery procedures may include re-read, use of one or more look-up tables to adjust read levels (e.g., read voltage), reading a neighboring WL and using adjusted read levels based on the content of the neighboring WL, acquiring soft-bit read information, and any other suitable data recovery procedure. However, when data from a previous programming pass is internally read (e.g., by a memory controller) in preparation to program a subsequent pass in a multi-pass programming technique according to typical legacy approaches, there is no opportunity to correct the data using an error correcting engine, or to apply appropriate data recovery procedures if the data is not correctable.
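By way of illustration only, the Python sketch below shows why the internal-read RBER budget is so tight for the 8-16 second pass: the three internally read bits select among the eight first-pass levels, so a single misread bit lands the cell at the wrong final level. The direct binary mapping is a simplification assumed for exposition (real parts typically use Gray-coded maps).

    def second_pass_level(read_back_bits, tp_bit):
        """read_back_bits: (lp, up, xp) as internally read; tp_bit: new top-page data."""
        lp, up, xp = read_back_bits
        eight_level = (xp << 2) | (up << 1) | lp   # state programmed in the first pass
        return (eight_level << 1) | tp_bit         # one of sixteen final levels

    # A single misread bit from the first pass misplaces the final level:
    print(second_pass_level((1, 0, 1), 1))   # correct internal read -> level 11
    print(second_pass_level((0, 0, 1), 1))   # LP bit misread        -> level 9 (misplaced)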
Various embodiments may include one or more temperature checks following one or more internal reads (e.g., by memory controller 111) of data programmed during one or more passes of a multi-pass programming technique, before programming additional data in a subsequent pass of the multi-pass programming technique. In some embodiments, a temperature check may be performed before programming data in the second pass 212 by comparing temperature information recorded at the time data was programmed in the first pass 210 to a temperature sensed when the first pass data is internally read in preparation for the second pass 212. Similarly, a temperature check may be performed before programming data in the second pass 216 by comparing temperature information recorded at the time data was programmed in the first pass 214 to a temperature sensed when the first pass data is internally read in preparation for the second pass 216. In various embodiments, a three-pass programming technique may include two temperature checks, such as a first temperature check before programming data in the second pass 220, and a second temperature check before programming data in the third pass 222. If a maximum allowable temperature difference is exceeded during a temperature check, a temperature difference exceeded flag may be sent to a host controller, which may perform one or more error correction and/or data recovery procedures. In various embodiments, this may improve data integrity and/or reduce the RBER in comparison to typical legacy approaches.

Figure 3 is a flow diagram of a technique 300 for programming QLC cells in a two-pass 8-16 programming technique, in accordance with various embodiments. In some embodiments, some or all of the technique 300 may be practiced by components shown and/or described with respect to the electronic system 100 of Figure 1, the computing device 600 of Figure 6, or some other component described with respect to Figure 1 and/or Figures 6-7.

In various embodiments, at a block 302, the technique 300 may include receiving first, second, and third pages of data in a first pass. In some embodiments, the three pages may be LP, UP, and XP. In some embodiments, the first, second, and third pages of data may be received by the memory controller 111 from the host controller 131. At a block 304, the technique 300 may include acquiring a first temperature, T1. In various embodiments, the temperature T1 may be acquired from an on-chip temperature sensor (e.g., temperature sensor 190) by the memory controller 111. At a block 306, the technique 300 may include programming the first, second, and third page data, along with the T1 information, at a specified address. In various embodiments, the logic circuitry component 198 of the memory controller 111 may program the cells (e.g., including cell 143) of the WL that correspond to the first, second, and third page address, along with the T1 information, in the first pass programming.
In some embodiments, the T1 information may be programmed into a location corresponding to one or more page addresses, such as one or more flag bytes (e.g., flag bytes 181, 182, 183) associated with the programmed page addresses (e.g., page addresses for first page 142, second page 144, third page 146).

At a block 308, the technique 300 may include receiving a fourth page of data in a second pass. In embodiments, the fourth page may be a TP. At a block 310, the technique 300 may include internally reading the first, second, and third pages of data from the specified address, along with the temperature information, T1, from the location used to store it during the first pass. At a block 312, the technique 300 may include extracting the T1 information from the internally read data. At a block 314, the technique 300 may include acquiring a second temperature, T2. In various embodiments, the second temperature, T2, may be associated with the internal read of the first, second, and third pages of data at the block 310, and/or may be acquired within a predetermined time of reading the first, second, and third pages of data.

At a decision block 316, the technique 300 may include determining whether a difference between T1 and T2 is greater than a predetermined maximum temperature difference, ΔTmax. If it is determined that the difference between T1 and T2 is less than or equal to ΔTmax, the technique 300 may include, at a block 318, programming the first, second, third, and fourth pages at the specified address in a second pass. If, at the block 316, it is determined that the difference between T1 and T2 is greater than ΔTmax, the technique 300 may include, at a block 320, failing with an excessive temperature difference status. In some embodiments, failing with the excessive temperature difference status may include sending a temperature difference exceeded flag (e.g., from the memory controller 111 to the host controller 131). In embodiments, the temperature difference exceeded flag may be indicated with a status bit. In various embodiments, the memory controller 111 may perform a temperature check, including determining the difference between T1 and T2, automatically, without receiving a temperature check command from the host controller 131.

In various embodiments, upon receiving an excessive temperature difference flag at the block 320, the host controller 131 (e.g., an SSD controller) may perform one or more error correction or data recovery procedures (e.g., with ECC engine 196). In various embodiments, the host controller 131 may issue a read command to externally read the data for LP, UP, and XP, and correct them through an ECC engine 196. In some embodiments, the host controller 131 may send the corrected data back to the NAND device (e.g., memory controller 111), issuing a program command that uses externally provided data for LP, UP, and XP along with TP. In embodiments, if the ECC engine 196 determines that the data is not correctable, the host controller 131 may perform other data recovery procedures such as auto read calibration, corrective read using look-up tables to adjust read parameters, soft-bit read, and/or any other suitable data recovery procedure.
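By way of illustration only, the following Python sketch models blocks 306-312, in which T1 is stored alongside the page data and recovered during the internal read. The one-byte encoding with 1 degree C resolution and a -40 degree C bias is an assumption for exposition; an actual implementation could encode the temperature in the flag bytes in any convenient form.

    def encode_temp(celsius):
        """Pack a temperature into one flag byte (assumed range -40..215 degrees C)."""
        return int(round(celsius)) + 40          # bias so the byte is non-negative

    def decode_temp(byte):
        return byte - 40

    page_buffer = bytearray(b"LP+UP+XP data...")    # data programmed in the first pass
    page_buffer.append(encode_temp(41.7))           # block 306: append T1 information

    # ... later, before the second pass (blocks 310-314) ...
    t1 = decode_temp(page_buffer[-1])               # blocks 310-312: extract T1
    t2 = 49.0                                       # block 314: acquire T2
    print(t1, abs(t2 - t1))                         # 42 7.0 -> compare against dTmax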
Figure 4 is a flow diagram of a technique 400 for programming QLC cells in a two-pass 8-16 programming technique, in accordance with other embodiments. In some embodiments, some or all of the technique 400 may be practiced by components shown and/or described with respect to the electronic system 100 of Figure 1, the computing device 600 of Figure 6, or some other component described with respect to Figure 1 and/or Figures 6-7.

In various embodiments, at a block 402, the technique 400 may include receiving first, second, and third pages of data in a first pass. At a block 404, the technique 400 may include acquiring a first temperature, T1. In various embodiments, the temperature T1 may be acquired from an on-chip temperature sensor (e.g., temperature sensor 190) by the memory controller 111. At a block 406, the technique 400 may include programming the first, second, and third page data, along with the T1 information, at a specified address. In various embodiments, the logic circuitry component 198 of the memory controller 111 may program the cells (e.g., including cell 143) of the WL that correspond to the first, second, and third page address, along with the T1 information, in the first pass programming. In some embodiments, the T1 information may be programmed into a location corresponding to one or more page addresses, such as one or more flag bytes (e.g., flag bytes 181, 182, 183) associated with the programmed page addresses (e.g., page addresses for first page 142, second page 144, third page 146).

At a block 408, the technique 400 may include receiving a temperature check command (e.g., at the memory controller 111 from the host controller 131). In various embodiments, the host controller 131 (e.g., SSD controller) may issue the temperature check command before issuing a program command for a second pass. At a block 410, the technique 400 may include internally reading data from locations used to store first pass temperature information. At a block 412, the technique 400 may include extracting the T1 information from the internally read data. At a block 414, the technique 400 may include acquiring a second temperature, T2. At a decision block 416, the technique 400 may include determining whether a difference between T1 and T2 is greater than a predetermined maximum temperature difference, ΔTmax. If it is determined that the difference between T1 and T2 is less than or equal to ΔTmax, the technique 400 may include, at a block 418, issuing a pass status to the host controller, and receiving a fourth page of data (e.g., TP) in response. In various embodiments, the fourth page of data may be received along with a program command from the host controller specifying that the first, second, and third page data (e.g., LP, UP, and XP) are to be internally read. At a block 420, the technique 400 may include internally reading the first, second, and third pages. At a block 422, the technique 400 may include programming the first, second, third, and fourth pages at a specified address in a second pass.

If, at the block 416, it is determined that the difference between T1 and T2 is greater than ΔTmax, the technique 400 may include, at a block 424, failing with an excessive temperature difference status. In some embodiments, failing with the excessive temperature difference status may include sending a temperature difference exceeded flag (e.g., from the memory controller 111 to the host controller 131).
In embodiments, the temperature difference exceeded flag may be indicated with a status bit.

In various embodiments, upon receiving an excessive temperature difference flag at the block 424, the host controller 131 (e.g., an SSD controller) may perform one or more error correction or data recovery procedures. In various embodiments, the host controller 131 may issue a read command to externally read the data for LP, UP, and XP, and correct them through the ECC engine 196. In some embodiments, the host controller 131 may send the corrected data back to the NAND device (e.g., memory controller 111), issuing a program command that uses externally provided data for LP, UP, and XP along with TP. In embodiments, if the ECC engine 196 determines that the data is not correctable, the host controller 131 may perform other data recovery procedures such as auto read calibration, corrective read using look-up tables to adjust read parameters, soft-bit read, and/or any other suitable data recovery procedure.

In various embodiments, if the data is correctable or recoverable after failing with an excessive temperature difference status, the host controller 131 may issue a program command to the memory controller 111 using externally read and corrected first, second, and third page data (e.g., LP, UP, and XP) along with a fourth page (e.g., TP). The memory controller 111 may then proceed to program the first, second, third, and fourth pages at the specified address.

Although the technique 300 described with respect to Figure 3 and the technique 400 described with respect to Figure 4 were described using 8-16 QLC programming, it should be understood that various embodiments may use any suitable multi-pass programming technique, including different multi-pass techniques for TLC or QLC NAND devices. In some embodiments, for a three-pass QLC programming based on a 2-8-16 technique, the memory controller 111 may determine whether there is an excessive temperature difference between a second pass (where NAND cells are programmed from a 2-level to an 8-level state) and a third pass (where the 8-level content of the cells is internally read and the cells are programmed from an 8-level to a 16-level state). In various embodiments, the temperature may be determined by the memory controller 111 using the temperature sensor 190, and may be stored in one or more locations such as the flag bytes (e.g., one or more of flag bytes 181, 182, 183, 184, 185, 186, 187, 188) associated with one or more page addresses.
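By way of illustration only, the Python sketch below separates the two sides of the technique 400 handshake: the memory controller answers the temperature check command, and the host either sends the fourth page or falls back to external correction. The command names and the status dictionary are assumptions for exposition, not a real command protocol.

    DT_MAX_C = 15.0   # assumed maximum allowed temperature difference, degrees C

    def handle_temperature_check(stored_t1, current_t2):
        """Memory-controller side of the temperature check command (blocks 408-424)."""
        if abs(current_t2 - stored_t1) <= DT_MAX_C:
            return {"status": "pass"}
        return {"status": "fail", "flag": "temperature_difference_exceeded"}

    def host_second_pass(ctrl_status, send_fourth_page, recover_and_resend):
        """Host-controller side: act on the temperature check result."""
        if ctrl_status["status"] == "pass":
            return send_fourth_page()        # block 418: send TP plus a program command
        return recover_and_resend()          # external read, ECC, resend LP/UP/XP + TP

    print(host_second_pass(handle_temperature_check(40.0, 46.0),
                           lambda: "programmed", lambda: "recovered"))   # programmed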
Figure 5 is a flow diagram illustrating different options to program a third pass 500 of a 2-8-16 QLC programming technique, based at least in part on a temperature check status of a NAND memory device, in accordance with various embodiments. In some embodiments, some or all of the third pass 500 may be practiced by components shown and/or described with respect to the electronic system 100 of Figure 1, the computing device 600 of Figure 6, or some other component described with respect to Figure 1 and/or Figures 6-7.

In various embodiments, the third pass 500 may include a temperature check at a block 501. In some embodiments, before the temperature check is performed at the block 501, the first two passes of the 2-8-16 QLC programming technique may have been performed, and a temperature of the NAND memory device associated with the second pass programming may have been stored (e.g., in one or more of flag bytes 181, 182, 183). At the block 501, performing the temperature check may include receiving a temperature check command at the memory controller 111 from the host controller 131. In response to the temperature check command, the memory controller 111 may acquire a current temperature (e.g., from the temperature sensor 190) and compare the current temperature to temperature information stored during the second pass of the 2-8-16 QLC programming technique (e.g., by reading the flag bytes 181, 182, 183).

If the difference between the current temperature and the temperature stored during the second pass is less than or equal to a predetermined maximum temperature difference threshold, the temperature check may be considered to have been passed, and the third pass 500 may proceed to a first option 502. In various embodiments, the first option 502 may include receiving a fourth page of data in at a block 504 along with a third pass program command at a block 506 (e.g., from the host controller 131). In some embodiments, the memory controller 111 may program the NAND in a third pass, with internally read first, second, and third pages along with the fourth page of data received at the block 504, in response to the third pass program command received at the block 506.

If the difference between the current temperature and the temperature stored during the second pass is greater than the predetermined maximum temperature difference threshold, the temperature check may be considered to have failed, and the third pass 500 may proceed to a block 508 that may include receiving a fourth page of data in (e.g., at the memory controller 111 from the host controller 131). In various embodiments, the memory controller 111 may send an excessive temperature difference flag to the host controller 131 if the temperature check at the block 501 fails. In some embodiments, the memory controller 111 may inform the host controller 131 (e.g., SSD controller) to perform error correction and recovery operations. In response to the excessive temperature difference flag, the host controller may perform an external read of a third page (e.g., XP) at a block 510 that may result in third page data out at a block 512.

In some embodiments, at a decision block 514, an evaluation of the error correction performed with respect to the third page may be performed to determine whether the third page results are sufficient to proceed without additional error correction. If it is determined that the results are sufficient to proceed without additional error correction, the third pass 500 may proceed to a second option 516. In various embodiments, the second option 516 may include receiving a third page of data in at a block 518 along with a third pass program command at a block 520. In some embodiments, the memory controller 111 may program the NAND in a third pass, with internally read first and second pages along with the third page of data received at the block 518 and the fourth page of data received at the block 508, in response to the third pass program command received at the block 520.

If, at the decision block 514, it is determined that the results are not sufficient to proceed without additional error correction, the third pass 500 may proceed to additional error correction blocks. A block 522 may include receiving a corrected third page of data in. A block 524 may include performing an external read of a second page followed by a transfer of the second page data out (e.g., to host controller 131) at a block 526.
After the second page data is verified, it may be transferred to the memory controller 111 at a block 528 as second page data in. In some embodiments, a block 530 may include performing an external read of a first page followed by a transfer of the first page data out (e.g., to host controller 131) at a block 532. In various embodiments, the third pass 500 may proceed to a third option 534 that may include a transfer of the first page data to the memory controller at a block 536 as first page data in, after the first page data out is verified (e.g., by ECC engine 196). The third option 534 may also include receiving a third pass program command at a block 538. In some embodiments, the memory controller 111 may program the NAND in a third pass, with error-corrected first page data received at the block 536, error-corrected second page data received at the block 528, error-corrected third page data received at the block 522, and fourth page data received at the block 508.

In some embodiments, if the temperature check performed at the block 501 fails, first, second, and third page data may be externally read such that the check at the decision block 514 is not performed, and the third option 534 may be followed. In such embodiments, the second option 516 may not be included.
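By way of illustration only, the following Python sketch summarizes which pages are internally read and which are resupplied by the host under each of the three options of Figure 5. The selection predicate is an assumption for exposition; as noted above, some embodiments skip the second option 516 entirely when the temperature check fails.

    def third_pass_sources(temp_check_passed, xp_needs_more_correction):
        """Return, per page, whether the controller reads the page internally or
        the host supplies externally corrected data for the third pass."""
        if temp_check_passed:                        # first option (block 502)
            return {"LP": "internal", "UP": "internal",
                    "XP": "internal", "TP": "external"}
        if not xp_needs_more_correction:             # second option (block 516)
            return {"LP": "internal", "UP": "internal",
                    "XP": "external", "TP": "external"}
        return {"LP": "external", "UP": "external",  # third option (block 534)
                "XP": "external", "TP": "external"}

    print(third_pass_sources(True, False))    # option 1: only TP supplied by the host
    print(third_pass_sources(False, True))    # option 3: all pages host-corrected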
Figure 6 illustrates a block diagram of an example computing device 600 suitable for use with various components of Figure 1, the multi-pass programming techniques described with respect to Figure 2, the technique 300 of Figure 3, the technique 400 of Figure 4, and/or the third pass 500 of Figure 5, in accordance with various embodiments. For example, the computing device 600 may be, or may include or otherwise be coupled to, the electronic system 100, memory controller 111, host controller 131, and/or one or more other components shown and/or described with respect to Figure 1. As shown, computing device 600 may include one or more processors or processor cores 602 and system memory 604. For the purpose of this application, including the claims, the terms "processor" and "processor cores" may be considered synonymous, unless the context clearly requires otherwise. The processor 602 may include any type of processor, such as a central processing unit (CPU), a microprocessor, and the like. The processor 602 may be implemented as an integrated circuit having multi-cores, e.g., a multi-core microprocessor. In some embodiments, processors 602, in addition to cores, may further include hardware accelerators, e.g., hardware accelerators implemented with Field Programmable Gate Arrays (FPGA). The computing device 600 may include mass storage devices 606 (such as diskette, hard drive, non-volatile memory (NVM) (e.g., compact disc read-only memory (CD-ROM), digital versatile disk (DVD)), any other type of suitable NVM, and so forth).

In general, system memory 604 and/or mass storage devices 606 may be temporary and/or persistent storage of any type, including, but not limited to, volatile and non-volatile memory, optical, magnetic, and/or solid state mass storage, and so forth. Volatile memory may include, but is not limited to, static and/or dynamic random access memory (DRAM). Non-volatile memory may include, but is not limited to, electrically erasable programmable read-only memory, phase change memory, resistive memory, and so forth.

The computing device 600 may further include I/O devices 608 (such as a display (e.g., a touchscreen display), keyboard, cursor control, remote control, gaming controller, image capture device, and so forth) and communication interfaces 610 (such as network interface cards, modems, infrared receivers, radio receivers (e.g., Bluetooth), and so forth), one or more antennas, and/or any other suitable component.

The communication interfaces 610 may include communication chips (not shown) that may be configured to operate the device 600 in accordance with a local area network (LAN) (e.g., Ethernet) and/or a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or Long-Term Evolution (LTE) network. The communication chips may also be configured to operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chips may be configured to operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The communication interfaces 610 may operate in accordance with other wireless protocols in other embodiments.

In various embodiments, computing device 600 may include a data storage device 652 that may be configured in similar fashion to the electronic system 100 described with respect to Figure 1. In some embodiments, the data storage device 652 may be coupled with other components of the computing device 600. In some embodiments, the data storage device 652 may include a memory controller 654 that may be configured in similar fashion to the memory controller 111 described with respect to Figure 1. In some embodiments, the memory controller 654 may include a logic circuitry component 656 that may be configured in similar fashion to the logic circuitry component 198 described with respect to Figure 1.

The above-described computing device 600 elements may be coupled to each other via system bus 612, which may represent one or more buses. In the case of multiple buses, they may be bridged by one or more bus bridges (not shown). Each of these elements may perform its conventional functions known in the art. In particular, system memory 604 and mass storage devices 606 may be employed to store a working copy and a permanent copy of the programming instructions for the operation of various components of computing device 600, including but not limited to an operating system of computing device 600, one or more applications, and/or operations associated with computing device 600 serving as memory controller 111, host controller 131, and/or logic circuitry component 198, collectively denoted as computational logic 622. The various elements may be implemented by assembler instructions supported by processor(s) 602 or high-level languages that may be compiled into such instructions.
In some embodiments, the computing device 600 may be implemented as a fixed function ASIC, an FPGA, or any other suitable device with or without programmability or configuration options. The permanent copy of the programming instructions may be placed into mass storage devices 606 in the factory, or in the field through, for example, a distribution medium (not shown), such as a compact disc (CD), or through communication interface 610 (from a distribution server (not shown)). That is, one or more distribution media having an implementation of the agent program may be employed to distribute the agent and to program various computing devices. The number, capability, and/or capacity of the elements 608, 610, 612 may vary, depending on whether computing device 600 is used as a stationary computing device, such as a set-top box or desktop computer, or a mobile computing device, such as a tablet computing device, laptop computer, game console, or smartphone. Their constitutions are otherwise known, and accordingly will not be further described. In some embodiments, logic circuitry component 198, the memory controller 111, and/or the host controller 131 may be included with computational logic 622 or hardware accelerators of processor 602. For some embodiments, at least one of processors 602 may be packaged together with computational logic 622 configured to practice aspects of embodiments described herein to form a System in Package (SiP) or a System on Chip (SoC). In various implementations, the computing device 600 may comprise one or more components of a data center, a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, an ultra mobile PC, or a mobile phone. In some embodiments, the computing device 600 may include one or more components of a server. In further implementations, the computing device 600 may be any other electronic device that processes data. As will be appreciated by one skilled in the art, the present disclosure may be embodied as methods or computer program products. Accordingly, the present disclosure, in addition to being embodied in hardware as earlier described, may take the form of an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to as a “circuit,” “module,” or “system.” Figure 7 illustrates an example computer-readable storage medium 702 having instructions configured to practice all or selected ones of the operations associated with the computing device 600, earlier described with respect to Figure 6; the electronic system 100, the memory controller 111, the logic circuitry component 198, and/or the host controller 131 described with respect to Figure 1; the technique 300 of Figure 3; the technique 400 of Figure 4; and/or the third pass 500 described with respect to Figure 5, in accordance with various embodiments. As illustrated, computer-readable storage medium 702 may include a number of programming instructions 704. The storage medium 702 may represent a broad range of non-transitory persistent storage media known in the art, including but not limited to flash memory, dynamic random access memory, static random access memory, an optical disk, a magnetic disk, etc.
Programming instructions 704 may be configured to enable a device, e.g., memory controller 111, host controller 131, and/or other components of the electronic system 100, in response to execution of the programming instructions 704, to perform, e.g., but not limited to, various operations described for the memory controller 111, the logic circuitry component 198, the host controller 131, the computing device 600 of Figure 6, operations shown and/or described with respect to technique 300 of Figure 3, the technique 400 of Figure 4, and/or the third pass 500 of Figure 5. In alternate embodiments, programming instructions 704 may be disposed on multiple computer-readable storage media 702. In an alternate embodiment, storage medium 702 may be transitory, e.g., signals encoded with programming instructions 704. Referring back to Figure 6, for an embodiment, at least one of processors 602 may be packaged together with memory having all or portions of computational logic 622 configured to practice aspects shown or described for the memory controller 111, the logic circuitry component 198, the host controller 131, the computing device 600 of Figure 6, operations shown and/or described with respect to technique 300 of Figure 3, the technique 400 of Figure 4, and/or the third pass 500 of Figure 5. For an embodiment, at least one of processors 602 may be packaged together with memory having all or portions of computational logic 622 configured to practice aspects described for the memory controller 111, the logic circuitry component 198, the host controller 131, the computing device 600 of Figure 6, operations shown and/or described with respect to technique 300 of Figure 3, the technique 400 of Figure 4, and/or the third pass 500 of Figure 5 to form a System in Package (SiP). For an embodiment, at least one of processors 602 may be integrated on the same die with memory having all or portions of computational logic 622 configured to practice aspects described for the memory controller 111, the logic circuitry component 198, the host controller 131, the computing device 600 of Figure 6, operations shown and/or described with respect to technique 300 of Figure 3, the technique 400 of Figure 4, and/or the third pass 500 of Figure 5. For an embodiment, at least one of processors 602 may be packaged together with memory having all or portions of computational logic 622 configured to practice aspects of the memory controller 111, the logic circuitry component 198, the host controller 131, the computing device 600 of Figure 6, operations shown and/or described with respect to technique 300 of Figure 3, the technique 400 of Figure 4, and/or the third pass 500 of Figure 5 to form a System on Chip (SoC). Machine-readable media (including non-transitory machine-readable media, such as machine-readable storage media), methods, systems and devices for performing the above-described techniques are illustrative examples of embodiments disclosed herein.
Additionally, other devices in the above-described interactions may be configured to perform various disclosed techniques.
EXAMPLES
Example 1 may include a memory controller comprising: a memory interface; and a logic circuitry component coupled with the memory interface, wherein the logic circuitry component is to: program one or more NAND cells of a multi-level NAND memory array via the memory interface with a first set of data in a first pass; determine a first temperature of the multi-level NAND memory array in association with the first pass; determine a second temperature of the multi-level NAND memory array; determine a temperature difference between the second temperature and the first temperature; and perform one or more operations based at least in part on a result of the determination of the temperature difference. Example 2 may include the subject matter of Example 1, wherein the one or more operations include one or more of: program the one or more NAND cells with a second set of data in a second pass, in response to the temperature difference being less than or equal to a predefined threshold value; and send a temperature difference exceeded flag to a host controller, facilitate an external data read of the one or more NAND cells, facilitate data correction associated with the one or more NAND cells, or facilitate recovery of data encoded by the one or more NAND cells, in response to the temperature difference being greater than the predefined threshold value. Example 3 may include the subject matter of any one of Examples 1-2, wherein the logic circuitry component is to store the first temperature in a flag byte associated with a page address. Example 4 may include the subject matter of any one of Examples 1-3, wherein the logic circuitry component is to program the one or more NAND cells with a second set of data in a second pass, in response to the temperature difference being less than or equal to the predefined threshold value. Example 5 may include the subject matter of Example 4, wherein: the one or more NAND cells are quad-level cells; the first set of data includes a first page of data and a second page of data; the second set of data includes a third page of data and a fourth page of data; the first pass includes programming each of the one or more NAND cells into one of four levels based at least in part on the first set of data; and the second pass includes programming each of the one or more NAND cells into one of sixteen levels based at least in part on the first and second sets of data. Example 6 may include the subject matter of Example 4, wherein: the one or more NAND cells are quad-level cells; the first set of data includes a first page of data, a second page of data, and a third page of data; the second set of data includes a fourth page of data; the first pass includes programming each of the one or more NAND cells into one of eight levels based at least in part on the first set of data; and the second pass includes programming the one or more NAND cells into one of sixteen levels based at least in part on the first and second sets of data. Example 7 may include the subject matter of Example 4, wherein the predefined threshold value is a first predefined threshold value, the temperature difference is a first temperature difference, the second temperature is associated with the second pass, and the logic circuitry component is also to: determine a third temperature of the multi-level NAND memory array; determine whether a second temperature difference between the third
temperature and the second temperature is less than or equal to a second predefined threshold value; and program the one or more NAND cells with a third set of data in a third pass, in response to the second temperature difference being less than or equal to the second predefined threshold value. Example 8 may include the subject matter of Example 7, wherein: the one or more NAND cells are quad-level cells; the first set of data includes a first page of data; the second set of data includes a second and a third page of data; the third set of data includes a fourth page of data; the first pass includes programming each of the one or more NAND cells into one of two levels based at least in part on the first set of data; the second pass includes programming each of the one or more NAND cells into one of eight levels based at least in part on the first and second sets of data; and the third pass includes programming each of the one or more NAND cells into one of sixteen levels based at least in part on the first, second, and third sets of data. Example 9 may include the subject matter of any one of Examples 1-3, wherein the logic circuitry component is to send a temperature difference exceeded flag to a host controller in response to the temperature difference being greater than the predefined threshold value. Example 10 may include the subject matter of any one of Examples 1-9, wherein the logic circuitry component is to determine the second temperature and determine whether the temperature difference between the second temperature and the first temperature is less than or equal to the predefined threshold in response to a temperature check command received from a host. Example 11 may include the subject matter of any one of Examples 1-10, wherein the logic circuitry component includes a processor. Example 12 may include a data storage apparatus comprising: a multi-level NAND memory array including one or more NAND cells associated with a word line; a memory controller coupled with the multi-level NAND array, wherein the memory controller is to: program the one or more NAND cells with a first set of data in a first pass; determine a first temperature of the multi-level NAND memory array in association with the first pass; determine a second temperature of the multi-level NAND memory array; determine whether a temperature difference between the second temperature and the first temperature is less than or equal to a predefined threshold value; and program the one or more NAND cells with a second set of data in a second pass, in response to the temperature difference being less than or equal to the predefined threshold value. Example 13 may include the subject matter of Example 12, further including a temperature sensor, wherein the memory controller is to determine the first and second temperatures based at least in part on temperatures sensed by the temperature sensor. Example 14 may include the subject matter of any one of Examples 12-13, wherein the memory controller is further to store the first temperature in a flag byte associated with a page address. Example 15 may include the subject matter of any one of Examples 12-14, wherein: the one or more NAND cells are quad-level cells; the first set of data includes a first page of data and a second page of data; the second set of data includes a third page of data and a fourth page of data; the first pass includes programming each of the one or more NAND cells into one of four levels based at least in part on the first set of data; and the second pass
includes programming each of the one or more NAND cells into one of sixteen levels based at least in part on the first and second sets of data. Example 16 may include the subject matter of any one of Examples 12-14, wherein the predefined threshold value is a first predefined threshold value, the temperature difference is a first temperature difference, the second temperature is associated with the second pass, and the memory controller is also to: determine a third temperature of the multi-level NAND memory array; determine whether a second temperature difference between the third temperature and the second temperature is less than or equal to a second predefined threshold value; and perform one or more operations based at least in part on a result of a determination of the second temperature difference. Example 17 may include the subject matter of any one of Examples 12-16, further including a host controller communicatively coupled with the memory controller, wherein the host controller is to send the first set of data to the memory controller. Example 18 may include the subject matter of Example 17, wherein the memory controller is to send a temperature difference exceeded flag to the host controller in response to the temperature difference being greater than the predefined threshold value. Example 19 may include the subject matter of Example 18, wherein the host controller is to perform an external data read to error correct the first set of data in response to the temperature difference exceeded flag. Example 20 may include the subject matter of any one of Examples 17-19, wherein: the host controller is to send a temperature check command to the memory controller; and the memory controller is to determine whether the temperature difference between the second temperature and the first temperature is less than or equal to the predefined threshold in response to the temperature check command. Example 21 may include the subject matter of any one of Examples 17-20, wherein the apparatus is a solid-state drive (SSD) and the host controller is an SSD controller. Example 22 may include a method comprising: receiving a first set of data from a host controller; programming, with a memory controller, one or more NAND cells associated with a word line of a multi-level NAND memory array with the first set of data in a first pass; determining, by the memory controller, a first temperature of the multi-level NAND memory array in association with the first pass; determining, by the memory controller, a second temperature of the multi-level NAND memory array; determining, by the memory controller, whether a temperature difference between the second temperature and the first temperature is less than or equal to a predefined threshold value; and performing one of, by the memory controller: programming the one or more NAND cells with a second set of data in a second pass, in response to the temperature difference being less than or equal to the predefined threshold value; or sending a temperature difference exceeded flag to the host controller, in response to the temperature difference being greater than the predefined threshold value. Example 23 may include the subject matter of Example 22, wherein the method includes storing the first temperature in a flag byte associated with a page address, and wherein determining whether the temperature difference between the second temperature and the first temperature is less than or equal to the predefined threshold value includes reading the flag byte associated with
the page address to obtain the stored first temperature. Example 24 may include the subject matter of Example 22, wherein the multi-level NAND memory array is a triple level cell (TLC) array, a quad level cell (QLC) array, or a multi-level cell (MLC) array. Example 25 may include the subject matter of Example 22, wherein the host controller is a solid-state drive (SSD) controller. Example 26 may include an apparatus comprising: means for receiving a first set of data from a host controller; means for programming one or more NAND cells associated with a word line of a multi-level NAND memory array with the first set of data in a first pass; means for determining a first temperature of the multi-level NAND memory array in association with the first pass; means for determining a second temperature of the multi-level NAND memory array; means for determining whether a temperature difference between the second temperature and the first temperature is less than or equal to a predefined threshold value; and means for performing one of: programming the one or more NAND cells with a second set of data in a second pass, in response to the temperature difference being less than or equal to the predefined threshold value; or sending a temperature difference exceeded flag to the host controller, in response to the temperature difference being greater than the predefined threshold value. Example 27 may include the subject matter of Example 26, wherein the apparatus includes means for storing the first temperature in a flag byte associated with a page address, wherein the means for determining whether the temperature difference between the second temperature and the first temperature is less than or equal to the predefined threshold value includes means for reading the flag byte associated with the page address to obtain the stored first temperature. Example 28 may include the subject matter of any one of Examples 26-27, wherein the multi-level NAND memory array is a triple level cell (TLC) array or a quad level cell (QLC) array. Example 29 may include one or more non-transitory machine-readable media comprising instructions that cause a memory controller, in response to execution of the instructions by the memory controller, to: program one or more NAND cells of a multi-level NAND memory array with a first set of data in a first pass; determine a first temperature of the multi-level NAND memory array in association with the first pass; determine a second temperature of the multi-level NAND memory array associated with an internal read of the first set of data; determine a temperature difference between the second temperature and the first temperature; and perform one or more operations based at least in part on a result of the determination of the temperature difference. Example 30 may include the subject matter of Example 29, wherein the one or more operations include one or more of: program the one or more NAND cells with a second set of data in a second pass, in response to the temperature difference being less than or equal to a predefined threshold value; and send a temperature difference exceeded flag to a host controller, facilitate an external data read of the one or more NAND cells, facilitate data correction associated with the one or more NAND cells, or facilitate recovery of data encoded by the one or more NAND cells, in response to the temperature difference being greater than the predefined threshold value. Various embodiments may include any suitable combination of the above-described embodiments, including
alternative (or) embodiments of embodiments that are described in conjunctive form (and) above (e.g., the “and” may be “and/or”). Furthermore, some embodiments may include one or more articles of manufacture (e.g., non-transitory computer-readable media) having instructions stored thereon that, when executed, result in actions of any of the above-described embodiments. Moreover, some embodiments may include apparatuses or systems having any suitable means for carrying out the various operations of the above-described embodiments. The above description of illustrated implementations, including what is described in the Abstract, is not intended to be exhaustive or to limit the embodiments of the present disclosure to the precise forms disclosed. While specific implementations and examples are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the present disclosure, as those skilled in the relevant art will recognize. These modifications may be made to embodiments of the present disclosure in light of the above detailed description. The terms used in the following claims should not be construed to limit various embodiments of the present disclosure to the specific implementations disclosed in the specification and the claims. Rather, the scope is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation. |
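To make the temperature-gated pass of Examples 1-4 above concrete, here is a minimal C sketch of that control flow. It is only an illustration of the logic recited in the Examples: the threshold value, the function names, and the flag-byte handling are assumptions made for the example, not details taken from the disclosure.

```c
/* A minimal sketch of the temperature-gated second pass of Examples 1-4.
 * All names, the threshold value, and the flag-byte layout are
 * illustrative assumptions, not taken from the specification. */
#include <stdio.h>

#define TEMP_DELTA_LIMIT_C 25   /* hypothetical predefined threshold */

struct page_ctx {
    int first_pass_temp_c;      /* stored in a flag byte with the page address */
};

/* stand-ins for controller/NAND primitives the document assumes exist */
static int read_array_temp_c(void) { return 31; }
static void program_second_pass(struct page_ctx *p) { (void)p; puts("second pass programmed"); }
static void send_temp_flag_to_host(void) { puts("temperature difference exceeded flag sent"); }

static void second_pass_with_temp_check(struct page_ctx *p)
{
    int now = read_array_temp_c();
    int delta = now - p->first_pass_temp_c;
    if (delta < 0)
        delta = -delta;
    if (delta <= TEMP_DELTA_LIMIT_C) {
        /* Example 4: proceed with the second pass using internally read data */
        program_second_pass(p);
    } else {
        /* Example 9: flag the host so it can externally read and
         * error-correct the first-pass data before continuing */
        send_temp_flag_to_host();
    }
}

int main(void)
{
    struct page_ctx p = { .first_pass_temp_c = 28 };
    second_pass_with_temp_check(&p);
    return 0;
}
```

Run as an ordinary C program, the sketch prints whether the second pass proceeds or the host is flagged; in a real controller, the two stubbed primitives would be replaced by the temperature-sensor read and the NAND program operation.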
PROBLEM TO BE SOLVED: To provide techniques for fabricating analog and digital circuits on separate dies and stacking and integrating the dies within a single package to form a mixed-signal IC that provides many benefits. SOLUTION: The analog and digital circuits are implemented on two separate dies using different IC processes suitable for different types of circuits. The analog and digital dies are thereafter integrated (stacked) and encapsulated within the single package. Bonding pads 112 are provided to interconnect the dies and to connect the dies to external pins. The bonding pads are located and arranged in a manner to provide the required connectivity while minimizing the amount of die area required in order to implement the pads. In another aspect, the die-to-die connectivity may be tested in conjunction with a serial bus interface. SELECTED DRAWING: Figure 1 |
1. A mixed signal integrated circuit comprising: a package substrate having a plurality of bonding pads; a first die having a plurality of bonding pads and mounted on an upper surface of the package substrate, wherein a majority of digital circuits are assembled on the first die; and a second die having a plurality of bonding pads and resting on an upper surface of the first die, wherein a majority of analog circuits are assembled on the second die.
2. The integrated circuit of claim 1, wherein the plurality of bonding pads for each of the first and second dies are located near an edge of the die.
3. The integrated circuit of claim 1, wherein the first die comprises: a first set of bonding pads for interconnecting with associated bonding pads on the second die; and a second set of bonding pads for interconnecting with associated bonding pads on the package substrate.
4. The integrated circuit of claim 3, wherein the bonding pads in the first set and the second set are interdigitated.
5. The integrated circuit of claim 3, wherein the bonding pads in the first set and the second set are alternating along a line.
6. The integrated circuit of claim 1, wherein the first die comprises a set of bond pads located remote from an edge of the first die.
7. The integrated circuit of claim 1, wherein the package substrate and the first and second dies are encapsulated within a single package.
8. The integrated circuit of claim 1, wherein the package substrate and the first and second dies have dimensions with profiles that comply with specifications of a standard package.
9. The integrated circuit of claim 8, wherein the standard package is a ball grid array.
10. The integrated circuit of claim 1, wherein the first die and the second die are derived from wafers that have been processed to achieve a particular thickness.
11. The integrated circuit of claim 10, wherein the particular thickness is obtained by back grinding the wafers.
12. The integrated circuit of claim 1, wherein the first die and the second die are assembled using two different integrated circuit process technologies.
13. The integrated circuit of claim 1, wherein the first die is assembled using CMOS process technology.
14. A mixed signal integrated circuit comprising: a package substrate having a plurality of bonding pads; a first die having a plurality of bonding pads and mounted on an upper surface of the package substrate, wherein a majority of digital circuits are assembled on the first die; a second die mounted on an upper surface of a portion of the first die, wherein a majority of analog circuits are assembled on the second die; and a package that encapsulates the package substrate and the first and second dies, wherein the first die and the second die are assembled using two different integrated circuit (IC) process technologies.
15. A mixed signal integrated circuit comprising: a first die on which a majority of digital circuits are assembled; and a second die on which a majority of analog circuits are assembled, the second die further comprising a controller, wherein the second die includes one or more pads for receiving one or more supply signals for the analog circuits, and the controller is configured such that the voltage of a selected one of the supply signals for the analog circuits on the second die is reduced during a standby mode of operation.
16. The integrated circuit of claim 15, wherein the voltage for the selected supply signal of the supply signals is collapsed to zero during the standby mode of operation.
17. The integrated circuit of claim 15, wherein the controller is maintained in a power-on state during the standby mode of operation.
18. A method for testing an interface between a first die and a second die encapsulated in a single package, comprising: providing a first control value to a serial bus interface, wherein the first control value indicates a first test value to be transmitted from the first die to the second die; in response to the first control value, transmitting the first test value from the first die to the second die via an interconnection line; receiving the first test value on the second die; and comparing the first control value with the received first test value to verify the connectivity of the interconnection line.
19. The method of claim 18, further comprising: transmitting a second test value from the second die to the first die via the interconnection line; detecting the second test value on the first die; supplying the detected second test value as a second control value on the serial bus interface; and comparing the second control value with the second test value to verify the connectivity of the interconnection line. |
Mixed analog and digital integrated circuits
The present invention relates generally to circuits and, more particularly, to a technique for assembling analog circuits and digital circuits on separate dies and stacking the dies in a single package. Many applications require both analog signal processing and digital signal processing. One such application is within the area of wireless communication, where mixed analog and digital signal processing is required on both the transmitting and receiving sides. On the receiving side, a modulated analog signal (typically at radio frequency) is received, conditioned (e.g., amplified, filtered), downconverted, quadrature demodulated, and digitized to provide samples. Digital signal processing is then performed on the samples to recover the transmitted data. On the transmitting side, the data is processed (e.g., encoded, interleaved, and spread) and then converted to one or more analog signals. The analog signal is then conditioned, modulated, and upconverted to provide a modulated signal suitable for transmission over the radio link. Mixed signal circuits may also be used for other aspects of wireless communication, including voice/audio coders/decoders, analog-to-digital converters (ADCs) for digitizing various signals such as battery voltage and temperature, and other circuits. Mixed signal processing is also necessary for many other applications such as networking, computers, and others. Traditionally, analog and digital signal processing is obtained through separate analog and digital integrated circuits (ICs), with the interface between the two ICs obtained via ADCs and digital-to-analog converters (DACs). Digital circuits tend to generate a large amount of switching noise. By contrast, analog circuits typically include various sensitive circuits (e.g., oscillators, amplifiers, etc.) that prefer or need to operate in a quiet environment. Implementing analog and digital circuits on different ICs allows these circuits to be isolated and to operate in a preferred environment. Furthermore, the optimal processing techniques for analog and digital circuits are typically different. Whereas digital circuits are often implemented using standard CMOS processes, analog circuits may utilize linear capacitors and resistors that require extra processing steps to be added to the standard CMOS process. In order to reduce the cost and complexity of a product, both the analog circuits and the digital circuits can be assembled on a common substrate in a mixed signal IC. Mixed signal ICs offer a number of advantages such as reduced cost, fewer parts, smaller board area requirements, easier testing, and perhaps other advantages. However, assembling analog circuits and digital circuits on a common substrate has several disadvantages. First, the noise generated by the digital circuits degrades the performance of the analog circuits via coupling through the substrate. Second, analog circuits may require linear capacitors and resistors, and as a result may dictate the need for a specific IC process such as analog CMOS. Thus, although the analog circuits may occupy only a fraction of the die, the cost of the digital circuits increases as a result of the IC process selected for the analog circuits. Third, digital circuits typically benefit from technology scaling (e.g., transistor size reduction, lower operating voltage) while analog circuits may be adversely affected by voltage scaling.
And fourth, the design cycle of the mixed signal IC may be stretched because the design cycle of analog circuits is typically much longer than the design cycle of digital circuits. As described above, there is a need for a technique for assembling and integrating analog circuits and digital circuits so as to obtain the benefits of a mixed signal IC while minimizing the disadvantages of a conventional mixed signal IC assembled on a common substrate. An aspect of the present invention provides a technique for assembling analog circuits and digital circuits on different dies, stacking and integrating the dies in a single package, and forming a mixed signal IC that provides many of the above-mentioned benefits. In one form, the analog circuits and the digital circuits are implemented on two separate dies, possibly using different IC processes suitable for the different types of circuits. The analog die and the digital die are then integrated (stacked) in a single package and encapsulated. Bonding pads are provided to interconnect the two dies and to connect the dies to external pins. The bonding pads can be located and arranged to provide the necessary connectivity while minimizing the amount of die area required to implement the pads. In another form, die-to-die connectivity can be tested with a serial bus interface. In yet another form, the power supply supplied to some or all of the analog die (and perhaps the power supply supplied to some or all blocks in the digital die) may be collapsed (for example, to zero volts) during a standby mode to extend operating life. The present invention further provides integrated circuits, methods, and elements implementing various aspects, embodiments, and features of the invention, as described in further detail below. FIG. 1 is a plan view of a mixed signal IC according to an embodiment of the present invention. FIG. 2 is a side view of a mixed signal IC encapsulated in a specific IC package. FIG. 3A shows a side view of the interconnections between the various layers of the mixed signal IC. FIG. 3B shows a side view of the interconnections between the various layers of the mixed signal IC. FIG. 3C is a side view of the interconnections between the various layers of the mixed signal IC. FIG. 4A shows a plan view of the interconnection between the analog die and the digital die. FIG. 4B shows a plan view of the interconnection between the analog die and the digital die.
BRIEF DESCRIPTION OF THE DRAWINGS
The features, nature, and advantages of the present invention will become more apparent from the detailed description set forth below, taken in conjunction with the drawings, wherein like reference numerals refer to like parts. Aspects of the present invention provide techniques for assembling analog and digital circuits on different dies and stacking the dies within a single package. The mixed signal IC of the present invention provides many of the benefits of mixed signal ICs while minimizing the disadvantages of conventional mixed signal ICs fabricated on a common substrate. In one form, analog and digital circuits are implemented on two separate dies using IC processes suited to those circuits. For example, the digital circuits can be implemented using advanced low voltage digital CMOS technology to save cost, power consumption, and silicon area.
Depending on the required performance, the analog circuits can be designed and implemented using low cost, mature analog CMOS technology to reduce power consumption, or they can be designed with a higher performance technology. As described in detail below, the analog die and the digital die are then integrated (stacked) and encapsulated in a single package. FIG. 1 is a plan view of a mixed signal IC 100 according to an embodiment of the present invention. Mixed signal IC 100 comprises analog die 130 stacked on top of digital die 120, and digital die 120 is further stacked on top of package substrate 110. For many applications, the analog die is only a fraction of the size of the digital die (e.g., typically 1/8 to 1/4). For example, the analog die can have dimensions of 1.5 mm × 2 mm and the digital die 120 can have dimensions of 6 mm × 6 mm. Thus, the smaller analog die can be stacked on top of the digital die, saving space and enabling the use of a smaller package. The analog die and the digital die can have any shape and dimensions. For some circuits and IC processes, certain aspect ratios for the die may be preferred. For example, rectangular dies may be desirable due to ease of manufacture and other advantages. As shown in FIG. 1, a large number of bonding pads 112 are provided on four sides of the package substrate. These bonding pads 112 can be used to provide input/output (I/O) for the analog die and the digital die. The digital die 120 also includes a number of bonding pads 122 that are interconnectable via bond lines 123 to corresponding bonding pads on the package substrate 110. Similarly, the analog die 130 includes a plurality of bonding pads 132 that are interconnectable via bond lines to corresponding bonding pads 112 on the package substrate 110. The analog die 130 further includes a number of bonding pads 134 that are interconnectable via bond lines to corresponding bonding pads 124 on the digital die 120. When selecting the specific area of the digital die 120 on which the analog die 130 is placed, various factors can be considered. Improved performance can be achieved if the analog die 130 is placed on a quieter area of the digital die 120. It may be desirable for the analog die 130 to be placed on a section of the digital die 120 that is unlikely to need to be debugged. For example, the digital die 120 may include a section of memory circuits (e.g., RAM and/or ROM). The section for the memory circuits tends to have more circuit defects and is more likely to need access for debugging. In that case, the analog die 130 can be placed on another area of the digital die 120 that is unlikely to require access. The analog die 130 may also be placed near or at a corner of the digital die 120. This can shorten the interconnections (bond lines) between the bonding pads 132 on the analog die 130 and the corresponding bonding pads 112 on the package substrate 110. The analog die 130 may further be placed based on the external pin assignments of the analog die and the overall package. Various other factors can also be considered and are within the scope of this invention. FIG. 2 is a side view of the mixed signal IC 100 encapsulated in a specific IC package. As shown in FIG. 2, a layer of die-attach paste 140 is applied on top of the package substrate 110, and the digital die 120 is placed on top of the die-attach paste layer. A second layer of die-attach paste 140 is applied on top of the digital die 120, and the analog die 130 is placed on top of the second die-attach paste layer.
The die-attach paste layers are used to bond the dies and the package substrate together (i.e., they act as an adhesive). Mold compound 150 can be used to fill the spaces left around the analog and digital dies. The mixed signal IC 100 can be packaged using various types of packages. A specific package can be selected based on various factors such as the required number of pins, the preferred pin layout, manufacturability, and so on. In the example shown in FIG. 2, mixed signal IC 100 is packaged in a commercially available fine ball grid array (F-BGA) package having sizes and dimensions known in the art. In one embodiment, to encapsulate the mixed signal IC 100 in a standard package having a defined height dimension, the thickness of the analog die 130 and/or the digital die 120 can be controlled to be within certain limits. The thickness of the analog die and the digital die is reduced by "back grinding" the wafers used to process the dies. In one embodiment, the wafer is thinned by back grinding to 200 μm. However, other thickness values can also be used. By reducing the thickness of the analog die and the digital die, the stacked dies can be made to have either (1) a profile similar to the profile of a monolithic die typically encapsulated in that package, or (2) a profile that conforms to the specifications for that package. FIGS. 3A to 3C are side views of the interconnections between the various layers of mixed signal IC 100. FIG. 3A shows the interconnection between the digital die 120 and the package substrate 110. This interconnection is accomplished via bonding pads 112 and 122 and bond line 123, located on the package substrate and the digital die, respectively. This interconnection can be accomplished in a manner commonly used for that package. FIG. 3B shows the interconnection between the analog die 130 and the package substrate 110. This interconnection is accomplished via bonding pads 112 and 132 and bond line 133, located on the package substrate and the analog die, respectively. This interconnection can also be accomplished in the usual manner. FIG. 3C shows the interconnection between the analog die 130 and the digital die 120. This interconnection is accomplished via bonding pads 134 and 124, and bond line 135, located on the analog die and the digital die, respectively. This interconnection can also be accomplished in the usual manner. FIGS. 4A and 4B show plan views of the interconnection between the analog die and the digital die. As shown in FIG. 4A, a first set of bonding pads 132 is provided on the analog die 130 for interconnection with the package substrate 110, and a second set of bonding pads 134 is provided for interconnection with the digital die 120. Similarly, a first set of bonding pads 122 is provided on the digital die 120 for interconnection with the package substrate 110, and a second set of bonding pads 124 is provided for interconnection with the analog die 130. In one embodiment, if possible, bonding pads 122 and 124 are "interdigitated" on the digital die 120 such that bonding pads 122 and 124 are alternately arranged (along a line) on the digital die. Using an interdigitated bonding pad arrangement requires minimal additional die area (if any) to implement the additional bonding pads 124 on the digital die 120 for interconnection with the analog die 130. In this way, stacking the analog die 130 on top of the digital die 120 does not incur a die area penalty. Alternatively, a group of die-to-die bonding pads on the digital die 120 can be placed between groups of bonding pads for external pins.
This configuration also incurs no disadvantage. In one embodiment, the bonding pads 132 and 134 for the analog die 130 are located near the edge of the analog die, close to the edge of the digital die 120 and to the package substrate 110 to which the bonding pads are ultimately connected. This bonding pad arrangement facilitates the interconnection between the analog die 130 and the digital die 120 and the package substrate 110 (e.g., to implement an interdigitated connection). This also results in shorter bond lines from the analog die 130, which can improve performance. In one embodiment, the bonding pads 122 and 124 for the digital die 120 are also located near the edge of the digital die. This bonding pad placement for the digital die 120 avoids encroachment into the digital circuit area. Placement of bonding pads in the central area of the digital die can interfere with (i.e., block) routing channels for signal lines. FIG. 4B shows the interconnection between the analog die and the digital die using bonding pads 126 located away from the edge of the digital die 120. For some specific designs, it may be convenient to interconnect to digital circuits located away from the edge of the digital die. This may be desirable, for example, to shorten the interconnection between the analog circuits and the digital circuits, or to provide more I/O pads on the analog die. In this case, bonding pads 126 may be provided on the digital die 120 to interconnect with corresponding bonding pads 136 on the analog die 130. The stacked analog die and digital die described herein provide a number of advantages. First, by separating the analog and digital circuits onto two dies, a more optimal process technology can be selected for each type of circuit. Different technologies can be selected for the analog circuits and the digital circuits. Second, the noise coupling through a common silicon substrate is eliminated. Third, because the analog and digital circuits can evolve on different schedules, one circuit type (e.g., analog) does not hold up the design of the other circuit type. Furthermore, each circuit type can be designed and changed without affecting the design of the other circuit type. Other advantages can also be realized using the stacked analog die and digital die design described herein. Another aspect of the present invention provides techniques for testing the stacked analog die and digital die. Each die can be individually tested (e.g., at the wafer level) to ensure correct functioning of the circuits assembled on the die. After the analog die and the digital die are stacked, interconnected, and encapsulated in the package, further tests can be carried out to ensure that the interconnections via the bond lines are functional (i.e., to verify connectivity). However, since the die-to-die interconnections are not directly accessible via external pins, techniques for testing these interconnections are provided herein. In one embodiment, die-to-die interconnection testing is accomplished with a standard serial bus interface (SBI) operating in a manner known in the art. To implement this interface, the digital die can be designed and operated as a "master" driver controlling the test functions (e.g., power down, mode selection, etc.), and the analog die can be designed to implement a "slave" driver that operates under control supplied by the digital die.
A test vector consisting of a sequence of control values can be sent from the digital die to the analog die to test the die-to-die interconnections. Multiplexers are provided on the analog die for each die-to-die interconnection to be tested. Each multiplexer has a first input for normal operation, a second input for testing, an output operatively coupled to a die-to-die pad, and a control input. To test the reading of a value from the analog die, the second input and the control input receive a test value and a control signal, respectively, from the slave driver on the analog die. The slave driver can instruct the multiplexer to supply a specific test value via the multiplexer to the die-to-die pad. On the digital die, the test value from the analog die can be received (e.g., via another multiplexer) and sent to an external output pad. The test value is then detected and compared against the value supplied to the slave driver via the serial bus interface. To test the writing of a value to the analog die, the test value can be sent from an external pad on the digital die (e.g., via a multiplexer) to the die-to-die interconnection. The test value is then received by another multiplexer on the analog die. The multiplexer on the analog die is controlled by the slave driver and can send the received test value to the slave driver. The slave driver then supplies the value to the serial bus interface. The test value supplied via the digital die and the detected value from the serial bus interface can be compared to verify correct connectivity. Thus, the serial bus interface can be used to test both reading from the analog die and writing to the analog die. The serial bus interface is used to control the multiplexers on the analog die to test the die-to-die interconnections. The serial bus interface is also used to supply test values (for writing) to the analog die via the digital die, and to retrieve test values received via the die-to-die interconnections (for reading). Another aspect of the invention controls the circuitry and/or power supplies on the analog die, and possibly the digital die, and collapses the power supplies during a standby mode. A remote terminal in a wireless communication system can be active and fully operational during certain time intervals, and will be off or in a standby mode during other time intervals to conserve power and extend the time between battery recharges. In the standby mode, it is desirable to power down as many circuits as possible in order to reduce power consumption. However, a powered-down circuit still produces leakage current, which shortens the battery life of the remote terminal. This leakage current can be eliminated by "collapsing" the power supply supplied to these circuits (for example, down to zero volts). In the case of a mixed signal IC implemented in the manner described above, it is desirable to power down and/or collapse the power supplies to as many circuits as possible in the analog die during the standby mode. The analog circuitry on the analog die may consume a relatively large amount of current when active in order to provide the desired performance. The analog circuits may also operate at a voltage different from the voltage used for the digital circuits. For example, the analog circuits may operate from a 3.3 volt supply while the digital circuits may operate from a 1.8 volt supply.
The power supply for the analog die can be provided from an external source (e.g., a power management device) via an external pin. In one embodiment, the serial bus interface is used to control the operation of some or all of the analog circuits and to collapse the power supply during the standby mode. The slave driver on the analog die can be designed to operate from the digital power supply and is kept operational at all times during the standby period. Level shift circuits are provided on the analog die so that the digital control signals from the slave driver can be converted to the signal levels required to control the various types of analog circuits on the analog die (e.g., oscillators, phase locked loop circuits, front-end receiver circuits, etc.). The slave driver receives commands from the digital die via the serial bus interface to control the operation of the circuits on the analog die. In response, the slave driver generates control signals instructing the analog circuits on the analog die to operate in a particular manner. Since the slave driver remains powered in the standby mode, the settings for the circuits can be maintained. In the standby mode, the serial bus interface is used to instruct the power management device to collapse the voltage for selected circuits (all or a subset of the circuits) in the analog die. This eliminates the leakage current and extends battery life. When exiting the standby mode, the serial bus interface is used to instruct the power management device to raise the voltage for the analog die. The voltage "collapsing" technique described herein, whereby the voltage for some or all of the circuits in the analog die collapses (for example, to zero volts) in the standby mode, is also applicable to other mixed signal designs in which the analog die and the digital die are packaged in separate packages rather than stacked. The voltage collapsing technique is also applicable to various selected blocks within the digital die. The foregoing description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Accordingly, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
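The read-path portion of the die-to-die test described above can be summarized in a short C simulation. This is a hedged sketch of the loopback idea only: the 8-bit width, the struct fields, and the pass/fail reporting are illustrative assumptions, while the real mechanism is the slave driver and multiplexer hardware described in the text.

```c
/* Minimal simulation of the read-path interconnect test: the master writes
 * a control value over the serial bus, the slave driver steers that value
 * onto a die-to-die pad through a test multiplexer, and the master compares
 * what it observes on the pad with what it sent. All names and the bit
 * width are illustrative assumptions. */
#include <stdio.h>
#include <stdint.h>

struct analog_die {
    uint8_t sbi_ctrl;     /* last control value received over the serial bus */
    int     test_mode;    /* mux control: 0 = normal input, 1 = test input */
    uint8_t pad_out;      /* value driven onto the die-to-die bond pad */
};

static void sbi_write(struct analog_die *a, uint8_t ctrl)
{
    a->sbi_ctrl = ctrl;
    a->test_mode = 1;                              /* slave driver selects the test input */
    a->pad_out = a->test_mode ? a->sbi_ctrl : 0;   /* mux drives the pad */
}

static int verify_read_path(struct analog_die *a, uint8_t pattern)
{
    sbi_write(a, pattern);      /* value to present on the pad */
    uint8_t seen = a->pad_out;  /* digital die samples the bond pad */
    return seen == pattern;     /* mismatch implies a broken bond line */
}

int main(void)
{
    struct analog_die a = {0};
    const uint8_t vectors[] = { 0x55, 0xAA, 0x00, 0xFF };
    for (unsigned i = 0; i < sizeof vectors; i++)
        printf("vector 0x%02X: %s\n", vectors[i],
               verify_read_path(&a, vectors[i]) ? "pass" : "FAIL");
    return 0;
}
```

The same structure extends to the write path by driving the pad from the digital die and reading the captured value back over the serial bus, mirroring the comparison step described in the text.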
A method and system are disclosed. In one embodiment the method comprises a first device determining whether a second device is permitted to be activated, the first device activating the second device if the second device is permitted to be activated, and the first device reducing the functionality of the second device if the second device is not permitted to be activated. |
CLAIMS
What is claimed is:
1. A method, comprising: a first device determining whether a second device is permitted to be activated; the first device activating the second device if the second device is permitted to be activated; and the first device reducing the functionality of the second device if the second device is not permitted to be activated. 2. The method of claim 1, wherein determining whether a second device is permitted to be activated further comprises: the first device permitting the second device to be activated if a device activation bit is set; and if the activation bit is not set, the first device determining whether the device activation bit is permitted to be set. 3. The method of claim 2, wherein determining whether the activation bit is permitted to be set further comprises: the first device sending a device activation request to a registration server; and the first device receiving a device activation approval response or a device activation rejection response from the registration server in response to the device activation request. 4. The method of claim 3, wherein the activation request comprises a device identification number to identify the second device. 5. The method of claim 4, further comprising: the registration server receiving the activation request from the first device; the registration server checking a registration database with the device identification number to verify if the second device is permitted to be activated; and the registration server sending a response to the first device that specifies whether to allow or not allow activation of the second device based on the information within the registration database. 6. The method of claim 1, wherein reducing the functionality of the second device if the second device is not permitted to be activated further comprises reducing the operating frequency of the second device. 7. The method of claim 1, wherein reducing the functionality of the second device if the second device is not permitted to be activated further comprises disabling an internal function within the second device. 8. The method of claim 1, wherein reducing the functionality of the second device if the second device is not permitted to be activated further comprises disabling the second device. 9. The method of claim 3, wherein the second device comprises a chipset. 10. A method, comprising: a first device determining whether a function within a second device is permitted to be activated; the first device activating the function within the second device if it is permitted to be activated; and the first device not activating the function within the second device if it is not permitted to be activated. 11. The method of claim 10, wherein determining whether a function within a second device is permitted to be activated further comprises: permitting the function within the second device to be activated if a function activation bit is set; and if the function activation bit is not set, determining whether the function activation bit is permitted to be set. 12. The method of claim 11, wherein determining whether the function activation bit is permitted to be set further comprises: the first device sending a function activation request to a registration server; and the first device receiving a function activation approval response or a function activation rejection response from the registration server in response to the function activation request. 13. 
The method of claim 12, wherein the activation request comprises: a device identification number to identify the second device; and a function identification number to identify the second device's function. 14. The method of claim 13, further comprising: the registration server receiving the activation request from the first device; the registration server checking a registration database with the device identification number to verify if the second device is permitted to be activated; and the registration server sending a response to the first device that specifies whether to allow or not allow activation of the second device's function based on the information within the registration database. 15. The method of claim 12, wherein the second device comprises a chipset. 16. A system, comprising: a bus; a processor coupled to the bus; a chipset coupled to the bus; and a memory coupled to the bus, the memory adapted for storing instructions, which upon execution by the processor: determines whether the chipset is permitted to be activated; activates the chipset if the chipset is permitted to be activated; and reduces the functionality of the chipset if the chipset is not permitted to be activated. 17. The system of claim 16, wherein the processor: permits the chipset to be activated if a chipset activation bit is set; and determines whether the chipset activation bit is permitted to be set if the chipset activation bit is not set. 18. The system of claim 17, wherein the processor: sends a chipset registration request to a registration server; and receives a chipset activation approval response or a chipset activation rejection response from the registration server in response to the chipset registration request. 19. The system of claim 16, wherein reducing the functionality of the chipset comprises reducing the operating frequency of the chipset. 20. The system of claim 16, wherein reducing the functionality of the chipset comprises disabling an internal function within the chipset. 21. The system of claim 16, wherein reducing the functionality of the chipset comprises disabling the chipset. 22. The system of claim 16, wherein the memory comprises a protected segment of the Basic Input-Output System (BIOS). 23. A system, comprising: a bus; a chipset coupled to the bus; and a processor coupled to the bus, the processor operable to: determine whether the chipset is permitted to be activated; activate the chipset if the chipset is permitted to be activated; and reduce the functionality of the chipset if the chipset is not permitted to be activated. 24. The system of claim 23, wherein the processor is further operable to: permit the chipset to be activated if a chipset activation bit is set; and determine whether the chipset activation bit is permitted to be set if the chipset activation bit is not set. 25. The system of claim 24, further comprising a memory, wherein the memory is operable to store instructions, which upon execution by the processor: sends a chipset registration request to a registration server; and receives a chipset activation approval response or a chipset activation rejection response from the registration server in response to the chipset registration request. 26. The system of claim 25, wherein the memory comprises a protected segment of the Basic Input-Output System (BIOS). 27. The system of claim 23, wherein reducing the functionality of the chipset comprises reducing the operating frequency of the chipset. 28. 
The system of claim 23, wherein reducing the functionality of the chipset comprises disabling an internal function within the chipset. 29. The system of claim 23 wherein reducing the functionality of the chipset comprises disabling the chipset. |
CHIPSET ACTIVATION

FIELD OF THE INVENTION

[0001] The invention relates to activating a chipset.

BACKGROUND OF THE INVENTION

[0002] Modern operating systems such as Microsoft(R) Windows XP require activation through a secure registration certificate sent from the client operating system directly to Microsoft via the Internet. This allows Microsoft to see if more than one copy of the operating system is being used and gives Microsoft the ability to provide better customer service.

[0003] Intel(R) Corporation has current motherboard technology, Intel(R) Active Management Technology (AMT), as referred to in the whitepaper Intel(R) Active Management Technology, August 2004, http://www.intel.com/business/bss/products/client/active mgmt.pdf, that provides BIOS and chipset-level services and asset management information. Some of these services and data include remote management and diagnostics capabilities, hardware failure detection, and electronic asset tags, among others. All asset management information is stored in a secure area of the BIOS's non-volatile memory that a system administrator cannot access. In addition, the AMT agent in the BIOS also contains a small HTTP and XML web server for communication with 3rd party management software that alerts system administrators and other IT personnel. AMT technology features an out-of-band link that is independent of the operating system, allowing IT managers to access a system even if the operating system is inoperative.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] The present invention is illustrated by way of example and is not limited by the figures of the accompanying drawings, in which like references indicate similar elements, and in which:

[0005] Figure 1 is a block diagram of one embodiment of a computer system utilized to activate a chipset.

[0006] Figure 2 is a block diagram of one embodiment of the components comprising a chipset activation system.

[0007] Figure 3 is a flow diagram of one embodiment of a process for activating a chipset.

[0008] Figure 4 is a flow diagram of another embodiment of a process for activating a chipset.

DETAILED DESCRIPTION OF THE INVENTION

[0009] Embodiments of a method to activate a chipset are disclosed. In the following description, numerous specific details are set forth. However, it is understood that embodiments may be practiced without these specific details. In other instances, well-known elements, specifications, and protocols have not been discussed in detail in order to avoid obscuring the present invention.

[00010] Figure 1 is a block diagram of one embodiment of a computer system utilized to activate a chipset. The computer system includes a central processing unit (CPU) 100, a memory controller hub (MCH) 102, and an I/O controller hub (ICH) 104 that, in one embodiment, comprise a chipset 106. The term "chipset" is a common term used to refer to a motherboard configuration of one or more chips, such as the MCH and ICH chips. The MCH and ICH are commonly referred to as the northbridge and the southbridge, which, when combined, form a chipset. The chipset controls much of the information passed across one or more buses on a motherboard (such as an I/O bus, a dedicated graphics bus, and a memory bus, among others). In one embodiment, the CPU 100 is coupled to the MCH 102 via a host bus and to system memory 108. System memory may comprise one or more of synchronous dynamic random access memory (SDRAM), double data rate SDRAM (DDR-SDRAM), or one of many other formats of main system memory.
In one embodiment, the MCH 102 is coupled to a graphics module 110. In different embodiments, the graphics module is a Peripheral Component Interconnect (PCI) Express graphics card or an Accelerated Graphics Port (AGP) graphics card. In one embodiment, the ICH 104 is coupled to a hard drive 112, a keyboard controller 114, a mouse controller 116, and an I/O bus 118. In different embodiments, the ICH 104 may also be coupled to any number of I/O devices, buses, and/or other controllers. In one embodiment, a network interface card (NIC) 120 is coupled to the I/O bus 118. In one embodiment, the NIC 120 is coupled to a network 122. In different embodiments, the network 122 may be the Internet, an intranet, or another information network. In different embodiments, the NIC 120 may be coupled to the network 122 through a local area network (LAN) topology, a wide area network (WAN) topology, a wireless network topology, or any other applicable network topology that would allow the computer system access to the network 122. In one embodiment, a registration server (REG SVR) 124 is also coupled to the network 122.

[00011] In one embodiment, the chipset 106 must be activated to be operational. In another embodiment, the chipset 106 is operational with or without activation, but requires activation to enable one or more chipset functions. In one embodiment, the chipset 106 requires activation through an online registration process. In this embodiment, the REG SVR 124 has access to a database of all manufactured chipsets and their corresponding registration information. When the computer system containing the chipset 106 is booted for the first time by a user, the computer system checks with the REG SVR 124 to determine if the chipset 106 has already been activated. If the chipset 106 has not been activated, an attempt to automatically connect to the REG SVR 124 over the network 122 may be made. The REG SVR 124 may be able to communicate information to the computer system that specifies whether the chipset 106 is allowed to be activated or not. The computer system sends a request to the REG SVR 124, and the REG SVR 124 then sends a communication back to the computer system with a response to the request (i.e. either allowing or not allowing the chipset 106 to be activated). Thus, in this embodiment, the chipset 106 may then be activated if allowance was given by the REG SVR 124. Otherwise, if the chipset has not been allowed activation, the chipset may be put into a reduced functionality mode. In one embodiment, the reduction in functionality may include reducing the operating frequency of the chipset. In another embodiment, the reduction in functionality may include disabling one or more functions associated with the chipset. In yet another embodiment, the reduction in functionality may include entirely disabling the chipset 106 from further use.

[00012] Figure 2 is a block diagram of one embodiment of the components comprising a chipset activation system. In one embodiment, the chipset activation system is incorporated as a subsystem within a computer system (such as a desktop or laptop computer system). A chipset 200 is coupled to a processor 202. The processor is coupled to a memory 204 and a NIC 208. In one embodiment, the memory 204 is a secured segment of memory within the Basic Input-Output System (BIOS). In other embodiments, the memory 204 may be a shared memory, a dedicated memory, memory on the processor die, and/or one or more other valid memory arrangements.
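The three reduced-functionality modes described in paragraph [00011] map naturally onto a small dispatch routine. The following C sketch is purely illustrative; the mode enum and the register-level helpers are hypothetical stand-ins for chipset-specific mechanisms, not part of any real chipset API.

```c
#include <stdio.h>

/* Hypothetical low-level helpers; real firmware would program
 * chipset-specific registers rather than printing. */
#define CHIPSET_FN_GRAPHICS 1
static void chipset_set_freq_divider(int div) { printf("frequency divided by %d\n", div); }
static void chipset_disable_function(int fn)  { printf("function %d disabled\n", fn); }
static void chipset_disable_all(void)         { printf("chipset disabled\n"); }

/* One reduction mode per embodiment described above. */
enum reduction_mode { REDUCE_FREQUENCY, DISABLE_FUNCTION, DISABLE_CHIPSET };

static void reduce_chipset_functionality(enum reduction_mode mode)
{
    switch (mode) {
    case REDUCE_FREQUENCY:
        chipset_set_freq_divider(2);                   /* lower operating frequency */
        break;
    case DISABLE_FUNCTION:
        chipset_disable_function(CHIPSET_FN_GRAPHICS); /* disable one internal function */
        break;
    case DISABLE_CHIPSET:
        chipset_disable_all();                         /* disable the chipset entirely */
        break;
    }
}
```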
Returning to Figure 2, in one embodiment, a chipset activation bit (CAB) 206 is stored within the memory 204. In another embodiment, the CAB 206 is a bit within a register contained in the chipset 200. In one embodiment, the NIC 208 is coupled to a network 210 and has communication access to a REG SVR 212 also coupled to the network 210. In one embodiment, the processor 202 is dedicated to processing information related to the assets in the computer system. In one embodiment, the processor is a component of an Intel(R) Active Management Technology subsystem incorporated in the computer system. In one embodiment, the assets within the computer system may include hardware components within the computer system such as the CPU, the chipset, the system memory, and any peripheral cards.

[00013] In one embodiment, when the computer system is booted for the first time, the processor 202 attempts to read the CAB 206 within the memory 204 to determine the activation status of the chipset 200. In one embodiment, if the chipset has not been activated, the processor 202 then attempts to communicate with the REG SVR 212 to ascertain whether the chipset is allowed to be activated. In one embodiment, the processor 202 attempts to send an activation request to the REG SVR 212. In one embodiment, the memory stores code for a small HTTP and/or XML web server (WEB SVR) 214 to effectively communicate with the REG SVR 212. In this embodiment, the processor 202 executes the WEB SVR 214 code and the WEB SVR 214 allows the processor 202 to communicate using the NIC 208 with the REG SVR 212 across the network 210.

[00014] If the REG SVR 212 can be contacted, the activation request sent by the processor 202 is processed by the REG SVR 212. In one embodiment, the activation request includes identification information that allows the REG SVR 212 to identify the unique chipset 200 in the computer system making the request. The REG SVR 212 then processes the activation request, determines whether the chipset 200 is allowed to be activated, and sends a response back to the processor 202. In one embodiment, the response sent to the processor 202 consists of either a "yes" (i.e. "activate") or "no" (i.e. "do not activate") communication. In one embodiment, if a "yes" value is received from the REG SVR 212, the processor 202 permanently sets the CAB 206 to active and this activation determination process will not be necessary again. In another embodiment, if a "no" value is received from the REG SVR 212, the processor 202 sets the CAB 206 to inactive. In one embodiment, when the CAB 206 is set to inactive, the chipset 200 is disabled. In another embodiment, when the CAB 206 is set to inactive, the chipset 200 is placed in a reduced functionality state. In yet another embodiment, the "no" value may eventually change to a "yes" value. Thus, in this embodiment, if the CAB 206 is set to inactive, the processor 202 (utilizing the WEB SVR 214) will continue to poll the REG SVR 212 at each system boot to determine if the REG SVR 212 has changed the status for allowing the chipset 200 to be activated.

[00015] In one embodiment, if the REG SVR 212 cannot be contacted, then the chipset activation request is queued. In one embodiment, if the request is queued, the processor 202 (utilizing the WEB SVR 214) checks for network connectivity each time the computer system is booted. Once connected to a network, the processor 202 (utilizing the WEB SVR 214) attempts to contact the REG SVR 212.
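The registration server's half of this exchange (also recited in claim 5 of the first claim set) amounts to a database lookup keyed by the device identification number. The sketch below is a minimal illustration under assumed names; a production server would query a real database of all manufactured chipsets rather than a static table.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical registration-database entry. */
struct reg_entry {
    unsigned device_id;          /* unique chipset identifier */
    bool     activation_allowed; /* policy decision for this chipset */
};

static const struct reg_entry reg_db[] = {
    { 0x1001, true  },
    { 0x1002, false },           /* e.g., flagged in the database */
};

/* Process one activation request: look up the device and answer
 * "activate" (true) or "do not activate" (false). Rejecting unknown
 * devices is one possible policy choice. */
bool handle_activation_request(unsigned device_id)
{
    for (size_t i = 0; i < sizeof reg_db / sizeof reg_db[0]; i++)
        if (reg_db[i].device_id == device_id)
            return reg_db[i].activation_allowed;
    return false;
}
```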
In one embodiment, the chipset 200 operates in a reduced functionality state until the processor 202 verifies with the REG SVR 212 that the chipset 200 is allowed to be activated. Again, in different embodiments, reducing the functionality of the chipset 200 may include a reduction in the chipset's operational frequency, disabling an I/O bus coupled to the chipset 200, disabling an integrated graphics processor in the chipset 200, or disabling or modifying any other function of the chipset 200.

[00016] In one embodiment, when the processor 202 sends an activation request to the REG SVR 212, the REG SVR 212 in turn registers the chipset and stores a registration file in the chipset database. In this embodiment, once the chipset has been activated, the processor 202 (utilizing the WEB SVR 214) can periodically check with the REG SVR 212 for any critical BIOS patches, updates, and other important communication events regarding the chipset.

[00017] In another embodiment, the response sent by the REG SVR 212 to the processor 202 includes chipset functionality level information. In this embodiment, the REG SVR 212 has functionality level information associated with each unique chipset identifier. The functionality level specifies the set of functions on the chipset 200 that are allowed to be activated (i.e. enabled). In different embodiments, the set of chipset functions that may or may not be allowed to be activated include the operational frequency of the chipset 200, a graphics processor integrated within the chipset 200, or any other functional aspect of the chipset 200 which may be enabled or disabled. In one embodiment, the chipset functionality level response sent to the processor 202 includes information regarding the activation of one or more chipset functions, and each of the chipset functions is associated with a unique chipset function activation bit (CFAB) 206 located in the memory 204.

[00018] In this embodiment, when the computer system is booted for the first time, the processor 202 attempts to check each CFAB 206 located within the memory 204 to determine the activation status of each chipset function. In one embodiment, if a particular chipset function has not been activated, the processor 202 then attempts to communicate with the REG SVR 212 to ascertain whether the chipset function is allowed to be activated. The processor 202 attempts to send a chipset function activation request to the REG SVR 212.

[00019] If the REG SVR 212 can be contacted, the chipset function activation request sent by the processor 202 is processed by the REG SVR 212. In one embodiment, the chipset function activation request includes identification information that allows the REG SVR 212 to distinguish the chipset 200 in the computer system making the request from all other like chipsets. The REG SVR 212 then processes the chipset function activation request, determines whether the chipset function in question is allowed to be activated, and sends a response back to the processor 202. In one embodiment, the response sent to the processor 202 consists of either a "yes" (i.e. "activate") or "no" (i.e. "do not activate") communication. In one embodiment, if a "yes" value is received from the REG SVR 212, the processor 202 permanently sets the CFAB 206 to active and this chipset function activation determination process will not be necessary again. In another embodiment, if a "no" value is received from the REG SVR 212, the processor 202 sets the CFAB 206 to inactive.
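Under the functionality-level scheme just described, first-boot logic would walk every gated function, consult its CFAB, and query the registration server for any function not yet activated. This C sketch is a rough model under assumed names; the function count, the CFAB array, and both helper routines are hypothetical.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_CHIPSET_FNS 4  /* hypothetical number of gated functions */

/* Hypothetical per-function activation bits and helpers. */
static bool cfab[NUM_CHIPSET_FNS];
static bool reg_server_allows_fn(unsigned dev_id, unsigned fn) { (void)dev_id; return fn == 0; }
static void chipset_enable_fn(unsigned fn) { printf("function %u enabled\n", fn); }

/* First-boot pass over every gated chipset function. */
void activate_chipset_functions(unsigned device_id)
{
    for (unsigned fn = 0; fn < NUM_CHIPSET_FNS; fn++) {
        if (cfab[fn]) {
            chipset_enable_fn(fn);      /* CFAB already set: activate */
        } else if (reg_server_allows_fn(device_id, fn)) {
            cfab[fn] = true;            /* "yes": set the CFAB permanently */
            chipset_enable_fn(fn);
        }
        /* On "no" (or an unreachable server) the function stays
         * inactive, and the request is retried at a later boot. */
    }
}
```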
In one embodiment, when the CFAB 206 is set to inactive, the chipset function is disabled. In another embodiment, the "no" value may eventually change to a "yes" value. Thus, in this embodiment, if the CFAB 206 is set to inactive, the processor 202 (utilizing the WEB SVR 214) will continue to poll the REG SVR 212 at each system boot to determine if the REG SVR 212 has changed the status for allowing the chipset function to be activated. In another embodiment, if the CFAB 206 is set to inactive, the processor 202 (utilizing the WEB SVR 214) will continue to poll the REG SVR 212 at predefined intervals of time (e.g. once an hour) to determine if the REG SVR 212 has changed the status for allowing the chipset function to be activated.

[00020] If the REG SVR 212 cannot be contacted, then the chipset function activation request may be queued internally in the system. In one embodiment, the processor 202 (utilizing the WEB SVR 214) checks for network connectivity each time the computer system is booted. Once connected to a network, the processor 202 (utilizing the WEB SVR 214) attempts to contact the REG SVR 212. In one embodiment, the chipset 200 operates with the function in question inactive until the processor 202 verifies with the REG SVR 212 that the chipset function is allowed to be activated.

[00021] Figure 3 is a flow diagram of one embodiment of a process for activating a chipset. The process is performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both. Referring to Figure 3, the process begins by processing logic determining whether a chipset is permitted to be activated (processing block 300). In one embodiment, processing logic checks to see if a chipset activation bit has been set to determine whether the chipset is permitted to be activated. In this embodiment, if the chipset activation bit has been set, then the chipset is permitted to be activated. If the chipset activation bit has not been set, then the chipset is not permitted to be activated. If the chipset is permitted to be activated, then processing logic activates all functions within the chipset (processing block 302). If the chipset is not permitted to be activated, then processing logic reduces the functionality of the chipset (processing block 304). In different embodiments, reducing the functionality of the chipset may include a reduction in the chipset's operational frequency, disabling an I/O bus coupled to the chipset, disabling an integrated graphics processor, or disabling or modifying any other function of the chipset.

[00022] Figure 4 is a flow diagram of another embodiment of a process for activating a chipset. The process is performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both. Referring to Figure 4, the process begins by processing logic determining whether a chipset activation bit has been set (processing block 400). In one embodiment, this processing logic is located in the processor. In another embodiment, this processing logic is programmed into the software stored in memory and then run by the processor. In different embodiments, the chipset activation bit may be located in a register on the chipset, in memory coupled to the chipset, in a ROM, in a BIOS, or in any other storage location.
In one embodiment, the chipset activation bit is in a secured location that may not be tampered with by an end user. If the chipset activation bit has been set, then processing logic permits the chipset to be activated (processing block 402). In one embodiment, this processing logic is located in the processor. In another embodiment, this processing logic is programmed into the software stored in memory and then run by the processor. In one embodiment, processing logic activates the chipset by setting the chipset activation bit and thus allowing the chipset to activate and boot with full functionality.

[00023] If the chipset activation bit has not been set, then processing logic sends a chipset activation request to a registration server (processing block 404). In one embodiment, this processing logic is located in the processor. In another embodiment, this processing logic is programmed into the software stored in memory and then run by the processor. In different embodiments, the registration server may be located on a local network, on a wireless network, on the Internet, or on any other form of network that the processing logic can communicate across. In one embodiment, the chipset activation request includes identification information that allows the registration server to identify the unique chipset in the computer system making the request. In one embodiment, the registration server contains a database of all manufactured chipsets and their corresponding registration information. In another embodiment, the registration server communicates with a third party database containing the corresponding registration information for the chipset. Once the activation request has been received, the registration server sends the results of the activation request back to processing logic.

[00024] Therefore, processing logic next receives the results of the activation request from the registration server (processing block 406). In one embodiment, this processing logic is located in the processor. In another embodiment, this processing logic is programmed into the software stored in memory and then run by the processor. In one embodiment, the results that return from the registration server consist of either a "yes" (i.e. activate, approve) or "no" (i.e. do not activate, do not approve) communication. Next, processing logic checks to see whether the chipset activation request was approved by the registration server (processing block 408). In one embodiment, this processing logic is located in the processor. In another embodiment, this processing logic is programmed into the software stored in memory and then run by the processor. If the chipset activation was approved, then processing logic permits the chipset to be activated (processing block 402). Alternatively, if the chipset activation was not approved, then processing logic reduces the functionality of the chipset (processing block 410). In one embodiment, this processing logic is located in the processor. In another embodiment, this processing logic is programmed into the software stored in memory and then run by the processor. In different embodiments, reducing the functionality of the chipset may include a reduction in the chipset's operational frequency, disabling an I/O bus coupled to the chipset, disabling an integrated graphics processor, or disabling or modifying any other function of the chipset.

[00025] Thus, embodiments of a method to activate a chipset are disclosed.
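Putting the Figure 4 blocks together, the client-side flow can be sketched as a single routine. The helper names below are assumptions chosen for illustration; the processing-block numbers in the comments refer to Figure 4.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for CAB storage, the network path to the
 * registration server, and the chipset control operations above. */
static bool g_cab;
static bool cab_is_set(void)                { return g_cab; }
static void cab_set(void)                   { g_cab = true; }
static bool request_activation(unsigned id) { (void)id; return true; } /* blocks 404-406 */
static void activate_chipset(void)          { puts("chipset activated"); }
static void reduce_functionality(void)      { puts("reduced functionality"); }

/* One pass through the Figure 4 flow. */
void chipset_activation_flow(unsigned device_id)
{
    if (cab_is_set()) {                      /* block 400 */
        activate_chipset();                  /* block 402 */
        return;
    }
    if (request_activation(device_id)) {     /* blocks 404-408 */
        cab_set();
        activate_chipset();                  /* block 402 */
    } else {
        reduce_functionality();              /* block 410 */
    }
}
```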
Although the method is described with specific reference to a chipset, the same method can be employed for any piece of hardware that has similar functional capabilities such as a central processing unit or a graphics processor. Additionally, these embodiments have been described with reference to specific exemplary embodiments thereof. It will, however, be evident to persons having the benefit of this disclosure that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the embodiments described herein. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
An integrated circuit includes technology for providing out-of-band (OOB) processor telemetry. The integrated circuit comprises a processor comprising a core and a distributed core perimeter. The integrated circuit also comprises a telemetry push agent in the distributed core perimeter, and an OOB telemetry manager in the core to operate out of band and to send telemetry data for the processor to the telemetry push agent. The telemetry push agent comprises control logic to (a) receive the telemetry data from the OOB telemetry manager and (b) forward at least some of the telemetry data to in-band telemetry software. Other embodiments are described and claimed. |
1. An integrated circuit with technology for providing out-of-band (OOB) processor telemetry, the integrated circuit comprising: a processor comprising at least one core and a distributed core perimeter; a telemetry push agent in the distributed core perimeter; and an OOB telemetry manager in the core to operate out of band and to send telemetry data for the processor to the telemetry push agent; and wherein the telemetry push agent comprises control logic to (a) receive the telemetry data from the OOB telemetry manager and (b) forward at least some of the telemetry data to in-band telemetry software.

2. An integrated circuit according to claim 1, wherein the telemetry push agent is configured to operate out of band.

3. An integrated circuit according to claim 1, further comprising: telemetry counters in the core; and a distributed core register array (DCRA) in the processor; and wherein the OOB telemetry manager is configured to collect telemetry data from the telemetry counters, write at least some of the collected telemetry data to the DCRA, and send at least some of the collected telemetry data from the DCRA to the telemetry push agent.

4. An integrated circuit according to claim 3, wherein the OOB telemetry manager comprises: a telemetry collector to operate out of band, to read raw telemetry data from the telemetry counters, and to generate collected telemetry data based on the raw telemetry data; and a telemetry messenger to operate out of band, to generate a telemetry packet based on the collected telemetry data, and to send the telemetry packet to the telemetry push agent.

5. An integrated circuit according to claim 3, wherein the DCRA comprises an array of registers that reside in the core.

6. An integrated circuit according to claim 5, wherein the OOB telemetry manager is configured to: generate multiple telemetry entries, based on the collected telemetry data; and write each telemetry entry to a different register in the DCRA.

7. An integrated circuit according to claim 1, wherein: the core further comprises telemetry counters; and the OOB telemetry manager comprises: a telemetry collector to operate out of band, to read raw telemetry data from the telemetry counters, and to generate collected telemetry data based on the raw telemetry data; and a telemetry messenger to operate out of band, to generate a telemetry packet based on the collected telemetry data, and to send the telemetry packet to the telemetry push agent.

8. An integrated circuit according to claim 7, further comprising: a telemetry configuration register in the processor; and wherein the telemetry collector is configured to determine what kinds of telemetry data to collect, based at least in part on telemetry configuration data from the telemetry configuration register.

9. An integrated circuit according to claim 1, wherein: the core comprises a first core; the OOB telemetry manager comprises a first OOB telemetry manager to send telemetry data for the first core to the telemetry push agent; and the integrated circuit further comprises a second core with a second OOB telemetry manager to operate out of band and to send telemetry data for the second core to the telemetry push agent.

10. A data processing system with technology for providing out-of-band (OOB) processor telemetry, the data processing system comprising: a processor comprising at least one core and a distributed core perimeter; random access memory (RAM) responsive to the processor; a telemetry push agent in the distributed core perimeter; and an OOB telemetry manager in the core to operate out of band and to send telemetry data for the processor to the telemetry push agent; and wherein the telemetry push agent comprises control logic to (a) receive the telemetry data from the OOB telemetry manager and (b) forward at least some of the telemetry data to in-band telemetry software.

11. A data processing system according to claim 10, wherein the telemetry push agent is configured to operate out of band.

12. A data processing system according to claim 10, further comprising: telemetry counters in the core; and a distributed core register array (DCRA) in the processor; and wherein the OOB telemetry manager is configured to collect telemetry data from the telemetry counters, write at least some of the collected telemetry data to the DCRA, and send at least some of the collected telemetry data from the DCRA to the telemetry push agent.

13. A data processing system according to claim 10, wherein: the core further comprises telemetry counters; and the OOB telemetry manager comprises: a telemetry collector to operate out of band, to read raw telemetry data from the telemetry counters, and to generate collected telemetry data based on the raw telemetry data; and a telemetry messenger to operate out of band, to generate a telemetry packet based on the collected telemetry data, and to send the telemetry packet to the telemetry push agent.

14. A method for providing out-of-band (OOB) processor telemetry, the method comprising: at an OOB telemetry manager in a core of a processor, collecting telemetry data for the processor; sending the telemetry data from the OOB telemetry manager to a telemetry push agent in a distributed core perimeter of the processor; and forwarding at least some of the telemetry data from the telemetry push agent to in-band telemetry software executing on the processor; and wherein the OOB telemetry manager operates out of band.

15. A method according to claim 14, wherein the operation of collecting telemetry data for the processor comprises: reading telemetry data from telemetry counters in the core; and writing at least some of the collected telemetry data to a distributed core register array (DCRA) in the processor. |
Technical Field

The present disclosure pertains in general to data processing systems and in particular to technology for providing out-of-band processor telemetry.

Background

In the field of data processing systems, the term "processor telemetry" pertains to the process of (a) measuring attributes of a processor while that processor is operating and (b) transmitting the measurement results to a destination. Similarly, "processor telemetry data" denotes the information generated through processor telemetry. For purposes of this disclosure, the term "telemetry" means "processor telemetry," and the term "telemetry data" means "processor telemetry data."

A conventional processor may include features which enable software to obtain telemetry data from the processor. For instance, at least some of the Intel® Xeon® processors from Intel Corporation include telemetry features which enable software to obtain telemetry data pertaining to a wide variety of processor attributes.

Based on telemetry data, software in a data processing system may enhance operation of the data processing system by (a) optimizing power and/or performance for the current workload, (b) predicting when the system or a system component may fail, (c) learning how an application uses the system resources and better tuning the system dynamically, etc.

However, a conventional data processing system uses a software layer running on the processor to collect the telemetry data. For instance, the software layer may include a performance monitoring agent that collects the telemetry data. And since the performance monitoring agent is part of the software layer, that agent is considered to be an in-band agent. Thus, a conventional data processing system may use an in-band agent to collect telemetry data. For purposes of this disclosure, when a data processing system uses an in-band agent to collect telemetry data, that data processing system may be referred to as including technology for providing in-band telemetry.

However, an in-band performance monitoring agent may consume a significant amount of system resources to collect the telemetry data.
An in-band performance monitoring agent may also need to be tailored to a particular operating system and/or to a particular software application.

As described in greater detail below, the present disclosure introduces technology for providing out-of-band processor telemetry.

Brief Description of the Drawings

Features and advantages of the present invention will become apparent from the appended claims, the following detailed description of one or more example embodiments, and the corresponding figures, in which:

Figure 1 is a block diagram depicting an example embodiment of a data processing system with technology for providing out-of-band processor telemetry.

Figure 2 is a flow diagram to describe operations performed by certain components of the data processing system of Figure 1, including communications between those components to provide OOB telemetry.

Figure 3 presents a flowchart of an example embodiment of a process for providing out-of-band processor telemetry, with regard to the telemetry collector of Figure 1.

Figure 4 presents a flowchart of an example embodiment of a process for providing out-of-band processor telemetry, with regard to the telemetry messenger of Figure 1.

Figure 5 is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention.

Figure 6 is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention.

Figures 7 and 8 are block diagrams of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.

Figure 9 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention.

Figure 10 is a block diagram of a system according to embodiments of the invention.

Figures 11 and 12 are block diagrams of more specific exemplary systems according to embodiments of the invention.

Figure 13 is a block diagram of a system on a chip according to embodiments of the invention.

Figure 14 is a block diagram depicting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention.

Detailed Description

As indicated above, an in-band performance monitoring agent may consume significant amounts of system resources collecting telemetry data, and it may need to be tailored to a particular operating system (OS) and/or to a particular software application. The present disclosure introduces technology for providing out-of-band processor telemetry. In particular, a processor according to the present disclosure includes telemetry facilities which operate at a hardware level to collect telemetry data for the processor. Accordingly, those telemetry facilities may be referred to as "out-of-band (OOB) telemetry facilities." As described in greater detail below, the OOB telemetry facilities may collect telemetry data without consuming significant amounts of system resources.
In addition, the OOB telemetry facilities may be OS and application "agnostic," in that the OOB telemetry features do not need to be tailored to any particular OS or application.

Figure 1 is a block diagram depicting an example embodiment of a data processing system 10 with technology for providing out-of-band processor telemetry. Data processing system 10 includes a processor 12 in communication with random access memory (RAM) 14 and non-volatile storage (NVS) 16. NVS 16 may include various software components (e.g., an operating system (OS) and user applications) that data processing system 10 copies into RAM 14 for execution. When software from NVS 16 and/or RAM 14 is executing on processor 12, that software may be referred to as a "software layer." The features of processor 12 which support execution of that software may be referred to as a "hardware layer." The software layer may be referred to as running "on top of" the hardware layer. In the embodiment of Figure 1, the software in NVS 16 includes telemetry software 18. Since telemetry software 18 runs in the software layer, it may be referred to as "in-band telemetry software" or "in-band telemetry logic." In one embodiment, telemetry software 18 may be part of an OS, for instance. Data processing system 10 may also include various other software and hardware components (e.g., a memory controller, etc.) that are not illustrated to avoid obscuring the illustrated features.

In the embodiment of Figure 1, processor 12 includes core 20A, core 20B, and a distributed core perimeter 30 that all reside in the same chip or in the same package. However, in alternative embodiments, a data processing system may include one or more processors, each processor may include one or more cores, etc. For instance, a data processing system may include multiple processors which reside in separate packages, and each processor may include multiple cores. In the embodiment of Figure 1, core 20B may include features that are the same as or similar to the features of core 20A. Distributed core perimeter 30 may also be referred to as an "uncore." In the embodiment of Figure 1, each core resides in a domain with one set of voltages and/or frequencies, while distributed core perimeter 30 resides in a domain with different voltages and/or frequencies.

As illustrated, processor 12 includes OOB telemetry facilities (OTF) 40. As described in greater detail below, OTF 40 includes an OOB telemetry manager 42 and a distributed core register array (DCRA) 26 within core 20A, and a telemetry push agent 32 within distributed core perimeter 30. As described in greater detail below, OTF 40 collects telemetry data and pushes that collected telemetry data to telemetry software 18. Furthermore, OTF 40 may collect the telemetry data much more efficiently than a conventional data processing system that uses an in-band agent to collect telemetry data. For instance, an in-band agent in a conventional system might consume 2,000 to 3,000 cycles to collect telemetry data, but OTF 40 might consume only 50 cycles to collect the same kind of telemetry data in data processing system 10.

In the embodiment of Figure 1, core 20A includes telemetry counters 22. Telemetry counters 22 include counters for tracking many different aspects of operation for core 20A.
Those counters may include, without limitation, counters for microcode, for an out-of-order (OOO) core cluster, for a mid-level cache (MLC) core cluster, for power management (PM), for core utilization, for simultaneous multithreading (SMT) utilization, for front-end bound, for back-end bound, for bad speculation, for retiring, etc. In one embodiment or scenario, the counters for microcode include metrics to indicate operational attributes such as the amount of time spent executing microcode, the counters for the OOO core cluster include metrics to indicate operational attributes such as a count of instructions that have been retired, the counters for the MLC core cluster include metrics to indicate operational attributes such as a count of the requests for new data, etc. The data in telemetry counters 22 may be referred to as "raw telemetry data."

Core 20A also includes a telemetry configuration register 24. Telemetry configuration register 24 contains settings to indicate what kinds of telemetry data should be collected (out of band) by telemetry collector 44 and forwarded (out of band) by telemetry messenger 46 to telemetry software 18 via telemetry push agent 32.

OOB telemetry manager 42 operates on a hardware level to implement telemetry components such as a telemetry collector 44 and a telemetry messenger 46. In another embodiment or scenario, the telemetry collector may be referred to as a telemetry nucleus, and the telemetry messenger may be referred to as a telemetry perimeter. OOB telemetry manager 42 (and the components therein) may be implemented, for example, as hardware, as microcode (ucode), as firmware, or as a combination of hardware, ucode, and/or firmware.

Also, as illustrated, core 20A includes a core fabric to connect components such as telemetry collector 44, telemetry messenger 46, and DCRA 26. Core 20A also includes a register fabric to connect components such as telemetry collector 44, telemetry configuration register 24, and other registers, such as general purpose registers (GPRs) (e.g., R1, R2, etc.). Core 20A also includes a side-band channel to connect components within core 20A (such as telemetry messenger 46) with components in distributed core perimeter 30 (such as telemetry push agent 32). For instance, as described in greater detail below, telemetry messenger 46 may send telemetry packets (e.g., telemetry packet 48) to telemetry push agent 32 via the side-band channel. Telemetry push agent 32 may then push telemetry data to telemetry software 18.

DCRA 26 is an array of registers that can be read from and written to by components such as telemetry collector 44 and telemetry messenger 46. In addition, core 20A uses DCRA 26 to hold data that is generated within core 20A and then sent to another destination in processor 12 outside of core 20A. In particular, as described in greater detail below, telemetry messenger 46 sends telemetry data from DCRA 26 to telemetry push agent 32, which resides outside of core 20A in distributed core perimeter 30.
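One way to picture the role of telemetry configuration register 24 is as a bitmask that gates which counter groups telemetry collector 44 reads. The following C sketch is illustrative only: the bit assignments and the helper routine are assumptions, not the processor's actual register layout.

```c
#include <stdint.h>

/* Hypothetical bit assignments for telemetry configuration register 24. */
#define TCFG_UCODE (1u << 0)  /* microcode counters */
#define TCFG_OOO   (1u << 1)  /* out-of-order core cluster counters */
#define TCFG_MLC   (1u << 2)  /* mid-level cache cluster counters */
#define TCFG_UTIL  (1u << 3)  /* core/SMT utilization counters */

/* Hypothetical read of one group of telemetry counters 22. */
static uint64_t read_counter_group(unsigned group) { return (uint64_t)group; }

/* Collect only the counter groups enabled in the configuration register. */
void collect_enabled_telemetry(uint32_t tcfg, uint64_t out[4])
{
    if (tcfg & TCFG_UCODE) out[0] = read_counter_group(0);
    if (tcfg & TCFG_OOO)   out[1] = read_counter_group(1);
    if (tcfg & TCFG_MLC)   out[2] = read_counter_group(2);
    if (tcfg & TCFG_UTIL)  out[3] = read_counter_group(3);
}
```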
As a result of this arrangement, DCRA 26 may include intermediate or temporary telemetry data, and telemetry messenger 46 may use that temporary telemetry data to generate final telemetry data for telemetry push agent 32.

As illustrated in Figure 1, since components such as telemetry collector 44, telemetry messenger 46, and telemetry push agent 32 operate out of band to collect telemetry data, those components are part of OTF 40.

As described in greater detail below, telemetry collector 44 collects telemetry data based on settings that telemetry collector 44 obtains from telemetry configuration register 24 via the register fabric in core 20A. Telemetry collector 44 obtains that raw telemetry data from telemetry counters 22. Telemetry collector 44 and telemetry messenger 46 then format that telemetry data and make it available to components outside of core 20A. For instance, telemetry collector 44 may generate collected telemetry data based on the raw telemetry data, and telemetry messenger 46 may generate telemetry packets based on the collected telemetry data. Telemetry messenger 46 may send those telemetry packets to telemetry push agent 32 via the side-band channel. Telemetry push agent 32 may then forward the telemetry packets (or other forms of the collected telemetry data) to telemetry software 18.

Figure 2 is a flow diagram to describe operations performed by certain components of data processing system 10, including communications between those components to provide OOB telemetry. For instance, Figure 2 illustrates that telemetry collector 44 and telemetry messenger 46 communicate with DCRA 26 via the register fabric, that telemetry collector 44 and telemetry messenger 46 communicate with each other via the core fabric, and that telemetry messenger 46 communicates with telemetry push agent 32 via the side-band channel. In addition, the bullet points within the blocks for telemetry collector 44, telemetry messenger 46, and telemetry push agent 32 are arranged in relative vertical positions to illustrate the sequence generally followed during the process of providing OOB telemetry.

Figures 3 and 4 present flowcharts of an example embodiment of a process for providing out-of-band processor telemetry, with regard, respectively, to telemetry collector 44 and telemetry messenger 46 in data processing system 10. Those flowcharts also correspond to the components and operations illustrated in Figure 2.

The process of Figure 3 may start after telemetry collector 44 has read the current telemetry settings from telemetry configuration register 24 via the register fabric and configured the telemetry hardware in processor 12 accordingly. As shown at block 110, telemetry collector 44 may then determine whether so-called "dirty bits" in DCRA 26 are clear or clean.

As indicated above, DCRA 26 is an array of registers that can be read from and written to by components such as telemetry collector 44 and telemetry messenger 46. In one embodiment, telemetry collector 44 writes one DCRA entry to each register in DCRA 26, and each of those registers includes a dirty bit to indicate whether that register includes a DCRA entry which has been written by telemetry collector 44 but not yet read or transmitted by telemetry messenger 46. If the dirty bits are clean, telemetry collector 44 may collect raw telemetry data from telemetry counters 22, as shown at block 111.
Telemetry collector 44 may then write the collected telemetry data to DCRA 26, as shown at block 112.

For instance, in one embodiment or scenario, telemetry collector 44 formats the raw telemetry data that has been collected from telemetry counters 22 into two 32-bit blocks of telemetry data, and telemetry collector 44 writes those two blocks to a 64-bit register in DCRA 26. Those two blocks may be referred to as "Data-1" and "Data-2". In one embodiment or scenario, each of those blocks contains a header segment (e.g., the first 4 bits) and a data segment (e.g., the remaining 28 bits). The telemetry data that telemetry collector 44 stores in DCRA 26 may identify or indicate the operational attributes of core 20A that were measured in telemetry counters 22 and collected by telemetry collector 44. For example, as indicated above, those operational attributes may include metrics for microcode, metrics for the OOO core cluster, metrics for the MLC core cluster, metrics for core utilization, metrics for SMT utilization, metrics for PM, etc. As shown at block 114, telemetry collector 44 may then set the DCRA dirty bits.

Also, as shown at blocks 116 and 118, telemetry collector 44 may generate a header message and send that header message to telemetry messenger 46. Telemetry collector 44 may thus provide the header message and the two 32-bit blocks of telemetry data to telemetry messenger 46. Accordingly, the header message and the two 32-bit blocks of telemetry data may be considered to be three messages: Header, Data-1, Data-2.

Similarly, since telemetry collector 44 uses the core fabric and DCRA 26 to send data to telemetry messenger 46, the core fabric and DCRA 26 may both be referred to as "channels." For instance, as indicated above, telemetry collector 44 collects telemetry data from a previous stage (e.g., telemetry counters 22), formats or packetizes that data into the proper form for processing by telemetry messenger 46, and uses DCRA 26 as a channel to send the formatted data to telemetry messenger 46.

As described in greater detail below with regard to Figure 4, telemetry messenger 46 may then use the data from telemetry collector 44 to generate telemetry packets for telemetry push agent 32.

As shown at block 120 of Figure 3, after sending the header message to telemetry messenger 46, telemetry collector 44 may then determine whether telemetry collector 44 has received an acknowledgment (ACK) message from telemetry messenger 46 to indicate that telemetry messenger 46 received the header message. Alternatively, the process may reach block 120 in response to telemetry collector 44 determining, at block 110, that the dirty bits are not clean. If telemetry collector 44 has not received an ACK from telemetry messenger 46, the process may return to block 110. Thus, after telemetry collector 44 writes telemetry data to DCRA 26 and sets the dirty bit, telemetry collector 44 may wait for the dirty bit to be cleared before writing additional telemetry data to DCRA 26. However, referring again to block 120, if telemetry collector 44 has received an ACK from telemetry messenger 46, telemetry collector 44 may clear the DCRA dirty bit, as shown at block 122.
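The Data-1/Data-2 formatting described above (a 4-bit header segment plus a 28-bit data segment per 32-bit block, two blocks per 64-bit DCRA register) can be sketched directly in C. Whether the header occupies the high or low four bits, and the ordering of the two blocks within the register, are assumptions here; the text does not pin down exact bit positions.

```c
#include <stdint.h>

/* Pack one 32-bit block: a 4-bit header segment and a 28-bit data
 * segment (header placed in the high bits by assumption). */
static uint32_t pack_block(uint32_t header4, uint32_t data28)
{
    return ((header4 & 0xFu) << 28) | (data28 & 0x0FFFFFFFu);
}

/* Combine Data-1 and Data-2 into a single 64-bit DCRA entry,
 * with Data-1 in the upper half by assumption. */
static uint64_t pack_dcra_entry(uint32_t data1, uint32_t data2)
{
    return ((uint64_t)data1 << 32) | (uint64_t)data2;
}
```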
After block 122, the process may return to block 110, and telemetry collector 44 may repeat the operations described above to send new telemetry data from telemetry counters 22 to telemetry messenger 46.

As indicated above, Figure 4 presents a flowchart of an example embodiment of a process for providing out-of-band processor telemetry, with regard to telemetry messenger 46. That process may start when the data processing system is powered on. Then, as shown at block 130, telemetry messenger 46 may determine whether it has received a header message from telemetry collector 44. If telemetry messenger 46 has not received a header message from telemetry collector 44, the process may wait at block 130 until a header message is received. As shown at block 132, when telemetry messenger 46 receives a header message, telemetry messenger 46 may respond by reading an OOB telemetry address from a remote address array in distributed core perimeter 30. In one embodiment or scenario, the OOB telemetry address points to telemetry push agent 32. As shown at block 134, telemetry messenger 46 then generates an OOB telemetry packet (e.g., telemetry packet 48), based on the header message from telemetry collector 44 and the telemetry data in DCRA 26. In addition, telemetry messenger 46 may add information to the telemetry packet to address the packet to the desired destination. As shown at block 136, telemetry messenger 46 then sends the telemetry packet to telemetry push agent 32. As shown at block 138, telemetry messenger 46 then sends an ACK to telemetry collector 44, to indicate that the telemetry data in DCRA 26 has been processed. The process may then return to block 130, with telemetry messenger 46 waiting to receive the next header message from telemetry collector 44. However, in an alternative embodiment or scenario, instead of (or in addition to) pushing packets, a telemetry messenger may use other techniques to bridge or carry the telemetry data from the core to the uncore (and/or to other external components, such as in-band software). For instance, the telemetry messenger may write the accumulated telemetry data to one or more registers that are accessible to the telemetry push agent and/or to the telemetry software.

As shown in Figure 2, when telemetry messenger 46 sends a telemetry packet to telemetry push agent 32, telemetry push agent 32 may respond by consuming that packet. For instance, telemetry push agent 32 may read the packet and then forward the telemetry data from the packet to telemetry software 18. In addition, when forwarding telemetry data to telemetry software 18, telemetry push agent 32 may add additional information to that data. For instance, telemetry push agent 32 may send telemetry data that also includes additional parameters for communication, such as an identifier for the sending core (e.g., core 20A) and an identifier for the target unit (e.g., telemetry software 18). In one embodiment, telemetry push agent 32 forwards the telemetry data to telemetry software 18 by extracting the telemetry data from the telemetry packets and storing that extracted data in a portion of RAM 14 that has been allocated to telemetry software 18 (e.g., in the stack or the heap of telemetry software 18). However, in an alternative embodiment or scenario, instead of (or in addition to) writing to RAM, a telemetry push agent may use other techniques to send the telemetry data to the in-band software.
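The collector/messenger exchange of Figures 3 and 4 is essentially a single-slot producer/consumer handshake guarded by the dirty bit. The C model below compresses that protocol into two routines; it is a behavioral sketch only (in the hardware flow the messenger sends an ACK at block 138 and the collector itself clears the dirty bit at block 122, which the consume routine here folds into one step).

```c
#include <stdbool.h>
#include <stdint.h>

/* Model of one DCRA register with its dirty bit. */
struct dcra_reg {
    uint64_t entry;
    bool     dirty;
};

static struct dcra_reg dcra;

/* Collector side (Figure 3): write only when the dirty bit is clean. */
bool collector_publish(uint64_t entry)
{
    if (dcra.dirty)
        return false;       /* messenger has not consumed the last entry */
    dcra.entry = entry;     /* blocks 111-112: collect and write */
    dcra.dirty = true;      /* block 114: mark the entry as pending */
    return true;            /* a header message would be sent here */
}

/* Messenger side (Figure 4): consume the entry and release the slot. */
bool messenger_consume(uint64_t *out)
{
    if (!dcra.dirty)
        return false;       /* block 130: no header message yet */
    *out = dcra.entry;      /* blocks 132-136: packetize and push */
    dcra.dirty = false;     /* models the ACK (138) plus clear (122) */
    return true;
}
```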
For instance, instead of writing to RAM, the telemetry push agent may write the telemetry data to one or more registers that are accessible to the telemetry software.

Thus, as has been described, processor 12 uses OTF 40 to send telemetry data to in-band telemetry processing software. Also, as indicated above, core 20B may include features that are the same as or similar to the features of core 20A. For instance, core 20B may also include telemetry counters, a DCRA, an OOB telemetry manager (with a telemetry collector and a telemetry messenger), etc. And that OOB telemetry manager may collect telemetry data for core 20B and forward that collected telemetry data to telemetry push agent 32, for ultimate delivery to telemetry software 18. Moreover, in other embodiments, a processor may include multiple processing cores, and some or all of those processing cores may include their own OOB telemetry facilities (such as a DCRA, an OOB telemetry manager, etc.) for collecting and sending core-specific telemetry data to in-band telemetry software.

Although certain example embodiments are described herein, one of ordinary skill in the art will understand that those example embodiments may easily be divided, combined, or otherwise altered to implement additional embodiments. Thus, the present teachings are not limited to the embodiments and/or scenarios described herein, but may be used to advantage in a wide variety of embodiments and scenarios. The following section describes features of various alternative embodiments which may include OOB telemetry facilities according to the present disclosure.

Additional Embodiments

Figure 5 is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 6 is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 5 and 6 illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Figure 5, a processor pipeline 900 includes a fetch stage 902, a length decode stage 904, a decode stage 906, an allocation stage 908, a renaming stage 910, a scheduling (also known as a dispatch or issue) stage 912, a register read/memory read stage 914, an execute stage 916, a write back/memory write stage 918, an exception handling stage 922, and a commit stage 924.

Figure 6 shows processor core 990 including a front end unit 930 coupled to an execution engine unit 950, and both are coupled to a memory unit 970. The core 990 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type.
As yet another option, the core 990 may be a special-purpose core, such as, for example, a network or communication core, a compression engine, a coprocessor core, a general-purpose graphics processing unit (GPGPU), a graphics core, or the like.

The front end unit 930 includes a branch prediction unit 932 coupled to an instruction cache unit 934, which is coupled to an instruction translation lookaside buffer (TLB) 936, which is coupled to an instruction fetch unit 938, which is coupled to a decode unit 940. The decode unit 940 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 940 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 990 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 940 or otherwise within the front end unit 930). The decode unit 940 is coupled to a rename/allocator unit 952 in the execution engine unit 950.

The execution engine unit 950 includes the rename/allocator unit 952 coupled to a retirement unit 954 and a set of one or more scheduler unit(s) 956. The scheduler unit(s) 956 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 956 is coupled to the physical register file(s) unit(s) 958. Each of the physical register file(s) units 958 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 958 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 958 is overlapped by the retirement unit 954 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 954 and the physical register file(s) unit(s) 958 are coupled to the execution cluster(s) 960. The execution cluster(s) 960 includes a set of one or more execution units 962 and a set of one or more memory access units 964.

The execution units 962 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions.
The scheduler unit(s) 956, physical register file(s) unit(s) 958, and execution cluster(s) 960 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 964). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 964 is coupled to the memory unit 970, which includes a data TLB unit 972 coupled to a data cache unit 974 coupled to a level 2 (L2) cache unit 976. In one exemplary embodiment, the memory access units 964 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 972 in the memory unit 970. The instruction cache unit 934 is further coupled to the level 2 (L2) cache unit 976 in the memory unit 970. The L2 cache unit 976 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 900 as follows: 1) the instruction fetch unit 938 performs the fetch and length decoding stages 902 and 904; 2) the decode unit 940 performs the decode stage 906; 3) the rename/allocator unit 952 performs the allocation stage 908 and renaming stage 910; 4) the scheduler unit(s) 956 performs the schedule stage 912; 5) the physical register file(s) unit(s) 958 and the memory unit 970 perform the register read/memory read stage 914, and the execution cluster 960 performs the execute stage 916; 6) the memory unit 970 and the physical register file(s) unit(s) 958 perform the write back/memory write stage 918; 7) various units may be involved in the exception handling stage 922; and 8) the retirement unit 954 and the physical register file(s) unit(s) 958 perform the commit stage 924.

The core 990 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
In one embodiment, the core 990 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 934/974 and a shared L2 cache unit 976, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Figures 7 and 8 are block diagrams of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

Figure 7 is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1002 and with its local subset of the Level 2 (L2) cache 1004, according to embodiments of the invention. In one embodiment, an instruction decoder 1000 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1006 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design) a scalar unit 1008 and a vector unit 1010 use separate register sets (respectively, scalar registers 1012 and vector registers 1014) and data transferred between them is written to memory and then read back in from an L1 cache 1006, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 1004 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1004. Data read by a processor core is stored in its L2 cache subset 1004 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1004 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data.
The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012-bits wide per direction.

Figure 8 is an expanded view of part of the processor core in Figure 7 according to embodiments of the invention. Figure 8 includes an L1 data cache 1006A, part of the L1 cache 1006, as well as more detail regarding the vector unit 1010 and the vector registers 1014. Specifically, the vector unit 1010 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1028), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 1020, numeric conversion with numeric convert units 1022A-B, and replication with replication unit 1024 on the memory input. Write mask registers 1026 allow predicating resulting vector writes.

Figure 9 is a block diagram of a processor 1100 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in Figure 9 illustrate a processor 1100 with a single core 1102A, a system agent 1110, and a set of one or more bus controller units 1116, while the optional addition of the dashed lined boxes illustrates an alternative processor 1100 with multiple cores 1102A-N, a set of one or more integrated memory controller unit(s) 1114 in the system agent unit 1110, and special purpose logic 1108.

Thus, different implementations of the processor 1100 may include: 1) a CPU with the special purpose logic 1108 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1102A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1102A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 1102A-N being a large number of general purpose in-order cores. Thus, the processor 1100 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU, a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1100 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache units 1104A-N within the cores, a set of one or more shared cache units 1106, and external memory (not shown) coupled to the set of integrated memory controller units 1114. The set of shared cache units 1106 may include one or more mid-level caches, such as L2, level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1112 interconnects the special purpose logic 1108, the set of shared cache units 1106, and the system agent unit 1110/integrated memory controller unit(s) 1114, alternative embodiments may use any number of well-known techniques for interconnecting such units.
In one embodiment, coherency is maintained between one or more cache units 1106 and cores 1102A-N.

The system agent unit 1110 includes those components coordinating and operating cores 1102A-N. The system agent unit 1110 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1102A-N and the integrated graphics logic 1108. The display unit is for driving one or more externally connected displays.

The cores 1102A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1102A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set. Such cores 1102A-N may convert certain memory access instructions into subline memory access instructions as described herein.

Figures 10-13 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are suitable.

Figure 10 is a block diagram of a system 1200 according to embodiments of the invention. The system 1200 may include one or more processors 1210, 1215, which are coupled to a controller hub 1220. In one embodiment, the controller hub 1220 includes a graphics memory controller hub (GMCH) 1290 and an Input/Output Hub (IOH) 1250 (which may be on separate chips); the GMCH 1290 includes memory and graphics controllers to which are coupled memory 1240 and a coprocessor 1245; the IOH 1250 couples input/output (I/O) devices 1260 to the GMCH 1290. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1240 and the coprocessor 1245 are coupled directly to the processor 1210, and the controller hub 1220 is in a single chip with the IOH 1250.

The optional nature of additional processors 1215 is denoted in Figure 10 with broken lines. Each processor 1210, 1215 may include one or more of the processing cores described herein and may be some version of the processor 1100.

The memory 1240 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1220 communicates with the processor(s) 1210, 1215 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1295.

In one embodiment, the coprocessor 1245 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.
In one embodiment, controller hub 1220 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1210, 1215 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 1210 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1210 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1245. Accordingly, the processor 1210 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 1245. Coprocessor(s) 1245 accept and execute the received coprocessor instructions.

Figures 11 and 12 are block diagrams of more specific exemplary systems 1300 and 1400 according to embodiments of the invention. As shown in Figure 11, multiprocessor system 1300 is a point-to-point interconnect system, and includes a first processor 1370 and a second processor 1380 coupled via a point-to-point interconnect 1350. Each of processors 1370 and 1380 may be some version of the processor 1100. In one embodiment of the invention, processors 1370 and 1380 are respectively processors 1210 and 1215, while coprocessor 1338 is coprocessor 1245. In another embodiment, processors 1370 and 1380 are respectively processor 1210 and coprocessor 1245.

Processors 1370 and 1380 are shown including integrated memory controller (IMC) units 1372 and 1382, respectively. Processor 1370 also includes as part of its bus controller units point-to-point (P-P) interfaces 1376 and 1378; similarly, second processor 1380 includes P-P interfaces 1386 and 1388. Processors 1370, 1380 may exchange information via a P-P interface 1350 using P-P interface circuits 1378, 1388. As shown in Figure 11, IMCs 1372 and 1382 couple the processors to respective memories, namely a memory 1332 and a memory 1334, which may be portions of main memory locally attached to the respective processors.

Processors 1370, 1380 may each exchange information with a chipset 1390 via individual P-P interfaces 1352, 1354 using point-to-point interface circuits 1376, 1394, 1386, 1398. Chipset 1390 may optionally exchange information with the coprocessor 1338 via a high-performance interface 1339. In one embodiment, the coprocessor 1338 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 1390 may be coupled to a first bus 1316 via an interface 1396. In one embodiment, first bus 1316 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in Figure 11, various I/O devices 1314 may be coupled to first bus 1316, along with a bus bridge 1318 which couples first bus 1316 to a second bus 1320.
In one embodiment, one or more additional processors 1315, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1316. In one embodiment, second bus 1320 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1320 including, for example, a keyboard and/or mouse 1322, communication devices 1327 and a storage unit 1328 such as a disk drive or other mass storage device which may include instructions/code and data 1330, in one embodiment. Further, an audio I/O 1324 may be coupled to the second bus 1320. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 11, a system may implement a multi-drop bus or other such architecture.

Figure 12 presents a block diagram of a second more specific exemplary system 1400 in accordance with an embodiment of the present invention. Like elements in Figures 11 and 12 bear like reference numerals, and certain aspects of Figure 11 have been omitted from Figure 12 in order to avoid obscuring other aspects of Figure 12.

Figure 12 illustrates that the processors 1370, 1380 may include integrated memory and I/O control logic ("CL") 1372 and 1382, respectively. Thus, the CL 1372, 1382 include integrated memory controller units and include I/O control logic. Figure 12 illustrates that not only are the memories 1332, 1334 coupled to the CL 1372, 1382, but also that I/O devices 1414 are coupled to the control logic 1372, 1382. Legacy I/O devices 1415 are coupled to the chipset 1390.

Figure 13 is a block diagram of a system on a chip (SoC) 1500 according to embodiments of the invention. Dashed lined boxes are optional features on more advanced SoCs. In Figure 13, an interconnect unit(s) 1502 is coupled to: an application processor 1510 which includes a set of one or more cores 1102A-N (including constituent cache units 1104A-N) and shared cache unit(s) 1106; a system agent unit 1110; a bus controller unit(s) 1116; an integrated memory controller unit(s) 1114; a set of one or more coprocessors 1520 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1530; a direct memory access (DMA) unit 1532; and a display unit 1540 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1520 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

Figure 14 is a block diagram depicting the use of a software instruction converter 1612 to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 14 shows that a program in a high-level language 1602 may be compiled using an x86 compiler 1604 to generate x86 binary code 1606 that may be natively executed by a processor with at least one x86 instruction set core 1616.
The processor with at least one x86 instruction set core 1616 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1604 represents a compiler that is operable to generate x86 binary code 1606 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1616. Similarly, Figure 14 shows that the program in the high-level language 1602 may be compiled using an alternative instruction set compiler 1608 to generate alternative instruction set binary code 1610 that may be natively executed by a processor without at least one x86 instruction set core 1614 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1612 is used to convert the x86 binary code 1606 into code that may be natively executed by the processor without an x86 instruction set core 1614. This converted code is not likely to be the same as the alternative instruction set binary code 1610, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1612 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1606.

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

Conclusion

In the present disclosure, expressions such as "an embodiment," "one embodiment," and "another embodiment" are meant to generally reference embodiment possibilities. Those expressions are not intended to limit the invention to particular embodiment configurations. As used herein, those expressions may reference the same embodiment or different embodiments, and those embodiments are combinable into other embodiments.
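As a rough illustration of the conversion process described above, the sketch below maps each source instruction to one or more target instructions, falling back to emulation when no direct mapping exists. The opcode names are invented for illustration; a real static or dynamic binary translator is far more involved.

```python
# Toy sketch of the kind of instruction conversion described above: each
# source instruction is mapped to one or more target instructions. Opcode
# names are invented placeholders; a real converter (static or dynamic
# binary translation) must also handle registers, memory models, flags, etc.

TRANSLATION_TABLE = {
    # source opcode -> equivalent target instruction sequence
    "src_add":  ["tgt_add"],
    "src_push": ["tgt_sub_sp", "tgt_store"],  # one source op -> two target ops
    "src_pop":  ["tgt_load", "tgt_add_sp"],
}

def convert(source_code: list[str]) -> list[str]:
    """Translate a list of source instructions into target instructions."""
    target_code: list[str] = []
    for insn in source_code:
        # Unknown instructions fall back to an emulation stub.
        target_code.extend(TRANSLATION_TABLE.get(insn, [f"emulate({insn})"]))
    return target_code

if __name__ == "__main__":
    print(convert(["src_push", "src_add", "src_pop"]))
```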
In light of the principles and example embodiments described and illustrated herein, it will be recognized that the illustrated embodiments can be modified in arrangement and detail without departing from the principles described and/or illustrated herein.

Also, according to the present disclosure, a device may include instructions and other data which, when accessed by a processor, cause the device to perform particular operations. For purposes of this disclosure, instructions which cause a device to perform operations may be referred to in general as software. Software and the like may also be referred to as control logic. Software that is used during a boot process may be referred to as firmware. Software that is stored in nonvolatile memory may also be referred to as firmware. Software may be organized using any suitable structure or combination of structures. Accordingly, terms like program and module may be used in general to cover a broad range of software constructs, including without limitation application programs, subprograms, routines, functions, procedures, drivers, libraries, data structures, processes, microcode, and other types of software components. Also, it should be understood that a software module may include more than one component, and those components may cooperate to complete the operations of the module. Also, the operations which the software causes a device to perform may include creating an operating context, instantiating a particular data structure, etc. Embodiments may be implemented as software to execute on a programmable system comprising at least one processor, a storage system (e.g., volatile memory and/or one or more non-volatile storage elements), at least one input device, and at least one output device.

Any suitable operating environment and programming language (or combination of operating environments and programming languages) may be used to implement software components described herein. For example, program code may be implemented in a high-level procedural or object oriented programming language, or in assembly or machine language. The mechanisms described herein are not limited to any particular programming language. In any case, the language may be a compiled or interpreted language.

A medium which contains data and which allows another component to obtain that data may be referred to as a machine-accessible medium or a machine-readable medium. Accordingly, embodiments may include machine-readable media containing instructions for performing some or all of the operations described herein. Such media may be referred to in general as apparatus and in particular as program products. In one embodiment, software for multiple components is stored in one machine-readable medium. In other embodiments, two or more machine-readable media may be used to store the software for one or more components. For instance, instructions for one component may be stored in one medium, and instructions for another component may be stored in another medium. Or a portion of the instructions for one component may be stored in one medium, and the rest of the instructions for that component (as well as instructions for other components) may be stored in one or more other media. Similarly, software that is described above as residing on a particular device in one embodiment may, in other embodiments, reside on one or more other devices. For instance, in a distributed environment, some software may be stored locally, and some may be stored remotely.
Similarly, operations that are described above as being performed on one particular device in one embodiment may, in other embodiments, be performed by one or more other devices.

Other embodiments may be implemented in data and may be stored on a non-transitory storage medium, which, if used by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform one or more operations according to the present disclosure. Still further embodiments may be implemented in a computer readable storage medium including information that, when manufactured into an SoC or other processor, is to configure the SoC or other processor to perform one or more operations according to the present disclosure. One or more aspects of at least one embodiment may be implemented by representative instructions, stored on a machine-readable medium, which represent various logic units within the processor, and which, when read by a machine, cause the machine to fabricate logic units to perform the techniques described herein. The instructions representing various logic units may be referred to as "IP cores," and they may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic units or the processor. One or more aspects of at least one embodiment may include machine-readable media containing instructions or design data which defines structures, circuits, apparatuses, processors and/or system features described herein. For instance, design data may be formatted in a hardware description language (HDL).

The machine-readable media for some embodiments may include, without limitation, tangible non-transitory storage components such as magnetic disks, optical disks, magneto-optical disks, dynamic random access memory (DRAM), static RAM, read-only memory (ROM), solid state drives (SSDs), phase change memory (PCM), etc., as well as processors, controllers, and other components that include data storage facilities. For purposes of this disclosure, the term "ROM" may be used in general to refer to nonvolatile memory devices such as erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash ROM, flash memory, etc.

It should also be understood that the hardware and software components depicted herein represent functional elements that are reasonably self-contained, so that each can be designed, constructed, or updated substantially independently of the others. In alternative embodiments, components may be implemented as hardware, software, or combinations of hardware and software for providing the functionality described and illustrated herein. In some embodiments, some or all of the control logic for implementing the described operations may be implemented in hardware logic (e.g., as microcode in an integrated circuit chip, as a programmable gate array (PGA), as an application-specific integrated circuit (ASIC), etc.). Also, terms such as "circuit" and "circuitry" may be used interchangeably herein. Those terms and terms like "logic" may be used to refer to analog circuitry, digital circuitry, hard-wired circuitry, programmable circuitry, processor circuitry, microcontroller circuitry, hardware logic circuitry, state machine circuitry, any other type of hardware component, or any suitable combination of hardware components.

Additionally, the present teachings may be used to advantage in many different kinds of data processing systems.
Such data processing systems may include, without limitation, accelerators, systems on a chip (SoCs), wearable devices, handheld devices, smartphones, telephones, entertainment devices such as audio devices, video devices, audio/video devices (e.g., televisions and set-top boxes), vehicular processing systems, personal digital assistants (PDAs), tablet computers, laptop computers, portable computers, personal computers (PCs), workstations, servers, client-server systems, distributed computing systems, supercomputers, high-performance computing systems, computing clusters, mainframe computers, minicomputers, and other devices for processing or transmitting information. Accordingly, unless explicitly specified otherwise or required by the context, references to any particular type of data processing system (e.g., a PC) should be understood as encompassing other types of data processing systems, as well. A data processing system may also be referred to as an apparatus. The components of a data processing system may also be referred to as apparatus.

Also, unless expressly specified otherwise, components that are described as being coupled to each other, in communication with each other, responsive to each other, or the like need not be in continuous communication with each other and need not be directly coupled to each other. Likewise, when one component is described as receiving data from or sending data to another component, that data may be sent or received through one or more intermediate components, unless expressly specified otherwise. In addition, some components of the data processing system may be implemented as adapter cards with interfaces (e.g., a connector) for communicating with a bus. Alternatively, devices or components may be implemented as embedded controllers, using components such as programmable or non-programmable logic devices or arrays, ASICs, embedded computers, smart cards, and the like. For purposes of this disclosure, the term "bus" includes pathways that may be shared by more than two devices, as well as point-to-point pathways. Similarly, terms such as "line," "pin," etc. should be understood as referring to a wire, a set of wires, or any other suitable conductor or set of conductors. For instance, a bus may include one or more serial links, a serial link may include one or more lanes, a lane may be composed of one or more differential signaling pairs, and the changing characteristics of the electricity that those conductors are carrying may be referred to as signals on a line. Also, for purposes of this disclosure, the term "processor" denotes a hardware component that is capable of executing software. For instance, a processor may be implemented as a central processing unit (CPU), a processing core, or as any other suitable type of processing element. A CPU may include one or more processing cores, and a device may include one or more CPUs.

Also, although one or more example processes have been described with regard to particular operations performed in a particular sequence, numerous modifications could be applied to those processes to derive numerous alternative embodiments of the present invention.
For example, alternative embodiments may include processes that use fewer than all of the disclosed operations, processes that use additional operations, and processes in which the individual operations disclosed herein are combined, subdivided, rearranged, or otherwise altered.

Embodiments include the following examples:

Example A1 is an integrated circuit with technology for providing OOB processor telemetry. The integrated circuit comprises a processor comprising at least one core and a distributed core perimeter. The integrated circuit also comprises a telemetry push agent in the distributed core perimeter and an OOB telemetry manager in the core to operate out of band and to send telemetry data for the processor to the telemetry push agent. Also, the telemetry push agent comprises control logic to (a) receive the telemetry data from the OOB telemetry manager and (b) forward at least some of the telemetry data to in-band telemetry software.

Example A2 is an integrated circuit according to Example A1, wherein the telemetry push agent is configured to operate out of band.

Example A3 is an integrated circuit according to Example A1, further comprising telemetry counters in the core, and a DCRA in the processor. Also, the OOB telemetry manager is configured to collect telemetry data from the telemetry counters, write at least some of the collected telemetry data to the DCRA, and send at least some of the collected telemetry data from the DCRA to the telemetry push agent. Example A3 may also include the features of Example A2.

Example A4 is an integrated circuit according to Example A3, wherein the OOB telemetry manager comprises (a) a telemetry collector to operate out of band, to read raw telemetry data from the telemetry counters, and to generate collected telemetry data based on the raw telemetry data; and (b) a telemetry messenger to operate out of band, to generate a telemetry packet based on the collected telemetry data, and to send the telemetry packet to the telemetry push agent.

Example A5 is an integrated circuit according to Example A3, wherein the DCRA comprises an array of registers that reside in the core. Example A5 may also include the features of Example A4.

Example A6 is an integrated circuit according to Example A5, wherein the OOB telemetry manager is configured to (a) generate multiple telemetry entries, based on the collected telemetry data; and (b) write each telemetry entry to a different register in the DCRA.

Example A7 is an integrated circuit according to Example A1, wherein the core further comprises telemetry counters; and the OOB telemetry manager comprises (a) a telemetry collector to operate out of band, to read raw telemetry data from the telemetry counters, and to generate collected telemetry data based on the raw telemetry data; and (b) a telemetry messenger to operate out of band, to generate a telemetry packet based on the collected telemetry data, and to send the telemetry packet to the telemetry push agent. Example A7 may also include the features of any one or more of Examples A2-A6.

Example A8 is an integrated circuit according to Example A7, further comprising a telemetry configuration register in the processor.
Also, the telemetry collector is configured to determine what kinds of telemetry data to collect, based at least in part on telemetry configuration data from the telemetry configuration register.

Example A9 is an integrated circuit according to Example A1, wherein (a) the core comprises a first core; (b) the OOB telemetry manager comprises a first OOB telemetry manager to send telemetry data for the first core to the telemetry push agent; and (c) the integrated circuit further comprises a second core with a second OOB telemetry manager to operate out of band and to send telemetry data for the second core to the telemetry push agent. Example A9 may also include the features of any one or more of Examples A2-A8.

Example B1 is a data processing system with technology for providing OOB processor telemetry. The data processing system comprises (a) a processor comprising at least one core and a distributed core perimeter, (b) RAM responsive to the processor, (c) a telemetry push agent in the distributed core perimeter, and (d) an OOB telemetry manager in the core to operate out of band and to send telemetry data for the processor to the telemetry push agent. Also, the telemetry push agent comprises control logic to (a) receive the telemetry data from the OOB telemetry manager and (b) forward at least some of the telemetry data to in-band telemetry software.

Example B2 is a data processing system according to Example B1, further comprising NVS responsive to the processor. Also, the NVS comprises the in-band telemetry software.

Example B3 is a data processing system according to Example B1, wherein the telemetry push agent is configured to operate out of band. Example B3 may also include the features of Example B2.

Example B4 is a data processing system according to Example B1, further comprising telemetry counters in the core, and a DCRA in the processor. Also, the OOB telemetry manager is configured to collect telemetry data from the telemetry counters, write at least some of the collected telemetry data to the DCRA, and send at least some of the collected telemetry data from the DCRA to the telemetry push agent. Example B4 may also include the features of any one or more of Examples B2-B3.

Example B5 is a data processing system according to Example B4, wherein the OOB telemetry manager comprises (a) a telemetry collector to operate out of band, to read raw telemetry data from the telemetry counters, and to generate collected telemetry data based on the raw telemetry data; and (b) a telemetry messenger to operate out of band, to generate a telemetry packet based on the collected telemetry data, and to send the telemetry packet to the telemetry push agent.

Example B6 is a data processing system according to Example B4, wherein the DCRA comprises an array of registers that reside in the core.
Example B6 may also include the features of Example B5.

Example B7 is a data processing system according to Example B6, wherein the OOB telemetry manager is configured to (a) generate multiple telemetry entries, based on the collected telemetry data; and (b) write each telemetry entry to a different register in the DCRA.

Example B8 is a data processing system according to Example B1, wherein the core further comprises telemetry counters; and the OOB telemetry manager comprises (a) a telemetry collector to operate out of band, to read raw telemetry data from the telemetry counters, and to generate collected telemetry data based on the raw telemetry data; and (b) a telemetry messenger to operate out of band, to generate a telemetry packet based on the collected telemetry data, and to send the telemetry packet to the telemetry push agent. Example B8 may also include the features of any one or more of Examples B2-B7.

Example B9 is a data processing system according to Example B8, further comprising a telemetry configuration register in the processor. Also, the telemetry collector is configured to determine what kinds of telemetry data to collect, based at least in part on telemetry configuration data from the telemetry configuration register.

Example C1 is a method for providing OOB processor telemetry. The method comprises (a) at an OOB telemetry manager in a core of a processor, collecting telemetry data for the processor; (b) sending the telemetry data from the OOB telemetry manager to a telemetry push agent in a distributed core perimeter of the processor; and (c) forwarding at least some of the telemetry data from the telemetry push agent to in-band telemetry software executing on the processor. Also, the OOB telemetry manager operates out of band.

Example C2 is a method according to Example C1, wherein the telemetry push agent operates out of band.

Example C3 is a method according to Example C1, wherein the operation of collecting telemetry data for the processor comprises reading telemetry data from telemetry counters in the core, and writing at least some of the collected telemetry data to a DCRA in the processor. Example C3 may also include the features of Example C2.

Example C4 is a method according to Example C3, wherein the operations of reading telemetry data from telemetry counters and writing at least some of the collected telemetry data to the DCRA are performed by a telemetry collector in the OOB telemetry manager. Also, the operation of sending the telemetry data from the OOB telemetry manager to the telemetry push agent is performed by a telemetry messenger in the OOB telemetry manager.

In view of the wide variety of useful permutations that may be readily derived from the example embodiments described herein, this detailed description is intended to be illustrative only, and should not be construed as limiting the scope of coverage. |
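The data flow recited in Examples A1-C4 above (a collector reads counters, entries land in the DCRA, a messenger packages a packet and sends it to the push agent, which forwards it in-band) can be sketched in a few lines of Python. All class and field names below are illustrative stand-ins, not names from any real product.

```python
# Minimal sketch of the out-of-band telemetry flow in Examples A1-C4 above:
# collector -> DCRA registers -> messenger -> push agent -> in-band software.
# Class and field names are illustrative placeholders.

class DCRA:
    """Dedicated register array: a fixed set of telemetry registers."""
    def __init__(self, size: int = 8):
        self.registers = [0] * size

    def write(self, index: int, value: int) -> None:
        self.registers[index] = value

class TelemetryPushAgent:
    """Lives in the distributed core perimeter; forwards data in-band."""
    def receive(self, packet: dict) -> None:
        self.forward_to_inband(packet)

    def forward_to_inband(self, packet: dict) -> None:
        print("in-band telemetry software received:", packet)

class OOBTelemetryManager:
    """Runs out of band inside the core; collects and sends telemetry."""
    def __init__(self, counters: dict, dcra: DCRA, agent: TelemetryPushAgent):
        self.counters, self.dcra, self.agent = counters, dcra, agent

    def collect(self) -> None:
        # Telemetry collector: read raw counters and write one entry per
        # register in the DCRA (cf. Examples A4-A6).
        for i, value in enumerate(self.counters.values()):
            self.dcra.write(i, value)

    def send(self) -> None:
        # Telemetry messenger: build a packet from the DCRA and push it.
        packet = {"entries": list(self.dcra.registers)}
        self.agent.receive(packet)

if __name__ == "__main__":
    mgr = OOBTelemetryManager({"cycles": 123, "stalls": 7}, DCRA(), TelemetryPushAgent())
    mgr.collect()
    mgr.send()
```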
A non-volatile product term cell is provided having a first floating gate located over a first p-channel transistor and a first n-channel transistor, and a second floating gate located over a second p-channel transistor and a second n-channel transistor. A control gate is located over the first and second floating gates. A first tunnel oxide capacitor is coupled to the first floating gate and a second tunnel oxide capacitor is coupled to the second floating gate. A first transistor pair is coupled between the first p-channel transistor and the second n-channel transistor, and a second transistor pair is coupled between the second p-channel transistor and the first n-channel transistor. The first and second floating gates are programmed and/or erased. Complementary input signals are applied to the first and second transistor pairs. An output signal is provided in response to the programmed/erased states of the first and second floating gates. |
1. A product term cell comprising:
a first floating gate structure located over a first p-channel transistor and a first n-channel transistor;
a second floating gate structure located over a second p-channel transistor and a second n-channel transistor;
a control gate structure capacitively coupled to the first and second floating gate structures;
a first tunnel oxide capacitor formed with the first floating gate structure;
a second tunnel oxide capacitor formed with the second floating gate structure;
a first transistor pair coupled between the first p-channel transistor and the second n-channel transistor; and
a second transistor pair coupled between the second p-channel transistor and the first n-channel transistor.

2. The product term cell of claim 1, wherein source regions of the first p-channel transistor and the second p-channel transistor are coupled to a positive voltage supply terminal.

3. The product term cell of claim 1, wherein source regions of the first n-channel transistor and the second n-channel transistor are coupled to a ground supply terminal.

4. The product term cell of claim 1, wherein a source region of the first n-channel transistor is coupled to a first bit line, and a source region of the second n-channel transistor is coupled to a second bit line.

5. The product term cell of claim 1, wherein the first transistor pair comprises:
a third p-channel transistor coupled between the first p-channel transistor and a first node; and
a third n-channel transistor coupled between the first node and the second n-channel transistor.

6. The product term cell of claim 5, wherein the second transistor pair comprises:
a fourth p-channel transistor coupled between the second p-channel transistor and a second node; and
a fourth n-channel transistor coupled between the second node and the first n-channel transistor.

7. The product term cell of claim 6, further comprising:
a first input terminal coupled to gates of the third p-channel transistor and the third n-channel transistor; and
a second input terminal coupled to gates of the fourth p-channel transistor and the fourth n-channel transistor.

8. The product term cell of claim 7, further comprising an inverter coupled between the first and second input terminals.

9. The product term cell of claim 6, further comprising an output terminal coupled to the first node and the second node.

10. The product term cell of claim 9, further comprising a read transistor coupled to the output terminal.

11. The product term cell of claim 10, wherein the read transistor comprises a fifth n-channel transistor having a source coupled to the ground supply voltage terminal, a drain coupled to the output terminal and a gate coupled to receive a read control signal.

12. A method of operating a product term cell having a first floating gate element and a second floating gate element, the method comprising:
erasing the first floating gate element, wherein erasing the first floating gate enables a first p-channel floating gate transistor that includes the first floating gate element and disables a first n-channel floating gate transistor that includes the first floating gate element; and
erasing the second floating gate element, wherein the erased second floating gate enables a second p-channel floating gate transistor that includes the second floating gate element and disables a second n-channel floating gate transistor that includes the second floating gate element.

13.
The method of claim 12, further comprising providing an output signal having a first logic state through the enabled first p-channel floating gate transistor or the enabled second p-channel floating gate transistor.

14. The method of claim 12, further comprising:
programming the first floating gate element, wherein the programmed first floating gate element disables the first p-channel floating gate transistor and enables the first n-channel floating gate transistor; and
applying a first input signal to a first transistor pair coupled between the second p-channel floating gate transistor and the first n-channel floating gate transistor.

15. The method of claim 14, further comprising providing an output signal through the enabled first n-channel floating gate transistor or the enabled second p-channel floating gate transistor, in response to the first input signal.

16. The method of claim 14, further comprising:
programming the second floating gate element, wherein the programmed second floating gate element disables the second p-channel floating gate transistor and enables the second n-channel floating gate transistor; and
applying a second input signal, complementary to the first input signal, to a second transistor pair coupled between the first p-channel floating gate transistor and the second n-channel floating gate transistor.

17. The method of claim 16, further comprising providing an output signal having a second logic state through the enabled first n-channel floating gate transistor or the second n-channel floating gate transistor.

18. The method of claim 16, further comprising determining whether the first and second floating gate elements are programmed or erased.

19. The method of claim 18, wherein the step of determining whether the first and second floating gate elements are programmed or erased comprises selecting the first and second input signals to have substantially equal voltage levels.

20. The method of claim 16, further comprising margin testing the programmed first and second floating gate elements.

21. The method of claim 20, wherein the margin testing comprises:
applying a first read voltage to a control gate located over the first and second floating gate elements during a first read operation;
determining the presence of a read current above a threshold level during the first read operation;
applying a second read voltage, less than the first read voltage, to the control gate during a second read operation; and
determining the absence of a read current above the threshold level during the second read operation.

22. The method of claim 14, wherein the step of programming the first floating gate element comprises:
applying a positive intermediate voltage to a control gate located over the first and second floating gate elements;
applying a positive programming voltage to a first tunnel oxide capacitor formed with the first floating gate element, the programming voltage being greater than the intermediate voltage; and then
lowering the voltage applied to the control gate to the ground supply voltage (0 Volts).

23. The method of claim 12, wherein the steps of erasing the first and second floating gate elements further comprise applying a first erase voltage to a control gate located over the first and second floating gate elements.

24.
The method of claim 23, wherein the steps of erasing the first and second floating gate elements further comprise applying a second erase voltage to a first tunnel oxide capacitor coupled to the first floating gate element, and to a second tunnel oxide capacitor coupled to the second floating gate element.

25. A product term cell comprising:
means for storing a first charge that adjusts the threshold voltages of a first p-channel transistor and a first n-channel transistor;
means for storing a second charge that adjusts the threshold voltages of a second p-channel transistor and a second n-channel transistor;
means for selectively coupling either the first p-channel transistor or the second n-channel transistor to an output terminal; and
means for selectively coupling either the second p-channel transistor or the first n-channel transistor to the output terminal. |
FIELD OF THE INVENTION

The present invention relates to a method and structure for configuring a programmable logic device. More specifically, the present invention relates to an improved product term (pterm) cell, which replaces SRAM cells with non-volatile embedded electrically erasable (EE) memory cells.

RELATED ART

Conventional complex programmable logic devices (CPLDs), such as the COOLRUNNER(TM) family of CPLDs available from Xilinx, Inc., include a basic circuit block known as a product term (pterm) cell.

Conventional CPLD designs require a power-up initialization cycle. During this cycle, the contents of a non-volatile memory, such as an electrically erasable memory array, are transferred into a plurality of SRAM latches embedded in a logic core. This transfer typically occurs over a plurality of memory cycles, on an address-by-address basis. For example, a conventional CPLD may include an electrically erasable memory array that stores about 150,000 configuration values, which are loaded into corresponding latches 1500 bits at a time. Thus, 100 transfers must be made from the electrically erasable memory array to the latches in order to configure the CPLD. Once the configuration values are stored in the latches, the latches configure the logic core to implement a user-defined application.

Two drawbacks of a conventional initialization cycle are the complex circuitry required to transfer the configuration values from the electrically erasable memory array to the latches, and the relatively long time required to transfer the configuration values from the electrically erasable array to the latches. In addition, the conventional initialization process is subject to disruption from noise and variations in the power supply voltage.

It would therefore be desirable to have an improved product term cell for use in a CPLD.

SUMMARY

Accordingly, the present invention provides a non-volatile product term cell having a smaller layout area than a conventional SRAM product term (pterm) cell, thereby resulting in a reduced die size. Moreover, the non-volatile pterm cell of the present invention directly applies a configuration state to the product term elements, without the need to transfer information from an electrically erasable memory cell to associated SRAM cells.

In accordance with one embodiment, the non-volatile product term cell includes a first floating gate element which forms part of a first p-channel floating gate transistor and part of a first n-channel floating gate transistor. The product term cell also includes a second floating gate element which forms part of a second p-channel floating gate transistor and part of a second n-channel floating gate transistor. A control gate is capacitively coupled to both the first and second floating gate elements. In addition, a first tunnel oxide capacitor is coupled to the first floating gate element, and a second tunnel oxide capacitor is coupled to the second floating gate element.

The sources of the first and second p-channel transistors are commonly coupled to a first voltage supply terminal, and the sources of the first and second n-channel transistors are commonly coupled to a ground supply voltage terminal. A first transistor pair, including a p-channel transistor and an n-channel transistor, is coupled between the drain of the first p-channel transistor and the drain of the second n-channel transistor.
Similarly, a second transistor pair, including a p-channel transistor and an n-channel transistor, is coupled between the drain of the second p-channel transistor and the drain of the first n-channel transistor.

The first and second floating gate elements are erased by applying a high positive voltage to the control gate, grounding the tunnel oxide capacitors, and allowing the first voltage supply terminal to float. When the first floating gate element is erased, the first p-channel floating gate transistor is enabled (turned on) and the first n-channel floating gate transistor is disabled (turned off). Similarly, when the second floating gate element is erased, the second p-channel floating gate transistor is enabled and the second n-channel floating gate transistor is disabled.

The first and/or second floating gate elements can then be programmed, if desired, by applying a high positive voltage to the tunnel oxide capacitor(s) and the ground supply voltage to the control gate. When the first floating gate element is programmed, the first p-channel floating gate transistor is disabled and the first n-channel floating gate transistor is enabled. Similarly, when the second floating gate element is programmed, the second p-channel floating gate transistor is disabled and the second n-channel floating gate transistor is enabled.

An input signal (ZIN) is applied to the gates of the transistors of the first transistor pair, and the complement of the input signal (ZIN#) is applied to the gates of the transistors of the second transistor pair. An output signal is provided at the common drain regions of the first and second transistor pairs in response to the input signal and the programmed/erased states of the first and second floating gate elements.

If the first and second floating gate elements are both erased, then the output signal is pulled up to the first supply voltage by either the first or second p-channel transistor (depending on the state of the input signal ZIN/ZIN#). If the first and second floating gate elements are both programmed, then the output signal is pulled down to the ground supply voltage by either the first or second n-channel transistor (depending on the state of the input signal ZIN/ZIN#). If the first floating gate element is erased and the second floating gate element is programmed, then the output signal is either pulled up to the first supply voltage by the first p-channel floating gate transistor, or pulled down to the ground supply voltage by the second n-channel floating gate transistor (depending on the state of the input signal ZIN). Similarly, if the second floating gate element is erased and the first floating gate element is programmed, then the output signal is either pulled up to the first supply voltage by the second p-channel floating gate transistor, or pulled down to the ground supply voltage by the first n-channel floating gate transistor (depending on the state of the complementary input signal ZIN#).

In an alternate embodiment, the sources of the first and second n-channel floating gate transistors are coupled to first and second bit lines, respectively. In this embodiment, a read transistor is coupled between the output terminal and a voltage supply terminal.
When the read transistor is enabled, the programmed/erased states of the first and second floating gate elements can be determined by monitoring the first and second bit lines.

The present invention will be more fully understood in view of the following description and drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a circuit diagram of a product term cell in accordance with one embodiment of the present invention.

FIG. 2 is a block diagram of a 3x2 array of product term cells, each of which is substantially identical to the product term cell of FIG. 1.

FIG. 3 is a circuit diagram of a product term cell in accordance with another embodiment of the present invention.

FIG. 4 is a block diagram of a 2x2 array of product term cells, each of which is substantially identical to the product term cell of FIG. 3.

DETAILED DESCRIPTION

FIG. 1 is a circuit diagram of a symmetrical product term (pterm) cell 100 in accordance with one embodiment of the present invention. Product term cell 100 includes high-voltage p-channel transistors 101-104, n-channel transistors 105-108, capacitors 111-112, and tunnel oxide capacitor elements 121-122.

A left-side programming bit PBITL is applied to a first terminal of tunnel oxide capacitor element 121. In the described embodiment, this first terminal of tunnel oxide capacitor element 121 is formed by a conductively doped (e.g., n-type) semiconductor substrate region. A thin oxide layer (e.g., 10 Angstroms) is formed over the first terminal of tunnel oxide capacitor element 121, and a conductively doped polysilicon floating gate element FGL is formed over this thin oxide layer, thereby forming the second terminal of tunnel oxide capacitor element 121. In the described embodiment, tunnel oxide capacitor element 121 has a layout area of about 0.16 square microns. However, other sizes are possible in other embodiments.

The floating gate element FGL also forms a floating gate electrode of p-channel transistor 101, a floating gate electrode of n-channel transistor 108, and a first plate electrode of capacitor 111.

Similarly, a right-side programming bit PBITR is applied to a first terminal of tunnel oxide capacitor element 122. In the described embodiment, tunnel oxide capacitor element 122 is substantially identical to tunnel oxide capacitor element 121. Tunnel oxide capacitor element 122 includes a conductively doped polysilicon floating gate element FGR. This floating gate element FGR also forms a floating gate electrode of p-channel transistor 103, a floating gate electrode of n-channel transistor 106, and a first plate electrode of capacitor 112. The second plate electrodes of capacitors 111 and 112 are coupled to receive a control gate signal CG.

Each of the high-voltage p-channel transistors 101-104 has a relatively thick gate oxide, which enables these transistors to operate in response to voltages much greater than the nominal supply voltage. For example, if product term cell 100 normally operates in response to a nominal supply voltage of 1.8 Volts, then high-voltage p-channel transistors 101-104 are capable of operating in response to voltages of 12-14 Volts, without adverse effects.

P-channel transistors 101-102 and n-channel transistors 105-106 are connected in series between a VDD terminal (which receives a VDD signal) and a ground voltage supply terminal (which is maintained at a ground supply voltage of 0 Volts).
Similarly, p-channel transistors 103-104 and n-channel transistors 107-108 are connected in series between the VDD terminal and the ground voltage supply terminal. The n-type body regions of p-channel transistors 101-104 are coupled to the VDD terminal. In the described embodiment, each of transistors 101-108 has a width-to-length ratio of 0.56/0.5. However, these transistors can have other width-to-length ratios in other embodiments.

The gates of p-channel transistor 102 and n-channel transistor 105 are commonly coupled to a ZIN input terminal (which receives an input signal ZIN). The gates of p-channel transistor 104 and n-channel transistor 107 are commonly coupled to a ZIN# input terminal (which receives an input signal ZIN# that is typically the inverse of the input signal ZIN). The drains of transistors 102, 104, 105 and 107 are commonly connected to an output terminal 130, which provides an output signal (OUT). In some embodiments, the ZIN# input signal may be provided by an inverter coupled between the ZIN and ZIN# input terminals.

Product term cell 100 is initially erased as follows. The PBITL and PBITR signals are connected to receive the ground supply voltage (0 Volts), the VDD terminal is left floating, and the control gate voltage CG is raised to a programming voltage VPP of about 12-14 Volts for about 100 milliseconds (msec). Under these conditions, electrons travel from the first terminals of tunnel oxide capacitor elements 121 and 122 toward the control gate terminal CG, and are trapped on the left floating gate FGL and the right floating gate FGR, respectively. The negative electronic charge on left floating gate FGL lowers the threshold voltage of p-channel transistor 101 and raises the threshold voltage of n-channel transistor 108. Similarly, the negative electronic charge on right floating gate FGR lowers the threshold voltage of p-channel transistor 103 and raises the threshold voltage of n-channel transistor 106.

Product term cell 100 can then be programmed as follows. The control gate voltage CG is initially raised to [1/2] the programming voltage VPP, or about 6-7 Volts. The PBITL signal is then raised from the ground supply voltage to the programming voltage VPP (12-14 Volts) if the left floating gate FGL is to be programmed. Similarly, the PBITR signal is raised from the ground supply voltage to the programming voltage VPP (12-14 Volts) if the right floating gate FGR is to be programmed. If neither the left floating gate FGL nor the right floating gate FGR is to be programmed, then both the PBITL and the PBITR signals remain at the ground supply voltage.

After the PBITL and PBITR signals have been set, the control gate voltage CG is lowered to the ground supply voltage (0 Volts). At this time, the left floating gate FGL is programmed if the PBITL signal was raised to the programming voltage VPP, and the right floating gate FGR is programmed if the PBITR signal was raised to the programming voltage VPP. During the programming operation, negative electronic charge is removed from the associated floating gate. For example, if the left floating gate FGL is programmed, negative electronic charge is removed from this floating gate. As a result, the threshold voltage of p-channel transistor 101 is raised, and the threshold voltage of n-channel transistor 108 is lowered. If the PBITL (or PBITR) signal remains at the ground supply voltage, then the associated floating gate FGL (or FGR) remains in the erased state, and is not programmed. The program operation requires approximately 10 msec. When the program operation is complete, the control gate voltage CG is raised back to a voltage of [1/2] VPP.
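The erase and program operations described above are essentially ordered voltage schedules. The following is a minimal behavioral sketch of that sequencing, assuming idealized device behavior; the dict-based signal model and all helper names are illustrative and are not part of the patent.

```python
# Illustrative model of the erase/program voltage sequencing described
# above. The state transitions are idealized; real behavior depends on
# tunneling physics that is not modeled here.

VPP = 13.0   # programming voltage, ~12-14 V in the described embodiment
GND = 0.0

def erase(cell):
    """Erase both floating gates: PBITL/PBITR grounded, VDD floating,
    CG raised to VPP for ~100 msec. Electrons are trapped on FGL/FGR."""
    cell.update({"PBITL": GND, "PBITR": GND, "VDD": None, "CG": VPP})
    cell["FGL"], cell["FGR"] = "ERASED", "ERASED"

def program(cell, program_left, program_right):
    """Program selected floating gates: CG first at VPP/2, the selected
    PBIT signal(s) raised to VPP, then CG dropped to ground (~10 msec)."""
    cell["CG"] = VPP / 2
    cell["PBITL"] = VPP if program_left else GND
    cell["PBITR"] = VPP if program_right else GND
    cell["CG"] = GND  # charge is removed from the selected floating gate(s)
    if program_left:
        cell["FGL"] = "PROGRAMMED"
    if program_right:
        cell["FGR"] = "PROGRAMMED"
    cell["CG"] = VPP / 2  # restored when the program operation completes

cell = {}
erase(cell)
program(cell, program_left=True, program_right=False)
print(cell["FGL"], cell["FGR"])  # PROGRAMMED ERASED
```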
FIG. 2 is a block diagram of a 3*2 array 200 of product term cells 2011-2013, 2021-2023, each of which is substantially identical to product term cell 100. Although a 3*2 array is illustrated, it is understood that this array can be expanded or contracted as necessary in view of the following teachings. In the illustrated embodiment, the product term cells 2011-2013 of the first column are commonly coupled to receive the PBITL1 and PBITR1 signals. Similarly, the product term cells 2021-2023 of the second column are commonly coupled to receive the PBITL2 and PBITR2 signals. Each row of product term cells 2011-2021, 2012-2022, and 2013-2023 is coupled to receive a corresponding control gate signal CG1, CG2 and CG3, respectively.

Array 200 is programmed on a row-by-row basis in the described embodiment. While the first row of product term cells 2011-2021 is being programmed in the manner described above, control gate signals CG2 and CG3 for the other rows are all held at a voltage of about [1/2] VPP, such that the associated product term cells 2012-2022 and 2013-2023 are not subjected to programming conditions.

After the first row of product term cells 2011-2021 has been programmed, the CG1 signal is raised back to a voltage of [1/2] VPP. The second row of product term cells 2012-2022 is then programmed in the manner described above, with control gate signals CG1 and CG3 being held at a voltage of about [1/2] VPP, such that the associated product term cells 2011-2021 and 2013-2023 are not subjected to programming conditions.

After the second row of product term cells 2012-2022 has been programmed, the CG2 signal is raised back to a voltage of [1/2] VPP. The third row of product term cells 2013-2023 is then programmed in the manner described above. After the third row of product term cells has been programmed, the PBITL1-PBITL2 and PBITR1-PBITR2 signals are brought to the ground supply voltage. Then, the control gate signals CG1-CG3 are brought to the ground supply voltage. The PBITL1-PBITL2 and PBITR1-PBITR2 signals must be at the ground supply voltage before all of the control gate signals CG1-CG3 are returned to the ground supply voltage, to avoid mis-programming any cells.

After product term cell 100 has been erased/programmed, this product term cell is configured for normal operation. During normal operation, the VDD terminal and the control gate terminal CG are coupled to receive the VDD supply voltage (e.g., 1.8 Volts) and the PBITL and PBITR terminals are coupled to receive the ground supply voltage. Under these conditions, the logical operation of product term cell 100 depends on the programmed or erased state of the floating gates FGL and FGR. Table 1 below defines the manner in which the output signal OUT is provided in response to the input signals ZIN/ZIN# for the various programmed/erased states of floating gates FGL and FGR.
In general, by setting the states of the floating gates appropriately, the product term cell 100 may be configured to provide one of a constant logic high, a constant logic low, ZIN, or ZIN# as the output signal OUT.

TABLE 1
FGL         FGR         ZIN   ZIN#   OUT
ERASED      ERASED      0     1      1 (=1)
ERASED      ERASED      1     0      1 (=1)
ERASED      PROGRAMMED  0     1      1 (=ZIN#)
ERASED      PROGRAMMED  1     0      0 (=ZIN#)
PROGRAMMED  ERASED      0     1      0 (=ZIN)
PROGRAMMED  ERASED      1     0      1 (=ZIN)
PROGRAMMED  PROGRAMMED  0     1      0 (=0)
PROGRAMMED  PROGRAMMED  1     0      0 (=0)

When the floating gates FGL and FGR are both erased, both p-channel transistors 101 and 103 are in a conductive state, and both n-channel transistors 106 and 108 are in a non-conductive state. Since ZIN# is the inverse of ZIN, the ZIN/ZIN# signals always turn on one of the p-channel transistors 102 or 104, thereby coupling the output terminal 130 to the VDD terminal. As a result, a logic high ("1") output signal OUT is provided, regardless of the state of the ZIN/ZIN# signals.

Conversely, when the floating gates FGL and FGR are both programmed, both n-channel transistors 106 and 108 are in a conductive state, and both p-channel transistors 101 and 103 are in a non-conductive state. Again, since ZIN# is the inverse of ZIN, the ZIN/ZIN# signals always turn on one of the n-channel transistors 105 or 107, thereby coupling the output terminal 130 to the ground voltage supply terminal. As a result, a logic low ("0") output signal OUT is provided, regardless of the state of the ZIN/ZIN# signals.

When the left floating gate FGL is erased and the right floating gate FGR is programmed, p-channel transistor 101 and n-channel transistor 106 are each in a conductive state, and p-channel transistor 103 and n-channel transistor 108 are each in a non-conductive state. In this case, the product term cell is responsive to the ZIN signal. Thus, if the ZIN signal has a logic low state, then p-channel transistors 101 and 102 are both in a conductive state (and n-channel transistor 105 is in a non-conductive state), such that the output terminal 130 is pulled up to the VDD supply voltage. Conversely, if the ZIN signal has a logic high state, then n-channel transistors 105 and 106 are both in a conductive state (and p-channel transistor 102 is in a non-conductive state), such that the output terminal 130 is pulled down to the ground supply voltage. Thus, the output signal OUT is equal to the inverse of the ZIN signal (or ZIN#).

When the right floating gate FGR is erased and the left floating gate FGL is programmed, p-channel transistor 103 and n-channel transistor 108 are each in a conductive state, and p-channel transistor 101 and n-channel transistor 106 are each in a non-conductive state. In this case, the product term cell is responsive to the ZIN# signal. Thus, if the ZIN# signal has a logic low state, then p-channel transistors 103 and 104 are both in a conductive state (and n-channel transistor 107 is in a non-conductive state), such that the output terminal 130 is pulled up to the VDD supply voltage. Conversely, if the ZIN# signal has a logic high state, then n-channel transistors 107 and 108 are both in a conductive state (and p-channel transistor 104 is in a non-conductive state), such that the output terminal 130 is pulled down to the ground supply voltage. Thus, the output signal OUT is equal to the inverse of the ZIN# signal (or ZIN).
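The behavior summarized in Table 1 reduces to a small combinational function of the two floating gate states and the ZIN input. The following is a minimal sketch of that function; the names and boolean modeling are illustrative, not from the patent.

```python
# Behavioral model of Table 1. An ERASED floating gate enables the
# p-channel pull-up path of its leg; a PROGRAMMED one enables the
# n-channel pull-down path.

def pterm_out(fgl_erased: bool, fgr_erased: bool, zin: int) -> int:
    zin_b = 1 - zin  # ZIN# is the complement of ZIN
    # Pull-up legs: p101 (on when FGL erased) in series with p102 (on when
    # ZIN=0), and p103 (FGR erased) in series with p104 (on when ZIN#=0).
    pull_up = (fgl_erased and zin == 0) or (fgr_erased and zin_b == 0)
    # Pull-down legs: n105 (on when ZIN=1) in series with n106 (FGR
    # programmed), and n107 (on when ZIN#=1) in series with n108 (FGL
    # programmed).
    pull_down = ((not fgr_erased) and zin == 1) or ((not fgl_erased) and zin_b == 1)
    assert pull_up != pull_down  # complementary ZIN/ZIN# select exactly one path
    return 1 if pull_up else 0

# Reproduces Table 1: both erased -> constant 1; both programmed ->
# constant 0; FGL erased / FGR programmed -> ZIN#; the converse -> ZIN.
for fgl in (True, False):
    for fgr in (True, False):
        print(fgl, fgr, [pterm_out(fgl, fgr, z) for z in (0, 1)])
```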
FIG. 3 is a circuit diagram of a symmetrical product term cell 300 in accordance with another embodiment of the present invention. Because product term cell 300 (FIG. 3) is similar to product term cell 100 (FIG. 1), similar elements in FIGS. 1 and 3 are labeled with similar reference numbers. Thus, product term cell 300 includes high-voltage p-channel transistors 101-104, n-channel transistors 105-108, capacitors 111-112, and tunnel oxide capacitor elements 121-122. In addition, product term cell 300 includes left bit line 301, right bit line 302 and n-channel read transistor 303. The sources of n-channel transistors 106 and 108 are coupled to left bit line 301 and right bit line 302, respectively (rather than to the ground supply voltage terminal, as in pterm cell 100 of FIG. 1). N-channel read transistor 303 has a source coupled to the ground voltage supply terminal, a drain coupled to the output terminal 130 and a gate coupled to receive a read control signal (READ).

Product term cell 300 is erased and programmed in the same manner as product term cell 100. During erase and program operations, the LBIT and RBIT signals on left bit line 301 and right bit line 302 are held at the ground supply voltage, and the READ control signal is held at a logic low state, thereby turning off read transistor 303.

After product term cell 300 has been erased/programmed, this product term cell is configured for normal operation. During normal operation, the VDD terminal and the control gate terminal CG are coupled to receive the VDD supply voltage (e.g., 1.8 Volts) and the PBITL, PBITR, LBIT, RBIT and READ signals are held at the ground supply voltage. Under these conditions, the logical operation of product term cell 300 depends on the programmed or erased state of floating gates FGL and FGR. Product term cell 300 operates in accordance with Table 1 above (i.e., in the same manner as product term cell 100 of FIG. 1).

In addition, product term cell 300 allows the state of its floating gates to be read. The programmed/erased states of floating gates FGL and FGR of product term cell 300 can be read as follows. The ZIN, ZIN#, READ and CG signals are held at the VDD supply voltage, and the PBITL and PBITR signals are held at the ground supply voltage. The logic high ZIN and ZIN# signals turn on n-channel transistors 105 and 107, and turn off p-channel transistors 102 and 104. The logic high READ signal turns on read transistor 303, such that the ground supply voltage is applied to the drains of n-channel transistors 106 and 108. If left floating gate FGL is programmed, then the VDD supply voltage applied to the control gate terminal CG will cause n-channel transistor 108 to be in a conductive state. As a result, the right bit line 302 will be pulled down to the ground supply voltage. Similarly, if right floating gate FGR is programmed, then the VDD supply voltage applied to the control gate terminal CG will cause n-channel transistor 106 to be in a conductive state. As a result, the left bit line 301 will be pulled down to the ground supply voltage.

In contrast, if left floating gate FGL is erased, then the VDD supply voltage applied to the control gate terminal CG will not be sufficient to turn on n-channel transistor 108. As a result, the right bit line 302 remains floating (i.e., is not pulled down to the ground supply voltage). If right floating gate FGR is erased, then the VDD supply voltage applied to the control gate terminal CG will not be sufficient to turn on n-channel transistor 106.
As a result, the left bit line 301 remains floating (i.e., is not pulled down to the ground supply voltage).

As is known in the art, conventional sense amplifier circuitry (not shown) may be coupled to the left and right bit lines 301 and 302 to sense the states of the LBIT and RBIT signals. A ground voltage state identifies a programmed state of the associated floating gate, and a floating state identifies an erased state of the associated floating gate. For example, the left and right bit lines 301 and 302 may be pre-charged, and held at the pre-charge voltage by associated half-latch circuits (not shown). A programmed floating gate will result in the associated bit line being pulled down to ground, while an erased floating gate will result in the associated bit line remaining pulled up to the pre-charge voltage.

Having the ZIN and ZIN# signals at the VDD supply voltage during the read operation turns off any possible current path through the p-channel transistors 101-104 of product term cell 300. One advantage of product term cell 300 is the ability to margin test the strength of the programmed floating gate n-channel transistors 106 and 108 by selecting different control gate voltages CG during a read operation. That is, the control gate voltage CG can be reduced during successive read operations to determine the minimum control gate voltage CG that will cause the sense amplifier to properly identify the programmed states of floating gate n-channel transistors 106 and 108.

FIG. 4 is a block diagram of a 2*2 array 400 of product term cells 4011-4012, 4021-4022, each of which is substantially identical to product term cell 300. Although a 2*2 array is illustrated, it is understood that this array can be expanded or contracted as necessary in view of the following teachings. In the illustrated embodiment, the product term cells 4011-4021 of the first column are commonly coupled to receive the PBITL1, PBITR1, LBIT1 and RBIT1 signals. Similarly, the product term cells 4012-4022 of the second column are commonly coupled to receive the PBITL2, PBITR2, LBIT2 and RBIT2 signals. Each row of product term cells 4011-4012 and 4021-4022 is coupled to receive a corresponding control gate signal CG1 and CG2, respectively, and a corresponding read control signal R1 and R2, respectively.

Array 400 is programmed on a row-by-row basis in the same manner described above for array 200. Note that the LBIT1-LBIT2, RBIT1-RBIT2 and R1-R2 signals are all held at the ground supply voltage during the programming operations.

Array 400 is also read on a row-by-row basis. For example, the first row of product term cells 4011-4012 is read as follows. The ZIN and ZIN# signals associated with each of product term cells 4011-4012 and the read control signal R1 are activated high (VDD) to read the first row. The ZIN and ZIN# signals associated with each of product term cells 4021-4022 and the read control signal R2 are deactivated low (ground), since the second row is not being read. The control gate signal CG1 is held at the VDD supply voltage. The control gate signal CG2 and the PBITL1-PBITL2 and PBITR1-PBITR2 signals are held at the ground supply voltage. Under these conditions, the contents of product term cells 4011 and 4012 can be read on bit lines LBIT1-RBIT1 and LBIT2-RBIT2 in the manner described above.
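The read operation described above can be summarized in a short behavioral sketch: with ZIN, ZIN#, READ and CG at VDD, a programmed floating gate pulls its bit line to ground, while an erased gate leaves the bit line at its pre-charge level. The function and the pre-charge model below are hypothetical illustrations, not the patent's circuitry.

```python
# Illustrative model of the FIG. 3 read operation. A programmed floating
# gate turns on its n-channel transistor and discharges the associated
# bit line; an erased gate leaves the bit line at the pre-charge voltage.

def read_cell(fgl_programmed: bool, fgr_programmed: bool):
    precharge = 1  # bit lines pre-charged and held by half-latch circuits
    lbit = 0 if fgr_programmed else precharge  # FGR drives n106 on LBIT
    rbit = 0 if fgl_programmed else precharge  # FGL drives n108 on RBIT
    return lbit, rbit

print(read_cell(fgl_programmed=True, fgr_programmed=False))  # (1, 0)
# Margin testing (lowering CG across successive reads to find the minimum
# voltage at which a programmed gate still discharges its bit line) is a
# device-level effect and is not modeled here.
```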
Although the invention has been described in connection with several embodiments, it is understood that this invention is not limited to the embodiments disclosed, but is capable of various modifications, which would be apparent to one of ordinary skill in the art. Thus, the present invention is only limited by the following claims. |
To provide microelectronic structures and methods of forming the same. SOLUTION: Embodiments of those methods include forming a nanowire device comprising a substrate including source/drain structures adjacent to spacers, and nanowire channel structures disposed between the spacers, where the nanowire channel structures are vertically stacked above each other. SELECTED DRAWING: Figure 1i |
What is claimed is: 1. An integrated circuit structure comprising: a substrate; a horizontal nanowire channel structure over the substrate; a gate electrode surrounding the horizontal nanowire channel structure, a portion of the gate electrode having an uppermost surface above the horizontal nanowire channel structure, and another portion of the gate electrode having a lowermost surface below the horizontal nanowire channel structure; a pair of sidewall spacers adjacent to the gate electrode along respective end faces of the gate electrode along a length of the gate electrode, a portion of each of the pair of sidewall spacers laterally adjacent to the portion of the gate electrode above the horizontal nanowire channel structure, and another portion of each of the pair of sidewall spacers laterally adjacent to the other portion of the gate electrode below the horizontal nanowire channel structure, each of the pair of sidewall spacers forming a unitary sidewall spacer; and a source/drain structure at each side of the horizontal nanowire channel structure, a portion of the source/drain structure laterally adjacent to and in contact with the portion of the sidewall spacer that is laterally adjacent to the portion of the gate electrode above the horizontal nanowire channel structure, the source/drain structure having a bottom surface below a bottommost surface of the sidewall spacer.

2. The integrated circuit structure of claim 1, wherein the horizontal nanowire channel structure is a silicon horizontal nanowire channel structure.

3. The integrated circuit structure of claim 1, wherein the horizontal nanowire channel structure is a silicon germanium horizontal nanowire channel structure.

4. The integrated circuit structure of claim 1, wherein the gate electrode comprises a gate dielectric material surrounding the horizontal nanowire channel structure and a metal gate surrounding the gate dielectric material.

5. The integrated circuit structure of claim 4, wherein the gate dielectric material comprises a high-k gate dielectric material.

6. The integrated circuit structure of claim 1, wherein the source/drain structure comprises epitaxial silicon germanium.

7. The integrated circuit structure of claim 1, wherein the source/drain structure has a bottom surface below a bottom surface of the horizontal nanowire channel structure.

8. The integrated circuit structure of claim 1, wherein the substrate is a silicon-on-insulator (SOI) substrate.

9. The integrated circuit structure of claim 1, wherein the substrate is a bulk silicon substrate.

10. The integrated circuit structure of claim 1, further comprising a first trench contact coupled to a first one of the source/drain structures and a second trench contact coupled to a second one of the source/drain structures.

11. The integrated circuit structure of claim 1, wherein the source/drain structure comprises p+ doped silicon germanium, and the integrated circuit structure further comprises a silicon epitaxial tip between the source/drain structure and the substrate.

12. The integrated circuit structure of claim 1, wherein the source/drain structure comprises n+ doped silicon, and the integrated circuit structure further comprises a silicon epitaxial tip between the source/drain structure and the substrate.
13. An integrated circuit structure comprising: a substrate; a horizontal silicon nanowire channel structure over the substrate; a PMOS gate electrode surrounding the horizontal silicon nanowire channel structure, a portion of the PMOS gate electrode having an uppermost surface above the horizontal silicon nanowire channel structure, and another portion of the PMOS gate electrode having a lowermost surface below the horizontal silicon nanowire channel structure; a pair of sidewall spacers adjacent to the PMOS gate electrode along respective end faces of the PMOS gate electrode along a length of the PMOS gate electrode, a portion of each of the pair of sidewall spacers laterally adjacent to the portion of the PMOS gate electrode above the horizontal silicon nanowire channel structure, and another portion of each of the pair of sidewall spacers laterally adjacent to the other portion of the PMOS gate electrode below the horizontal silicon nanowire channel structure, each of the pair of sidewall spacers forming a unitary sidewall spacer; and a p+ doped silicon germanium source/drain structure at each side of the horizontal silicon nanowire channel structure, a portion of the p+ doped silicon germanium source/drain structure laterally adjacent to and in contact with the portion of the sidewall spacer that is laterally adjacent to the portion of the PMOS gate electrode above the horizontal silicon nanowire channel structure, the p+ doped silicon germanium source/drain structure having a bottom surface below a bottommost surface of the sidewall spacer.

14. The integrated circuit structure of claim 13, wherein the PMOS gate electrode comprises a gate dielectric material surrounding the horizontal silicon nanowire channel structure and a metal gate surrounding the gate dielectric material.

15. The integrated circuit structure of claim 14, wherein the gate dielectric material comprises a high-k gate dielectric material.

16. The integrated circuit structure of claim 13, wherein the p+ doped silicon germanium source/drain structure is a p+ doped epitaxial silicon germanium source/drain structure.

17. The integrated circuit structure of claim 13, wherein the p+ doped silicon germanium source/drain structure has a bottom surface below a bottom surface of the horizontal silicon nanowire channel structure.

18. The integrated circuit structure of claim 13, wherein the substrate is a silicon-on-insulator (SOI) substrate.

19. The integrated circuit structure of claim 13, wherein the substrate is a bulk silicon substrate.

20. The integrated circuit structure of claim 13, further comprising a first trench contact coupled to a first one of the p+ doped silicon germanium source/drain structures and a second trench contact coupled to a second one of the p+ doped silicon germanium source/drain structures.

21. The integrated circuit structure of claim 13, further comprising a silicon epitaxial tip between the p+ doped silicon germanium source/drain structure and the substrate. |
Silicon and silicon germanium nanowire structures

The disclosed embodiments relate to silicon and silicon germanium nanowire structures.

Maintaining mobility enhancement and short channel control during scaling of microelectronic device dimensions beyond the 15 nm node poses challenges in device manufacturing. Nanowires used to fabricate devices provide improved short channel control. For example, a silicon germanium (SixGe1-x) nanowire channel structure (where x < 0.5) provides a mobility enhancement with a reasonable Eg, suitable for use in many conventional products that utilize higher voltage operation. Silicon germanium (SixGe1-x) nanowire channels (where x > 0.5) also bring enhanced mobility with a lower Eg (e.g., suitable for low voltage products in the mobile/handheld field).

Provided are a nanowire device and a method for manufacturing the same. In one aspect, a method of manufacturing a device includes forming epitaxial silicon germanium on a substrate, forming epitaxial silicon on the epitaxial silicon germanium, patterning the epitaxial silicon arranged on the epitaxial silicon germanium to form a plurality of fin structures, forming a plurality of spacers over the plurality of fin structures, removing a portion of the fin structure from a source/drain region of the substrate adjacent to the plurality of spacers, forming a source/drain structure on the source/drain region, and removing one of the epitaxial silicon layer and the epitaxial silicon germanium layer from the fin structure disposed between the spacers.

While the specification concludes with claims that particularly point out and distinctly claim particular embodiments, the advantages of the various embodiments may be more readily ascertained from the following description of the embodiments when read in conjunction with the accompanying drawings, in which:

FIGS. 1a to 5d are diagrams showing methods of forming structures according to embodiments. FIG. 6 is a diagram of a system according to an embodiment.
In the following detailed description, reference is made to the accompanying drawings that show, by way of illustration, specific embodiments that may be practiced. The embodiments are described in sufficient detail to enable one of ordinary skill in the art to practice the embodiments. As will be appreciated, the various embodiments, although different, are not necessarily mutually exclusive. For example, a particular feature, structure or characteristic described herein in connection with one embodiment may be used within other embodiments without departing from their spirit and scope. Also, as will be appreciated, the position or configuration of individual elements within each disclosed embodiment may be changed without departing from their spirit and scope.
The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the embodiments is defined only by the appended claims, appropriately interpreted, along with the full range of equivalents to which the claims are entitled. In the drawings, identical or similar features are referred to by like reference numerals throughout the several views.

Methods of forming and using microelectronic structures, such as nanowire device structures, and related structures are described. These methods and structures may include forming a nanowire device having a substrate having source/drain structures and a plurality of nanowires between the source/drain structures. The plurality of nanowire channel structures are stacked on top of each other. Various embodiments included herein enable mobility enhancement and short channel control even during device dimension scaling beyond the 15 nm node. Embodiments further enable enhanced channel isolation from the substrate, reduced capacitance associated with spacer-gap separation, and scaling of vertical architectures with nanowires.

FIGS. 1a-1n illustrate embodiments for forming microelectronic structures, such as forming nanowire device structures. FIG. 1a shows a substrate 100. In one embodiment, the substrate 100 may include a bulk silicon substrate 100. In other embodiments, the substrate 100 may comprise a silicon-on-insulator (SOI) substrate, but may include any suitable type of substrate material. In one embodiment, a first silicon germanium material 102 may be grown on the substrate 100 by epitaxial growth. In one embodiment, a first silicon material 104 may be epitaxially grown on the first epitaxial silicon germanium 102. A second layer of silicon germanium 102' may be formed on the first silicon layer 104, and a second layer of silicon 104' may be formed on the second silicon germanium 102'. In another embodiment, the number of alternating epitaxial silicon germanium layers 102/epitaxial silicon layers 104 formed on the substrate may vary depending on the particular application. In another embodiment, the order of layers may be reversed, so that alternating layers of epitaxial silicon 104 and epitaxial silicon germanium 102 are formed on substrate 100.

In one embodiment, the silicon germanium/silicon/silicon germanium/silicon epitaxial stack 120 may be patterned using conventional patterning/etching techniques (FIG. 1b). For example, the stack structure 120 can be etched in a trench etch process, such as in a shallow trench isolation (STI) process, to form a plurality of trenches 101 in the substrate 100, thereby forming a plurality of fin structures 107. Each of the fin structures 107 so formed may be separated from the others by an oxide 103 that may be formed in the trenches 101.

In one embodiment, the fin structure 107 may form a dual channel portion of a gate all around (GAA) nanowire device. The number of channels in the device will depend on the number of layers in the fin structure 107. Fin structure 107 may comprise a nanowire structure. Spacers 106 may be formed orthogonally to fin structures 107, formed on and across the fin structures 107 (FIG. 1c). In one embodiment, the spacer 106 may comprise a material that is selective to the material of the fin structure 107 in the process.

In one embodiment, the gate electrode material 108 may be formed within/between the spacers 106 and around the portion of the fin structure 107 located between the spacers 106.
In one embodiment, a gate electrode material may be formed around a portion of fin structure 107, with spacers 106 on either side of the gate. The gate 108 can include polysilicon in some examples and can also comprise a sacrificial gate structure 108. In one embodiment, a portion of fin structure 107 may be removed from substrate 100 to expose source/drain regions 109 (FIG. 1d). In one embodiment, the portion of fin structure 107 may be etched by a dry etching process to expose source/drain regions 109. In one embodiment, the source/drain regions 109 can be etched to terminate on either the substrate 100 or the bottom wire (102 or 104). Depending on the specific device needs, an undercut wet or dry etch process may be used to remove additional material in the gate 108/tip overlap region.

In one embodiment, epitaxial growth techniques are used to grow a silicon or silicon germanium source/drain structure 110 in the source/drain regions 109 (FIG. 1e), joined to the portions of the fin structure 107 located between the spacers 106. In one embodiment, the epitaxial source/drain structure 110 may be n-doped silicon for NMOS devices and p-doped silicon/silicon germanium for PMOS devices, depending on the device type of the particular application. Doping can be introduced in an epitaxial process, by ion implantation, by plasma doping, by solid source doping, or by other methods known in the art.

The tip-source/drain junction can be engineered by combining epitaxial layers doped with different dopant species and concentrations. For example, if a silicon germanium source/drain is used to strain the silicon channel of a PMOS device, a silicon etch stop layer/tip 112 is first grown before growing the source/drain silicon germanium epitaxial structure 110. The growth of Si prevents the source/drain regions 109 from being etched during the subsequent silicon germanium etch (FIG. 1f). In other words, the PMOS tip material needs to be resistant to the subsequent silicon germanium etching process.

An inter-layer dielectric (ILD) (not shown) may be formed on the substrate 100 over the source/drain structure 110, the gate 108 and the spacers 106. In one embodiment, chemical mechanical polishing (CMP) may open the top of the sacrificial poly gate 108. The sacrificial gate electrode material 108 can then be removed from between the spacer materials 106 (FIG. 1g). FIG. 1h shows the region between the spacer materials 106, with the fin structure 107 located between two spacers (only one shown).

In one embodiment, the silicon layers 104, 104' may be selectively removed from the fin structure 107, leaving a gap 111 between the silicon germanium channels 102, 102' (FIG. 1i). In one embodiment, the silicon layers 104, 104' can be selectively etched using a wet etch that removes the silicon 104, 104' without etching the silicon germanium nanowire structures 102, 102'. Etching chemistries such as, for example, aqueous hydroxide chemistries, including ammonium hydroxide and potassium hydroxide, can be used to selectively etch silicon. In another embodiment, the silicon germanium layers 102, 102' may be selectively removed from the fin structure 107 and sidewalls, leaving a gap 113 between the silicon channel layers 104, 104' (FIG. 1j). In one embodiment, the silicon germanium 102, 102' can be selectively etched using a wet etch that removes silicon germanium without etching the silicon nanowire channels 104, 104'. Etching chemistries such as carboxylic acid/nitric acid/HF chemistry and citric acid/nitric acid/HF can be used to selectively etch silicon germanium.
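Conceptually, the release etch is a selection over the alternating stack: one material is removed and the surviving layers become the nanowire channels. The following is a toy sketch of that selection, assuming a simple list model of the fin stack; it is illustrative only and is not a process simulation.

```python
# Toy model of the release etch on an alternating epitaxial fin stack.
# Removing Si leaves SiGe nanowires (FIG. 1i); removing SiGe leaves Si
# nanowires (FIG. 1j). The list model is an assumption for illustration.

# Bottom-to-top fin stack from FIG. 1a: SiGe 102 / Si 104 / SiGe 102' / Si 104'
fin_stack = ["SiGe", "Si", "SiGe", "Si"]

def release_etch(stack, remove):
    """Selectively remove one material; the survivors become the wires."""
    assert remove in ("Si", "SiGe")
    return [layer for layer in stack if layer != remove]

print(release_etch(fin_stack, remove="Si"))    # ['SiGe', 'SiGe'] -> SiGe channels
print(release_etch(fin_stack, remove="SiGe"))  # ['Si', 'Si']     -> Si channels
```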
Therefore, between the spacers 106, either the silicon layer is removed from the fin structure 107 to form the silicon germanium nanowires 102, 102', or the silicon germanium layer is removed from the fin structure 107 to form the silicon channel nanowire 104, 104' structure. In one embodiment, both silicon channel material and silicon germanium channel material may be present on the same wafer, in the same die, or in the same circuit, e.g., NMOS Si and PMOS SiGe in an inverter structure. In one embodiment having NMOS Si and PMOS SiGe in the same circuit, both the Si channel thickness (the SiGe inter-layer spacing) and the SiGe channel thickness (the Si inter-layer spacing) can be selected to improve circuit performance and/or circuit minimum operating voltage. In one embodiment, the etching process may vary the number of wires between different devices in the same circuit to improve circuit performance and/or circuit minimum operating voltage.

A gate dielectric material 115 may be formed surrounding the channel region between the spacers 106. In one embodiment, the gate dielectric material 115 can comprise a high-k gate dielectric material that can have a dielectric constant greater than about 4. In one embodiment, the gate dielectric material 115 may be conformally formed on all sides of the silicon nanowire structures 104, 104' between the spacers 106 (FIG. 1k). In another embodiment, the gate dielectric material 115 may be formed on the sides of the silicon germanium nanowire structures 102, 102' between the spacers 106 (not shown).

A gate electrode material 117 may then be formed around the gate dielectric material 115 (FIG. 1l). The gate electrode material 117 may include a metal gate electrode material, such as pure metals and alloys of Ti, W, Ta and Al (including nitrides such as TaN and TiN), rare earths such as Er and Dy, or noble metals such as Pt. The gap 113 between the silicon nanowire structures 104, 104' may be filled with the gate electrode material 117. In another embodiment, the gap 111 between the silicon germanium nanowire structures 102, 102' may be filled with gate electrode material 117 (not shown). In one embodiment, further standard CMOS processing may be performed on the substrate 100 to fabricate CMOS devices according to embodiments herein.

In one embodiment, NMOS and/or PMOS devices may be formed. FIG. 1m illustrates an NMOS device (a single silicon channel is depicted) that may be formed; depending on the particular application, the trench contact 119 may be coupled to the source/drain structure 110. The source/drain structure 110 may be n+ doped silicon in some examples. A silicon epitaxial tip 112, which may be n-doped in some examples, may be disposed between the source/drain structure 110 and the substrate 100. Gate electrode material 117 may surround the silicon nanowire channel 104.

FIG. 1n shows a PMOS device (a single silicon channel 104 is depicted), where the trench contact 119 may be coupled to the source/drain structure 110, depending on the particular application. The source/drain structure 110 may be p+ doped silicon germanium in some examples. A silicon epitaxial tip/etch stopper 120, which may be p-doped in some examples, may be disposed between the source/drain structure 110 and the substrate 100.
Gate electrode material 117 may surround the silicon channel 104, which in some cases may be a strained silicon channel 104.

In some cases, devices that use silicon germanium channel structures (such as those shown in FIG. 1i) may be advantageous in having high carrier mobility due to silicon germanium properties. In one embodiment, the gate-all-around silicon germanium channel device process may be similar to the gate-all-around silicon channel device process, except that the epitaxial layer stack 120 is inverted; that is, the silicon material 104 is first formed on the substrate and the silicon germanium is then formed on the silicon. Since the underlying silicon will be removed selectively with respect to silicon germanium, the source/drain will comprise silicon germanium, and the etch stopper under the sacrificial gate electrode material will also comprise silicon germanium, so that etching of these regions is avoided.

Embodiments herein enable fabrication of self-aligned gate all around (GAA) type silicon channel and silicon germanium channel transistor structures and devices. Nanowire channel devices exhibit lower subthreshold leakage due to reduced short channel effects (SCE). The implementation of a GAA SiGe high mobility channel device suppresses the SCE effect, for example. GAA devices can maximize electrostatic gating of the channel.

In one embodiment, devices manufactured according to the various embodiments herein may include enhanced substrate isolation. Referring to FIG. 2a, the bottom nanowire channel 202 disposed on the substrate 200 may behave as a shorted tri-gate with poor sub-fin leakage in some examples. One solution involves forming the device on a silicon-on-insulator (SOI) substrate 201 (FIGS. 2b-2c), where the source/drain structure 210 and the nanowire structure 204, instead of being placed on the bulk silicon substrate 200 (as shown in FIG. 2a), are placed on an insulator material 203, e.g., an oxide material 203. By using the SOI substrate 201, the geometry of the bottom nanowire 204 can be defined by the silicon germanium etch of the nanowire fin structure (e.g., similar to nanowire fin structure 107 of FIG. 1b) and by etching the bottom oxide before forming the gate electrode material (e.g., the gate electrode material 117 of FIG. 1l).

For example, FIG. 2d shows etching the dielectric to form one nanowire and one tri-gate, and FIG. 2e shows etching the dielectric to form a device with two nanowires. In another embodiment, enhanced substrate isolation may be achieved by forming fin spacers 211 (FIG. 2f) on the sidewalls of fins 207 after trench etching. A second trench etch 214 is then performed to expose a bottom fin region 216, and the silicon portion of the bottom fin region 216 may be oxidized (FIG. 2g). The bottom nanowires of the device are therefore placed on oxide to improve substrate isolation. In another embodiment, fin spacers 211 may be formed on the sidewalls of fins 207 after trench etching and filling (FIG. 2h). The bottom silicon portion 216 of the fin 207 may be oxidized after STI recess formation/oxide fill to enhance substrate isolation (FIG. 2i). Therefore, the bottom nanowires of the device can be placed on oxide to improve substrate isolation.

In one embodiment, removal of the silicon regions of the nanowire stack 307 may leave voids 311 within the spacer 306 (FIG. 3a).
After application of a gate, e.g., a metal gate structure (e.g., similar to gate structure 117 of FIG. 1l), the voids 311 can create a very high capacitance parasitic region between the subsequently formed gate and the source/drain structure 310. In one embodiment, this potential parasitic region can be avoided by the use of an epitaxial oxide 302, rather than silicon, in the starting stack (which may or may not require reorientation of the silicon substrate 300) (FIG. 3b). In one embodiment, alternating layers of epitaxial semiconductor material 304 may be formed on epitaxial oxide material 302, which may be formed on the substrate 300.

For example, by epitaxially growing Gd2O3 on (111) silicon and then growing silicon germanium on top of the Gd2O3, it is possible to build various multi-layer stacks that can be etched into a fin structure 307, which can later form silicon germanium wires on the substrate. In another embodiment, cerium oxide may be grown on (111) silicon (or alternatively on (100) silicon) to form a multilayer stack. When using an oxide/semiconductor/oxide stack, there is the option of not etching, partially etching, or completely etching the oxide material 302, 302' of the fin structure 307 (FIGS. 3c-3e, respectively). The no-etch option (FIG. 3c) solves the capacitance problem, but at the cost of worse confinement. The partial etching option (FIG. 3d) improves the confinement, but at the cost of some parasitic capacitance.

In another embodiment, the voids 311 in the spacer adjacent to the fin structure (shown in FIG. 3a) may be filled from the source/drain 310 side of the spacer 306, prior to the epitaxial growth of the source/drain, with a spacer-like material 312 or a second spacer 312 comprising a low-k material 312 (FIG. 3f). The material of the second spacer 312 may include, but is not limited to, materials such as SiON, SiN, SiC, SiOBN, and low-k oxide. In one embodiment, the etching of stack 307 may remove all of the silicon, so that the gate replacement etch (removal of sacrificial gate electrode material) only strikes the oxide. In another embodiment, the voids 311 may be filled with spacer-like material 312 or low-k material 312 (prior to gate deposition) from the gate side (FIG. 3g). Such embodiments include performing a full or partial etch of the stack 307 (illustrated as a full etch).

In another embodiment, the voids 311 may be minimized by utilizing an anisotropic etch of silicon to limit silicon etch-out during the removal process from the stack 307. For example, a (110) wafer can be used with channels along <111>. This structure will have a low etch rate (111) surface facing the source/drain 310, thus limiting undercutting. The wet etch selected here should also etch SiGe more slowly than Si, so as not to leave behind only partially etched SiGe nanowires after removing all of the silicon between the SiGe nanowires. Therefore, anisotropic etching is used to minimize lateral etching within the spacer 306, and the etch chemistry can be made highly selective to silicon but not silicon germanium.

In one embodiment, nanowires may be utilized to achieve vertical architectural scaling. In one embodiment, silicon germanium or silicon is epitaxially grown from the substrate into trenches, after which the fin structures may be separated into nanowires, which may be stacked one above the other, using, for example, an oxidation or etching process. In one embodiment, oxidation of the entire wire may be performed, with the source/drain regions starting as layers of SiGe (or Si) and oxide.
Alternating oxide layers 404 and nitride layers 402 (more layers may be used to form more wires) may be formed on the silicon substrate 401 (FIG. 4a). These oxide and nitride layers may be patterned and etched to form trenches 405 and backside regions 406, where the trenches 405 expose the silicon material of substrate 401 (FIG. 4b). Silicon germanium (or silicon) 407 can be epitaxially grown in the trenches 405 and in the backside regions, and polished (FIG. 4c). A hard mask 408 is formed on the silicon germanium (or silicon) 407 and patterned and etched to expose the sides of the fins 410 (FIG. 4d). In one embodiment, the fin structure may be formed by removing the portions of the alternating layers of nitride and oxide that are not covered by the hard mask.

Fins 410 may be oxidized to define nanowires (FIG. 4e). The oxidized portions of fins 410 may be removed, forming nanowires 412 that function as the channel structures of the device and are formed substantially across the entire structure. In one embodiment, a first nanowire 412 can be vertically disposed above a second nanowire 412'. In another embodiment, the wires may be defined only within the channel region (FIGS. 4g-4j). A second mask material 413, e.g., SiC, may be formed around the fin structure 410. The second mask material 413 can be selective to oxides and nitrides. Fin structure 410 may comprise alternating oxide/nitride layers, for example, as in FIG. 4d. A trench 414 may be formed where the gate electrode material will subsequently be formed, to define a gate region adjacent to the fin structure 410, exposing a portion of the fin structure 410 (FIG. 4h). Oxidation is performed to define the nanowires (FIG. 4i), and the wires can be further defined by removing the oxidized portion of the fin structure (FIG. 4j). Thus, wires are formed in the gate trench 414 but not in the source/drain regions.

A spacer process can be used to reduce lithographic concerns regarding patterning nanowires. The sides of the Si or SiGe fins 410 may be exposed by etching the nitride surrounding the fins 410 (the tops may be covered by a hard mask 421 such as SiC). A spacer 420 is then formed by a combination of isotropic deposition and anisotropic etching (FIG. 4k). The spacer 420 is used to mask the etching that exposes the sidewalls of the fins 410. The spacer 420 may then be removed.

In another embodiment, an anisotropic wet etch separates the fins into the wires shown in FIG. 4l. First, the oxide can be etched away using a wet etch. The SiGe or Si of the exposed fins 410 is then etched using an Si or SiGe anisotropic wet etch. Nanowires can be formed due to the crystal orientation dependence of the etching rate. After both etches are performed, in one embodiment, the nanowires can have hexagonal shapes. After removal of the oxide, Si or SiGe fins may be formed (FIG. 4m).

Vertical scaling with nanowires can also be achieved. Because nanowire size can be limited to about 7 nm by phonon scattering, long-term scaling of such devices can be limited. One solution is to build the devices vertically, positioning one of the N and P channels in the bottom wire and the other channel in the top wire. In one embodiment, an N+ substrate can be used for Vss. In another embodiment, the top and bottom contacts can be offset. In another embodiment, wires having left and right wings can be formed. FIG. 5a shows an inverter manufactured using an N+ substrate 500 for Vss and a gate 501.
It should be noted that this requires a tall contact 512 (TCN) connecting the N and P nanowire channels 514, a short top TCN 510 coupled to one of the N and P nanowire channels 514, and a substrate plug 508/bottom TCN to couple one of the N and P nanowire channels 514 to the substrate 500. FIG. 5b shows the top TCN 510 and the bottom TCN 508 offset from one another. FIG. 5c shows N and P nanowires with left and right wing nanostructures 514. FIG. 5d shows an inverter wired using the left and right wing nanostructures 514.

Nanowires with GAA offer improvements over GAA-type non-nanowire structures as well as fin and tri-gate structures. The use of lateral nanowires with a replacement metal gate (RMG) style gate all-around process is a logical extension of the roadmap from planar with RMG to fin with RMG. Gate all-around (GAA) nanowire structures offer the potential for improved short channel control over GAA non-nanowire structures and fins. Improved isolation of the bottom wire of a silicon or silicon germanium nanowire structure from the substrate can be achieved by embodiments herein.

Density scaling may be enabled even when phonon scattering limits the minimum nanowire size to about 7 nm. For both silicon and silicon germanium, lateral nanowire structures can be combined with replacement metal gate architectures for the wires and with manufacturing techniques improved from those developed for tri-gate structures. Vertical architecture scaling with nanowires is thereby enabled. Nanowires can be used to build circuits in the transistor layer itself.

FIG. 6 illustrates a computer system according to one embodiment. The system 600 includes a processor 610, a memory device 620, a memory controller 630, a graphics controller 640, an input/output (I/O) controller 650, a display 652, a keyboard 654, a pointing device 656, and peripherals 658. In some embodiments, these may all be communicatively coupled to each other via a bus 660. Processor 610 may be a general purpose processor or an application specific integrated circuit (ASIC). The I/O controller 650 may include a communication module for wired or wireless communication. The memory device 620 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, or a combination of these memory devices. Therefore, in some embodiments, memory device 620 in system 600 need not include a DRAM device.

One or more of the components shown in system 600 may include one or more nanowire devices according to various embodiments included herein. For example, processor 610, or memory device 620, or at least a portion of I/O controller 650, or a combination of these components, may be included in an integrated circuit package that includes at least one embodiment of the structures described herein.

These elements perform conventional functions known in the art. In particular, memory device 620 may be used, in some examples, to provide long-term storage of executable instructions for a method of forming structures according to some embodiments, and in other embodiments may be used to store, on a shorter-term basis, executable instructions of a method of forming structures according to embodiments during execution by the processor 610. The instructions may also be stored on, or otherwise associated with, a machine-accessible medium communicatively coupled to the system, such as a compact disc read only memory (CD-ROM), a digital versatile disc (DVD), a floppy disk, a carrier wave, and/or other propagated signal.
In one embodiment, the memory device 620 may provide the processor 610 with executable instructions for execution.

The system 600 may include a computer (e.g., desktop, laptop, handheld, server, Web appliance, router, etc.), a wireless communication device (e.g., cellular phone, cordless phone, pager, personal digital assistant, etc.), a computer-related peripheral (e.g., printer, scanner, monitor, etc.), an entertainment device (e.g., television, radio, stereo, tape player and compact disc player, video cassette recorder, video camera, digital camera, MP3 (MPEG Audio Layer 3) player, video gaming device, watch, etc.), and the like.

While the above description details certain steps and materials that may be used in the embodiments, many modifications and substitutions can be made, as will be appreciated by those skilled in the art. Such alterations, modifications, substitutions and additions are to be considered within the spirit and scope of the embodiments as defined by the appended claims. Also, as will be appreciated, various microelectronic structures, such as transistor devices, are well known in the art. Therefore, the figures presented here show only those parts of a typical microelectronic structure that are relevant to the implementation of the present embodiments. Thus, the embodiments are not limited to the structures described herein. |
Multi-level cell (MLC) non-volatile (NV) media may be programmed using internal buffer reuse to reduce the need for external buffering. The internal buffer is located on the same die as the NV media to be programmed, and is used together with volatile memory to store the data to be programmed. The internal buffer holds read and program data for the NV media. Programming the NV media includes temporarily storing a first partial page in the buffer for programming, reading a second partial page from the NV media into volatile memory, storing the second partial page in the buffer, and programming the NV media using the first partial page and the second partial page. |
1. A device comprising: non-volatile (NV) media with a multi-level cell array on a media die; volatile memory on the media die to store data to program the NV media; and a buffer on the media die to buffer read and program data for the NV media; wherein a program of the NV media is to temporarily store a first partial page in the buffer for programming, read a second partial page from the NV media into the volatile memory, store the second partial page in the buffer, and program the NV media using the first partial page and the second partial page.
2. The apparatus of claim 1, wherein the program of the NV media includes garbage collection to move data from a source medium to the NV media.
3. The apparatus of claim 2, wherein the source medium comprises single-level cell (SLC) flash memory.
4. The apparatus of claim 2, wherein the source medium comprises one of triple-level cell (TLC) flash memory, quad-level cell (QLC) flash memory, or three-dimensional cross-point (3DXP) memory.
5. The apparatus of claim 2, wherein the source medium comprises dynamic random access memory (DRAM).
6. The apparatus of claim 1, wherein reading the second partial page into the volatile memory comprises performing error checking and correction (ECC) on the second partial page.
7. The apparatus of claim 1, wherein programming the NV media comprises, in response to loading of a new address for programming in the NV media, flushing the first partial page and the second partial page from the buffer to the NV media.
8. The apparatus of claim 1, wherein programming the NV media comprises flushing the first partial page and the second partial page from the buffer to the NV media in response to a refresh command.
9. The apparatus of claim 1, wherein the buffer includes a read/write register for the NV media.
10. The apparatus of claim 1, wherein the NV media comprises quad-level cell (QLC) flash memory.
11. The apparatus of claim 1, wherein the NV media comprises one of triple-level cell (TLC) flash memory, five-level cell (5LC) flash memory, or three-dimensional cross-point (3DXP) memory.
12. The apparatus of claim 1, wherein the volatile memory comprises static random access memory (SRAM).
13. A computing device comprising: a host processor; and a solid state drive (SSD) coupled to the host processor, the SSD comprising: non-volatile (NV) media with a multi-level cell array on a media die; volatile memory on the media die to store data to program the NV media; and a buffer on the media die to buffer read and program data for the NV media; wherein a program of the NV media is to temporarily store a first partial page in the buffer for programming, read a second partial page from the NV media into the volatile memory, store the second partial page in the buffer, and program the NV media using the first partial page and the second partial page.
14. The computing device of claim 13, wherein the program of the NV media includes garbage collection to move data from a single-level cell (SLC) flash cache to the NV media.
15. The computing device of claim 13, wherein reading the second partial page into the volatile memory comprises performing error checking and correction (ECC) on the second partial page.
16. The computing device of claim 13, wherein programming the NV media comprises, in response to loading of a new address for programming in the NV media, flushing the first partial page and the second partial page from the buffer to the NV media.
17. The computing device of claim 13, wherein the buffer comprises a scratch buffer for the NV media.
18. The computing device of claim 13, wherein the NV media comprises quad-level cell (QLC) flash memory.
19. The computing device of claim 13, wherein the volatile memory comprises static random access memory (SRAM).
20. The computing device of claim 13, further comprising: a display communicatively coupled to the host processor; a network interface communicatively coupled to the host processor; or a battery to power the computing device.
21. A method for data storage, comprising: storing, in volatile memory on a media die, data to program non-volatile (NV) media; buffering read and program data for the NV media using a buffer on the media die; and programming the NV media, including temporarily storing a first partial page in the buffer for programming, reading a second partial page from the NV media into the volatile memory, storing the second partial page in the buffer, and programming the NV media using the first partial page and the second partial page.
22. The method of claim 21, wherein programming the NV media comprises, in response to loading of a new address for programming in the NV media, flushing the first partial page and the second partial page from the buffer to the NV media.
23. The method of claim 21, wherein programming the NV media comprises flushing the first partial page and the second partial page from the buffer to the NV media in response to a refresh command.
24. The method of claim 21, wherein the buffer includes a read/write register for the NV media. |
Multi-level cell programming without DRAM using NAND buffers

Technical Field

The description relates generally to non-volatile memory, and more specifically to programming multi-level cell non-volatile memory.

Background

Non-volatile storage devices, or non-volatile memory, are used for mass storage in computing devices and gaming systems. A non-volatile memory device refers to a memory device that maintains state even if power to the device is interrupted. The storage capacity of such devices keeps increasing as demand increases, and the increase in capacity is achieved by increasing data density, with multi-level cells replacing single-level cells (SLCs). Multi-level cells can store 2, 3, 4, or even 5 bits per cell.

Programming of multi-level cells is slower than SLC. Multi-level cells are usually programmed with the aid of volatile memory. However, adding a volatile memory device for programming the non-volatile memory device increases the cost of the non-volatile memory device. For example, QLC (quad-level cell) programming involves programming four pages of data that are traditionally cached in a DRAM (dynamic random access memory) device, which for a 2TB (terabyte) drive might be up to 4MB (megabytes).

There are DRAM-less memory devices for triple-level cells (TLCs) with on-die volatile buffers of approximately 256KB (kilobytes) to 384KB. However, QLC devices need significantly larger volatile buffers, about 1-4MB of memory, to program using volatile buffers. Including 1-4MB of volatile memory is prohibitive in terms of cost and die area.

As an alternative to providing buffers on a non-volatile die, a system can use memory space in the system's main memory as a program data cache. Using system memory as a data cache requires accessing the cache through the host memory bus, which incurs a significant performance penalty for sharing host bandwidth. Additionally, running garbage collection routines over the host memory bus is not feasible, because the communication bus transitions to a low power state during the time garbage collection is performed. Neither using high-capacity on-die volatile storage nor using the host memory bus to access main memory is a scalable solution for non-volatile devices of increasing capacity.

Brief Description of the Drawings

The following description includes a discussion of the drawings, with illustrations given by way of example of embodiments. The drawings are to be understood by way of example and not limitation. As used herein, reference to one or more examples should be understood to describe a particular feature, structure, or characteristic included in at least one embodiment of the invention. Phrases such as "in one example" or "in an alternative example" appearing herein provide examples of embodiments of the invention, and are not necessarily all referring to the same embodiment.
However, they are not necessarily mutually exclusive.

FIG. 1 is a block diagram of an example of a system with a solid state drive.

FIG. 2 is a block diagram of an example of a non-volatile die with multi-stage programming.

FIG. 3 is a block diagram of an example of a non-volatile die with SLC and QLC memory devices.

FIG. 4 is a swim-lane diagram of an example of a multi-stage program operation for a multi-level cell non-volatile memory.

FIG. 5 is a flowchart of an example of a process for programming a multi-level cell non-volatile memory.

FIG. 6A is a block diagram of an example of a hardware view of a system having a solid state drive (SSD) with a non-volatile array having internal buffers for multi-stage program operations.

FIG. 6B is a block diagram of an example of a logical view of a system having a solid state drive (SSD) with a non-volatile array having internal buffers for multi-stage program operations.

FIG. 7 is a block diagram of an example of a computing system in which a non-volatile array with internal buffers for multi-stage program operations can be implemented.

FIG. 8 is a block diagram of an example of a mobile device in which a non-volatile array with internal buffers for multi-stage program operations can be implemented.

Descriptions of certain details and implementations follow, including a description of the figures, which may depict some or all of the examples described below, as well as non-limiting descriptions of other potential implementations.

Detailed Description

As described herein, multi-level cell (MLC) non-volatile (NV) media can be programmed with internal buffer reuse to reduce the need for external buffering. An internal buffer is located on the same die as the NV media to be programmed, and is used along with volatile memory to store the data to be programmed. The internal buffers hold read and program data for the NV media. Programming the NV media includes staging a first partial page in a buffer for programming, reading a second partial page from the NV media into volatile memory, storing the second partial page in the buffer, and programming the NV media using the first partial page and the second partial page.

Programming using the described internal buffers provides a scalable solution that does not require additional volatile memory space, whether on-die memory such as SRAM (static random access memory) or off-die memory such as DRAM (dynamic random access memory), and does not negatively impact performance. The approach is scalable because programming can use available internal buffer space that is repurposed across different program operations. Programming that uses internal buffers to hold write data can thus be applied to SSDs (solid state drives) without DRAM.

In one example, the internal buffer enables programming of DRAM-less QLC NAND SSDs, even though QLC requires an additional programming stage (QLC has program stages A and B, while TLC (triple-level cell) has program stage A only). TLC NAND SSD garbage collection can be performed with approximately 256KB to 384KB of ASIC SRAM buffers, and when the internal buffers are properly utilized, this amount is also sufficient for QLC SSDs with quad-plane NAND dies and a 4-channel controller.

As a specific example, consider a storage device such as an SSD that uses QLC (quad-level cell) NAND. NAND-based non-volatile memory is often referred to as flash memory. QLC flash memory includes internal latches or registers that operate as internal buffers to move data in and out of the non-volatile QLC memory arrays.
NAND internal operations are usually performed using internal registers. In one example, firmware in the media controller may repurpose the internal registers for system purposes, to hold data required to perform programming of the NAND flash array. In one example, the SSD firmware uses the internal buffers to perform QLC programming and garbage collection. There is no power penalty for using the internal buffers when programming QLC NAND flash during idle-time garbage collection.

This programming enables a DRAM-free solution with a lower SRAM footprint on an ASIC (application-specific integrated circuit) controller for the flash memory. The solution reduces the cost and power consumption of the systems in which it is deployed, including hybrid SSDs utilizing QLC NV media and 3DXP (three-dimensional cross-point) write buffer media.

FIG. 1 is a block diagram of an example of a system with a solid state drive. System 100 includes host 110 coupled to solid state drive (SSD) 120. Host 110 represents a computing system platform that stores data in SSD 120. SSD 120 represents a storage device for system 100. The computing system platform may be, for example, a laptop computer, gaming system, tablet or other handheld system, or other computing system.

Host 110 includes processor 112, which represents a host processor or main processor for a computing device of system 100. Processor 112 may be any type of processor, such as a central processing unit (CPU), system-on-chip (SOC), graphics processing unit (GPU), or other processor or controller that performs operations that trigger access to storage resources on SSD 120.

Host 110 includes interface 114, which represents an interface to access SSD 120. Interface 114 may include hardware to communicate with SSD 120, such as signal lines, drivers, receivers, or other hardware. SSD 120 includes a host interface 122 to communicate with host 110. In one example, interface 114 and host interface 122 may communicate via the Non-Volatile Memory Express (NVMe) standard. The NVMe standard defines a register-level interface for host software to communicate with SSDs via Peripheral Component Interconnect Express (PCIe), a high-speed serial computer expansion bus. The NVM Express standard is available at www.nvmexpress.org. The PCIe standard is available at pcisig.com.

In one example, host 110 includes controller 116, representing a host-side controller to manage host access to SSD 120. Controller 116 may manage interface 114 to enable host 110 to communicate with SSD 120. Controller 116 receives requests for data stored on SSD 120 from processor 112 or another component of host 110. A request may be a read request to access data at a particular location, or a write or program request to send data to SSD 120 for storage.

In one example, SSD 120 includes controller 140, representing a storage-side controller that manages host interface 122 and generates internal operations in response to requests from host 110. Controller 140 is a controller for the SSD device itself, and can control access to NVM (non-volatile memory) die 150 and volatile memory 160. In one example, SSD 120 may include volatile memory 160 as an internal cache for program or write operations, to improve programming time between buffer 130 and NVM die 150. With the programming operations described herein, volatile memory 160 may be omitted from SSD 120.
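Before continuing with FIG. 1, the buffer-reuse idea can be made concrete with a short behavioral sketch in C. This is an illustration only, not vendor firmware: the register count, page size, and every function name here are invented for the sketch, and nvm_program_wl() merely stands in for the array program operation.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_BYTES 2048   /* illustrative page size */
#define NUM_REGS   4      /* assumed number of on-die data-cache registers */

/* Model of the on-die program registers: the only staging storage
 * used for programming, standing in for an external DRAM cache. */
static uint8_t pdc[NUM_REGS][PAGE_BYTES];

/* Stage one page of write data into internal register 'reg'. */
static void pdc_stage(int reg, const uint8_t *page)
{
    memcpy(pdc[reg], page, PAGE_BYTES);
}

/* Placeholder for the array program pulse that commits the first
 * 'n' staged registers to word line 'wl'. */
static void nvm_program_wl(unsigned wl, int n)
{
    (void)wl; (void)n;  /* a real device issues a program command here */
}

/* First-pass program: two pages staged entirely on-die, so no
 * off-die buffering is needed for the transfer. */
static void program_first_pass(unsigned wl,
                               const uint8_t *lower, const uint8_t *upper)
{
    pdc_stage(0, lower);
    pdc_stage(1, upper);
    nvm_program_wl(wl, 2);
}
```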
If SSD 120 includes volatile memory 160, controller 140 may include volatile memory controls 142 to manage access to the volatile memory device. Controller 140 includes NVM controls 144 to manage access to NVM die 150.

In one example, SSD 120 includes buffer 130 as a write buffer or write cache to buffer write data sent to SSD 120. In one example, buffer 130 may represent a read buffer used to store frequently accessed data in a storage medium with fast access. The storage capacity of buffer 130 is smaller than that of NVM die 150, but its access time is faster.

In one example, buffer 130 is an area on NVM die 150. For example, NVM die 150 may include a large QLC storage array as primary storage and a smaller SLC storage array as a cache. Data can first be written to buffer 130 or the SLC area for faster write times than writing directly to QLC or another multi-level cell area, thereby improving the write time for SSD 120. The data can then be moved to the multi-level cell region through a garbage collection operation, which refers to the background processing that moves data between the single-level cell region and the multi-level cell region.

Controller 140 represents off-die controls for NVM die 150. NVM die 150 may include an on-die controller, separate from controller 140, to manage operations within the NVM die. Controller 140 may queue and process commands for NVM die 150 (e.g., read, write or program, or erase commands for NVM die 150) and may process read and write commands for volatile memory 160.

SSD 120 includes one or more NVM dies 150. Details of a single die are illustrated in system 100. In one example, NVM die 150 is a multi-plane die with different memory channels to increase data access bandwidth.

NVM die 150 includes NVM array 152, representing the storage media of SSD 120. In one example, NVM die 150 includes buffers 156, which may represent registers or flip-flops within NVM die 150 that interface with NVM array 152 as buffers. NVM array 152 may be implemented with any storage medium that writes data in multi-level cells, such as TLC, QLC, 5LC, or 3DXP, as long as the NVM array has internal buffers with which to implement the described program operations. Using internal buffer 156, write operations are self-contained within NVM die 150, and no cache resources external to the NVM die are required to perform data transfers and programming of the MLC cells.

In one example, NVM die 150 includes SRAM (static random access memory) 154 as a volatile memory buffer within the die to implement caching for program operations. For a program operation, SRAM 154 may hold blocks of data to be written to NVM array 152, and buffer 156 provides space for a small number of pages to be read from or written to NVM array 152 at the appropriate times. Thus, SRAM 154 and buffer 156 together provide buffering for program operations, where buffer 156 provides a place to hold data for the program operation while SRAM 154 is loaded with other data to complete a full write.

Known QLC SSDs have an "SLC first" architecture in which host data is written to NAND in SLC mode and then rewritten to NAND in QLC mode during garbage collection background processing. In one example, QLC NAND has a 2-step or 2-stage programming sequence, with the first stage writing 4 states and the second stage writing 16 states.
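The 4-state/16-state split can be illustrated with a small encoding sketch. This is a simplified assumption, not the actual cell mapping: production NAND uses Gray-coded, vendor-specific level assignments, while the sketch below uses a plain binary mapping of page bits to levels.

```c
#include <stdint.h>

/* First pass: 2 page bits (lower page, upper page) select one of
 * 4 coarse threshold-voltage levels per cell. */
static uint8_t first_pass_level(int lp_bit, int up_bit)
{
    return (uint8_t)((up_bit << 1) | lp_bit);               /* 0..3 */
}

/* Second pass: 2 more page bits (extra page, top page) refine each
 * coarse level into one of 4 sublevels, giving 16 final states per
 * cell. The coarse level must be known before this pass, which is
 * why the second stage pre-reads the first-stage data. */
static uint8_t second_pass_level(uint8_t coarse, int xp_bit, int tp_bit)
{
    return (uint8_t)((coarse << 2) | (tp_bit << 1) | xp_bit); /* 0..15 */
}
```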
In one example, the second-stage write involves pre-reading the first-stage data from the NAND.

Garbage collection in SSD 120 involves moving valid data from a source memory (such as buffer 130, which may be another NAND block or other medium), collating the valid data, and writing the valid data to a destination NAND block of NVM array 152. Buffer 156 may be a read/write buffer. For normal read or write operations, the contents of the data registers or storage medium of buffer 156 may simply be overwritten. In one example, NVM die 150 is configured to hold data through subsequent read operations until the data is programmed into NVM array 152.

In one example, collating the valid data includes writing and leaving a first portion of the data in buffer 156 while reading other portions of the data into a volatile medium such as SRAM 154. In one example, reading into SRAM 154 or other volatile memory may include performing ECC (error checking and correction) on the data. Thus, error correction of the data may be performed prior to writing the data to NVM array 152. The other portions of the data may then also be written to buffer 156, after which all of the data may be written from buffer 156 to NVM array 152.

Buffer 130 may be a source medium or source memory device that provides data to be written to NVM array 152 for garbage collection purposes. In one example, buffer 130 includes SLC flash memory. In one example, buffer 130 includes 3DXP. In one example, programming using buffer 156 may be performed between other memory media and NVM array 152. For example, the source medium for programming may be a volatile DRAM buffer (e.g., if volatile memory 160 is used in SSD 120), a non-volatile medium such as TLC, a different QLC array, 5LC (five-level cell), or other media.

FIG. 2 is a block diagram of an example of a non-volatile die with multi-stage programming. System 200 represents a non-volatile die according to an example of NVM die 150 of system 100. System 200 includes array 230, buffer 210, and buffer 220.

In one example, array 230 is a NAND array that can operate in SLC mode or MLC mode. In SLC mode, array 230 stores a single bit of data (a binary bit) in each memory cell. In multi-level cell mode, the array stores multiple bits of data per cell by storing the data as one of multiple voltage levels in the cell. In one example, array 230 is another non-volatile medium that can store data in binary mode or multi-level cell mode. Array 230 is the destination storage for program or write operations in MLC mode. In one example, array 230 may also be a source in SLC mode.

Buffer 210 represents a volatile memory buffer. In one example, buffer 210 is SRAM on the same die as array 230. In one example, buffer 210 is a DRAM array. Buffer 210 may be a buffer interface to a storage medium off the die of array 230.

Buffer 220 represents a read/write buffer for array 230. For read operations, buffer 220 stores data to be read out into buffer 210. For write operations, buffer 220 acts as a scratch buffer to load data for programming array 230. In one example, buffer 220 holds write data through subsequent array read operations and SLC/QLC mode switching operations.

Although not specifically shown, array 230 stores data as blocks, where a block includes multiple pages of data. A page of data includes multiple bits of data and associated metadata. For example, an array may include 2K (2048) blocks, each block having 64 pages of 2K bytes of data (and 64 bytes of metadata). Reads and writes are addressed in pages and blocks, as sketched below.
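As a hedged illustration of that geometry (the block, page, and byte counts below are simply the example numbers above, not a real part's parameters), a flat page index decomposes into block and page coordinates as follows:

```c
#include <stdint.h>

/* Example geometry from the text; illustrative only. */
#define BLOCKS_PER_ARRAY 2048u
#define PAGES_PER_BLOCK    64u
#define PAGE_DATA_BYTES  2048u
#define PAGE_META_BYTES    64u

/* Decompose a flat page index into (block, page-in-block). */
static inline uint32_t block_of(uint32_t flat_page)
{
    return flat_page / PAGES_PER_BLOCK;
}

static inline uint32_t page_of(uint32_t flat_page)
{
    return flat_page % PAGES_PER_BLOCK;
}
/* e.g., flat page 130 maps to block 2, page 2 of that block. */
```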
In one example, system 200 implements the QLC write algorithm and page order as follows. In the first write phase or program phase, a controller (not specifically shown) writes 2 pages to WL (word line) N. In the second write phase or program phase, the controller writes another 2 pages of data. In one example, the second stage writes the data to a different word line, in a staircase fashion. Interleaving of word lines can increase the read window budget for QLC devices, allowing for faster programming. The time delay required between sequential program operations on different word lines is shorter than the time delay required between sequential program operations on the same word line. Therefore, by writing in a staircase fashion, writing one WL in the first program stage and a different word line in the second stage, the overall program operation is faster.

In one example, the controller writes 2 pages of WL N-2 during the second phase. In one example, the controller may write 2 pages of WL N-1. It has been observed that writing to addresses that are more than one hop away can improve write performance and reduce errors. Thus, programming may, for example, write to WL N, WL N-2, WL N+1, WL N-1, and so on. In one example, the second stage of programming includes reading back the 2 pages that were previously programmed into the target word line during the first stage. In one example, the system performs ECC on the read data, then re-sends the corrected data along with the 2 new pages to program.

As a specific example of writing data to array 230, consider a garbage collection process that writes data, e.g., from an SLC NAND device to a QLC NAND device. Thus, array 230 may represent a QLC NAND device receiving data from an SLC NAND device (not shown).

Operations may begin with loading data from the SLC NAND into buffer 210, as shown by load data 242. In one example, buffer 210 represents a static data cache (SDC), a buffer external to array 230. In one example, buffer 210 temporarily stores the first portion of the data in buffer 220 (staging 244). For writes from SLC to QLC, in one example, the first portion is 2 pages of data. The first portion of data may be referred to as a first partial page, indicating that the portion is only part of the full set of pages to be written to array 230. The system 200 diagram accordingly shows two lines, each representing a page of data: one between buffer 210 and register 0 of buffer 220, and one between buffer 210 and register 1 of buffer 220. In one example, pages are loaded into buffer 220 one at a time.

In one example, buffer 220 represents a programmable data cache (PDC), where the N registers (registers[0:(N-1)]) represent hardware buffers associated with array 230 for programming the array. In one example, the media controller controls the flush 246 of the first portion of data from register 0 and register 1 into array 230. The sequence from data load to first flush 246 may be considered the first phase of a program operation for array 230.

In one example, additional data, i.e., a second portion of data (e.g., another 2 pages of data) or a second partial page, may then be loaded into buffer 210. As shown by buffer data 260, the second portion of the data may be temporarily stored in buffer 220.
For this part of the operation, although there is no requirement to load data into sequential registers or address locations of buffer 220, the data may be loaded into register 2 and another register (e.g., register 3).

In one example, the controller reads the first portion of data back from array 230, as indicated by read 248. Read 248 reads data from array 230 into registers of buffer 220. In one example, system 200 may provide read data to a device external to system 200, as indicated by read 250. Read 250 reads data from buffer 210 out to another part of the computer system of which system 200 is a part.

In one example, the controller maintains the data in register 2 and the other register while the other data is being read. In one example, system 200 performs ECC on the read data and stores the data, or the corrected data, in buffer 210. Buffer 210 may then temporarily store the read data back into buffer 220. In one example, the data is illustrated as being buffered into register 0 and register 1, as shown by buffer data 260. It should be understood that other registers or address spaces of buffer 220 may also be used to temporarily store the updated first portion of data.

In one example, the controller flushes both portions of data to array 230, as indicated by flush 262. Regardless of where the data is stored within buffer 220, it is to be understood that the controller manages reading data into and storing data in buffer 220 to perform programming without the use of an external buffer.

In one example, the controller resets buffer 220 in response to new data being loaded into buffer 210. The loading of new data can be controlled to retain the data needed for programming array 230. In one example, if system 200 programs in a staircase fashion, the data to be retained may include data for different word lines. In one example, when a new address is loaded into buffer 210, the controller performs a flush to signal that operation is moving to a different portion of data.

In one example, loading the upper page triggers a data flush. In one example, the controller may issue a flush in response to the last page being loaded. Thus, in response to a new address being loaded for programming, system 200 can flush both portions of data to program the entire data set into array 230. In one example, system 200 supports an explicit refresh command or instruction from the off-die media controller. Thus, in response to the refresh command, system 200 can flush both portions of data to program the entire data set into array 230.
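The FIG. 2 register traffic just walked through (staging, read 248, ECC, and flush 262) can be condensed into one behavioral sketch. All names are invented stand-ins, and the sketch models buffers as plain arrays; in the described flow the read-back data passes through buffer 210 (volatile memory) for ECC before being restaged.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_BYTES 2048

static uint8_t pdc[4][PAGE_BYTES];       /* buffer 220: registers 0..3     */
static uint8_t sdc[2][PAGE_BYTES];       /* buffer 210: volatile staging   */
static uint8_t wl_pass1[2][PAGE_BYTES];  /* pass-1 pages already in array 230 */

/* Placeholders for the ECC engine and the array program pulse. */
static void ecc_correct(uint8_t *page) { (void)page; }
static void array_program(unsigned wl) { (void)wl;  }

/* Second program pass for word line wl: hold the two new pages in
 * registers 2..3 while the pass-1 pages are re-read, corrected in
 * volatile memory, and restaged in registers 0..1, then flush all
 * four pages to the array in one program operation (flush 262). */
static void second_pass(unsigned wl, const uint8_t *xp, const uint8_t *tp)
{
    memcpy(pdc[2], xp, PAGE_BYTES);
    memcpy(pdc[3], tp, PAGE_BYTES);

    for (int i = 0; i < 2; i++) {
        memcpy(sdc[i], wl_pass1[i], PAGE_BYTES);  /* read 248            */
        ecc_correct(sdc[i]);                      /* correct in buffer 210 */
        memcpy(pdc[i], sdc[i], PAGE_BYTES);       /* restage (buffer data 260) */
    }
    array_program(wl);                            /* flush 262 */
}
```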
FIG. 3 is a block diagram of an example of a non-volatile die with SLC and QLC memory devices. NVM die 300 represents a non-volatile die according to an example of NVM die 150 of system 100, or an example of system 200.

NVM die 300 includes buffer 302, which represents an internal buffer of NVM QLC block 320. NVM die 300 utilizes buffer 302 to program NVM QLC block 320 without buffering or caching data external to the NVM die to organize the QLC write data. Buffer 302 enables NVM die 300 to write data from NVM SLC block 310 to NVM QLC block 320, e.g., as part of an internal copy from a block configured in SLC mode to a block configured in QLC mode.

NVM SLC block 310 represents a block configured in SLC mode, and NVM QLC block 320 represents a block configured in QLC mode. NVM SLC pages 312 represent one or more pages of SLC data. NVM QLC pages 322 represent one or more pages of QLC data. Four NVM SLC blocks 310 may be stored in one NVM QLC block 320, and four NVM SLC pages 312 may be stored in one NVM QLC page 322.

Internal controller 304 represents a controller or media controller internal to NVM die 300. In one example, internal controller 304 manages the transfer of data from NVM SLC block 310 to NVM QLC block 320. In one example, internal controller 304 executes firmware that controls garbage collection from NVM SLC block 310 to NVM QLC block 320. Internal controller 304 may control the transfer of data into and out of buffer 302, including holding write data in the buffer to stage it for use together with other data in writing to or programming NVM QLC block 320. In one example, internal controller 304 manages the copying of four selected NVM SLC blocks 310 within NVM die 300, including temporarily storing the data in buffer 302 before writing the data to NVM QLC block 320.

FIG. 4 is a swim-lane diagram of an example of a multi-stage program operation for a multi-level cell non-volatile memory. Programming 400 illustrates multi-stage operations that may be performed by an example of system 200 or NVM die 300. The illustrated programming may be an example of 2-bit programming for two pages, a first pass or first stage of programming, followed by another 2-bit programming for two other pages, a second pass or second stage of programming. The programming can be controlled and operated by an internal controller on the NVM die to be programmed, which includes both the source and destination media.

In one example, a QLC SSD has a front-end SLC write buffer, and all host data goes through the SLC buffer before being rewritten to the QLC. In one example, the SLC to QLC movement can be designed in a FIFO fashion (first in, first out). In one example, the SLC to QLC movement can be designed in a LIFO fashion (last in, first out). In one example, the SLC to QLC movement can be designed in another manner determined to be efficient.

The following description assumes a FIFO approach, and assumes that the smallest atomic unit is a 4-page host write to SLC and a 4-page background move from SLC to QLC. Thus, the controller may include a write pointer to the top of the stack and a read pointer to the bottom of the stack. Operations therefore write to block N, and read and move data from block 0.
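That FIFO bookkeeping, a write pointer at the top of the stack and a read pointer at the bottom, might look like the following sketch; the block count and helper names here are assumptions for illustration, not part of the described design.

```c
#include <stdbool.h>

#define SLC_BLOCKS 64u   /* assumed depth of the SLC write buffer */

struct slc_fifo {
    unsigned wr;         /* next SLC block for host writes (top of stack)    */
    unsigned rd;         /* next SLC block to garbage-collect (bottom, block 0 first) */
};

/* Host write lands at the top of the stack: block N. */
static unsigned fifo_push(struct slc_fifo *f)
{
    return f->wr++ % SLC_BLOCKS;
}

/* Background SLC-to-QLC move drains from the bottom of the stack. */
static bool fifo_pop(struct slc_fifo *f, unsigned *blk)
{
    if (f->rd == f->wr)
        return false;    /* nothing staged to move */
    *blk = f->rd++ % SLC_BLOCKS;
    return true;
}
```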
Programming 400 illustrates operations for a source medium and controller (identified as firmware, or FW), SRAM as a volatile buffer, and the NVM medium. In one example, the NVM medium has a source mode. In one example, the source mode is SLC mode, where data is first written to SLC as a write buffer. In other examples, host writes to a source mode involve storing host write data in a different source NVM medium for transfer to the QLC mode media, such as QLC to QLC, TLC to QLC, or two-level cell to QLC (two-level cell is sometimes abbreviated MLC, although MLC is used more generally herein for any cell that stores more than one bit of data).

In one example, at 402, the host writes 4 pages. At 404, programming performs a source mode write (WR) to block (BLK) N of the source medium. At 406, programming performs a source mode read (RD) from block (BLK) 0. It should be understood that when garbage collection writes from source BLK 0 to QLC mode BLK N, the write occurs to the highest address of the stack and the read transfer occurs from the lowest address of the stack.

At 408, a read from source mode is specified to read page 1 into SRAM. At 410, the SRAM stores page 1. At 412, the SRAM stages page 1 for writing, and the NVM medium loads page 1 into the internal buffer. At 414, programming performs a source mode read (RD) from block (BLK) 0. At 416, a read from source mode is specified to read page 2 into SRAM. At 418, the SRAM stores page 2. At 420, the SRAM stages page 2 for writing, and the NVM medium loads page 2 into the internal buffer.

In one example, at 422, the firmware (FW) triggers a program from the internal buffer to the QLC mode media. At 424, programming performs a QLC mode first-phase write to WL N. The program writes pages LP and UP of WL N of the QLC mode media.

At 426, programming performs a source mode read (RD) from block (BLK) 0. At 428, a read from source mode is specified to read page 3 into SRAM. At 430, the SRAM stores page 3. At 432, the SRAM stages page 3 for writing, and the NVM medium loads page 3 into the internal buffer. At 434, programming performs a source mode read (RD) from block (BLK) 0. At 436, a read from source mode is specified to read page 4 into SRAM. At 438, the SRAM stores page 4. At 440, the SRAM stages page 4 for writing, and the NVM medium loads page 4 into the internal buffer.

In one example, the final programming of the QLC media is done with pages 3 and 4 already loaded into the internal buffer for a different word line, and with the data of pages 1 and 2 read back from the QLC media. The pages may be pages for different word lines, including pages that are staged and maintained in the internal buffer pending second-stage programming.

At 442, programming performs a pre-program read (PRE-RD) from the QLC mode of WL N-2. At 444, a read is specified to read the first staged page 1 of WL N-2 into SRAM. At 446, the SRAM stores the first staged page 1 (identified as page 1a). At 448, the SRAM stages page 1a for writing, and the NVM medium loads page 1a into the internal buffer. At 450, programming performs a QLC mode pre-program read (PRE-RD) from WL N-2. At 452, a read is specified to read the first staged page 2 of WL N-2 into SRAM. At 454, the SRAM stores the first staged page 2 (identified as page 2a). At 456, the SRAM stages page 2a for writing, and the NVM medium loads page 2a into the internal buffer.

In one example, at 458, the firmware (FW) triggers a program from the internal buffer to the QLC mode media. At 460, programming performs a QLC mode second-phase write to WL N-2. The program writes all of the pages of WL N-2 of the QLC mode media: LP, UP, XP, and TP.

FIG. 5 is a flowchart of an example of a process for programming a multi-level cell non-volatile memory. Process 500 illustrates an example of a process for programming an NVM multi-level cell. In one example, at 502, the NVM die receives multiple pages of data from the host for a program operation. In one example, at 504, the NVM die reads and stages the pages, respectively, for programming to the destination NVM medium.

At 508, the NVM die controller may stage a page for writing in an internal buffer of the NVM die destination medium. If the controller is not ready to program the NV medium, the "no" branch is taken at 510; the controller can identify the next page to read from the source medium at 512, and return to 506 to read the next page into volatile memory. In one example, the controller determines whether to program the NV medium based on whether a refresh trigger or a program trigger has been received. A program trigger can be the loading of a new address for writing. A program trigger may be the receipt of a command indicating a program operation.

If the controller is to program the NV media, the "yes" branch is taken at 510. In one example, at 514, the controller determines whether there are more pages to program during this programming pass. If there are more pages to program, the "yes" branch is taken at 516; at 518, the controller may increment the write scratch buffer and return to the program operations. There may be more pages to program if there is another scratch buffer to fill. The controller may then identify the next page to read at 512, and return to 506 to read the next page into volatile memory.

In one example, if there are no more pages of data to write to the NV media, the "no" branch is taken at 516; the entire set of pages is buffered in the internal buffer and the controller is ready to program the NV media. At 520, the media controller may program the NV media with these pages of data.
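Process 500's loop, reading a page into volatile memory, staging it in an internal register, and programming once a full set is staged, can be summarized in a hedged sketch. slc_read(), pdc_stage(), and qlc_program() are invented placeholders for the hardware paths described above, and the 4-page group matches the atomic unit assumed for the FIFO.

```c
#include <stdint.h>

#define PAGE_BYTES   2048
#define PAGES_PER_WL 4      /* LP, UP, XP, TP for one QLC word line */

/* Placeholders for the data paths in programming 400. */
static void slc_read(unsigned blk, unsigned page, uint8_t *sram) /* with ECC */
{ (void)blk; (void)page; (void)sram; }
static void pdc_stage(int reg, const uint8_t *page)
{ (void)reg; (void)page; }
static void qlc_program(unsigned wl, int pass)
{ (void)wl; (void)pass; }

/* Move one word line's worth of data from SLC block 'src' to QLC.
 * Pass 1 programs 2 pages on WL n; pass 2 later programs 4 pages on
 * WL n-2, following the staircase order (assumes n >= 2). */
static void gc_move(unsigned src, unsigned n)
{
    uint8_t sram[PAGE_BYTES];

    for (unsigned pg = 0; pg < 2; pg++) {        /* pages 1 and 2 */
        slc_read(src, pg, sram);
        pdc_stage((int)pg, sram);
    }
    qlc_program(n, 1);                           /* first-pass program */

    for (unsigned pg = 2; pg < PAGES_PER_WL; pg++) {  /* pages 3 and 4 */
        slc_read(src, pg, sram);
        pdc_stage((int)pg, sram);
    }
    qlc_program(n - 2, 2);                       /* second pass, on WL n-2 */
}
```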
FIG. 6A is a block diagram of an example of a hardware view of a system having a solid state drive (SSD) with a non-volatile array having internal buffers for multi-stage program operations. System 602 represents components of a non-volatile memory system that can implement multi-stage program operations according to programming 400. System 602 may include an NVM die according to an example of system 200 or an example of NVM die 300.

System 602 includes SSD 620 coupled to host 610. Host 610 represents the host hardware platform connected to SSD 620. Host 610 includes a CPU (central processing unit) 612 or other processor as a host processor or host processor device. CPU 612 represents any host processor that generates requests to access data stored on SSD 620, to read data or to write data to storage. Such a processor may include a single-core or multi-core processor, a main processor for a computing device, a graphics processor, a peripheral processor, or a supplemental or auxiliary processor, or a combination thereof. CPU 612 may execute a host OS and other applications to cause system 602 to operate.

Host 610 includes chipset 614, representing hardware components that may be included in the connection between CPU 612 and SSD 620. For example, chipset 614 may include interconnect circuits and logic to enable access to SSD 620. Accordingly, host 610 may include a hardware platform drive interconnect to couple SSD 620 to host 610. Host 610 includes hardware to interconnect with the SSD. Likewise, SSD 620 includes corresponding hardware to interconnect with host 610.

Host 610 includes controller 616, representing a storage controller or memory controller on the host side to control access to SSD 620. In one example, controller 616 is included in chipset 614. In one example, controller 616 is included in CPU 612. Controller 616 may be referred to as an NV memory controller; it enables host 610 to schedule and organize commands to read data from and write data to SSD 620.

SSD 620 represents a solid state drive or other storage system or module that includes non-volatile (NV) media 630 to store data. SSD 620 includes an HW (hardware) interface 622, representing hardware components that interface with host 610.
For example, HW interface 622 may interface with one or more buses to implement a high-speed interface standard such as NVMe (Non-Volatile Memory Express) or PCIe (Peripheral Component Interconnect Express).

In one example, SSD 620 includes NV (non-volatile) media 630 as the primary storage for SSD 620. In one example, NV media 630 is or includes a block-addressable memory technology, such as NAND (not AND) or NOR (not OR). In one example, NV media 630 may include non-volatile block-addressable media, non-volatile byte-addressable media, or non-volatile media that may be either byte-addressable or block-addressable. In one example, the non-volatile medium stores data based on the resistance state of the memory cell or the phase of the memory cell. For example, NV media 630 may be or include a three-dimensional cross-point (3DXP) memory or storage array based on a chalcogenide phase change material (e.g., chalcogenide glass). In one example, the NV media may be or include multi-threshold level NAND flash, NOR flash, single or multi-level phase change memory (PCM), phase change memory with switch (PCMS), resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) incorporating memristor technology, or spin transfer torque (STT)-MRAM, or a combination of any of the above, or other memory. In one example, NV media 630 includes 3D NAND cells.

In one example, NV media 630 is implemented as multiple dies, illustrated as N dies, dies[0:(N-1)]. N can be any number of devices, and is typically a power of two. SSD 620 includes controller 640 to control access to NV media 630 through HW interface 622. Controller 640 represents the hardware and control logic within SSD 620 that performs control of the media. Controller 640 is internal to the non-volatile storage device or module, and is separate from controller 616 of host 610.

In one example, dies[0:(N-1)] include NV array 632. In one example, NV array 632 is a 3D memory array. NV array 632 includes an associated buffer 634, which represents an internal buffer for reading from and writing to NV array 632. In one example, according to any of the programming examples described, control of reading data into buffer 634 and of storing data from buffer 634 into NV array 632 enables programming of the NV media using minimal external resources. Programming 636 represents the control logic that implements this programming. In one example, programming 636 represents control logic implemented by a controller that manages the programming of the NV media.

FIG. 6B is a block diagram of an example of a logical view of a system having a solid state drive (SSD) with a non-volatile array having internal buffers for multi-stage program operations. System 604 illustrates a system with a non-volatile memory array according to the example of system 602 of FIG. 6A.

System 604 illustrates the logical layers of the host and SSD of the hardware platform of system 602. System 604 may represent example software and firmware components of system 602, as well as its physical components. In one example, host 650 provides an instance of host 610. In one example, SSD 660 provides an instance of SSD 620.

In one example, host 650 includes host OS 652, which represents a host operating system or software platform for the host. Host OS 652 may provide a software platform on which applications, services, agents, and/or other software executes on a processor.
File system 654 represents control logic for controlling access to the NV media. File system 654 can manage which addresses or memory locations are used to store which data. There are numerous known file systems, and file system 654 may implement a known file system or another proprietary system. In one example, file system 654 is part of host OS 652.

Storage device driver 656 represents one or more system-level modules that control the hardware of host 650. In one example, storage device driver 656 includes a software application to control the interface to SSD 660, and thus the hardware of SSD 660. Storage device driver 656 may provide a communication interface between the host and the SSD.

Controller 670 of SSD 660 includes firmware 674, which represents control software/firmware for the controller. In one example, controller 670 includes host interface 672, which represents an interface to host 650. In one example, controller 670 includes media interface 676, which represents an interface to NAND die 662. NAND die 662 represents a specific example of NV media, and includes an associated NAND array 664. NAND array 664 includes an array of memory cells.

Host interface 672 and media interface 676 represent controls that execute on the hardware of controller 670. It will be appreciated that controller 670 includes hardware to interface with host 650, which can be considered to be controlled by the host interface software/firmware. Likewise, it will be appreciated that controller 670 includes hardware to interface with NAND die 662. In one example, the code for host interface 672 can be part of firmware 674. In one example, the code for media interface 676 can be part of firmware 674.

In one example, controller 670 includes error control 680 to handle data errors in accessed data, and corner cases in terms of compliance with signaling and communication interfacing. Error control 680 may include implementations in hardware or firmware, or a combination of hardware and software.

In one example, NAND die 662 includes buffer 666, which represents an internal buffer for reading from and writing to NAND array 664. In one example, according to any of the programming examples described, control of reading data into buffer 666 and of storing data from buffer 666 into NAND array 664 enables programming of the NV media using minimal external resources. Programming 668 represents the control logic that implements this programming. In one example, programming 668 represents control logic implemented by a controller that manages the programming of the NV media.

FIG. 7 is a block diagram of an example of a computing system in which a non-volatile array with internal buffers for multi-stage program operations can be implemented. System 700 represents a computing device according to any of the examples herein, and may be a laptop computer, desktop computer, tablet computer, server, gaming or entertainment control system, embedded computing device, or other electronic device.

In one example, storage subsystem 780 includes storage 784 with NV array 790 to store code/data 786. In one example, NV array 790 includes an associated buffer 792. In one example, storage device 784 includes a controller (CTLR) 794, representing an on-die controller that utilizes buffer 792 to manage programming of NV array 790 so as to avoid external buffering of the data.
In one example, according to any of the programming examples described, controller 794 can control reading data into buffer 792 and storing data from buffer 792 into NV array 790 to perform programming using minimal external resources.

System 700 includes processor 710, which may include any type of microprocessor, central processing unit (CPU), graphics processing unit (GPU), processing core, or other processing hardware, or a combination thereof, to provide processing or execution of instructions for system 700. Processor 710 may be a host processor device. Processor 710 controls the overall operation of system 700, and may be or include one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application-specific integrated circuits (ASICs), programmable logic devices (PLDs), or a combination of such devices.

System 700 includes boot/configuration 716, representing storage to store boot code (e.g., basic input/output system (BIOS)), configuration settings, security hardware (e.g., trusted platform module (TPM)), or other system-level hardware that operates outside of a host OS. Boot/configuration 716 may include a non-volatile storage device, such as read-only memory (ROM), flash memory, or another memory device.

In one example, system 700 includes interface 712 coupled to processor 710, which may represent a higher speed interface or a high throughput interface for system components requiring higher bandwidth connections, such as memory subsystem 720 or graphics interface component 740. Interface 712 represents an interface circuit, which may be a standalone component or may be integrated onto the processor die. Interface 712 may be integrated as a circuit on the processor die or as a component of a system-on-chip. Graphics interface 740, if present, interfaces with graphics components to provide a visual display to a user of system 700. Graphics interface 740 may be a standalone component or may be integrated onto the processor die or system-on-chip. In one example, graphics interface 740 may drive a high definition (HD) display or an ultra high definition (UHD) display that provides an output to a user. In one example, the display may include a touchscreen display. In one example, graphics interface 740 generates a display based on data stored in memory 730, or based on operations executed by processor 710, or both.

Memory subsystem 720 represents the main memory of system 700, and provides storage for code to be executed by processor 710, or for data values to be used in executing a routine. Memory subsystem 720 may include one or more varieties of random access memory (RAM), such as DRAM, 3DXP (three-dimensional cross-point), other memory devices, or a combination of such devices. Memory 730 stores and hosts, among other things, an operating system (OS) 732 to provide a software platform for execution of instructions in system 700. Additionally, applications 734 may execute on the software platform of OS 732 from memory 730. Applications 734 represent programs that have their own operational logic to perform execution of one or more functions. Processes 736 represent agents or routines that provide auxiliary functions to OS 732, or to one or more applications 734, or a combination thereof. OS 732, applications 734, and processes 736 provide software logic to provide functions for system 700.
In one example, memory subsystem 720 includes memory controller 722, which is a memory controller to generate and issue commands to memory 730. It will be understood that memory controller 722 could be a physical part of processor 710 or a physical part of interface 712. For example, memory controller 722 can be an integrated memory controller, integrated onto a circuit with processor 710, such as on the processor die or a system-on-chip.

While not specifically illustrated, it will be understood that system 700 could include one or more buses or bus systems between devices, such as a memory bus, a graphics bus, an interface bus, or others. Buses or other signal lines can communicatively or electrically couple components together, or both. A bus can include physical communication lines, point-to-point connections, bridges, adapters, controllers, or other circuitry, or a combination. A bus can include, for example, one or more of a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or another bus, or a combination.

In one example, system 700 includes interface 714, which can be coupled to interface 712. Interface 714 can be a lower speed interface than interface 712. In one example, interface 714 represents an interface circuit, which can include standalone components and integrated circuitry. In one example, multiple user interface components or peripheral components, or both, couple to interface 714. Network interface 750 provides system 700 the ability to communicate with remote devices (e.g., servers or other computing devices) over one or more networks. Network interface 750 can include an Ethernet adapter, wireless interconnection components, cellular network interconnection components, USB (universal serial bus), or other wired or wireless standards-based or proprietary interfaces. Network interface 750 can exchange data with a remote device, which can include sending data stored in memory or receiving data to be stored in memory.

In one example, system 700 includes one or more input/output (I/O) interface(s) 760. I/O interface 760 can include one or more interface components through which a user interacts with system 700 (e.g., audio, alphanumeric, tactile/touch, or other interfacing). Peripheral interface 770 can include any hardware interface not specifically mentioned above. Peripherals refer generally to devices that connect subordinately to system 700. A subordinate connection is one where system 700 provides the software platform or hardware platform, or both, on which an operation executes and with which a user interacts.

In one example, system 700 includes storage subsystem 780 to store data in a non-volatile manner. In one example, in certain system implementations, at least certain components of storage subsystem 780 can overlap with components of memory subsystem 720. Storage subsystem 780 includes one or more storage devices 784, which can be or include any conventional medium for storing large amounts of data in a non-volatile manner, such as one or more magnetic disks, solid state disks, 3DXP, or optical discs, or a combination. Storage 784 holds code or instructions and data 786 in a persistent state (i.e., the value is retained despite interruption of power to system 700).
Although memory 730 is typically the execution or operating memory to provide instructions to processor 710, storage 784 can also generally be considered "memory." Storage 784 is non-volatile, while memory 730 can include volatile memory (i.e., the value or state of the data is indeterminate if power to system 700 is interrupted). In one example, storage subsystem 780 includes controller 782 to interface with storage 784. In one example, controller 782 is a physical part of interface 714 or processor 710, or can include circuits or logic in both processor 710 and interface 714.

Power source 702 provides power to the components of system 700. More specifically, power source 702 typically interfaces to one or multiple power supplies 704 in system 700 to provide power to the components of system 700. In one example, power supply 704 includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can be a renewable energy (e.g., solar power) power source 702. In one example, power source 702 includes a DC power source, such as an external AC to DC converter. In one example, power source 702 or power supply 704 includes wireless charging hardware to charge via proximity to a charging field. In one example, power source 702 can include an internal battery or fuel cell source.

FIG. 8 is a block diagram of an example of a mobile device in which a non-volatile array with internal buffers for multi-stage program operations can be implemented. System 800 represents a mobile computing device, such as a computing tablet, a mobile phone or smartphone, a wearable computing device or other mobile device, or an embedded computing device. It will be understood that certain of the components are shown generally, and not all components of such a device are shown in system 800.

In one example, memory subsystem 860 includes memory 862 with NV array 890. In one example, NV array 890 includes an associated buffer 892. In one example, memory 862 includes a controller (CTLR) 894, representing an on-die controller that utilizes buffer 892 to manage programming of NV array 890 so as to avoid external buffering of the data. In one example, according to any of the programming examples described, controller 894 can control reading data into buffer 892 and storing data from buffer 892 into NV array 890 to perform programming using minimal external resources.

System 800 includes processor 810, which performs the primary processing operations of system 800. Processor 810 may be a host processor device. Processor 810 can include one or more physical devices, such as microprocessors, application processors, microcontrollers, programmable logic devices, or other processing means. The processing operations performed by processor 810 include the execution of an operating platform or operating system on which applications and device functions are executed. The processing operations include operations related to I/O (input/output) with a human user or with other devices, operations related to power management, operations related to connecting system 800 to another device, or a combination. The processing operations can also include operations related to audio I/O, display I/O, or other interfacing, or a combination. Processor 810 can execute data stored in memory. Processor 810 can write or edit data stored in memory.

In one example, system 800 includes one or more sensors 812.
Sensors 812 represent embedded sensors or interfaces to external sensors, or a combination thereof. Sensors 812 enable system 800 to monitor or detect one or more conditions of the environment or of a device in which system 800 is implemented. Sensors 812 may include environmental sensors (such as temperature sensors, motion detectors, light detectors, cameras, or chemical sensors (e.g., carbon monoxide, carbon dioxide, or other chemical sensors)), pressure sensors, accelerometers, gyroscopes, medical or physiological sensors (e.g., biosensors, heart rate monitors, or other sensors that detect physiological attributes), or other sensors, or a combination thereof. Sensors 812 may also include sensors for biometric systems, such as fingerprint recognition systems, face detection or recognition systems, or other systems that detect or recognize user features. Sensors 812 should be understood broadly and are not limiting on the many different types of sensors that could be implemented with system 800. In one example, one or more sensors 812 are coupled to processor 810 via front-end circuitry integrated with processor 810. In one example, one or more sensors 812 are coupled to processor 810 via another component of system 800.

In one example, system 800 includes audio subsystem 820, which represents hardware (e.g., audio hardware and audio circuits) and software (e.g., drivers, codecs) components associated with providing audio functions to the computing device. Audio functions may include speaker or headphone output, as well as microphone input. Devices for such functions may be integrated into system 800 or connected to system 800. In one example, a user interacts with system 800 by providing audio commands that are received and processed by processor 810.

Display subsystem 830 represents hardware (e.g., display devices) and software components (e.g., drivers) that provide a visual display for presentation to a user. In one example, the display includes tactile components or touchscreen elements for a user to interact with the computing device. Display subsystem 830 includes display interface 832, which includes the particular screen or hardware device used to provide a display to a user. In one example, display interface 832 includes logic separate from processor 810 (e.g., a graphics processor) to perform at least some processing related to the display. In one example, display subsystem 830 includes a touchscreen device that provides both output and input to the user. In one example, display subsystem 830 includes a high-definition (HD) display or an ultra-high-definition (UHD) display that provides output to a user. In one example, the display subsystem includes or drives a touchscreen display. In one example, display subsystem 830 generates display information based on data stored in memory, based on operations executed by processor 810, or based on both.

I/O controller 840 represents hardware devices and software components related to interaction with a user. I/O controller 840 may operate to manage hardware that is part of audio subsystem 820 or display subsystem 830, or both. Additionally, I/O controller 840 illustrates a connection point for additional devices that connect to system 800, through which a user might interact with the system.
For example, devices that can be attached to system 800 might include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, buttons/switches, or other I/O devices for use with specific applications, such as card readers or other devices.

As mentioned above, I/O controller 840 may interact with audio subsystem 820 or display subsystem 830, or both. For example, input through a microphone or other audio device may provide input or commands for one or more applications or functions of system 800. Additionally, audio output may be provided instead of, or in addition to, display output. In another example, if the display subsystem includes a touchscreen, the display device also acts as an input device, which may be managed at least in part by I/O controller 840. There may also be additional buttons or switches on system 800 to provide I/O functions managed by I/O controller 840.

In one example, I/O controller 840 manages devices such as accelerometers, cameras, light sensors or other environmental sensors, gyroscopes, global positioning systems (GPS), or other hardware that may be included in system 800, or sensors 812. The input may be part of direct user interaction, and it may provide environmental input to the system to influence the system's operation (such as filtering for noise, adjusting displays for brightness detection, applying a flash for the camera, or other features).

In one example, system 800 includes power management 850, which manages battery power usage, battery charging, and features related to power-saving operation. Power management 850 manages power from power source 852, which provides power to the components of system 800. In one example, power source 852 includes an AC-to-DC (alternating current to direct current) adapter that plugs into a wall outlet. Such AC power may be a renewable energy source (e.g., solar power, motion-based power). In one example, power source 852 includes only DC power, which may be provided by a DC power source such as an external AC-to-DC converter. In one example, power source 852 includes wireless charging hardware to charge via proximity to a charging field. In one example, power source 852 may include an internal battery or fuel cell source.

Memory subsystem 860 includes one or more memory devices 862 for storing information in system 800. Memory subsystem 860 may include non-volatile memory devices (whose state does not change if power to the memory device is interrupted) and/or volatile memory devices (whose state is indeterminate if power to the memory device is interrupted), or a combination thereof. Memory subsystem 860 may store application data, user data, music, photos, documents, or other data, as well as system data (whether long-term or temporary) related to executing the applications and functions of system 800. In one example, memory subsystem 860 includes memory controller 864 (which may also be considered part of the control of system 800, and could potentially be considered part of processor 810). Memory controller 864 includes a scheduler to generate and issue commands to control access to memory device 862.

Connectivity 870 includes hardware devices (e.g., wireless or wired connectors and communication hardware, or a combination of wired and wireless hardware) and/or software components (e.g., drivers, protocol stacks) to enable system 800 to communicate with external devices.
External devices may be separate devices, such as other computing devices, wireless access points or base stations, as well as peripherals such as headsets, printers, or other devices. In one example, system 800 exchanges data with external devices for storage in memory or for display on a display device. The exchanged data may include data to be stored in memory, or data already stored in memory, for reading, writing, or editing.

Connectivity 870 may include multiple different types of connectivity. To generalize, system 800 is illustrated with cellular connectivity 872 and wireless connectivity 874. Cellular connectivity 872 refers generally to cellular network connectivity provided by a wireless carrier, such as via GSM (Global System for Mobile Communications) or variations or derivatives, CDMA (Code Division Multiple Access) or variations or derivatives, TDM (Time Division Multiplexing) or variations or derivatives, LTE (Long Term Evolution, also referred to as "4G"), 5G, or other cellular service standards. Wireless connectivity 874 refers to non-cellular wireless connectivity, and may include a personal area network (such as Bluetooth), a local area network (such as WiFi), or a wide area network (such as WiMax), or other wireless communication, or a combination thereof. Wireless communication refers to the transfer of data through the use of modulated electromagnetic radiation over a non-solid medium. Wired communication occurs through a solid communication medium.

Peripheral connections 880 include hardware interfaces and connectors, as well as software components (e.g., drivers, protocol stacks) to make peripheral connections. It should be understood that system 800 may both be a peripheral device ("to" 882) to other computing devices and have peripheral devices ("from" 884) connected to it. System 800 commonly has a "docking" connector to connect to other computing devices for purposes such as managing (e.g., downloading, uploading, changing, synchronizing) content on system 800. Additionally, a docking connector may allow system 800 to connect to certain peripherals that allow system 800 to control content output, for example, to audiovisual or other systems.

In addition to proprietary docking connectors or other proprietary connection hardware, system 800 may make peripheral connections 880 via common or standards-based connectors. Common types may include a Universal Serial Bus (USB) connector (which may include any of a number of different hardware interfaces), a DisplayPort including Mini DisplayPort (MDP), a High Definition Multimedia Interface (HDMI), or another type.

Generally, for the description herein, in one example, an apparatus includes: a non-volatile (NV) medium having a multi-level cell array on a media die; a volatile memory on the media die for storing data for programming the NV medium; and a buffer on the media die for buffering read and programming data for the NV medium; wherein a program of the NV medium temporarily stores a first partial page in the buffer for programming, reads a second partial page from the NV medium into the volatile memory, stores the second partial page in the buffer, and programs the NV medium using the first partial page and the second partial page.

In one example of the apparatus, the program of the NV medium includes garbage collection to move data from a source medium to the NV medium.
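The buffered multi-stage programming flow recited above can be made concrete with a short sketch. The following C fragment illustrates only the described sequence (stage the first partial page in the buffer, read the second partial page from the NV medium into on-die volatile memory, stage it in the buffer, then program the full page); all of the type, function, and constant names (page_buffer_t, nv_read_half, HALF_PAGE, and so on) are hypothetical placeholders, not an actual controller interface from this disclosure.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define HALF_PAGE 2048  /* assumed partial-page size in bytes */

typedef struct {
    uint8_t  data[2 * HALF_PAGE]; /* on-die read/write register (buffer) */
    bool     dirty;
    uint32_t page_addr;
} page_buffer_t;

/* Assumed low-level primitives provided by the media die. */
void nv_read_half(uint32_t page, int half, uint8_t *dst);  /* read into SRAM */
void nv_program_page(uint32_t page, const uint8_t *src);   /* full-page program */
void ecc_correct(uint8_t *buf, unsigned len);              /* fix bit errors */

/* Program a new first partial page: merge it with the second partial page
 * already resident on the NV medium, then program the full page. */
void program_partial_page(page_buffer_t *buf, uint32_t page,
                          const uint8_t *first_half)
{
    uint8_t sram[HALF_PAGE];                        /* on-die volatile memory */

    memcpy(buf->data, first_half, HALF_PAGE);       /* stage first partial page */
    nv_read_half(page, 1, sram);                    /* read second partial page */
    ecc_correct(sram, HALF_PAGE);                   /* ECC before re-programming */
    memcpy(buf->data + HALF_PAGE, sram, HALF_PAGE); /* stage second partial page */
    nv_program_page(page, buf->data);               /* program with both halves */
    buf->dirty = false;
    buf->page_addr = page;
}
```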
According to any preceding example of the apparatus, in one example, the source medium comprises single-level cell (SLC) flash memory, or in one example, the source medium comprises three-level cell (TLC) flash memory, or in one example, the source medium comprises quad-level cell (QLC) flash memory, or in one example, the source medium comprises three-dimensional cross-point (3DXP) memory, or in one example, the source medium comprises dynamic random access memory (DRAM). According to any preceding example of the apparatus, in one example, reading the second partial page into the volatile memory includes performing error checking and correction (ECC) on the second partial page. According to any preceding example of the apparatus, in one example, programming the NV medium comprises flushing the first partial page and the second partial page from the buffer to the NV medium in response to the loading of a new address for programming in the NV medium. According to any preceding example of the apparatus, in one example, programming the NV medium comprises flushing the first partial page and the second partial page from the buffer to the NV medium in response to a flush command. According to any preceding example of the apparatus, in one example, the buffer includes a read/write register for the NV medium. According to any preceding example of the apparatus, in one example, the NV medium comprises quad-level cell (QLC) flash memory, or in one example, the NV medium comprises three-level cell (TLC) flash memory, or in one example, the NV medium comprises five-level cell (5LC) flash memory, or in one example, the NV medium comprises three-dimensional cross-point (3DXP) memory. According to any preceding example of the apparatus, in one example, the volatile memory comprises static random access memory (SRAM).

Generally, for the description herein, in one example, a computing device includes: a host processor; and a solid state drive (SSD) coupled to the host processor, the SSD including: a non-volatile (NV) medium having a multi-level cell array on a media die; a volatile memory on the media die for storing data for programming the NV medium; and a buffer on the media die for buffering read and programming data for the NV medium; wherein a program of the NV medium temporarily stores a first partial page in the buffer for programming, reads a second partial page from the NV medium into the volatile memory, stores the second partial page in the buffer, and programs the NV medium using the first partial page and the second partial page.

In one example of the computing device, the program of the NV medium includes garbage collection to move data from a source medium to the NV medium. According to any preceding example of the computing device, in one example, the source medium comprises single-level cell (SLC) flash memory, or in one example, the source medium comprises three-level cell (TLC) flash memory, or in one example, the source medium comprises quad-level cell (QLC) flash memory, or in one example, the source medium comprises three-dimensional cross-point (3DXP) memory, or in one example, the source medium comprises dynamic random access memory (DRAM). According to any preceding example of the computing device, in one example, reading the second partial page into the volatile memory includes performing error checking and correction (ECC) on the second partial page.
According to any preceding example of the computing device, in one example, programming the NV medium comprises flushing the first partial page and the second partial page from the buffer to the NV medium in response to the loading of a new address for programming in the NV medium. According to any preceding example of the computing device, in one example, programming the NV medium comprises flushing the first partial page and the second partial page from the buffer to the NV medium in response to a flush command. According to any preceding example of the computing device, in one example, the buffer includes a read/write register for the NV medium. According to any preceding example of the computing device, in one example, the NV medium comprises quad-level cell (QLC) flash memory, or in one example, the NV medium comprises three-level cell (TLC) flash memory, or in one example, the NV medium comprises five-level cell (5LC) flash memory, or in one example, the NV medium comprises three-dimensional cross-point (3DXP) memory. According to any preceding example of the computing device, in one example, the volatile memory comprises static random access memory (SRAM). According to any preceding example of the computing device, in one example, the computing device comprises: a display communicatively coupled to the host processor; a network interface communicatively coupled to the host processor; or a battery to power the computing device.

Generally, for the description herein, in one example, a method includes: storing data in a volatile memory for programming a non-volatile (NV) medium, the volatile memory being on a media die together with the NV medium, which has a multi-level cell array; and buffering read and programming data for the NV medium using a buffer on the media die; wherein programming the NV medium includes temporarily storing a first partial page in the buffer for programming, reading a second partial page from the NV medium into the volatile memory, storing the second partial page in the buffer, and programming the NV medium using the first partial page and the second partial page.

In one example of the method, programming the NV medium includes garbage collection to move data from a source medium to the NV medium. According to any preceding example of the method, in one example, the source medium comprises single-level cell (SLC) flash memory, or in one example, the source medium comprises three-level cell (TLC) flash memory, or in one example, the source medium comprises quad-level cell (QLC) flash memory, or in one example, the source medium comprises three-dimensional cross-point (3DXP) memory, or in one example, the source medium comprises dynamic random access memory (DRAM). According to any preceding example of the method, in one example, reading the second partial page into the volatile memory includes performing error checking and correction (ECC) on the second partial page. According to any preceding example of the method, in one example, programming the NV medium comprises flushing the first partial page and the second partial page from the buffer to the NV medium in response to the loading of a new address for programming in the NV medium. According to any preceding example of the method, in one example, programming the NV medium comprises flushing the first partial page and the second partial page from the buffer to the NV medium in response to a flush command.
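The two flush triggers recited above (a newly loaded program address and an explicit flush command) can be summarized in a minimal C sketch. The state structure and function names here are hypothetical illustrations under the stated assumptions, not an interface defined by the disclosure.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t page_addr;   /* page currently staged in the buffer */
    bool     dirty;       /* partial pages staged but not yet programmed */
} nv_buffer_state_t;

/* Assumed primitive: program both staged partial pages to the NV medium. */
void nv_program_buffered(nv_buffer_state_t *st);

/* Trigger 1: a new address is loaded for programming. If it does not
 * match the buffered page, flush the staged partial pages first. */
void on_new_program_address(nv_buffer_state_t *st, uint32_t new_addr)
{
    if (st->dirty && st->page_addr != new_addr) {
        nv_program_buffered(st);  /* flush first + second partial pages */
        st->dirty = false;
    }
    st->page_addr = new_addr;
}

/* Trigger 2: an explicit flush command from the host or controller. */
void on_flush_command(nv_buffer_state_t *st)
{
    if (st->dirty) {
        nv_program_buffered(st);
        st->dirty = false;
    }
}
```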
According to any preceding example of the method, in one example, the buffer includes a read/write register for the NV medium. According to any preceding example of the method, in one example, the NV medium comprises quad-level cell (QLC) flash memory, or in one example, the NV medium comprises three-level cell (TLC) flash memory, or in one example, the NV medium comprises five-level cell (5LC) flash memory, or in one example, the NV medium comprises three-dimensional cross-point (3DXP) memory. According to any preceding example of the method, in one example, the volatile memory comprises static random access memory (SRAM).

The flowcharts shown herein provide examples of sequences of various process actions. The flowcharts may indicate operations to be performed by software or firmware routines, as well as physical operations. A flowchart may illustrate an example of an implementation of states of a finite state machine (FSM), which may be implemented in hardware and/or software. Although shown in a particular sequence or order, unless otherwise stated, the order of the actions may be modified. Thus, the diagrams shown should be understood only as examples: the process may be performed in a different order, and some actions may be performed in parallel. Additionally, one or more actions may be omitted; thus, not all implementations will perform all actions.

To the extent that various operations or functions are described herein, they may be described or defined as software code, instructions, configuration, and/or data. The content may be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code). The software content described herein may be provided via an article of manufacture on which the content is stored, or via a method of operating a communication interface to send data via the communication interface. A machine-readable storage medium may cause a machine to perform the functions or operations described, and includes any mechanism that stores information in a form accessible by a machine (e.g., a computing device, an electronic system, etc.), such as recordable/non-recordable media (e.g., read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces to a hardwired, wireless, optical, or other medium to communicate with another device, such as a memory bus interface, a processor bus interface, an Internet connection, a disk controller, and the like. The communication interface may be configured by providing configuration parameters and/or sending signals to prepare the communication interface to provide a data signal describing the software content. The communication interface may be accessed via one or more commands or signals sent to the communication interface.

The various components described herein can be a means for performing the operations or functions described. Each component described herein includes software, hardware, or a combination thereof.
These components may be implemented as software modules, hardware modules, special-purpose hardware (e.g., application-specific integrated circuits (ASICs), digital signal processors (DSPs), etc.), embedded controllers, hardwired circuitry, and the like.

In addition to what is described herein, various modifications may be made to the disclosed content and embodiments of the invention without departing from their scope. Accordingly, the descriptions and examples herein should be construed in an illustrative rather than a restrictive sense. The scope of the invention should be measured solely by reference to the claims that follow. |
An electrical current (EC) manager module may assign a plurality of hardware elements of a portable computing device ("PCD") to one of two groups. The EC manager module may monitor individual electrical current levels of one of the groups, as well as calculate an instantaneous electrical current level for the PCD based on a current charge status for the PCD. The EC manager module may then adjust operation of at least one hardware element to keep operation of the PCD below the calculated instantaneous electrical current level for the PCD. The EC manager module may estimate an electrical current level for one of the groups based on requests issued to hardware elements. The EC manager module may also compare the calculated instantaneous electrical current level to the monitored electrical current level. The calculated instantaneous electrical current level may be compared to minimum current levels listed in a table. |
CLAIMS What is claimed is: 1. A method for managing electrical current within a portable computing device ("PCD"), the method comprising: assigning a plurality of hardware elements of the PCD to one of two groups; monitoring individual electrical current levels of one of the groups; calculating an instantaneous electrical current level for the PCD based on a current charge status for the PCD; and adjusting operation of at least one hardware element to keep operation of the PCD below the calculated instantaneous electrical current level for the PCD. 2. The method of claim 1, further comprising estimating an electrical current level for one of the groups based on requests issued to hardware elements. 3. The method of claim 1, further comprising comparing the calculated instantaneous electrical current level to the monitored electrical current level. 4. The method of claim 3, further comprising comparing the calculated instantaneous electrical current level to minimum current levels listed in a table, the table further comprising use cases for the PCD. 5. The method of claim 1, wherein adjusting operation of at least one hardware element further comprises issuing at least one command to the hardware element. 6. The method of claim 5, wherein the at least one command comprises one of degree relative to the operation of the hardware element. 7. The method of claim 1, wherein each group has an assigned priority level. 8. The method of claim 1, wherein assigning a plurality of hardware elements of the PCD to one of two groups is made according to input received from an operator of the PCD. 9. The method of claim 1, wherein at least one hardware element comprises a multicore processor. 10. The method of claim 1, wherein each level of a group has an assigned priority level. 11. A computer system for managing electrical current within a portable computing device ("PCD"), the system comprising: a processor operable for: assigning a plurality of hardware elements of the PCD to one of two groups; monitoring individual electrical current levels of one of the groups; calculating an instantaneous electrical current level for the PCD based on a current charge status for the PCD; and adjusting operation of at least one hardware element to keep operation of the PCD below the calculated instantaneous electrical current level for the PCD. 12. The system of claim 11, wherein the processor is further operable for: estimating an electrical current level for one of the groups based on requests issued to hardware elements. 13. The system of claim 11, wherein the processor is further operable for: comparing the calculated instantaneous electrical current level to the monitored electrical current level. 14. The system of claim 13, wherein the processor is further operable for: comparing the calculated instantaneous electrical current level to minimum current levels listed in a table, the table further comprising use cases for the PCD. 15. The system of claim 11, wherein adjusting operation of at least one hardware element further comprises issuing at least one command to the hardware element. 16. The system of claim 15, wherein the at least one command comprises one of degree relative to the operation of the hardware element. 17. The system of claim 11, wherein each group has an assigned priority level. 18. The system of claim 11, wherein assigning a plurality of hardware elements of the PCD to one of two groups is made according to input received from an operator of the PCD. 19.
The system of claim 11, wherein at least one hardware element comprises a multicore processor. 20. The system of claim 11, wherein each level of a group has an assigned priority level. 21. A computer system for managing electrical current within a portable computing device ("PCD"), the system comprising: means for assigning a plurality of hardware elements of the PCD to one of two groups; means for monitoring individual electrical current levels of one of the groups; means for calculating an instantaneous electrical current level for the PCD based on a current charge status for the PCD; and means for adjusting operation of at least one hardware element to keep operation of the PCD below the calculated instantaneous electrical current level for the PCD. 22. The system of claim 21, further comprising: means for estimating an electrical current level for one of the groups based on requests issued to hardware elements. 23. The system of claim 21, further comprising: means for comparing the calculated instantaneous electrical current level to the monitored electrical current level. 24. The system of claim 23, further comprising: means for comparing the calculated instantaneous electrical current level to minimum current levels listed in a table, the table further comprising use cases for the PCD. 25. The system of claim 21, wherein the means for adjusting operation of at least one hardware element further comprises means for issuing at least one command to the hardware element. 26. The system of claim 25, wherein the at least one command comprises one of degree relative to the operation of the hardware element. 27. The system of claim 21, wherein each group has an assigned priority level. 28. The system of claim 21, wherein the means for assigning a plurality of hardware elements of the PCD to one of two groups assesses input received from an operator of the PCD. 29. The system of claim 21, wherein at least one hardware element comprises a multicore processor. 30. The system of claim 21, wherein each level of a group has an assigned priority level. 31. A computer program product comprising a computer usable medium having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a method for managing electrical current within a portable computing device ("PCD"), said method comprising: assigning a plurality of hardware elements of the PCD to one of two groups; monitoring individual electrical current levels of one of the groups; calculating an instantaneous electrical current level for the PCD based on a current charge status for the PCD; and adjusting operation of at least one hardware element to keep operation of the PCD below the calculated instantaneous electrical current level for the PCD. 32. The computer program product of claim 31, wherein the program code implementing the method further comprises: estimating an electrical current level for one of the groups based on requests issued to hardware elements. 33. The computer program product of claim 31, wherein the program code implementing the method further comprises: comparing the calculated instantaneous electrical current level to the monitored electrical current level. 34. The computer program product of claim 33, wherein the program code implementing the method further comprises: comparing the calculated instantaneous electrical current level to minimum current levels listed in a table, the table further comprising use cases for the PCD. 35.
The computer program product of claim 31, wherein adjusting operation of at least one hardware element further comprises issuing at least one command to the hardware element. 36. The computer program product of claim 35, wherein the at least one command comprises one of degree relative to the operation of the hardware element. 37. The computer program product of claim 31, wherein each group has an assigned priority level. 38. The computer program product of claim 31, wherein assigning a plurality of hardware elements of the PCD to one of two groups is made according to input received from an operator of the PCD. 39. The computer program product of claim 31, wherein at least one hardware element comprises a multicore processor. 40. The computer program product of claim 31, wherein each level of a group has an assigned priority level. |
SYSTEM AND METHOD FOR MANAGING ELECTRICAL CURRENT IN A PORTABLE COMPUTING DEVICE

PRIORITY AND RELATED APPLICATIONS STATEMENT

This patent application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application Serial No. 61/602,951, filed on February 24, 2012, and entitled "SYSTEM AND METHOD FOR MANAGING ELECTRICAL CURRENT IN A PORTABLE COMPUTING DEVICE," the entire contents of which are hereby incorporated by reference.

DESCRIPTION OF THE RELATED ART

Portable computing devices ("PCDs"), like mobile phones, usually have many rich features that are often accessed and run simultaneously. Such features are supported by many hardware elements which consume significant amounts of power. Power in most PCDs is delivered by one or more batteries. In mobile phones, like smartphones, this is usually a single battery having a form factor dictated by the size of the entire mobile phone.

When these hardware elements are operated simultaneously, their combined electrical current draw often can be so high that the voltage across a single battery drops significantly. Such a significant drop in voltage may directly impact memory. For example, data within memory may become corrupted and may require a system reset in order to correct the problem. Other hardware elements besides memory within a portable computing device may suffer degraded performance when a voltage drop occurs. For example, audio signals supplied to a speaker may be clipped or become choppy due to a voltage drop. For RF modems, a voltage drop may equate to phone calls being dropped.

Voltage drops may occur in complex processor environments. The electrical current draw for PCDs which have multicore processors may be significantly higher compared to portable computing devices which only utilize single-core processors. A scenario in which significant electrical current draw and resulting voltage drops occur may include portable computing devices that support video recording simultaneously with background light being supplied by light-emitting diodes (LEDs). If an instruction-intensive application is running on a multicore processor and the user desires to use the camcorder simultaneously with the intensive application while also providing background illumination with LEDs (and while listening to music or providing music through speakers), such a multi-feature situation may trigger a reset condition for conventional PCDs as described above. This may be especially true for PCDs powered by a single battery that has a form factor corresponding to the size of the PCD. Therefore, there is a need in the art for a system and method that manages available electrical current such that PCD functionality is optimized.

SUMMARY OF THE DISCLOSURE

[0007] Various embodiments of methods and systems for managing electrical current in a portable computing device ("PCD") are disclosed. Exemplary embodiments include an electrical current ("EC") manager module that may assign a plurality of hardware elements of the PCD to one of two groups. The EC manager module may monitor individual electrical current levels of one of the groups as well as calculate an instantaneous electrical current level for the PCD based on a current charge status for the PCD. The EC manager module may then adjust operation of at least one hardware element to keep operation of the PCD below the calculated instantaneous electrical current level for the PCD.
The EC manager module may estimate an electrical current level for one of the groups based on requests issued to hardware elements.

[0008] The EC manager module may also compare the calculated instantaneous electrical current level to the monitored electrical current level. The calculated instantaneous electrical current level may be compared to minimum current levels listed in a table. The table may include use cases for the PCD and corresponding minimum electrical current levels. Adjusting operation of at least one hardware element by the EC manager module may include issuing at least one command to the hardware element. The at least one command may comprise one of degree relative to the operation of the hardware element.

BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings, like reference numerals refer to like parts throughout the various views unless otherwise indicated. For reference numerals with letter character designations such as "102A" or "102B", the letter character designations may differentiate two like parts or elements present in the same figure. Letter character designations for reference numerals may be omitted when it is intended that a reference numeral encompass all parts having the same reference numeral in all figures.

FIG. 1 is a functional block diagram of an exemplary, non-limiting aspect of a PCD in the form of a wireless telephone for implementing methods and systems for managing electrical current within a portable computing device;

FIG. 2 is a functional block diagram illustrating relationships among the EC manager, controller, a resource power manager, master processors, low-level drivers, shared resources, and local resources;

FIG. 3 is a graph which illustrates the state of charge of a battery of a portable computing device plotted along the X-axis versus battery voltage (in Volts) plotted along a first y-axis and battery impedance (in milliohms) plotted along a second y-axis;

FIG. 4 is a graph which illustrates the state of charge of a battery of a portable computing device projected along the X-axis against achievable current maximums projected on the Y-axis;

FIG. 5 provides a PCD current level tracking table that may be part of a database maintained by the EC manager module;

FIG. 6 is a graph which illustrates the state of charge of a battery of a portable computing device projected along the X-axis against achievable current maximums projected on the Y-axis, along with the electrical current levels referenced in the table of FIG. 5;

FIG. 7 is a bar chart 700 that illustrates at least three different types of electrical consumers that may be categorized within a portable computing device by the EC manager module;

FIG. 8 is a graph 800 which illustrates instantaneous current plotted on the y-axis versus time on the x-axis, in addition to present consumption of the categories of electrical consumers illustrated in FIG. 7; and

FIG. 9 is a logical flowchart illustrating a method for managing electrical current levels within a portable computing device.

DETAILED DESCRIPTION

The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as exclusive, preferred or advantageous over other aspects.

[0020] In this description, the term "application" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches.
In addition, an "application" referred to herein, may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed. [0021] As used in this description, the terms "component," "database," "module," "system," "processing component" and the like are intended to refer to a computer- related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal). [0022] In this description, the terms "central processing unit ("CPU")," "digital signal processor ("DSP")," and "chip" are used interchangeably. Moreover, a CPU, DSP, or a chip may be comprised of one or more distinct processing components generally referred to herein as "core(s)." [0023] In this description, the term "call" refers to a request for additional resources and/or functionality in a PCD over and above that which may be running at the time of the call. As such, one of ordinary skill in the art will understand that a call may be the result of a PCD user requesting the PCD to perform some function, provide some service, generate and render some deliverable or the like. Moreover, one of ordinary skill in the art will also understand that a call for a PCD resource may be the result of a given component within the PCD leveraging another component within the PCD to complete a workload task. As a non-limiting example, a user action to open a browser application on a PCD may cause calls for additional resources/components in the PCD not in use at the time of the call such as a modem, a graphical processor and/or a display. One of ordinary skill in the art will understand that allowing a call for a component or resource may increase battery demand within a PCD. [0024] In this description, it will be understood that the terms "thermal" and "thermal energy" may be used in association with a device or component capable of generating or dissipating energy that may be measured in units of "temperature." Consequently, it will further be understood that the term "temperature," with reference to some standard value, envisions any measurement that may be indicative of the relative warmth, or absence of heat, of a "thermal energy" generating device or component. For example, the "temperature" of two components is the same when the two components are in "thermal" equilibrium. 
[0025] In this description, the terms "workload," "process load" and "process workload" are used interchangeably and are generally directed toward the processing burden, or percentage of processing burden, associated with a given processing component in a given embodiment. Further to that which is defined above, a "processing component" or "thermal aggressor" may be, but is not limited to, a central processing unit, a graphical processing unit, a core, a main core, a sub-core, a processing area, a hardware engine, etc., or any component residing within, or external to, an integrated circuit within a portable computing device.

[0026] In this description, the term "portable computing device" ("PCD") is used to describe any device operating on a limited capacity power supply, such as a battery. Although battery operated PCDs have been in use for decades, technological advances in rechargeable batteries coupled with the advent of third generation ("3G") and fourth generation ("4G") wireless technology have enabled numerous PCDs with multiple capabilities. Therefore, a PCD may be a cellular telephone, a satellite telephone, a pager, a PDA, a smartphone, a navigation device, a smartbook or reader, a media player, a combination of the aforementioned devices, or a laptop computer with a wireless connection, among others.

[0027] An electrical current ("EC") manager module of a PCD, such as a mobile phone, may be embodied in software and/or hardware. The EC manager module may track the state of charge for the battery of the PCD. As understood by one of ordinary skill in the art, a battery manifests different characteristics over time while the battery is discharging. Further, the impedance of a battery may change with temperature. The EC manager module may monitor the state of charge of the battery of the portable computing device as well as the impedance of the battery at a given instant, so that it can compute the maximum electrical current that the battery may support. The EC manager module may thereby determine the maximum electrical current "budget" that can be "spent" or used by a portable computing device. The EC manager module may also track conditions when the portable computing device and its battery are receiving energy from a charger.

[0028] The EC manager module may track the state of electrical current draw from all active hardware components of a portable computing device. In other exemplary embodiments, the EC manager module may assign "high" electrical current draw hardware components to a first group and "low" electrical current draw hardware components to a second group. According to this exemplary embodiment, the EC manager module may monitor each hardware component of the first group individually, while it may assign an electrical current budget to the second group of hardware components and not track the individual electrical current draws of the hardware components in the second group. Stated differently, the EC manager module may allocate electrical current draw margins to the hardware components of the second group without tracking the individual states of each hardware component in this second group.
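The two-group bookkeeping described in the preceding paragraph lends itself to a very small accounting routine. The following C sketch is only an illustration of that idea under stated assumptions; the type and function names (hw_element_t, ec_total_draw_ma, and the like) are hypothetical and are not part of the disclosure.

```c
#include <stdint.h>

typedef enum { GROUP_HIGH_DRAW, GROUP_LOW_DRAW } ec_group_t;

typedef struct {
    ec_group_t group;
    uint32_t   draw_ma;   /* measured per-element draw (high-draw group only) */
} hw_element_t;

/* Total draw = the sum of individually monitored high-draw elements plus a
 * flat margin allocated to the low-draw group as a whole, which is not
 * tracked per element. */
uint32_t ec_total_draw_ma(const hw_element_t *elems, int n,
                          uint32_t low_group_budget_ma)
{
    uint32_t total = low_group_budget_ma;  /* aggregate budget for group two */
    for (int i = 0; i < n; i++)
        if (elems[i].group == GROUP_HIGH_DRAW)
            total += elems[i].draw_ma;     /* per-element sensor reading */
    return total;
}
```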
[0029] The EC manager module may set maximum electrical current draws for certain groups of hardware. The EC manager module may communicate one or more electrical current levels at which a particular hardware device may operate. The EC manager module may communicate a range of levels to a particular hardware device, in which each level may be associated with predefined operations that may be specific to the hardware device.

[0030] For example, the EC manager module may communicate to a particular hardware device that it should switch from a high electrical current level of operation to a medium electrical current level of operation. The terms "high" and "medium" communicated by the EC manager module may be associated with specific operational characteristics of the hardware. In this scenario, the instruction from the EC manager module to an exemplary device, like a processor, to change from a "high" level of operation to a "medium" level of operation may mean that the processor needs to dial back or lower its clock frequency.

[0031] Lowering the clock frequency may effectively lower the electrical current draw of the processor. One of ordinary skill in the art recognizes that the messages communicated from the EC manager module to the hardware devices are not limited to the words low, medium, or high. Other level designations may include numbers like level 1, level 2, or level 3, alphanumeric codes, etc. The EC manager module may maintain a table that tracks predefined numerical levels, use cases, and predefined electrical current levels expressed in amperes.

Referring now to FIG. 1, this figure is a functional block diagram illustrating an exemplary embodiment of a portable computing device ("PCD") 100. An electrical current ("EC") manager module 26 within PCD 100 may leverage knowledge of the individual hardware elements associated with various software applications in the PCD 100 to manage electrical current. Advantageously, by monitoring the specific electrical current of a hardware element or a plurality of hardware elements, the EC manager module 26 may apply electrical current load management with a fine-grained approach which, when necessary, prioritizes components and their associated functions in such a way that the operation of the PCD 100 is optimized.

As can be seen in the exemplary illustration of FIG. 1, a resource power manager ("RPM") 180 monitors and controls power supplied by a battery 188 for hardware elements residing within the integrated circuit ("IC") 102. One or more electrical current sensors 157B are configured to monitor power rails (not illustrated) and generate a signal indicative of electrical current consumption by the particular component(s) associated with the power rails which feed each of the hardware elements within IC 102. One or more electrical current sensors 157B may be positioned on the IC 102 and/or adjacent to the IC 102. It is envisioned that the electrical current sensors 157B may be configured to monitor electrical current and be of a type such as, but not limited to, a Hall effect type for measuring the electromagnetic field generated by electrical current flowing through a power rail, a shunt resistor current measurement type for calculating electrical current from the voltage drop measured across a resistor in the power rail, or any type known to one of ordinary skill in the art. As such, while the particular design, type or configuration of an electrical current sensor 157B that may be used in an embodiment of the systems and methods may be novel in, and of, itself, the systems and methods are not limited to any particular type of electrical current sensor 157B.
Other sensors, such as temperature sensors 157A and 157C, may be configured for measuring temperature at or near a processing component, the measurement of which may also be used to deduce power consumption by a given component.

As shown, the PCD 100 includes an on-chip system 102 that includes a multi-core central processing unit ("CPU") 110 and an analog signal processor 126 that are coupled together. The CPU 110 may comprise a zeroth core 222, a first core 224, and an Nth core 230, as understood by one of ordinary skill in the art. Further, instead of a CPU 110, a digital signal processor ("DSP") may also be employed, as understood by one of ordinary skill in the art.

In general, the EC manager module 26 may be responsible for monitoring electrical current within PCD 100, predicting impacts on battery loads, and applying electrical current load management techniques to help the PCD 100 optimize its power supply and maintain a high level of functionality. The EC manager module 26 communicates with multiple operational sensors (e.g., electrical current sensors 157B, temperature sensors 157A, 157C, and hardware elements) distributed throughout the on-chip system 102 and with the CPU 110 of the PCD 100. In some exemplary embodiments, the EC manager module 26 may also monitor electrical current sensors 157B for current consumption rates uniquely associated with the cores 222, 224, 230 and transmit the current consumption data to a database (which may reside in memory 112). The EC manager module 26 may identify use case conditions of the PCD 100 that may warrant application of one or more electrical current load management techniques to specific hardware elements within chip 102.

As illustrated in FIG. 1, a display controller 128 and a touch screen controller 128 are coupled to the digital signal processor 110. A touch screen display 132 external to the on-chip system 102 is coupled to the display controller 128 and the touch screen controller 128.

The EC manager module 26 may monitor workload queues for the cores 222, 224, 230, for example, and work with the RPM 180 to manage the power provided to the cores from power supply 188. The EC manager module 26 may monitor electrical current measurements on power rails from the RPM 180 to components of the on-chip system 102 and calculate present levels of electrical draw on the power supply 188, which may comprise a single battery and/or an electrical charger for the battery. Advantageously, by quantifying present levels of electrical current loads, the EC manager module 26 may predict the electrical current drawn on the battery 188 resulting from calls for additional functionality/workloads from one or more hardware elements within PCD 100.

[0040] PCD 100 may further include a video encoder 134, e.g., a phase-alternating line ("PAL") encoder, a sequential couleur avec memoire ("SECAM") encoder, a national television system(s) committee ("NTSC") encoder, or any other type of video encoder 134. The video encoder 134 is coupled to the multi-core central processing unit ("CPU") 110. A video amplifier 136 is coupled to the video encoder 134 and the touch screen display 132. A video port 138 is coupled to the video amplifier 136. As depicted in FIG. 1, a universal serial bus ("USB") controller 140 is coupled to the CPU 110. Also, a USB port 142 is coupled to the USB controller 140. A memory 112A and a subscriber identity module (SIM) card 146 may also be coupled to the CPU 110. Further, as shown in FIG. 1, a digital camera 148 may be coupled to the CPU 110.
In an exemplary aspect, the digital camera 148 is a charge-coupled device ("CCD") camera or a complementary metal-oxide semiconductor ("CMOS") camera.

[0041] As further illustrated in FIG. 1, a stereo audio CODEC 150 may be coupled to the analog signal processor 126. Moreover, an audio amplifier 152 may be coupled to the stereo audio CODEC 150. In an exemplary aspect, a first stereo speaker 154 and a second stereo speaker 156 are coupled to the audio amplifier 152. FIG. 1 shows that a microphone amplifier 158 may also be coupled to the stereo audio CODEC 150. Additionally, a microphone 160 may be coupled to the microphone amplifier 158. In a particular aspect, a frequency modulation ("FM") radio tuner 162 may be coupled to the stereo audio CODEC 150. Also, an FM antenna 164 is coupled to the FM radio tuner 162. Further, stereo headphones 166 may be coupled to the stereo audio CODEC 150.

[0042] FIG. 1 further illustrates that a radio frequency ("RF") transceiver 168 may be coupled to the analog signal processor 126. An RF switch 170 may be coupled to the RF transceiver 168 and an RF antenna 172. As shown in FIG. 1, a keypad 174 may be coupled to the analog signal processor 126. Also, a mono headset with a microphone 176 may be coupled to the analog signal processor 126. Further, a vibrator device 178 may be coupled to the analog signal processor 126. FIG. 1 also shows that a power supply 188, for example a battery and/or an electrical charger in combination with the battery, is coupled to the on-chip system 102 through the RPM 180. In a particular aspect, the power supply includes a rechargeable DC battery or a DC power supply that is derived from an alternating current ("AC") to DC transformer that is connected to an AC power source.

[0043] The CPU 110 may also be coupled to one or more internal, on-chip thermal sensors 157A as well as one or more external, off-chip thermal sensors 157C. The on-chip thermal sensors 157A may comprise one or more proportional to absolute temperature ("PTAT") temperature sensors that are based on vertical PNP structure and are usually dedicated to complementary metal oxide semiconductor ("CMOS") very large-scale integration ("VLSI") circuits. The off-chip thermal sensors 157C may comprise one or more thermistors. The thermal sensors 157C may produce a voltage drop that is converted to digital signals with an analog-to-digital converter ("ADC") controller 103. However, other types of thermal sensors 157A, 157C may be employed without departing from the scope of the invention.

[0044] The thermal sensors 157A, 157C, in addition to being controlled and monitored by an ADC controller 103, may also be controlled and monitored by one or more EC manager module(s) 26. The EC manager module 26 may comprise software which is executed by the CPU 110. The EC manager module 26 may comprise one or more modules. However, the EC manager module(s) 26 may also be formed from hardware and/or firmware without departing from the scope of this disclosure. The EC manager module(s) 26 may be responsible for monitoring and applying electrical current load policies that include one or more electrical current load management techniques that may help a PCD 100 avoid overburdening its power supply 188 while maintaining a high level of functionality and user experience.
[0045] The touch screen display 132, the video port 138, the USB port 142, the camera 148, the first stereo speaker 154, the second stereo speaker 156, the microphone 160, the FM antenna 164, the stereo headphones 166, the RF switch 170, the RF antenna 172, the keypad 174, the mono headset 176, the vibrator 178, the power supply 188, the RPM 180 and the thermal sensors 157C are external to the on-chip system 102. However, it should be understood that the EC manager module 26 may also receive one or more indications or signals from one or more of these external devices by way of the analog signal processor 126 and the CPU 110 to aid in the real time management of the resources operable on the PCD 100.

[0046] In a particular aspect, one or more of the method steps described herein may be implemented by executable instructions and parameters stored in the memory 112 that form the one or more EC manager module(s) 26. These instructions that form the EC manager module(s) 26 may be executed by the CPU 110, the analog signal processor 126, or another processor, in addition to the ADC controller 103, to perform the methods described herein. Further, the processors 110, 126, the memory 112, the instructions stored therein, or a combination thereof may serve as a means for performing one or more of the method steps described herein.

FIG. 2 is a functional block diagram illustrating relationships among a controller 101, resource power manager 180, master processors 110, 126, low-level drivers 103, shared resources 105A-C, and local resources 105D-H that are part of a PCD 100. FIG. 2 also illustrates how the touchscreen 132 may be coupled to the touchscreen driver/controller 128. The touchscreen driver/controller 128 may be coupled to clock code 113A of a first master processor 110A.

In the exemplary embodiment shown in FIG. 2, the first master processor 110A may be coupled to the resource power manager ("RPM") 180 and the controller 101. The RPM 180 may be responsible for controlling power to the hardware elements such as processors 110A-110C. The RPM 180 may also control power to each resource 105 through its control of the low-level drivers 103. The RPM 180 may execute or run the EC manager module 26 when it is embodied as software. Alternatively, the RPM 180 may comprise the EC manager module 26 when it is embodied as hardware, software, or both. The EC manager module 26 manages and maintains a database 112B. Exemplary data stored in the database may include, but is not limited to, a table that tracks predefined numerical levels, use cases, and predefined electrical current levels expressed in amperes, as will be described in further detail below in connection with FIG. 5. With the database 112B, the EC manager module 26 may monitor and track instantaneous electrical current levels of the battery 188 and/or any chargers used to replenish the battery 188, using the various current sensors 157B. The EC manager module 26 may issue commands through the RPM 180 to control the current levels of the various hardware elements illustrated, such as the processors 110 and resources 105, as will be described in further detail below.

The controller 101 may be coupled to the clock code 113A of the first master processor 110A. The controller 101 may comprise one or more low-level drivers 103. The one or more low-level drivers 103 may be responsible for communicating with one or more shared resources 105A-C. Shared resources 105A-C may comprise any type of device that supports tasks or functions of a master processor 110.
Shared resources 105A-C may include devices such as clocks of other processors as well as single-function elements like graphical processors, decoders, and the like. The shared resources 105A-C may be coupled to one or more local resources 105D-H. The one or more local resources 105D-H may be similar to the shared resources 105A-C in that they may comprise any type of device that supports or aids tasks or functions of a master processor 110. Local resources 105D-H may include devices such as clocks of other processors as well as single-function elements like graphical processors, decoders, and the like. The local resources 105D-H may comprise leaf nodes. Leaf nodes are understood by one of ordinary skill in the art as local resources 105D-H that usually do not refer to or include other dependent resources 105.

The controller 101 may be responsible for managing requests that are issued from the one or more master processors 110, 126. For example, the controller 101 may manage a request that originates from the first master processor 110A. The first master processor 110A may issue this request in response to an operator manipulating the touchscreen 132. The touchscreen 132 may issue signals to the touchscreen driver/controller 128. The touchscreen driver/controller 128 may in turn issue signals to the clock code 113A of the first master processor 110A. The controller 101 may also be responsible for managing the sleep states for a particular processor 110. Prior to entering a sleep state, a processor 110 will provide information for managing sleep states. Information for managing sleep states includes the entry into and exiting from a sleep state.

FIG. 3 is a graph 300 which illustrates the state of charge of a battery 188 of a portable computing device 100 plotted along the X-axis versus battery voltage (in Volts) plotted along a first y-axis and battery impedance (in milliohms) plotted along a second y-axis. The first curve 305 of graph 300 tracks the state of charge of a battery 188 of a portable computing device 100 operating at 25°C. Meanwhile, the second curve 310 of graph 300 corresponds with the same battery 188 of the portable computing device operating at 0°C. The second curve 310 illustrates how battery impedance increases with decreases in temperature. The third curve 315 tracks the open circuit voltage of the battery 188. By tracking the temperature of a portable computing device 100, the EC manager module 26 will be able to accurately calculate the instantaneous electrical current available from the battery 188 since the impedance of the battery 188 is a function of temperature. The EC manager module 26 may store the information contained in graph 300 in its database 112B.
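The temperature dependence of battery impedance described for graph 300 lends itself to a simple stored lookup. The following Python sketch is illustrative only; the table values, the function name, and the nearest-point strategy are assumptions made for the example and are not taken from graph 300 itself.

# (state of charge %, temperature °C) -> battery impedance in milliohms.
# All values are hypothetical placeholders standing in for the
# characterization data an EC manager might keep in its database 112B.
IMPEDANCE_TABLE_MOHM = {
    (100, 25): 120.0, (50, 25): 140.0, (10, 25): 190.0,
    (100, 0): 210.0, (50, 0): 260.0, (10, 0): 380.0,
}

def battery_impedance_mohm(soc_pct, temp_c):
    """Return the impedance of the nearest characterized operating point."""
    nearest = min(IMPEDANCE_TABLE_MOHM,
                  key=lambda k: abs(k[0] - soc_pct) + 10 * abs(k[1] - temp_c))
    return IMPEDANCE_TABLE_MOHM[nearest]

A production implementation would more likely interpolate between characterized points, but a nearest-point lookup is enough to show how stored graph 300 data can feed the current calculation described below.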
FIG. 4 is a graph 400 which illustrates the state of charge of a battery 188 of a portable computing device 100 projected along the X-axis against achievable current maximums projected on the Y-axis. As understood by one of ordinary skill in the art, the impedance of a battery 188 may change in response to changes in temperature, as discussed above in connection with FIG. 3. The first curve 405 of graph 400 tracks electrical current associated with a battery 188 operating at a first temperature while the second curve 410 tracks electrical current associated with the same battery 188 operating at a second temperature. In the exemplary embodiment illustrated in FIG. 4, the first temperature comprises 25°C while the second temperature comprises 0°C. At higher temperatures, a battery 188 may support more current, as indicated by the first curve 405. Meanwhile, at lower temperatures, the impedance of the same battery 188 increases, which means that the same battery 188 will support less current, as indicated by the second curve 410. This information about the electrical current supported by the battery 188 may be monitored and tracked by the EC manager module 26 with its database 112B, as will be described in further detail below in connection with FIGS. 5-6.

FIG. 5 provides a PCD current level tracking table 500 that may be part of a database 112B maintained by the EC manager module 26. The table 500 may comprise predefined numerical levels 510 listed in a first column, PCD use cases 515 in the second column, and predefined electrical current levels 520 expressed in Amperes in the third column. However, one of ordinary skill in the art recognizes that other data, as well as different orders or sequences of the data tracked in table 500, may be employed without departing from the scope of the disclosure described herein.

Each numerical level 510 in the table 500 may be associated with a particular PCD use case 515 that relates to functions, features, and/or operations of specific hardware elements within the portable computing device 100. For example, level 0 (zero) may be associated with a baseline functionality. Meanwhile, level 1 may include the functionality of level 0 in addition to a CPU operating at a "low performance." Level 2 may include the functions/features/operations of level 1 in addition to a Feature X, where X may be specified by the operator of the PCD 100 and/or manufacturer of the PCD 100. The minimum current levels 520 assigned to the use cases 515A for levels 0, 1, and 2 are 2.0 amps, 2.5 amps, and 2.7 amps, respectively. And so on. The minimum current level 520 for each row/level 510 of table 500 may be determined by the EC manager 26, the manufacturer of the PCD 100, and/or the operator of the PCD 100.

The EC manager module 26 may determine the instant current maximum for the portable computing device 100 and then communicate operational level messages to the various hardware devices under its control. For example, at the level 1 row of table 500 the EC manager module 26 may communicate to a multicore processor 110 that it may operate one of its cores, like zero core 222 of FIG. 1, at a low level, which translates to the non-turbo mode expressed in table 500. The EC manager module 26 may easily produce the data listed in table 500, and/or the data may be calculated under laboratory conditions and then loaded into memory 112A of the portable computing device 100 so that the EC manager module 26 may access the table 500 with this pre-loaded data. Whether or not the EC manager module 26 may produce the data listed in table 500 depends on the amount and the location of current sensors 157B provided within the portable computing device 100. In some instances, too many electrical current sensors 157B may be cost prohibitive, as understood by one of ordinary skill in the art. If electrical current sensors 157B are provided, the EC manager module 26 may communicate with the sensors 157B to receive their data.
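The level selection behavior described for table 500 reduces to finding the highest level whose predefined minimum current the battery can presently supply. The short Python sketch below illustrates this; the current values for levels 0 through 2 come from the example above, while the values for levels 3 and 4 are invented placeholders.

# (level 510, minimum current 520 in amperes); levels 3-4 are assumed values.
CURRENT_LEVELS = [(0, 2.0), (1, 2.5), (2, 2.7), (3, 3.0), (4, 3.4)]

def highest_permitted_level(imax_amps):
    """Return the highest level whose minimum current Imax can supply.

    Falls back to level 0 (baseline functionality) when even its
    minimum current cannot be met.
    """
    permitted = [level for level, amps in CURRENT_LEVELS if amps <= imax_amps]
    return max(permitted, default=0)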
FIG. 6 is a graph 600 which illustrates the state of charge of a battery 188 of a portable computing device projected along the X-axis against achievable current maximums projected on the Y-axis. The graph 600 of FIG. 6 also comprises the electrical current levels referenced in table 500 of FIG. 5. Graph 600 is very similar to graph 400 in that it also contains the first curve 405, which tracks data associated with the battery 188 operating at a first temperature. Graph 600 also contains the second curve 410 of FIG. 4, which tracks data associated with the same battery 188 operating at a second temperature. In the exemplary embodiment illustrated in FIG. 6, the first temperature comprises 25°C while the second temperature comprises 0°C. The graph 600 illustrates how the highest level for a battery 188 operating at the second temperature of 0°C, tracked by the second curve 410, may only reach level 4. This level 4 of graph 600 corresponds with level 4 of table 500 of FIG. 5.

The EC manager module 26 computes instantaneous electrical current using Ohm's law, which is embodied in Equation 1 (EQ1) provided below:

I(max) = (Ocv - Vmin) / Rbat    (EQ1)

wherein I(max) is the instantaneous maximum electrical current; Ocv is the open circuit voltage of the battery 188, which is a function of the state of charge ("SoC") of the battery 188; Vmin is the minimum voltage of the battery 188 needed to support operation of the PCD 100 (i.e., if the battery voltage drops below this level, a voltage regulator on the PCD power grid may begin to operate outside of specification, leading to a reset of the PCD 100); and Rbat is the impedance of the battery 188.

The EC manager module 26 computes what maximum current (Imax) may be supported by the portable computing device 100 based on the characteristics of the battery 188. Most battery manufacturers will provide some characteristics about the battery 188 and how they may change across operating states, such as how the impedance of the battery 188 may change over time due to various factors such as temperature. This information may be used by the EC manager module 26 to calculate the state of charge at a particular instant in time for the portable computing device 100. From the state of charge (SoC) parameter, the EC manager module 26 may calculate the open circuit voltage (Ocv) which can be supported by the battery 188. Also from the state of charge and the battery characteristics, the maximum current (Imax) that may be supported at any given instant of time may be calculated by the EC manager module 26.

Based on the calculated electrical current maximum (Imax), the EC manager module 26 may issue commands to throttle hardware elements accordingly. Referring back to table 500 of FIG. 5, the EC manager module 26 calculates the present electrical current maximum (Imax) and compares its calculated Imax value to the minimum current levels listed in column 520 of table 500 (which levels are also plotted in graph 600 of FIG. 6). Based on the user scenarios listed for a particular level, the EC manager module 26 will activate and/or deactivate hardware elements corresponding to the user scenarios listed in table 500 for that operating level.
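As a concrete illustration of EQ1, the following Python sketch computes I(max) from the three quantities named above. The numeric values in the example call are assumptions chosen only to show the arithmetic.

def instantaneous_current_max(ocv_volts, vmin_volts, rbat_ohms):
    """EQ1: I(max) = (Ocv - Vmin) / Rbat."""
    return (ocv_volts - vmin_volts) / rbat_ohms

# Example: a 3.9 V open circuit voltage, a 3.2 V minimum supported rail
# voltage, and a 200 milliohm battery impedance yield (3.9 - 3.2) / 0.2,
# or 3.5 A of instantaneous maximum current.
imax = instantaneous_current_max(3.9, 3.2, 0.200)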
As noted previously, the EC manager module 26 may reside in or be executed by the resource power manager (RPM) 180 as illustrated in FIG. 2. The RPM 180 is typically embodied by an ARM processor, as understood by one of ordinary skill in the art. If not residing in an ARM processor, the EC manager module 26 may reside within or be executed by a processor 110 which does not usually enter into a sleep state. However, according to other exemplary embodiments, it is possible that the EC manager module 26 may be executed by an ordinary central processing unit 110 or an applications processor 110 that on occasion may enter into a sleep state, but such embodiments would likely be less preferred.

Referring now to FIG. 7, this figure comprises a bar chart 700 that illustrates at least three different types of electrical consumers that may be categorized within a portable computing device 100 by the EC manager module 26. A first set of electrical consumers may be designated with a high priority, such as those designated with the letter "A" as illustrated in FIG. 7. A second set of electrical consumers may be designated with a medium priority, such as with the letter "B." A third set of electrical consumers may be designated with a low priority, such as with the letter "C." Each priority may be assigned a particular weighting which may be represented mathematically. Further, one of ordinary skill in the art recognizes that the number of sets or categories of electrical consumers within a portable computing device 100 may be increased or decreased without departing from the scope of the disclosure described herein.

The functions/features/operations assigned to a particular set or category may be adjusted depending upon the priorities desired by the operator of the portable computing device 100. For example, an operator of the portable computing device 100 who is primarily interested in recording video and less interested in gaming may assign video recording a higher priority relative to gaming, which may be assigned a lower priority, as understood by one of ordinary skill in the art.

According to one exemplary embodiment, the level "A" type of electrical consumers may correspond to different levels for supporting voice calls in a portable computing device 100. The lowest level (A1) within this type of electrical consumer may comprise supporting voice calls in an emergency 911 ("E911") situation. The highest level within this set or class of priority may comprise level A3, in which voice and data may be transmitted simultaneously. Meanwhile, at the next level within category "A," such as level A2, only voice calls and not data may be supported, as understood by one of ordinary skill in the art. And as noted above, the A1 level may be limited to E911-type calls only.

The medium priority functions and/or features assigned to the letter "B" category may comprise video gaming, in which performance may be adjusted without degrading the quality of service ("QoS") perceivable by the user. For example, response time, updates, and how things are encoded may be reduced without any perceivable degradations in QoS. The lowest priority functions and/or features assigned to the letter "C" category may comprise operations such as a camcorder. Performance of the camcorder may be adjusted such that the number of frames per second may be reduced during recording in order to conserve electrical current.

Referring now to FIG. 8, this figure has a graph 800 which illustrates instantaneous current plotted on the y-axis versus time on the x-axis, in addition to the present consumption of the categories of electrical consumers illustrated in FIG. 7. Graph 800 illustrates how electrical current within a portable computing device 100 is consumed over time.
Between time 0 and time T1, an operator of the portable computing device 100 may power up the device and initiate a voice phone call, as indicated by current levels A1 plus A2. The electrical current levels represented by A1 plus A2 fall well below the instantaneous current maximum tracked by curve 805 of FIG. 8.

Next, between times T1 and T2, the voice call, as represented by electrical current levels A1 plus A2, may continue while current levels C1 plus C2 plus C3 may be added to correspond with the operator of the portable computing device 100 desiring to power up the camcorder so that video may be recorded while he or she is conducting the telephone call. Since the electrical current levels of A1 plus A2 plus C1 plus C2 plus C3 fall below the instantaneous current maximum corresponding to curve 805, these features/functions may be permitted to function by the EC manager module 26 after the EC manager module 26 determines that these features/functions do not exceed the present electrical current level tracked by curve 805.

Between times T2 and T3, the voice call has been terminated; therefore, blocks A1 and A2 representing a voice call have been removed from the graph 800. Meanwhile, during times T2 and T3, the operator of the portable computing device may have continued with the video recording, as represented by electrical current level blocks C1 plus C2 plus C3, while initiating a video game application program that is represented by the electrical current levels of B1 through B5. Since the sum of the C1 through C3 and B1 through B5 electrical current levels is less than the instantaneous present electrical current level represented by curve 805, the EC manager module 26 may permit these functions/features to operate without any conditions (freely).

Between times T3 and T4, as the instantaneous electrical current level continues to drop, as represented by curve 805, the EC manager module 26 may need to impose conditions or degrade the quality of service for particular functions and/or features in order to conserve electrical current. For example, between times T3 and T4, while the operator of the portable computing device 100 continues to record with the camcorder features/functions, the EC manager module 26 may have reduced the recording level of the camcorder, in which case current level block C3 has been removed. According to this particular embodiment, the EC manager module 26 may have instructed the camcorder to reduce the number of frames it was recording per second. Meanwhile, during the same time window between times T3 and T4, the video gaming feature is maintained at its previous current levels, as represented by electrical current level blocks B1 through B5. As noted previously, video gaming as described in connection with FIG. 7 was assigned a higher priority at level "B" relative to the level "C" category of the camcorder function/feature.
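The admit-and-shed behavior walked through for graph 800 can be sketched as a greedy allocation in priority order, as in the following Python fragment. The block names mirror those of FIG. 8, but the ampere values and the alphabetical-priority convention are assumptions made only for this example.

# Active consumer blocks as (name, amperes); "A" blocks outrank "B",
# which outrank "C", so a plain sort yields priority order.
ACTIVE_BLOCKS = [("A1", 0.4), ("A2", 0.5), ("B1", 0.3), ("C1", 0.2), ("C2", 0.2)]

def admit_blocks(blocks, imax_amps):
    """Admit highest-priority blocks while the running total stays under Imax."""
    admitted, total = [], 0.0
    for name, amps in sorted(blocks):
        if total + amps <= imax_amps:
            admitted.append(name)
            total += amps
    return admitted

# With a 1.3 A instantaneous maximum, the "C" blocks are shed first:
# admit_blocks(ACTIVE_BLOCKS, 1.3) -> ['A1', 'A2', 'B1']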
Next, between times T4 and T5, the operator of the portable computing device 100 decides to initiate a voice and data transmission, as represented by electrical current level blocks A1 through A3. Since voice and data transmissions have a higher priority at the "A" level relative to the "B" level of video gaming and the "C" level of camcorder recording, the electrical current levels for the video gaming and the camcorder recording are reduced by the EC manager module 26 during this time period.

[0090] This downward change during the time period between times T4 and T5 is represented by blocks C1 and B1, which were reduced from the electrical current levels C1-C2 and B1-B5 of the time period between T3 and T4, respectively. The electrical current levels for the "B" and "C" category functions/features were changed during the time period between T4 and T5 in a downward manner in order to accommodate the higher priority voice and data transmissions represented by electrical current level blocks A1-A3. The management by the EC manager module 26 of the functions and features of the portable computing device 100 continues similarly for the time periods from T5 through T7, as understood by one of ordinary skill in the art.

[0091] FIG. 9 is a logical flowchart illustrating a method 900 for managing electrical current levels within a portable computing device 100. Block 905 is the first step of method 900. In block 905, the EC manager module 26 may assign hardware and/or software elements to two or more groups in which each group may have its respective priority level. For example, as discussed above in connection with FIG. 7, the EC manager module 26 may assign voice calls to a level "A" priority while gaming functions are assigned to a level "B" priority, in which the level "B" priority is lower relative to the level "A" priority, and so on. The EC manager module 26 may perform these assignments automatically and adjust these assignments periodically. Alternatively, the EC manager module 26 may be provided with these assignments from preloaded memory 112A created at the factory for the PCD 100. In other exemplary embodiments, an operator of the portable computing device 100 may be provided with options for selecting how hardware and/or software elements are assigned priority by the EC manager module 26.

[0092] Next, in block 910, the EC manager module 26 may monitor individual electrical current levels of hardware elements assigned to higher priority levels. For example, as described above in connection with FIG. 7, the EC manager module 26 may monitor the current levels of all hardware elements assigned to the highest priority level "A" category. Meanwhile, the EC manager module 26 may only track an estimated amount of electrical current for those hardware elements assigned to the lower categories, such as the level "B" category and the level "C" category. The EC manager module 26 is not limited to monitoring individual hardware elements of a single category/priority/class. The EC manager module 26 is capable of monitoring electrical current levels of individual hardware elements for all categories/priorities as well as various combinations of categories/priorities.

[0093] Next, in block 915, the EC manager module 26 may estimate an electrical current level for one or more second groups based on software requests issued to various hardware elements. In other words, the EC manager module 26 may estimate electrical current levels for those hardware elements of the PCD 100 which may be assigned to lower priority groups, such as the "B" and "C" category groups illustrated in FIG. 7 and described above.

[0094] In block 920, the EC manager module 26 may calculate the instantaneous electrical current level of the PCD 100 based on its current charge status. In this block 920, the EC manager module 26 may utilize the data illustrated in graph 600 of FIG. 6.
The EC manager module 26 may determine the current operating temperature of the PCD 100 and then utilize the charge data corresponding to the appropriate curve 405, 410. The EC manager module 26 also utilizes EQ1, described above, once it has obtained data for all of the variables needed to solve EQ1 for instantaneous electrical current levels.

[0095] In block 925, the EC manager module 26 may compare the calculated instantaneous electrical current level from block 920 with the monitored electrical current levels from block 910 and the estimated electrical current levels from block 915. All of these electrical current levels may be stored in the database 112B. Specifically, in this block 925, the EC manager module 26 compares the calculated instantaneous electrical current level with the minimum current level column 520 and the operation level column 510 tabulated in table 500 of FIG. 5. Based on the use cases listed in column 515, the EC manager module 26 then reviews the electrical current levels of the various hardware elements of the various groups.

[0096] In block 930, the EC manager module 26 may adjust operation of the hardware elements of the first and second groups in order to keep operation of the PCD 100 below the estimated electrical current maximum calculated in block 920. The EC manager module 26 may then issue commands to various hardware elements through the resource power manager 180 that correspond with the use cases listed in column 515 of table 500. The hardware commands issued by the EC manager module 26 may include, but are not limited to, commands such as "high," "low," "medium," "level 1," "level 2," "level 3," "turbo," "non-turbo," etc. Such commands may be characterized as ones of degree relative to the operation of the hardware element. The process 900 then returns.

As understood by one of ordinary skill in the art, the EC manager module 26 may adjust the relative electrical current level of the PCD 100 up or down. All of the examples described above show cases where the battery 188 is draining. If the PCD 100 is connected to a charger or other type of power device, then the EC manager module 26 may relax the operation and allow one or more groups to operate at a higher electrical current consuming level.
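The loop of blocks 905 through 930 can be summarized in a few lines of Python. This is a minimal sketch, not the claimed implementation: the helper callback and the simple two-command throttle are assumptions standing in for the monitored sensors, estimation logic, and RPM command set described above.

def ec_manager_iteration(monitored_amps, estimated_amps, imax_amps, issue_command):
    """One pass of blocks 910-930 for an already-assigned grouping (block 905)."""
    # Blocks 910/915: monitored first-group currents plus estimated
    # second-group currents give the total present demand.
    demand = sum(monitored_amps.values()) + sum(estimated_amps.values())
    # Blocks 925/930: compare demand against the calculated maximum and
    # issue a command of degree (e.g., "non-turbo") through the RPM.
    issue_command("non-turbo" if demand > imax_amps else "turbo")
    return demand

# Example: ec_manager_iteration({"A1": 0.9}, {"B": 0.6, "C": 0.4}, 1.7, print)
# prints "non-turbo" because 1.9 A of demand exceeds the 1.7 A maximum.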
Certain steps in the processes or process flows described in this specification naturally precede others for the invention to function as described. However, the invention is not limited to the order of the steps described if such order or sequence does not alter the functionality of the invention. That is, it is recognized that some steps may be performed before, after, or in parallel with (substantially simultaneously with) other steps without departing from the scope and spirit of the invention. In some instances, certain steps may be omitted or not performed without departing from the invention. Further, words such as "thereafter," "then," "next," "subsequently," etc. are not intended to limit the order of the steps. These words are simply used to guide the reader through the description of the exemplary method.

Additionally, one of ordinary skill in programming is able to write computer code or identify appropriate hardware and/or circuits to implement the disclosed invention without difficulty based on the flow charts and associated description in this specification, for example. Therefore, disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary for an adequate understanding of how to make and use the invention.

The inventive functionality of the claimed computer implemented processes is explained in more detail in the above description and in conjunction with the drawings, which may illustrate various process flows. In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. Storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer.

Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line ("DSL"), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc ("CD"), laser disc, optical disc, digital versatile disc ("DVD"), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

Therefore, although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the following claims.
A plurality of flash electrically erasable programmable read only memory (EEPROM) cells is disclosed wherein metal lines couple both the sources and the drains of the flash cells. Reading of these flash cells is accomplished by applying a positive voltage to the source and reading from the associated metal source line. A soft erase scheme for increasing the threshold voltage of over-programmed flash cells is provided that prevents the leakage caused by applying a positive voltage to the drain. |
We claim:

1. A plurality of flash electrically erasable programmable read only memory (EEPROM) cells comprising: a first well region having a first conductivity type; a second well region having a second conductivity type opposite the first conductivity type, wherein the first well region is formed within the second well region; a semiconductor substrate, wherein the second well region is formed within the substrate; a plurality of non-volatile memory transistors arranged in a column and located in the second well region, each transistor having a source and a drain; a first metal line coupling each source of each of the non-volatile memory transistors in the column; and a second metal line coupling each drain of each of the non-volatile memory transistors in the column.

2. The flash EEPROM cells of claim 1, further comprising a word line, wherein the first and second metal lines are located perpendicular to the word line.

3. The flash EEPROM cells of claim 1, wherein the first well region, the second well region, the plurality of non-volatile memory transistors, and the first and the second metal lines are created using a stack-gate flash memory process technology.

4. A method of interconnecting an array of non-volatile memory transistors each having a source region, a control gate, and a drain region, the method comprising: forming a column of non-volatile memory transistors wherein each transistor shares at least one source or drain region with another transistor; coupling each drain of each non-volatile memory transistor in the column with a first metal line; coupling each source of each non-volatile memory transistor in the column with a second metal line; and coupling each control gate of each non-volatile memory transistor in a row.

5. The method of claim 4, wherein the first metal line coupling each drain and the second metal line coupling each source are adjacent to each other.
FIELD OF THE INVENTION

The present invention relates to an electrically erasable programmable floating gate memory, such as flash memory or electrically erasable programmable read only memory (EEPROM), for both memory and programmable logic applications. More specifically, the present invention relates to a method to implement high density and high speed flash memory with a single low voltage power supply.

BACKGROUND OF THE INVENTION

FIG. 1 is a schematic diagram of an array 100 of conventional flash memory cells (flash cells) detailed in U.S. Pat. Nos. 5,357,465 and 5,222,040. Array 100 includes flash cells 110-113, word lines 101-102, common source line 103, and drain bit lines 105-106, as illustrated. In general, a non-volatile flash memory transistor (e.g., flash cell 110) includes a floating gate that can be programmed to store either a negative charge or a neutral charge. The amount of charge stored on the floating gate affects the threshold voltage of the flash cell. The threshold voltage of a flash cell is that voltage at which the flash memory transistor turns on, allowing full current to flow. When storing a negative charge, a flash cell is said to be in an erased state. When storing a neutral charge, a flash cell is said to be in a programmed state. When a flash cell is in the erased state, the negative charge stored on the floating gate prevents the flash cell from turning on at the low voltages used for reading the flash cell during a read operation. Therefore, the erased flash cell is said to be in a high threshold state. When a flash cell is in the programmed state, the neutral charge stored on the floating gate allows the flash cell to be controlled by the voltage applied to the control gate of the flash cell. Therefore, the programmed flash cell is said to be in a low threshold state.

FIG. 2 is a cross-sectional view of flash cell 110 of array 100. Flash cell 110 includes p-substrate 160, n-well 170, n-well contact 171, p-well 180, p-well contact 181, source 120, drain 130, tunnel oxide region 153, floating gate 154, isolation material 155, and control gate 156. Control gate 156 is conventionally word line 101, thereby coupling flash cell 110 to other flash cells in the array. The entire array of flash cells is fabricated within p-well 180, n-well 170, and substrate 160. The charge on floating gate 154 determines the threshold voltage of, and identifies the state of, flash cell 110.

FIG. 3 is a table describing the voltages for operating array 100. Array 100 can perform program, program inhibit, erase, and read operations, as illustrated. During the program mode, relatively high voltages are applied across the control gate (0 Volts) and the drain (+5 Volts) of flash cells on the non-selected word line and the selected drain bit line. These high voltages can result in drain disturb in erased cells. Drain disturb occurs when an electrical field is strong enough to cause the floating gate to experience a charge loss due to electron tunneling from the floating gate to the drain. It is therefore an object of the present invention to lessen the drain disturb in a flash array.

Minute variations in the size of the elements of a transistor can occur during transistor formation. As a result, some flash cells can have slightly thinner or thicker tunnel oxide regions. Electrons tunnel more easily through flash cells having thinner tunnel oxide regions during a program operation.
As a result, flash cells having a thinner tunnel oxide region are less negatively charged during a program operation. These flash cells therefore have a lower threshold voltage than flash cells with thicker tunnel oxide regions. In some cases, the floating gate of a flash cell can lose enough charge to cause the threshold voltage of the flash cell to go negative. When this happens, a grounding voltage applied to the control gate does not turn off the flash cell. Cells with negative threshold voltages are called over-programmed cells. To conventionally prevent non-selected cells from turning on, a voltage more negative than the negative threshold voltage of the most over-programmed cell must be applied to each non-selected cell in the array. This large negative voltage causes a large voltage to be applied across the control gates and the drains of the non-selected flash cells in the array. This voltage can disturb the amount of charge on the floating gate of these flash cells under certain conditions. It is therefore another object of the present invention to find a better way to prevent turn-on of non-selected, over-programmed cells.

A flash cell is erased by applying the voltages listed in FIG. 3 to the array for a given period of time. Erasing is performed in blanket mode, meaning that all cells in an array are erased simultaneously. An array of cells is erased by applying a large positive voltage (e.g., 20.0 Volts) to each control gate, and grounding each source, drain, and substrate. Under these conditions, electrons tunnel from the substrate to the floating gate. As a result, after erasing, all cells should be in a high threshold voltage state. A row of flash cells is read by applying the voltages listed in FIG. 3 to the array for a given period of time.

The junction of the drain region and a well region of a flash cell is called a drain junction. For example, the drain junction of flash cell 110 is located between the drain region (e.g., drain 130) and the p-well (e.g., p-well 180). The drain junction of a flash cell is designed to provide efficient Fowler-Nordheim (F-N) tunneling between the floating gate and the drain during a program operation. This is accomplished by implanting a more heavily doped (e.g., N+) region that under-laps the floating gate. As a result of the under-lapping, a tunneling region is created. Due to this sensitivity, applying a positive voltage to the drain may cause F-N tunneling induced read disturb in non-selected erased cells in the array. Read disturb occurs when the charge on a floating gate is altered by a read operation. In this case, read disturb occurs when an electrical field is strong enough to cause the floating gate to experience a charge loss due to electron tunneling from the floating gate to the drain. The floating gate is therefore less negatively charged after the read operation, and thus the threshold voltage of the cell is lowered. It is therefore an object of the present invention to lessen the read disturb occurring to non-selected, erased cells.

As an additional result of the under-lap of the heavily doped region with the floating gate, applying a positive voltage to the drain also causes hot electron induced read disturb if the selected cell is in a programmed state. In this case, the read disturb occurs when an electrical field is strong enough to cause the electrons flowing between the source and the drain during the read operation to gain enough energy to jump through the tunnel oxide layer into the floating gate.
As a result, the floating gate contains additional charge after the occurrence of the read disturb. It is therefore another object of the present invention to lessen the read disturb that can occur in selected, programmed cells during a read operation.

Each cell in array 100 (FIG. 1) has one metal line and one diffusion line. Drain bit lines 105 and 106 are metal bit lines, and common source line 103 is a diffusion line. Diffusion lines inherently have large leakage currents as well as large resistance and capacitance delays. As a result, a diffusion line essentially acts as an efficient conductor coupled to a resistor and a capacitor. The delay added by the resistance and capacitance on the line is called RC delay. The RC delay of the diffusion line delays current along the line, thus delaying accesses to memory array 100. It is therefore another object of the present invention to increase the access speed to a flash memory array.

FIG. 4 is a layout diagram containing flash memory array 100. Similar elements in FIGS. 1, 2, and 4 are labeled with similar reference numbers. The layout diagram of flash cell array 100 therefore contains word lines 101-102, common source line 103, drain bit lines 105-106, drain regions 130-133, and source regions 120-121.

FIG. 5A is a schematic diagram of another conventional array 500 of flash cells as described in U.S. Pat. No. 5,592,415. Array 500 includes flash cells 510-513, word lines 504-505, drain bit lines 506-507, and source bit lines 508-509. Bit lines 506-509 are buried diffusion lines. FIG. 5B is an equivalent circuit of flash memory array 500. Each of buried diffusion lines 506-509 is represented as an efficient conducting line coupled to a resistor and a capacitor. As noted above, buried diffusion lines have an inherent RC delay. The amount of RC delay in an array is directly proportional to the length of the buried diffusion line. This RC delay makes it difficult to use large flash memory arrays connected by buried diffusion lines efficiently in high density flash memory. The delays caused by the length of the buried diffusion lines in large arrays are incompatible with the speed required in high density flash memory applications.

To reduce the RC delay of the array, U.S. Pat. No. 5,592,415 provides many small arrays. The typical size of these small arrays is 32 by 32 sectors, where a sector is a block of flash cells. The smaller array has proportionally shorter buried diffusion lines, with proportionally smaller RC delay. However, each of the 32×32 sectors must be interconnected to function as a large array. The additional interconnection makes this design more complicated than the conventional flash memory array of FIG. 1.

FIG. 5C is a table describing the voltages for operating flash array 500. Flash array 500 is programmed, erased, and read in a manner similar to array 100. Buried diffusion drain bit lines 506-507 inherently provide a large drain junction area. The amount of leakage current during programming is proportional to the size of the drain junction area. As a result, flash array 500 experiences a large leakage current during a program operation. It is therefore another object of the present invention to lessen the drain leakage current during programming.
Flash array 500 is read by applying a voltage of 3.3 Volts to the selected word line (e.g., WL1), a pre-determined positive voltage (e.g., 2.0 Volts) to the selected drain bit line (e.g., BL1), and a grounding voltage of 0 Volts to both the selected source bit line (e.g., SL1) and the substrate. Under these circumstances, a programmed cell (e.g., flash cell 510) conducts current and an erased cell (e.g., flash cell 511) does not conduct current. Sense amplifiers coupled with drain bit lines 506-507 sense the voltage change on drain bit lines 506-507. However, as mentioned above, the drain junction is designed to have efficient F-N tunneling. Therefore, as with the circuit of FIG. 1, applying a positive voltage to the drain also causes hot electron induced read disturb if the selected cell is in a programmed state. This hot electron induced read disturb causes the threshold voltage of the affected programmed flash cell to increase. As noted above, it is another objective of the present invention to lessen the read disturb of selected, programmed cells during a read operation.

Additionally, the manufacturing process for forming buried diffusion lines 506-509 is very complicated. This process is further complicated by the need to form many small 32×32 sectors rather than one large array. It is therefore another object of the present invention to provide a flash memory array using a relatively simple manufacturing process.

FIG. 6 is a layout diagram of flash cell array 500. Similar elements in FIGS. 5A, 5B, and 6 are labeled similarly. The layout diagram of flash cell array 500 therefore contains word lines 504-505, (diffusion) source bit lines 508-509, (diffusion) drain bit lines 506-507, and flash cells 510-513. Also included are isolation material 42, drain select transistor gate 45, additional word lines 47, source select transistor gate 49, and common source lines 50. Note that the drain select transistor gate 45 is located at the top of the array, and the source select transistor gate 49 is located at the bottom of the array. The distance between select transistor gates 45 and 49 impairs the ability to exchange the associated control lines.

SUMMARY

Accordingly, the present invention provides a flash cell array and a method of operating the same. The flash cell array is formed such that within each column of non-volatile memory transistors, each interior transistor shares a source region with a non-volatile memory transistor in a first direction and each interior non-volatile memory transistor shares a drain region with a non-volatile memory transistor in a second direction. An interior transistor is a transistor within the array that is located between two other transistors. Each drain of each non-volatile memory transistor in the column is coupled with a first metal line, and each source of each non-volatile memory transistor in the column is coupled with a second metal line. Each control gate of each non-volatile memory transistor along a word line is connected to the word line. The metal lines used to couple each drain and each source allow faster access to the flash memory array than the buried diffusion lines of the related art.

The flash memory array is read from the source bit line, rather than by the conventional method of reading from the drain bit line. Reading from the source bit line prevents the application of a positive voltage to the drain junction, and therefore lessens the resulting read disturb.
A soft erase scheme is provided to increase the threshold voltage of over-programmed cells after a program operation. Conventionally, a positive voltage is applied to the drain, and a grounding voltage of 0 Volts is applied to the source, control gate, and substrate. This conventional method discharges electrons on the floating gates of erased cells (cells having a high threshold voltage), thus lowering the threshold voltage of the affected erased cells. The present invention will be more fully understood in view of the following description and drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of an array of conventional flash cells;
FIG. 2 is a cross-sectional view of the flash cells of FIG. 1;
FIG. 3 is a table showing conventional flash memory array operating voltages;
FIG. 4 is a layout diagram of the array of conventional flash cells of FIG. 1;
FIG. 5A is a schematic diagram of another conventional array of flash cells;
FIG. 5B is an equivalent schematic diagram of the array of flash cells of FIG. 5A;
FIG. 5C is a table showing operating voltages for the flash memory array of FIG. 5A;
FIG. 6 is a layout diagram of the array of conventional flash cells of FIG. 5A;
FIG. 7 is a schematic diagram of an array of flash cells in accordance with one embodiment of the present invention;
FIG. 8 is a cross-sectional view of two flash cells in accordance with one embodiment of the present invention;
FIG. 9 is a layout diagram of an array of flash cells in accordance with one embodiment of the present invention;
FIG. 10 is a table describing operating voltages for flash cell array operation in accordance with one embodiment of the present invention; and
FIG. 11 is a graph of flash cell threshold distribution in accordance with one embodiment of the present invention.

DETAILED DESCRIPTION OF THE DRAWINGS

FIG. 7 is a schematic diagram of an array of flash cells 700 in accordance with one embodiment of the present invention. Array 700 includes flash cells 713-716, word lines 702-703, source bit lines 705-706, and drain bit lines 707-708. Flash cells 713-714 are commonly coupled to word line 702. Flash cells 715-716 are commonly coupled to word line 703. Flash cells 713 and 715 are commonly coupled to drain bit line 707 and source bit line 705. Flash cells 714 and 716 are commonly coupled to drain bit line 708 and source bit line 706. Bit lines 705-708 of flash array 700 are metal lines, as opposed to the diffusion bit lines of flash array 500 (FIG. 5A), which inherently have large RC delays and leakage currents. Therefore, array 700 does not experience an RC delay as large as the RC delay of array 500. Although only four cells of array 700 are illustrated, it is understood that array 700 typically includes many more rows and columns of flash cells. (This is illustrated by dots ". . ." in FIG. 7.) In one embodiment, array 700 is a NOR array. As described in more detail below, the layout of array 700 enables this array to have a much larger block size than prior art array 500 (FIG. 5A).

FIG. 8 is a cross-sectional view of flash EEPROM cells 713 and 715 in accordance with one embodiment of the present invention. Similar elements in FIGS. 7 and 8 are labeled similarly. The flash cells 713 and 715 are fabricated on a monocrystalline semiconductor substrate 810.
In the described example, substrate 810 is p-type monocrystalline silicon having a boron dopant concentration of 10^14 to 10^15 cm^-3, although other types of semiconductor materials and other dopant concentrations can be used in other embodiments. An n-type well region (n-well) 820 is formed within substrate 810 as illustrated. In this embodiment, n-well 820 has a dopant concentration of about 10^15 to 10^16 cm^-3. A p-type well region (p-well) 830 is formed within n-well 820. P-well 830 has a dopant concentration of about 10^16 to 10^17 cm^-3. N-well 820 and p-well 830 are formed using conventional semiconductor processing techniques, such as ion implantation or diffusion. Field oxide layer 801 is formed over the upper surface of substrate 810 using conventional semiconductor processing techniques. In this embodiment, field oxide layer 801 is silicon oxide having a thickness of approximately 4500 Å.

Flash cell 713 is fabricated within p-well 830. Flash cell 713 is a stack-type double-poly transistor which includes source region 729, drain region 733, tunnel oxide film 841, floating gate 721, inter-poly dielectric layer 843, control gate 702, and spacers 845. Note that control gate 702 is a part of word line 702. Similarly, flash cell 715, also fabricated within p-well 830, includes source region 729, drain region 735, tunnel oxide film 842, floating gate 723, inter-poly dielectric layer 844, control gate 703, and spacers 845. Note that control gate 703 is a part of word line 703. Flash cells 713 and 715 share source region 729. In this embodiment, tunnel oxide films 841-842 are silicon oxide (i.e., SiO2) grown over the upper surface of p-well 830 to a thickness of approximately 80 to 100 Å. As described in more detail below, the grown tunnel oxide films 841-842 facilitate Fowler-Nordheim (F-N) programming and erasing of flash cells 713 and 715. Additionally, spacers 845 are silicon oxide, and result from the oxide etching process.

As shown in FIG. 8, drain regions 733 and 735 each include a portion with medium N doping (e.g., N) and a portion with heavy N doping (e.g., N+). These portions of drain regions 733 and 735 are doped prior to the oxide etching process that forms spacers 845. Note that the heavily N doped portions of drain regions 733 and 735 extend underneath floating gates 721 and 723, respectively. This heavily doped under-lapping region allows efficient F-N tunneling of electrons during a program operation. Also shown in FIG. 8, source region 729 includes a portion with light N doping (e.g., N-) and a portion with heavy N doping (e.g., N+). The N- portion of source region 729 is doped prior to the formation of spacers 845. Note that the portion of source region 729 that extends underneath floating gate 721 is lightly N doped. However, the N+ portion of source region 729 is doped after the formation of spacers 845, and therefore the N+ portion of source region 729 does not under-lap floating gates 721 and 723. As a result, the source junction is inefficient for allowing F-N tunneling during a program operation. The source junction is located between source region 729 and p-well 830.

In this embodiment, floating gates 721 and 723 are formed from a lightly doped polycrystalline silicon layer which is deposited over tunnel oxide films 841-842, respectively, to a thickness of approximately 1000 to 3000 Å.
As described in more detail below, floating gates 721 and 723 store charge to determine the logic state (i.e., programmed or erased) of flash cells 713 and 715, respectively. Inter-poly dielectric layers 843-844 are formed from a dielectric layer (e.g., Oxide-Nitride-Oxide (ONO)) deposited over floating gates 721 and 723, respectively. An electrically conductive layer is then formed over the resulting structure. This conductive layer can be, for example, a layer of conductively doped polycrystalline silicon or a layer of polycide. Polycide includes a layer of metal (e.g., tungsten) or a layer of metal silicide (e.g., tungsten silicide) deposited over a layer of conductively doped polycrystalline silicon. This conductive layer is patterned and etched to form control gate/word line 702 of flash cell 713 and control gate/word line 703 of flash cell 715.

The process of forming flash EEPROM cells 713 and 715 described above is called a stack-gate flash memory process. To form flash EEPROM cells 713 and 715 with this process, a few steps must be added to the standard CMOS process. For example, the step of forming floating gates 721 and 723 by depositing a lightly doped polycrystalline silicon layer over tunnel oxide films 841-842 must be added to the standard CMOS process. These additional steps required for the stack-gate flash memory process do not conflict with the standard CMOS process steps. Thus, the stack-gate flash memory process is compatible with the standard CMOS process. As a result, flash EEPROM cells 713 and 715 formed by the stack-gate flash memory process are easily embedded into circuits formed using the standard CMOS process.

P-type contact region 831 is formed in p-well 830 using conventional semiconductor processing methods. As denoted by the "P+" in FIG. 8, p-type contact region 831 is heavily P doped. P-well contact region 831 has a dopant concentration of 10^19 to 10^20 cm^-3 in the described example. N-type contact region 821 is formed in n-well 820 using conventional semiconductor processing methods. As denoted by the "N+" in FIG. 8, n-type contact region 821 is heavily N doped. N-well contact region 821 has a dopant concentration of 10^19 to 10^20 cm^-3 in the described example.

Transistors 713 and 715 are connected as follows. Metal-1 contacts 751, 739, and 753 contact drain region 733, source region 729, and drain region 735, respectively, at the upper surface of the substrate. Metal-1 pads 743 and 745 are connected to contacts 751 and 753, respectively. Contact 739 is connected to a metal-1 line 705 that is connected to source regions associated with other flash cells (not shown) in the same column as flash cells 713 and 715. Metal-1 line 705 is shown in more detail in FIG. 9. In this manner, the source regions of the flash cells in array 700 are connected by metal bit lines. The metal-1 layer also provides contacts 822 and 832 to n-well contact region 821 and p-well contact region 831, respectively. Metal-2 via plugs 747 and 749 contact metal-1 pads 743 and 745, respectively. Metal-2 via plugs 747 and 749 are connected by metal-2 line 707. In this manner, the drain regions of flash cells in a column of array 700 are connected by metal bit lines.

The above-described interconnect structure is fabricated as follows. An insulating layer (not shown), which is doped with phosphorous and/or boron, is deposited over the above-described transistor structure to act as a contamination diffusion barrier and as an insulating layer.
Contact holes (not shown) are patterned and etched by a conventional oxide etch in this doped insulating layer, thereby exposing a portion of regions 821, 831, 733, 729, and 735. A first electrically conductive layer (metal-1), typically aluminum or an aluminum alloy, is then deposited over the doped insulating layer and into the contact holes, thereby forming contacts to each of regions 821, 831, 733, 729, and 735. The resultant regions are shown as metal-1 contacts 822, 832, 751, 739, and 753. This first conductive layer is then patterned and etched by a conventional aluminum etch, thereby forming source bit line 705 and conductive pads 743 and 745. Another insulating layer (not shown), which is doped with phosphorous and/or boron, is deposited over the resulting structure to act as a contamination diffusion barrier and as an insulating layer. Vias (not shown) are patterned and etched in this doped insulating layer, thereby exposing a portion of the first conductive layer at conductive pads 743 and 745. A second electrically conductive layer (metal-2), typically aluminum or an aluminum alloy, is then deposited over the second doped insulating layer, thereby forming via plugs 747 and 749 that contact the exposed portions of pads 743 and 745, respectively. This second conductive layer is then patterned and etched, thereby forming drain bit line 707. Electrically conductive connections are provided for p-type substrate 810 in an area which is not illustrated in FIG. 8.

FIG. 9 is a layout diagram of flash array 700, which includes flash cells 711-718 in accordance with one embodiment of the present invention. These flash cells are formed with source regions 727-732, drain regions 733-736, floating gates 719-726, and word line/control gates 701-704. Adjacent flash cells in the same column share source regions or drain regions. For example, flash cells 711 and 713 share drain region 733, and flash cells 713 and 715 share source region 729. This helps to provide an area-efficient layout pattern. Floating gates 719-726 are shaped in a fashion that maximizes the area of overlap between these floating gates and the overlying word line/control gates 701-704. Word line/control gates 701-704 extend in parallel across multiple columns of the array 700. Word line/control gates 701-704 are made wider in the areas between columns in order to increase the gate coupling ratio of flash cells 711-718 and minimize the resistance of the word line/control gates 701-704. Table 1 below identifies the various elements of flash cells 711-718.

TABLE 1
Flash Cell   Source   Drain   Floating Gate   Control Gate
711          727      733     719             701
712          728      734     720             701
713          729      733     721             702
714          730      734     722             702
715          729      735     723             703
716          730      736     724             703
717          731      735     725             704
718          732      736     726             704

The interconnect structure of array 700 is defined as follows. Metal-1 contacts 737-742 and 751-754 contact source/drain regions 727-736, respectively. Metal-1 contacts 751-754 are connected to metal-1 pads 743-746, respectively. Metal-1 pads 743-746 are shown in dashed lines. Metal-1 contacts 737, 739, and 741 are connected to metal-1 source bit line 705, which is shown in dashed lines. As a result, metal bit line 705 is connected to the source region of each flash transistor in the first column of flash array 700. Similarly, metal-1 contacts 738, 740, and 742 are connected to metal-1 source bit line 706, which is shown in dashed lines.
As a result, metal bit line 706 is connected to the source region of each flash transistor in the second column of flash array 700. Metal-2 via plugs 747-750 contact metal-1 pads 743-746, respectively. Metal-2 via plugs 747 and 749 are connected to metal-2 drain bit line 707 (shown in short dashed lines). As a result, metal bit line 707 is connected to the drain region of each flash transistor in the first column of flash array 700. Similarly, metal-2 via plugs 748 and 750 are connected to metal-2 drain bit line 708 (shown in short dashed lines). As a result, metal bit line 708 is connected to the drain region of each flash transistor in the second column of flash array 700. Because source bit lines 705-706 are fabricated in the first metal layer and drain bit lines 707-708 are fabricated in the second metal layer, these bit lines can be laid out in the area-efficient manner illustrated in FIG. 9. Both sets of bit lines extend in parallel along the vertical axis of the array, with portions of drain bit lines 707-708 extending over portions of source bit lines 705-706. The layout of array 700 is more area efficient than the layouts of prior art arrays 100 and 500 of FIGS. 1 and 5A, respectively.

FIG. 10 is a table of voltages for operating the flash memory array 700 of FIG. 7. Array 700 is operated to perform program, erase, read, and soft erase functions in the manner described below.

PROGRAM OPERATION

A flash cell of array 700 is programmed by removing electrons from the floating gate of the flash cell, thereby leaving the floating gate with a neutral or positive charge. Referring to FIGS. 7, 8, and 10, to program a specific flash cell, such as flash cell 713, a voltage of -8.0 Volts is applied to word line 702 (the selected word line), a positive voltage of 5.0 Volts is applied to drain bit line 707 (the selected drain bit line), and source bit line 705 is left floating for a given period of time (e.g., 5 ms per cell). Referencing FIG. 8, n-well 820 is held at the supply voltage VCC and p-well 830 is held at ground to prevent disturbance from the substrate. Under these conditions, a high electrical field is established in tunnel oxide region 841 between floating gate 721 and drain region 733. The strength of this electrical field is proportional to the differential voltage across control gate 702 and drain region 733. This high electrical field promotes tunneling of electrons from floating gate 721 to drain region 733. This tunneling of electrons, called Fowler-Nordheim tunneling, leaves floating gate 721 in a programmed state of neutral or positive charge.

Flash cell 715 is inhibited from programming as follows. Flash cell 715 is coupled to word line 703 (the non-selected word line) and selected drain bit line 707. During the programming of flash cell 713, non-selected word line 703 is held at a positive voltage equal to the supply voltage less the cell threshold voltage, VCC - Vt. In this embodiment, the supply voltage VCC is equal to 3.3 Volts, and the threshold voltage Vt is equal to 0.7 Volts. Thus, the resultant voltage applied to the control gates of the flash cells coupled to non-selected word line 703 is a positive voltage of 2.6 Volts. Referencing FIG. 8, the small voltage differential between control gate 703 and drain 735 produces a relatively weak electrical field in tunnel oxide 842. The strength of this electrical field is insufficient to program flash cell 715 in the period of time allowed for the programming operation.
Note that this voltage of VCC-Vt is more positive than the 0 Volts used for the non-selected word line in prior art array 100 (FIG. 1). As a result, the voltage across control gate 703 and drain 735 of flash cell 715 (i.e., 5.0-2.6 Volts=2.4 Volts) is significantly less than that of prior art array 100. As a result, flash cell 715 of the present invention experiences proportionally less charge loss from floating gate 723 than is experienced by an equivalent prior art flash cell (see FIG. 1). Therefore, both drain disturb due to charge loss on the floating gate and band-to-band leakage current are reduced by the present invention. Flash cells on the non-selected drain bit lines are held in program inhibit conditions during the program operation. For example, flash cell 714 is coupled to selected word line 702 (-8.0 Volts) and drain bit line 708 (the non-selected drain bit line). A voltage of 0 Volts is provided on non-selected drain bit line 708. As a result, a voltage of -8 Volts is applied across the control gate and drain of flash cell 714. This voltage is insufficient to program flash cell 714 during the program operation. As another example, flash cell 716 is coupled to non-selected word line 703 (2.6 Volts) and non-selected drain bit line 708 (0 Volts). As a result, a voltage of 2.6 Volts is applied across the control gate and drain of flash cell 716. This voltage is insufficient to program (or erase) flash cell 716 during the program operation. In contrast, a positive voltage cannot be applied to the non-selected word lines of prior art array 100 (FIG. 1) because common source line 103 is coupled to the same flash cells as word lines 101-102, thereby allowing a leakage current to flow through programmed cells. For example, assume cell 110 of FIG. 1 is being programmed, and cells 112 and 113 of FIG. 1 are programmed from a prior operation. To program cell 110, a large programming voltage of -8.0 Volts is applied to selected word line 101 and a voltage of +5.0 Volts is applied to drain bit line 105. If a positive voltage were applied to word line 102 to inhibit programming, cells 112 and 113 would turn on. Turned-on cell 112 would couple drain bit line 105 to common source line 103. Turned-on cell 113 would couple drain bit line 106 to common source line 103. Because drain bit line 105 is held at a voltage of 5.0 Volts and drain bit line 106 is held at a grounding voltage, a leakage current freely flows from drain bit line 105 to drain bit line 106. As a result, the voltage on drain bit line 105 can be pulled down to a voltage that prevents flash cell 110 from being programmed. Of importance, both the drain and source bit lines 705-708 of array 700 in the present invention (FIG. 7) are made of metal. By comparison, array 500 of FIG. 5A has both the drain and source bit lines formed as buried diffusion lines. The buried diffusion lines, as noted above, require a larger drain junction area than that of metal lines. The amount of drain junction current leakage is proportional to the size of the drain junction area. Thus, the smaller drain junction area of the present invention allows for less drain junction current leakage. The other flash cells of array 700 are programmed in the same manner as flash cell 713. ERASE OPERATION A flash cell is erased when electrons are inserted into the floating gate, thereby providing a negative charge on the floating gate. An array of flash cells of the present invention is erased by applying the voltages listed in FIG.
10 to array 700 for a given period of time (e.g., 100 ms). Erasing is performed in blanket mode, meaning that all cells in an array are erased simultaneously. Flash memory array 700 is erased by applying a large positive voltage (8.0 Volts) to each control gate, a large negative voltage (-8.0 Volts) to each source, and leaving each drain floating. N-well 820 is held to a voltage equal to the supply voltage (VCC) and p-well 830 is held to a large negative voltage (-8.0 Volts) to prevent electron flow from other regions. Under these conditions, a large voltage differential is established between each of the floating gates in array 700 and their associated source and p-well regions. This high voltage differential establishes a large electrical field which causes electrons to tunnel from the source and p-well regions to the associated floating gates. As a result, after erasing, all cells are in a high threshold voltage state. READ OPERATION Flash cells are read by applying the voltages listed in FIG. 10 to the array for a given period of time (e.g., 20 ns). Flash cells may be read cell by cell or row by row. A flash cell within row 1 of array 700 is read by selecting word line 702. Selected word line 702 is held to VCC, source bit lines 705-706 are held to 1.0 Volts, and drain bit lines 707-708 are grounded. Source bit lines 705-706 are connected to sense amplifiers (not shown) to sense the change in current on the lines. For example, flash cell 713 is read by selecting word line 702. Control gate 702 is held to the supply voltage of VCC, drain 733 is grounded and source 729 is held to a voltage of 1.0 Volts. N-well 820 is held to a voltage equal to the supply voltage and p-well 830 is held to ground to prevent electron flow from the substrate. Row 2 of array 700 is withdrawn from the read operation by not selecting word line 703. Specifically, non-selected word line 703 is held to a voltage of -2.0 Volts to turn off any over-programmed cells. Under these conditions, the programmed cells in the selected row (those with a low threshold voltage) turn on and allow current to flow from the sources to the associated drains. This current is sensed by the sense amplifiers (not shown). Erased cells in the selected row (those with a high threshold voltage) do not turn on, thereby preventing the flow of current between the sources and the associated drains within the erased cells. Read disturb occurs when the charge on the floating gate is altered by the read operation. As noted above, flash arrays 100 and 500 (FIGS. 1 and 5A) are read from the drain. As a result, read disturb in prior art arrays occurs when an electrical field is strong enough to cause the floating gate of an erased cell (having a high threshold voltage) on the non-selected word line and selected drain bit line to experience a charge loss due to electron tunneling from the floating gate to the drain. This charge loss is called F-N tunneling induced read disturb. The floating gate is therefore less negatively charged after the read operation, and thus the threshold voltage of the cell is lowered. As noted above, the under-lap of the heavily doped drain region promotes efficient F-N tunneling between the drain and the floating gate. Therefore, a positive voltage applied to the drain induces more hot electron injection than would be induced by the same voltage if applied to the source. 
As a result, applying a positive voltage to the drain causes a larger read disturb if the selected cell (on the selected word line and selected drain bit line) is programmed (at a low threshold voltage state). For a low threshold cell, the read disturb occurs when an electrical field is strong enough to cause the electrons flowing between the source and the drain to gain enough energy to jump through the tunnel oxide layer into the floating gate. As a result, the floating gate contains additional charge after the read operation. This charge acquisition is called hot electron induced read disturb. Cells along the same word line within flash array 100 share the same source line. This prevents the sense amplifiers from distinguishing individual bits to be read within a row of flash cells. For this reason, flash array 100 cannot be read from the source instead of the drain. The source line control select transistors 50 of array 500 (FIG. 5A) are placed at the bottom of each sector of cells and the drain line control select transistors 45 at the top of each sector of cells, as shown in FIG. 6. These controls are not conveniently placed for exchanging functions during a read operation. For this reason, it is difficult for array 500 (FIG. 5A) to switch from the drain to the source during a read operation. In the present invention, the drain region is similarly optimized to provide efficient F-N tunneling between the floating gate and the drain. However, the present invention is beneficially read from the source. Because the drain region provides more efficient F-N tunneling with the floating gate than does the source region, reading from the source lessens both F-N tunneling induced and hot electron injection induced read disturb conditions. Thus, the present invention is not as susceptible to either type of read disturb, described above, caused by reading from the drain. Over time, repeated occurrences of read disturb lessen the useable lifetime of a flash cell. Therefore, the reduction of read disturb extends the useable lifetime of the flash cells of the present invention. In fact, the lifetime of flash cells of the present invention is one order of magnitude longer than the lifetime of the flash cells of flash memory array 100 (FIG. 1). SOFT ERASE OPERATION FIG. 11 is a graph 1100 of flash cell threshold distribution in accordance with one embodiment of the present invention. Graph 1100 includes programmed cell distribution region 1101, erased cell distribution regions 1102-1103, and over-programmed cell distribution region 1104. All cells in an array are erased during an erase operation, thus all cells are in the erased state after the erase operation. Therefore, after an erase operation, all cells are in a high threshold state, as represented by erased cell distribution region 1103. As can be seen from FIG. 11, the threshold voltage of the erased cells ranges from approximately 4.9 to 6.3 Volts. A program operation lowers the threshold voltages of the programmed cells. The programmed cell threshold distribution regions 1101 and 1104 represent this distribution of programmed cells after a program operation. Programmed cell distribution region 1101 represents the typically programmed cell. Over-programmed cell distribution region 1104 represents those cells over-programmed by the programming operation. As can be seen from FIG. 11, the threshold voltage of the typical programmed cell ranges from 0 to 1.5 Volts. The conditions of the program operation slightly disturb the contents of non-programmed cells.
This disturbance tends to lessen the negative charge on the floating gates of the disturbed cells. As a result, the threshold voltages of these disturbed cells are lessened. Erased cell distribution region 1102 represents the distribution of erased cells after a program operation. As can be seen from FIG. 11, the voltage range for erased cells after a program operation is approximately 4.3 to 5.7 Volts. As noted above, some cells are more sensitive to the program operation than others. These cells respond to the programming operations more quickly, and therefore become more depleted of electrons in their floating gates than the average cell. These over-programmed cells typically have a negative threshold after the program operation, and are represented by over-programmed cell distribution region 1104. As shown in FIG. 11, the over-programmed cells have a threshold voltage of approximately -1.0 to 0 Volts. Over-programmed flash cells are turned on by the application of a grounding voltage to their control gates. Therefore, conventional flash memory arrays turn off over-programmed cells by applying a negative voltage to the appropriate word lines. Alternatively, conventional flash memory arrays subject all flash cells to a soft erase function, which includes applying a positive voltage of about 5 Volts to the drain and grounding the source, gate, and substrate for a period of time. The positive voltage across the drain junction generates a high electrical field between the drain and the floating gate, causing electrons to be released from the floating gates. The present invention describes a method of handling these over-programmed cells. This method subjects all cells to a soft erase mode after a programming operation. The soft erase mode induces a large electrical field in the tunnel oxide layer for a brief duration. This strong electrical field causes electrons to enter the floating gate, thus increasing the threshold voltage of the cells. However, the brief duration (approximately 1/10th the duration of an erase operation) is not long enough to perform an erase of the cells. The short duration of the soft erase operation allows only those cells with quick response times to introduce a small number of electrons into their floating gates. Only the over-programmed cells, with their quick response times, are significantly altered. During this operation, the induced electrical field does not significantly change the threshold of other cells in the array. For example, referring to FIGS. 8 and 10, a supply voltage of VCC is applied to control gate 702, control gate 703, and n-well contact region 821. Both p-well contact region 831 and source 729 are held to a negative voltage of -8.5 Volts, and drains 733 and 735 are left floating. The voltage differential between n-well contact region 821 and p-well contact region 831 ensures electrons do not flow into the system from the n-well junction. The n-well junction is located between n-well 820 and p-well 830. Under these conditions, the voltage differential between control gate 702 and p-well 830 causes an electrical field to form in tunnel oxide 841. If cell 713 is over-programmed, the charge on floating gate 721 is positive and the cell has a negative threshold value. Typically, the strength of the electrical field formed in the tunnel oxide region (e.g., tunnel oxide 841) is greater in over-programmed flash cells than in flash cells having a positive threshold voltage.
The electrical field in tunnel oxide 841 causes electrons to tunnel through tunnel oxide 841 and enter floating gate 721, thus increasing the negative charge on the floating gate. The increased negative charge on floating gate 721 results in a higher cell threshold value. This method of soft erase ensures that all resultant programmed cells have a positive threshold voltage less than a given value. Additionally, the soft erase operation tightens the threshold voltage distribution of programmed cells. The voltage differential is not across the drain junction, resulting in a very small leakage current during the new soft erase operation. Array 100 (FIG. 1) uses one metal line per cell to couple the drains of each flash cell in a column. Array 500 (FIG. 5A) uses two buried diffusion lines per cell to couple the drains and to couple the sources of each flash cell in a column, respectively. Array 700 (FIG. 7) uses two metal lines to couple each drain and to couple each source of flash cells in a column, respectively. As noted above, diffusion lines have an inherent delay due to the resistance and capacitance of the lines. Metal lines have much less resistance and capacitance delay. Therefore, the present invention allows faster access to the flash cells of array 700 than is allowed for arrays 100 and 500 (FIGS. 1 and 5A). Additionally, the formation process for a cell using buried diffusion lines is inherently more complicated than that for a cell using metal lines, because an additional step is required to form the buried diffusion lines. The present invention uses the standard EPROM tunnel oxide (ETOX) process, which is much less complicated. As a result, flash cells of the present invention may be embedded within CMOS circuits with only a minor change in the formation process. A smaller flash cell size promotes a smaller overall chip size. Process technology is the method by which flash cells are fabricated. The feature size is used to delineate processes. Feature size is defined as the minimum width of pattern openings or spaces in a device. Therefore, a 0.35 µm process is a process in which the minimum width of pattern openings or spaces in a device is 0.35 µm. For processes of 0.35 µm and above, the use of metal to connect both the drains and the sources of a cell increases the overall cell size as compared to cells using buried diffusion lines to connect one or both, due to constraints of the ultra-violet (UV) lithography and etch equipment used for the metal. However, the larger cell size required by the UV lithography and etch equipment constraints can be used to increase the cell coupling ratio, making the cell more efficient. The cell coupling ratio is the ratio of the amount of charge attracted into the floating gate to the amount of voltage applied to the control gate. The cell coupling ratio is inversely proportional to the voltage required for program and erase operations. As a result, increasing the cell coupling ratio allows a lower voltage to be used during program and erase, and also increases the current produced during a read operation. For processes of 0.25 µm and below, which use deep-UV lithography and etch equipment, both cells using metal and cells using buried diffusion are similarly sized, because the primary limitation on cell size is no longer the ability of the UV lithography and etch equipment to process a metal line. The limitation at 0.25 µm and below is now the diffusion-to-diffusion distance and the poly-to-poly distance.
The diffusion-to-diffusion and poly-to-poly distances are set by the high voltage requirement of the flash cell. These distance limitations are larger than the limitation imposed by the use of metal process equipment, and thus each of arrays 100, 500, and 700 (FIGS. 1, 5A and 7, respectively) is limited by similar factors. Although the present invention has been described in connection with one embodiment, it is understood that this invention is not limited to the embodiment disclosed, but is capable of various modifications which would be apparent to one of ordinary skill in the art. Thus, the invention is limited only by the following claims. |
A reverse current protection (RCP) circuit is provided that includes an RCP switch coupled between a power supply rail and a buffer power supply node. A control circuit powered by a buffer supply voltage on the buffer power supply node controls the RCP switch to open in response to a discharge of a power supply voltage carried on the power supply rail. |
1. An integrated circuit comprising: a reverse current protection (RCP) switch coupled between a power rail and a buffer power supply node; a voltage reference circuit configured to generate a reference voltage from a power supply voltage provided by the power rail; and a control circuit powered by a buffer supply voltage carried on the buffer power supply node, wherein the control circuit is configured to: open the RCP switch in response to a determination that the reference voltage is greater than the supply voltage, and close the RCP switch in response to a determination that the reference voltage is less than the supply voltage. 2. The integrated circuit of claim 1, further comprising: a comparator configured to compare the reference voltage with the power supply voltage to make the determination that the reference voltage is greater than the supply voltage and the determination that the reference voltage is less than the supply voltage. 3. The integrated circuit of claim 1, wherein the RCP switch comprises a PMOS transistor. 4. The integrated circuit of claim 3, wherein the control circuit includes an inverter, an output signal of the inverter being configured to drive a gate of the PMOS transistor. 5. The integrated circuit of claim 1, further comprising an input/output buffer coupled to the buffer power supply node. 6. The integrated circuit of claim 1, further comprising a power terminal configured to receive power to power the power rail. 7. The integrated circuit of claim 2, wherein the voltage reference circuit includes a diode-connected transistor coupled between the power rail and a capacitor. 8. The integrated circuit of claim 7, further comprising: a source follower transistor coupled to the power rail, wherein the capacitor is coupled between ground and a gate of the source follower transistor. 9. The integrated circuit of claim 8, further comprising: a second diode-connected transistor, a drain of the second diode-connected transistor being coupled to the power rail; a first resistor, a first terminal of the first resistor being coupled to a source of the second diode-connected transistor; and a second resistor, a first terminal of the second resistor being coupled to a source of the source follower transistor, wherein the comparator is configured to compare a voltage at a second terminal of the first resistor with a voltage at a second terminal of the second resistor to determine whether the reference voltage is greater than the supply voltage. 10. The integrated circuit of claim 2, wherein the comparator is configured to be powered by the supply voltage. 11. The integrated circuit of claim 9, wherein a resistance of the first resistor is greater than a resistance of the second resistor. 12. The integrated circuit of claim 9, wherein the second terminal of the first resistor is coupled to a positive input of the comparator, and wherein the second terminal of the second resistor is coupled to a negative input of the comparator. 13. The integrated circuit of claim 9, further comprising: a first current source configured to bias the second diode-connected transistor with a first current; and a second current source configured to bias the source follower transistor with the first current. 14. The integrated circuit of claim 13, wherein the first current source and the second current source each comprise a current source transistor, the integrated circuit further comprising a third diode-connected transistor, the third diode-connected transistor being in a current mirror configuration with each current source transistor. 15. A method comprising: receiving a voltage signal from a remote integrated circuit to power a buffer supply voltage in a first integrated circuit while a supply voltage for the first integrated circuit is discharged; generating a switch-off control signal in a control circuit powered by the buffer supply voltage in response to the discharging of the supply voltage; and in response to the generation of the switch-off control signal, opening a switch to isolate a power rail carrying the supply voltage from a buffer supply voltage node carrying the buffer supply voltage. 16. The method of claim 15, further comprising closing the switch in response to the supply voltage being energized. 17. The method of claim 15, further comprising comparing a capacitively stored reference voltage with the supply voltage to determine whether the supply voltage is discharged. 18. A system comprising: a first integrated circuit comprising: a power rail; an input/output (I/O) buffer including a buffer supply voltage node coupled to an I/O terminal through an ESD diode; a reverse current protection (RCP) switch coupled between the buffer supply voltage node and the power rail; a reference voltage circuit configured to generate a capacitively stored reference voltage from the supply voltage; and means for opening the RCP switch in response to the supply voltage being discharged below the reference voltage, the means being coupled to the buffer power supply node to receive power; and a second integrated circuit comprising an I/O buffer, an I/O terminal of which is coupled to the I/O terminal of the first integrated circuit. 19. The system of claim 18, further comprising: a power management integrated circuit (PMIC), wherein the first integrated circuit includes a power terminal coupled to the power rail and configured to receive power from the PMIC. 20. The system of claim 19, wherein the first integrated circuit comprises a baseband integrated circuit and the second integrated circuit comprises an application processor. |
Self-Sensing Reverse Current Protection Switch CROSS REFERENCE TO RELATED APPLICATIONS This application claims the benefit of U.S. Patent Application Serial No. 14/606,746, filed January 27, 2015, which application is incorporated herein by reference in its entirety. TECHNICAL FIELD This application relates to reverse current protection for integrated circuits, and more particularly to a self-sensing reverse current switch. BACKGROUND It is conventional for modern electronic devices, such as smart phones, to include multiple interconnected integrated circuits. For example, a smart phone may include an application processor interfaced with other integrated circuits, such as sensors and baseband circuits. To save power, it is also conventional for these various integrated circuits to operate independently, so that one integrated circuit can be powered down into deep sleep operation while another continues in normal operation. Although this independent operation of the integrated circuits saves power, it creates the problem of reverse current. To better understand the reverse current problem, note that the power rails of an integrated circuit's input/output (I/O) buffers will typically be protected by electrostatic discharge (ESD) diodes that couple each buffer's I/O pad or terminal to the internal buffer power rail. If an electrostatic discharge event suddenly applies a positive voltage to an I/O terminal, the ESD diode becomes forward biased and the electrostatic charge is safely discharged to the power rail. Suppose, however, that the integrated circuit including the I/O terminal is powered off while another integrated circuit interconnected to the I/O terminal is still operating. The other integrated circuit may have a default mode in which it holds the lead coupled to the I/O terminal at a positive voltage. The ESD diode will then be forward biased so that the power rail coupled to the I/O terminal is charged to the positive voltage on the lead (minus the threshold voltage drop of the forward-biased ESD diode). A PMOS transistor in the integrated circuit whose source is coupled to the buffer supply rail will then conduct, because the gate of the PMOS transistor will be discharged due to the off state of the integrated circuit. This not only wastes power but can also cause erroneous operation or malfunction in the subsequent power-up of the integrated circuit. Various methods have been developed to address the reverse current problem. For example, an integrated circuit such as an application processor may be programmed to know the status of the other integrated circuits in the system. If another integrated circuit is powered down, the processor then discharges any leads it has to the I/O terminals of the powered-down integrated circuit. But this approach burdens the user with having to program the processor accordingly. In another approach, external components may be located in the signal path between integrated circuits to gate signals when the interconnected integrated circuits are powered down. Such external components add manufacturing cost. Alternatively, the integrated circuit may be configured with a head switch that is turned off when the integrated circuit is powered down.
This usually requires additional terminals and control signals, which adds manufacturing cost and complicates the design. Therefore, there is a need in the art for an improved reverse current protection circuit. OVERVIEW A reverse current protection (RCP) circuit for a first integrated circuit is provided that includes an RCP switch coupled between a power rail and an input/output (I/O) buffer power supply node. The buffer power supply node is coupled through an ESD diode to an I/O terminal driven by a remote integrated circuit. The remote integrated circuit can continue to drive the I/O terminal with a voltage signal while the first integrated circuit is in a deep sleep mode in which the supply voltage carried on the power rail is discharged. The ESD diode then becomes forward biased and charges the buffer power supply node. The RCP circuit is configured to open the RCP switch in response to the discharging of the power supply voltage, thereby eliminating problems caused by any reverse current resulting from the charging of the buffer power supply node. During normal operation, when the power rail is powered, the RCP circuit closes the RCP switch to couple the power rail to the I/O buffer power supply node. To detect the discharge of the power supply voltage, the RCP circuit includes a reference voltage circuit having a capacitor charged by the power supply voltage to generate a reference voltage. When the supply voltage falls in the deep sleep mode, the charge stored on the capacitor in the reference voltage circuit causes the reference voltage to become greater than the supply voltage. A control circuit in the RCP circuit responds to the reference voltage becoming greater than the power supply voltage by turning off (opening) the RCP switch. During normal operation, with the supply voltage greater than the reference voltage, the control circuit turns on (closes) the RCP switch. The control circuit is coupled to the buffer power supply node to receive power, so that it can remain powered during the deep sleep mode and keep the RCP switch in an off state. The resulting RCP circuit is compact and low power. In addition, it requires neither additional terminals for receiving control signals nor any modification or reprogramming of the remote integrated circuit. These and additional advantageous features may be better understood by reference to the following detailed description of example embodiments. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a circuit diagram of a reverse current protection circuit according to an embodiment of the present disclosure. FIG. 2 is a circuit diagram of a system including the reverse current protection circuit of FIG. 1. FIG. 3 is a flowchart of a method of operation for a reverse current protection circuit according to an embodiment of the present disclosure. The embodiments of the present disclosure and their advantages are best understood by reference to the following detailed description. It should be appreciated that the same reference numerals are used to identify the same elements illustrated in one or more of the drawings. DETAILED DESCRIPTION A reverse current protection (RCP) circuit is provided that has an RCP switch that acts as an ideal diode. The RCP switch is located on the power rail of one or more I/O buffers on the protected integrated circuit.
Since the RCP switch acts as an ideal diode, the RCP switch turns on (closes) when the power rail is powered during normal operation of the protected integrated circuit. If the power rail is powered down during a deep sleep mode, the reverse current switch is turned off (opened) so that the protected I/O buffer can receive an asserted voltage signal from a remote integrated circuit that remains powered while the protected integrated circuit is powered off. In this manner, the ESD diodes in the I/O buffers can become forward biased as their terminals receive a positive voltage signal from the powered integrated circuit, while the internal power rail of the protected integrated circuit remains discharged due to the isolation provided by the RCP switch. The remote integrated circuit(s) may be completely unaware of the power state of the protected integrated circuit. As a result, there is no need for any reprogramming of the remote integrated circuits. In contrast to the conventional reverse current protection approaches previously discussed, control signals, additional pins, or external head switches are not necessary. Some example embodiments will now be discussed. An example reverse current protection (RCP) circuit 100 is shown in FIG. 1. In this embodiment, the RCP circuit 100 includes a PMOS RCP switch transistor 115 coupled between the input/output buffer power node 110 and the power rail 105. Internal power rail 105 is coupled to a power terminal (not illustrated) for receiving a supply voltage from an external source, such as a power management integrated circuit (PMIC). Buffer power supply node 110 is coupled to an input/output (I/O) terminal 145 (such as a pad or pin) through an ESD diode 140. As previously discussed, the ESD diode 140 may become forward biased when the internal supply voltage is discharged to ground and the I/O terminal 145 remains charged by the external integrated circuit. Despite this forward bias of the ESD diode 140, the RCP switch transistor 115 acts as an ideal diode and prevents the charging of the I/O terminal 145 from charging the power rail 105. To achieve this advantageous ideal diode behavior, the gate voltage of the RCP switch transistor 115 is controlled by an inverter 135 whose power is obtained not from the power rail 105 but from the voltage carried on the buffer power supply node 110. In this manner, the inverter 135 may still be powered to charge the gate of the RCP switch transistor 115 even when the power rail 105 is discharged. During normal operation, when the power rail 105 is charged, the inverter 135 grounds the gate of the RCP switch transistor 115 to cause it to conduct. It is advantageous for the RCP switch transistor 115 to be a PMOS transistor so that the power supply voltage carried on the power rail 105 can be coupled to the buffer power supply node 110 with minimal loss (a PMOS transistor typically passes a strong binary one). However, it will be appreciated that an NMOS transistor may be used to form the RCP switch transistor 115 in alternative embodiments. The following discussion will thus assume that RCP switch transistor 115 is a PMOS transistor without loss of generality.
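As a rough illustration of the scenario just described, the following Python sketch models the voltage on buffer power node 110 for the two RCP switch states. The 0.7 V ESD diode drop is an assumed typical value, not a figure from the disclosure, and the function name is ours.

```python
# Hedged sketch: with the RCP switch open and the internal rail discharged,
# a driven I/O terminal forward-biases ESD diode 140 and charges buffer
# power node 110 to the terminal voltage minus a diode drop.

V_DIODE = 0.7  # assumed forward drop of ESD diode 140, Volts

def buffer_node_voltage(v_terminal, v_rail, switch_closed):
    """Approximate voltage on buffer power node 110 for the two switch states."""
    if switch_closed:
        return v_rail  # normal mode: the node follows internal power rail 105
    # deep sleep: the ESD diode conducts once the terminal exceeds a diode drop
    return max(v_terminal - V_DIODE, 0.0)

print(round(buffer_node_voltage(1.8, 1.8, switch_closed=True), 2))   # 1.8: normal operation
print(round(buffer_node_voltage(1.8, 0.0, switch_closed=False), 2))  # 1.1: node charged
                                                                     # while rail 105 stays isolated
```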
The n-well of the RCP switch transistor 115 is tied to the buffer power node 110 to prevent the pn junction between the source of the RCP switch transistor 115 and its n-well 120 from becoming forward biased when the power rail 105 is discharged and the external integrated circuit drives the terminal 145. The comparator 125 in the RCP circuit 100 is used to detect when the power rail 105 is discharged, such as would occur in a power-down mode of operation of the integrated circuit (the protected integrated circuit) that includes the RCP circuit 100. To accomplish this detection, a reference voltage circuit 130 coupled to the power rail 105 generates a reference voltage (Vref). The reference voltage circuit 130 includes a diode-connected NMOS transistor M6 whose drain and gate are coupled to the power rail 105. To provide ESD protection, the gate of transistor M6 may be coupled to power rail 105 through an ESD resistor R3. During normal operation of the protected integrated circuit, the internal power rail 105 is charged to the supply voltage VDD. The diode-connected transistor M6 then acts as a diode so that its source will be charged to VDD-Vt, where Vt is the threshold voltage of the diode-connected transistor M6. The reference voltage circuit 130 also includes a capacitor C coupled between the source of the diode-connected transistor M6 and ground, so that the capacitor is charged to a voltage of VDD-Vt during normal operation. The source of the diode-connected transistor M6 drives the gate of a source follower NMOS transistor M1. The drain of the source follower transistor M1 is coupled to the power rail 105. A resistor R is coupled between the source of the source follower transistor M1 and the drain of a current source NMOS transistor M5. During normal operation, the source of the source follower transistor M1 will equal its gate voltage minus its threshold voltage Vt. The source of the source follower transistor M1 thus equals VDD-2Vt during normal operation. The drain and gate of a diode-connected NMOS transistor M2 are coupled to the internal power rail 105. To provide ESD protection, the gate of transistor M2 may be coupled to internal power rail 105 through a resistor R1. Another resistor R is coupled between the source of the transistor M2 and the drain of an NMOS current source transistor M4. Both of the current source transistors M4 and M5 are in a current mirror configuration with a diode-connected NMOS transistor M3, whose gate/drain is thus coupled to the gates of transistors M4 and M5. Transistor M3 has its source coupled to ground and its drain/gate coupled to internal power rail 105 through a resistor R2. During the normal operation mode, the transistor M3 will conduct a current I that is substantially equal to the ratio of the supply voltage VDD to the resistance of the resistor R2. Due to the current mirror configuration with transistor M3, the current source transistors M4 and M5 will thus bias their respective loads (transistors M2 and M1, respectively) with the same current I. The drain voltage of transistor M4 will then be equal to (VDD-Vt)-I*R while the drain voltage of transistor M5 will be equal to (VDD-2Vt)-I*R. The drain voltage of the current source transistor M4 is received at the positive input of the comparator 125. Similarly, the drain voltage of the current source transistor M5 is received at the negative input of the comparator 125.
During normal operation, the drain voltage of transistor M4 is thus higher than the drain voltage of transistor M5 by a threshold voltage Vt. The output signal of the comparator 125 will then be high, so that the inverter 135 grounds the gate of the RCP switch transistor 115 to turn it on in the normal mode, coupling the power rail 105 to the buffer power supply node 110. However, it is to be noted that mismatch, noise, and other anomalies affect this relationship between the input voltages of the comparator 125. To ensure that the RCP switch transistor 115 remains on during the normal mode of operation, the resistor R coupled to the source of the source follower transistor M1 may be placed in series with an additional resistor Roffset. It will be appreciated that in alternative embodiments, the resistors R and Roffset may be replaced by a single resistor having a substantially greater resistance than the resistance of the remaining resistor R coupled to the source of the diode-connected transistor M2. The drain voltage of transistor M5 will therefore be equal to VDD-2Vt-I*(R+Roffset), ensuring that the RCP switch transistor 115 remains on during the normal mode of operation. Additionally, the comparator 125 may be configured to have a relatively low threshold voltage to further ensure that the RCP switch transistor 115 is on during the normal operation mode. When the power supply voltage VDD falls after the normal operation mode transitions to the deep sleep mode, there will be a period of time during which the power supply voltage VDD discharges to ground but may still supply power to the comparator 125 (for the sake of clarity, the power coupling of the comparator 125 to the power rail 105 is not shown in FIG. 1). During the discharge period, the charge stored on the capacitor C in the reference voltage circuit 130 will eventually cause the drain voltage of transistor M5 to be higher than the drain voltage of transistor M4. In response to the reference voltage Vref becoming greater than the voltage of the power rail 105, the comparator 125 thereby drives its output low so that the inverter 135 drives the gate voltage of the RCP switch transistor 115 high to turn it off. When the rail voltage becomes completely discharged, there is no more power to drive the comparator 125, but this does not matter because the output signal of the comparator 125 will remain discharged. The resulting reverse current switch protection is quite advantageous because it does not require additional terminals or externally generated control signals. In addition, a single RCP circuit 100 may protect multiple I/O buffers. In one embodiment, the inverter 135 may be considered to include a means for opening the RCP switch in response to a discharge of the supply voltage below a reference voltage, the means being coupled to the buffer power supply node to receive power. An example system for an integrated circuit will now be discussed.
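Before turning to the example system, here is a minimal numeric sketch of the comparator decision derived above: the positive input sits at (VDD-Vt)-I*R and the negative input at (Vcap-Vt)-I*(R+Roffset), where Vcap is the VDD-Vt voltage held on capacitor C. The formulas follow the text, but every component value below is an assumption chosen only to exercise them.

```python
# Hedged numeric sketch of the comparator decision described above.
# The formulas follow the text; every component value is an assumption.

VT = 0.5           # NMOS threshold voltage, Volts (assumed)
R = 10e3           # resistor R, ohms (assumed)
ROFFSET = 2e3      # offset resistor Roffset, ohms (assumed)
R2 = 100e3         # mirror-setting resistor R2, ohms (assumed)
VDD_NOMINAL = 1.8  # normal-mode supply, Volts (assumed)

def rcp_switch_closed(vdd_rail, vcap=VDD_NOMINAL - VT):
    """True when comparator 125 output is high, i.e., the inverter grounds
    the PMOS gate and the RCP switch couples the rail to the buffer node."""
    i = vdd_rail / R2                        # mirror current I = VDD / R2
    v_pos = vdd_rail - VT - i * R            # drain of M4 (positive input)
    v_neg = (vcap - VT) - i * (R + ROFFSET)  # drain of M5 (negative input)
    return v_pos > v_neg

print(rcp_switch_closed(1.8))  # True: normal mode, rail coupled to buffer node
print(rcp_switch_closed(0.9))  # False: the rail is discharging while capacitor C
                               # holds Vref up, so the switch opens
```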
The protected I / O buffer in the MDM 205 is a general purpose I / O (GPIO) buffer 220 that interfaces with a corresponding GPIO buffer 225 in the AP host 210. Advantageously, AP host 210 does not require software modifications regarding the operation of RCP circuit 100. Thus, one or more of the GPIO buffers 225 may drive its output signal high when the supply voltage VDD falls in the MDM 205. The RCP circuit 100 is used to isolate the power rail 105 from the resulting high voltage on the buffer power node 110. Power rail 105 may be powered through power terminals 230 driven by a power management integrated circuit (PMIC) 215. The method of operation for reverse current switching will now be discussed.3 is a flowchart of an example method of operation for a reverse current protection circuit in accordance with an embodiment of the present disclosure. The method includes an act 300 performed when the supply voltage is discharged with respect to the first integrated circuit and act 300 includes receiving a voltage signal from the remote integrated circuit to power the buffer supply voltage in the first integrated circuit. Receiving asserted voltage signal at the buffer power supply node 110 when the power supply voltage VDD is discharged on the power rail 105 in the RCP circuit 100 of FIG. 1 is an example of the action 300. The method also includes an act 305 performed in response to the discharging of the mains voltage in act 300. Act 305 includes generating a switch-off control signal in a circuit powered by the buffer supply voltage. Charging the output signal of the inverter 135 to turn off the RCP switch transistor 115 is an example of act 305. Finally, the method includes an act 310 performed in response to the generation of the switch-off signal and act 310 includes disconnecting the switch to isolate a supply rail carrying the supply voltage from a buffer supply voltage node carrying the buffer supply voltage. Turning off the RCP current switching transistor 115 is an example of act 310.As one of ordinary skill in the art will appreciate and depending on the particular application at hand, many modifications, substitutions and changes may be made in the materials, devices, arrangements, and methods of use of the apparatus of the present disclosure without departing from the spirit of the disclosure And range. In view of this, the scope of the disclosure should not be limited to the specific embodiments illustrated and described herein, as merely as examples of the disclosure, but rather should be fairly equivalent to the appended claims and their functional equivalents. |
A method includes receiving a fault notification message associated with a fault condition in a manufacturing system. Workpiece identification information is determined for at least one workpiece associated with the fault condition based on the fault notification message. Fault state data is collected based on the workpiece identification information. A fault record including the workpiece identification information and the fault state data is stored. A manufacturing system includes a plurality of tools for processing workpieces, a fault database, and a fault monitor. The fault monitor is configured to receive a fault notification message associated with a fault condition in the manufacturing system, determine workpiece identification information for at least one of the workpieces associated with the fault condition based on the fault notification message, collect fault state data based on the workpiece identification information, and store a fault record including the workpiece identification information and the fault state data in the fault database. |
What is claimed is:1. A method, comprising:receiving a fault notification message associated with a fault condition in a manufacturing system;determining workpiece identification information for at least one workpiece associated with the fault condition based on the fault notification message;collecting fault state data based on the workpiece identification information; andstoring a fault record including the workpiece identification information and the fault state data.2. The method of claim 1, wherein the fault notification message includes the workpiece identification information, and determining the workpiece identification information further comprises extracting the workpiece identification information from the fault notification message.3. The method of claim 1, wherein the fault notification message includes process tool identification information associated with a process tool related to the fault condition, and determining the workpiece identification information further comprises identifying at least one workpiece processed by the process tool during a time period proximate the fault condition.4. The method of claim 1, wherein collecting the fault state data further comprises collecting workpiece state data.5. The method of claim 4, wherein collecting the workpiece state data further comprises collecting metrology data associated with the workpiece.6. The method of claim 4, wherein collecting the workpiece state data further comprises collecting context data associated with the workpiece.7. The method of claim 6, wherein collecting the context data further comprises collecting at least one of process step data and processing history data.8. The method of claim 1, wherein collecting the fault state data further comprises collecting process tool state data.9. The method of claim 8, wherein collecting the process tool state data further comprises collecting data associated with a process run of the process tool executing proximate the fault condition.10. The method of claim 8, wherein collecting the process tool state data further comprises collecting at least one of tool maintenance history and tool fault history.11. The method of claim 1, further comprising initiating a scheduling request to gather additional fault state data responsive to the fault notification message.12. The method of claim 11, wherein initiating the scheduling request further comprises initiating a request for metrology data associated with the workpiece.13. The method of claim 11, wherein initiating the scheduling request further comprises initiating a request for image data associated with the workpiece.14. A manufacturing system, comprising:a plurality of tools for processing workpieces;a fault database;a fault monitor configured to receive a fault notification message associated with a fault condition in the manufacturing system, determine workpiece identification information for at least one of the workpieces associated with the fault condition based on the fault notification message, collect fault state data based on the workpiece identification information, and store a fault record including the workpiece identification information and the fault state data in the fault database.15. The system of claim 14, wherein the fault notification message includes the workpiece identification information.16. 
The system of claim 14, wherein the fault notification message includes process tool identification information associated with a process tool related to the fault condition, and the fault monitor is further configured to identify at least one of the workpieces processed by the process tool during a time period proximate the fault condition.17. The system of claim 14, wherein the fault state data further comprises workpiece state data.18. The system of claim 17, wherein the workpiece state data further comprises metrology data associated with the workpiece.19. The system of claim 17, wherein the workpiece state data further comprises context data associated with the workpiece.20. The system of claim 19, wherein the context data further comprises at least one of process step data and processing history data.21. The system of claim 14, wherein the fault state data further comprises process tool state data.22. The system of claim 21, wherein the process tool state data further comprises data associated with a process run of the process tool executing proximate the fault condition.23. The system of claim 21, wherein the process tool state data further comprises at least one of tool maintenance history and tool fault history.24. The system of claim 14, wherein the fault monitor is further configured to initiate a scheduling request to gather additional fault state data responsive to the fault notification message.25. The system of claim 24, wherein the scheduling request further comprises a request for metrology data associated with the workpiece.26. The system of claim 24, wherein the scheduling request further comprises a request for image data associated with the workpiece.27. A system, comprising:means for receiving a fault notification message associated with a fault condition in a manufacturing system;means for determining workpiece identification information for at least one workpiece associated with the fault condition based on the fault notification message;means for collecting fault state data based on the workpiece identification information; andmeans for storing a fault record including the workpiece identification information and the fault state data. |
BACKGROUND OF THE INVENTION1. Field of the InventionThis invention relates generally to the field of semiconductor device manufacturing and, more particularly, to a method and apparatus for capturing fault state data.2. Description of the Related ArtThere is a constant drive within the semiconductor industry to increase the quality, reliability and throughput of integrated circuit devices, e.g., microprocessors, memory devices, and the like. This drive is fueled by consumer demands for higher quality computers and electronic devices that operate more reliably. These demands have resulted in a continual improvement in the manufacture of semiconductor devices, e.g., transistors, as well as in the manufacture of integrated circuit devices incorporating such transistors. Additionally, reducing the defects in the manufacture of the components of a typical transistor also lowers the overall cost per transistor as well as the cost of integrated circuit devices incorporating such transistors.Generally, a set of processing steps is performed on a lot of wafers using a variety of processing tools, including photolithography steppers, etch tools, deposition tools, polishing tools, rapid thermal processing tools, implantation tools, etc. The technologies underlying semiconductor processing tools have attracted increased attention over the last several years, resulting in substantial refinements. However, despite the advances made in this area, many of the processing tools that are currently commercially available suffer certain deficiencies. In particular, such tools often lack advanced process data monitoring capabilities, such as the ability to provide historical parametric data in a user-friendly format, as well as event logging, real-time graphical display of both current processing parameters and the processing parameters of the entire run, and remote, i.e., local site and worldwide, monitoring. These deficiencies can engender non-optimal control of critical processing parameters, such as throughput, accuracy, stability and repeatability, processing temperatures, mechanical tool parameters, and the like. This variability manifests itself as within-run disparities, run-to-run disparities and tool-to-tool disparities that can propagate into deviations in product quality and performance, whereas an ideal monitoring and diagnostics system for such tools would provide a means of monitoring this variability, as well as providing means for optimizing control of critical parameters.One technique for improving the operation of a semiconductor processing line includes using a factory wide control system to automatically control the operation of the various processing tools. The manufacturing tools communicate with a manufacturing framework or a network of processing modules. Each manufacturing tool is generally connected to an equipment interface. The equipment interface is connected to a machine interface which facilitates communications between the manufacturing tool and the manufacturing framework. The machine interface can generally be part of an advanced process control (APC) system. The APC system initiates a control script based upon a manufacturing model, which can be a software program that automatically retrieves the data needed to execute a manufacturing process. 
Often, semiconductor devices are staged through multiple manufacturing tools for multiple processes, generating data relating to the quality of the processed semiconductor devices.Statistical process control (SPC) techniques are commonly used to monitor the operation of manufacturing processes, systems, or individual manufacturing tools. Commonly, various measurements related to the process being monitored are compiled and analyzed. Fault detection data may include data related to the manufactured devices as well as data related to the operating parameters of the tools. For example, physical measurements, such as line width, or electrical measurements, such as contact resistance, may be used to detect faults in fabricated devices. Tool parameters, such as chamber pressure, temperature, voltage, reactive gas makeup, etc., may be evaluated during the processing of devices in the tool to detect fault conditions with the tools themselves.Typically, there is a delay between the time a fault condition is determined and the time corrective action and/or troubleshooting activities are performed. During this delay, the conditions in the fabrication facility (i.e., process and/or tool state) may change from what was present at the time the fault condition was generated. While some of the data regarding the events leading up to the fault condition is archived, other data is not stored at all or is only available for a limited time due to the volume of the incoming data. Even for the data that is archived, it is often time-consuming to access all the relevant data sources and correlate the data to the wafer having the associated fault condition. Hence, troubleshooting activities are often hampered by the difficulties associated with incomplete data and the difficulty in gathering the existing data.The present invention is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above.SUMMARY OF THE INVENTIONOne aspect of the present invention is seen in a method that includes receiving a fault notification message associated with a fault condition in a manufacturing system. Workpiece identification information is determined for at least one workpiece associated with the fault condition based on the fault notification message. Fault state data is collected based on the workpiece identification information. A fault record including the workpiece identification information and the fault state data is stored.Another aspect of the present invention is seen in a manufacturing system including a plurality of tools for processing workpieces, a fault database, and a fault monitor. The fault monitor is configured to receive a fault notification message associated with a fault condition in the manufacturing system, determine workpiece identification information for at least one of the workpieces associated with the fault condition based on the fault notification message, collect fault state data based on the workpiece identification information, and store a fault record including the workpiece identification information and the fault state data in the fault database.BRIEF DESCRIPTION OF THE DRAWINGSThe invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which:FIG. 1 is a simplified block diagram of a manufacturing system in accordance with one illustrative embodiment of the present invention; and FIG.
2 is a simplified flow diagram of a method for capturing fault state data in accordance with another illustrative embodiment of the present invention.While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTSIllustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.Referring to FIG. 1, a simplified block diagram of an illustrative manufacturing system 10 is provided. In the illustrated embodiment, the manufacturing system 10 is adapted to fabricate semiconductor devices. Although the invention is described as it may be implemented in a semiconductor fabrication facility, the invention is not so limited and may be applied to other manufacturing environments. The techniques described herein may be applied to a variety of workpieces or manufactured items including, but not limited to, microprocessors, memory devices, digital signal processors, application specific integrated circuits (ASICs), or other similar devices. The techniques may also be applied to workpieces or manufactured items other than semiconductor devices.A network 20 interconnects various components of the manufacturing system 10, allowing them to exchange information. The illustrative manufacturing system 10 includes a plurality of tools 30-80. Each of the tools 30-80 may be coupled to a computer (not shown) for interfacing with the network 20. The tools 30-80 are grouped into sets of like tools, as denoted by lettered suffixes. For example, the set of tools 30A-30C represents tools of a certain type, such as chemical mechanical planarization tools. A particular wafer or lot of wafers progresses through the tools 30-80 as it is being manufactured, with each tool 30-80 performing a specific function in the process flow. Exemplary processing tools for a semiconductor device fabrication environment include metrology tools, photolithography steppers, etch tools, deposition tools, polishing tools, rapid thermal processing tools, implantation tools, etc. The tools 30-80 are illustrated in a rank and file grouping for illustrative purposes only. In an actual implementation, the tools may be arranged in any order or grouping. 
Additionally, the connections between the tools in a particular grouping are meant to represent only connections to the network 20, rather than interconnections between the tools.A manufacturing execution system (MES) server 90 directs the high level operation of the manufacturing system 10. The MES server 90 monitors the status of the various entities in the manufacturing system 10 (i.e., lots, tools 30-80) and controls the flow of articles of manufacture (e.g., lots of semiconductor wafers) through the process flow. A database server 100 is provided for storing data related to the status of the various entities and articles of manufacture in the process flow. The database server 100 may store information in one or more data stores 110. The data may include pre-process and post-process metrology data, tool states, lot priorities, etc. As described in greater detail below, a fault monitor 120 operating on a computer 130 is provided for receiving notifications of fault conditions determined for wafers being processed in the manufacturing system 10. Upon receiving a fault notification message, the fault monitor 120 captures fault state data from various sources and stores the fault state data in a fault database 140 for future use in troubleshooting activities.Portions of the invention and corresponding detailed description are presented in terms of software, or algorithms and symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the ones by which those of ordinary skill in the art effectively convey the substance of their work to others of ordinary skill in the art. An algorithm, as the term is used here, and as it is used generally, is conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of optical, electrical, or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, or as is apparent from the discussion, terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.An exemplary information exchange and process control framework suitable for use in the manufacturing system 10 is an Advanced Process Control (APC) framework, such as may be implemented using the Catalyst system offered by KLA-Tencor, Inc. The Catalyst system uses Semiconductor Equipment and Materials International (SEMI) Computer Integrated Manufacturing (CIM) Framework compliant system technologies and is based on the Advanced Process Control (APC) Framework. 
CIM (SEMI E81-0699-Provisional Specification for CIM Framework Domain Architecture) and APC (SEMI E93-0999-Provisional Specification for CIM Framework Advanced Process Control Component) specifications are publicly available from SEMI, which is headquartered in Mountain View, Calif.The distribution of the processing and data storage functions amongst the different computers or workstations in FIG. 1 is generally conducted to provide independence and central information storage. Of course, different numbers of computers and different arrangements may be used.The fault monitor 120 may be configured to receive fault information from a variety of sources. Fault detection and classification (FDC) data may be generated based on data from sensors or metrology tools relating to product or process conditions in the manufacturing system 10. Exemplary product data includes physical measurement data (e.g., line width, process layer thickness, trench depth, planarity, uniformity, photoresist pattern data, etc.), electrical measurement data (e.g., resistivity, contact resistance, dielectric constant, drive current, leakage current, etc.), or tool state data (e.g., tool state data for an etch tool may include gas flow, chamber pressure, chamber temperature, voltage, reflected power, backside helium pressure, RF tuning parameters, etc.). The particular makeup of the FDC data is application dependent, and the application of the present invention is not limited to any particular type of FDC data. The specification of data sources for generating FDC data is well known to those of ordinary skill in the art. Various FDC techniques for processing the FDC data and determining fault conditions are well known to those of ordinary skill in the art, and for clarity and to avoid obscuring the present invention, they are not described in greater detail herein. Exemplary FDC techniques include control chart/control limit analysis, multivariate analysis, etc. An exemplary multivariate software tool for processing tool state data to determine tool health is ModelWare(TM) offered by Triant, Inc. of Nanaimo, British Columbia, Canada. An exemplary system for monitoring tool health is described in U.S. patent application Ser. No. 09/863,822, entitled "METHOD AND APPARATUS FOR MONITORING TOOL HEALTH," filed in the names of Elfido Coss Jr., Richard J. Markle, and Patrick M. Cowan, that is assigned to the assignee of the present application and incorporated herein by reference in its entirety.The particular construct of the fault notification message received by the fault monitor 120 depends on the particular fault condition being identified. The fault notification message may or may not identify the wafers affected by the fault condition. For example, for fault data generated based on metrology analysis, the wafer and/or lot identification numbers of the wafers being measured are typically known. Hence, a fault notification message derived from a metrology source would typically include wafer identification information.For other fault sources, the fault notification message may only include the tool 30-80 that experienced the fault condition and a timestamp indicating when the fault occurred. In such cases, the fault monitor 120 determines the wafer identification information based on the tool identification information and the timestamp information. The MES server 90 maintains a schedule of the processing activities in the manufacturing system 10. 
The fault monitor 120 queries the MES server 90 to determine which wafer, lot, or lots of wafers (i.e., depending on the particular type of process tool 30-80) was being processed during or near the time the fault condition was determined. The timestamp of the fault notification message may not always correspond exactly to a time period during which a particular wafer was being processed. For example, the fault condition may be determined between processing runs based on a tool health analysis of the previous run. The fault monitor 120 may have to consider multiple wafers or lots processed during the time frame proximate the fault determination before deciding which wafer(s) to designate as being suspect. Wafers may also have been processed under the same conditions that led to the fault after the suspect run but before the fault determination was made. The fault monitor 120 may also flag these subsequent wafers as being suspect. Along the same lines, wafers processed before the process run on which the fault determination was made may also be flagged by the fault monitor 120 as being suspect. In some cases, a fault analysis may not be conducted for each processing run. In such cases, all wafers processed since the last fault determination may be suspect.Based on the wafer identification information supplied with the fault notification message or determined as described above, the fault monitor 120 collects fault state data associated with the determined fault condition. The fault state data includes the data that was analyzed to determine the fault condition. Again, the particular type of FDC data depends on the type of fault being determined. The fault state data may include both wafer state data and tool state data. Exemplary wafer state data includes metrology data regarding the characteristics of the wafer (e.g., physical dimensions, defect rate, electrical measurements, etc.), previous fault conditions, image data (e.g., optical or scanning electron microscope images for construction of a library of visual effects/fault events), context data (e.g., process step, process history (tools 30-80 employed)), etc. Exemplary tool state data includes the data collected during the processing run (i.e., as described above), tool maintenance history, prior tool faults, etc.In some cases, the data desired for inclusion in the fault record may not be available. For example, metrology data or image data may not be collected for every wafer. The fault monitor 120 may determine, based in part on the particular type of fault condition determined, that additional data would be useful in troubleshooting the fault. Accordingly, the fault monitor 120 may send a request to the MES server 90 to schedule the desired activities for collecting additional metrology or image data. In one embodiment, the MES server 90 may notify the fault monitor 120 when the requested data is available. In another embodiment, the fault monitor 120 may maintain a queue of requested activities and periodically check to identify when the data becomes available.The fault monitor 120 generates a fault record in the fault database 140 that is available for future analysis and troubleshooting of the fault condition. The fault record may include fault state data associated with multiple wafers and/or multiple process tools 30-80 depending on how precisely the fault monitor 120 can narrow down the suspect wafers and/or tools 30-80. The fault monitor 120 may update the fault record as additional information becomes available. 
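The capture flow described above lends itself to a compact illustration. The following Python sketch is a minimal, hypothetical rendering of the fault monitor's behavior; the MESClient and FaultDatabase interfaces and their method names (lots_processed, tool_state_history, wafer_metrology, store) are assumptions introduced for illustration, not APIs defined by this disclosure:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class FaultNotification:
    tool_id: str
    timestamp: datetime
    fault_type: str
    wafer_ids: list[str] = field(default_factory=list)  # empty if not supplied

class FaultMonitor:
    def __init__(self, mes_client, fault_db, window=timedelta(minutes=30)):
        self.mes = mes_client    # hypothetical MES query interface
        self.db = fault_db       # hypothetical fault database interface
        self.window = window     # how far around the timestamp to search

    def handle_notification(self, note: FaultNotification) -> None:
        # Use wafer IDs carried in the message when present (e.g., metrology
        # faults); otherwise resolve them from the MES processing schedule.
        wafers = note.wafer_ids or self.resolve_suspect_wafers(note)
        record = {
            "fault_type": note.fault_type,
            "tool_id": note.tool_id,
            "timestamp": note.timestamp.isoformat(),
            "suspect_wafers": wafers,
            "tool_state": self.mes.tool_state_history(note.tool_id, note.timestamp),
            "wafer_state": {w: self.mes.wafer_metrology(w) for w in wafers},
        }
        self.db.store(record)  # the record may be updated as more data arrives

    def resolve_suspect_wafers(self, note: FaultNotification) -> list[str]:
        # Lots processed during or near the fault time are suspect, as are lots
        # processed under the same conditions before the determination was made.
        start, end = note.timestamp - self.window, note.timestamp + self.window
        return self.mes.lots_processed(note.tool_id, start, end)
```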
A user may also instruct the fault monitor 120 to update the fault record when the troubleshooting activities commence to determine if additional data that was not present at the time the fault was captured is now available.Turning now to FIG. 2, a simplified flow diagram of a method for capturing fault state data in accordance with another illustrative embodiment of the present invention is provided. In block 200, a fault notification message associated with a fault condition in a manufacturing system is received. In block 210, workpiece identification information is determined for at least one workpiece associated with the fault condition based on the fault notification message. In block 220, fault state data is collected based on the workpiece identification information. In block 230, a fault record including the workpiece identification information and the fault state data is stored.The capturing and storage of fault state data, as described above, has numerous advantages. The fault state data is collected immediately after determination of the fault condition to reduce the likelihood that the data would not be available when troubleshooting actually occurs. The automated data collection process also reduces the time required to conduct troubleshooting activities because the relevant data has been previously gathered. The automated data gathering system also allows data (e.g., metrology or image) that is not presently available to be collected and stored prior to commencement of the troubleshooting activities.The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below. |
An instruction cycle is determined from instructions stored in a cache [502], where the instruction cycle represents the sequence of instructions predicted to be executed by the processing device that are resident in the cache. The duration of the instruction cycle is estimated and one or more components of the processing device that are not expected to be used during the instruction cycle may be suspended for a portion or all of the duration [506]. The components may be suspended by, for example, clock gating or by isolating the components from one or more power domains [508]. |
1.A method comprising the steps of:receiving a plurality of instructions;identifying one or more components of a processor that are expected to be unused during execution of a sequence of instructions of the plurality of instructions; and suspending at least one component of the one or more identified components of the processor while the processor is executing at least a portion of the sequence of instructions.2.The method of claim 1 wherein suspending the at least one component comprises isolating the at least one component from one or more power domains.3.The method of claim 1 wherein suspending the at least one component comprises clock gating the at least one component.4.The method of claim 1 wherein suspending the at least one component comprises deciding whether to suspend the at least one component based on a power consumption expected to be saved by suspending the at least one component and a power consumption expected to be incurred by restarting the at least one component.5.A processor comprising:a plurality of components;a storage device for storing a plurality of instructions;means for identifying one or more components of the plurality of components that are not expected to be used during execution of a sequence of instructions of the plurality of instructions by the processor; and means for suspending at least one component of the one or more identified components during execution of at least a portion of the sequence of instructions by the processor.6.The processor of claim 5 wherein the means for suspending the at least one component comprises means for isolating the at least one component from one or more power domains.7.The processor of claim 5 further comprising means for estimating a duration before the processor is expected to request execution of an instruction that is not included in the sequence of instructions.8.The processor of claim 7 wherein the means for suspending the at least one component comprises means for clock gating the at least one component when the duration is greater than a first threshold.9.The processor of claim 7 wherein the means for suspending the at least one component comprises means for isolating the at least one component from one or more power domains when the duration is greater than a second threshold, wherein the second threshold is greater than the first threshold.10.The processor of claim 7 wherein the means for suspending the at least one component comprises means for isolating the at least one component from the one or more power domains when the duration is greater than the first threshold. |
System and method for predictive suspension of processor componentsTECHNICAL FIELDThe present invention relates to power saving in a processing device, and more particularly to suspending particular components of a processing device.BACKGROUNDPipelined processing typically provides improved performance due to the ability to process multiple instructions simultaneously with different components of the pipeline. Branch prediction techniques can be used to further improve performance, whereby a branch prediction unit of the processing device predicts whether the branch presented by an upcoming change of flow (COF) instruction will be selected. If the branch is predicted to be selected, the branched-to instruction can be preloaded into the instruction cache of the processing device, and the instruction can also be executed in whole or in part before the COF instruction is resolved. However, if a misprediction occurs, the pipeline usually is flushed and the result of any execution of the instructions associated with the misprediction is discarded. Therefore, erroneous predictions often result in considerable power consumption by the processor and in processing cycles that are not utilized efficiently.Regardless of the effectiveness of the processor's branch prediction, it will be appreciated that the instructions fetched and executed by a pipelined processing device may not require the use of one or more components of the processing device. To illustrate, the execution of instructions representing integer operations typically does not require the use of a floating point unit (FPU) of the processing device. Thus, even when an upcoming instruction stream does not require the use of certain components, the processing device often consumes power unnecessarily by maintaining those components in an enabled state. Accordingly, a system and method for reducing the power consumption of a processing device and reducing the losses associated with mispredicted branches would be beneficial.SUMMARY OF THE INVENTIONFIGS. 1 through 5 illustrate an exemplary system and technique for dynamically suspending processor components in order to reduce the power consumption of the processing device. In at least one embodiment, an instruction cycle is determined from instructions stored in a cache, wherein the instruction cycle represents a sequence of instructions that is predicted to be executed by the processing device and that is resident in the cache. The duration of the instruction cycle is estimated, and one or more components of the processing device that are not expected to be used during the instruction cycle may be suspended for a portion or all of the duration. For example, a component can be suspended by clock gating or by isolating the component from one or more power domains.BRIEF DESCRIPTION OF THE DRAWINGSThe objects and advantages of the invention will become apparent to those of ordinary skill in the art upon reference to the following detailed description and the accompanying figures, wherein like reference numerals are used to indicate like components, and wherein:FIG. 1 is a block diagram illustrating an exemplary processing system in accordance with at least one embodiment of the present invention;FIG. 2 is a block diagram illustrating an exemplary suspend controller of the exemplary processing system of FIG. 
1 in accordance with at least one embodiment of the present invention; andFIGS. 3, 4 and 5 are flow diagrams illustrating exemplary methods for dynamically suspending components of a processing device in accordance with at least one embodiment of the present invention.DETAILED DESCRIPTIONReferring now to Figure 1, a processing system 100 in accordance with at least one embodiment of the present invention is illustrated. The processing system 100 includes a processing device, such as processor 102, coupled to one or more peripheral components, such as a main bus interface unit (MBIU) 104, a memory controller (MC) 106, and a system memory 108. The processor 102 includes an instruction pipeline 110 having an instruction cache 112, a prefetch module 114, a decoding module 116, an address calculation module 118, and an execution module 120. The execution module 120 can include, for example, an integer unit 122, a floating point unit (FPU) 124, a multimedia extension (MMX) unit 126, and the like. The processor 102 can further include a suspend controller 130, a decode cache 132, a second-level (L2) cache 134, and a bus controller (BC) 136 that serves as an interface between components of the processor 102 and the peripheral components.Based on the branch prediction information or other prefetch information, the prefetch module 114 retrieves the identified instruction from the instruction cache 112 or from the system memory 108 and provides the instruction to the decoding module 116. In one embodiment, the decoding module 116 partially or completely decodes the prefetched instruction and stores the instruction in the decode cache 132 and/or the L2 cache 134. The instructions in the decode cache 132 and/or the instructions fetched directly from the instruction cache 112 may then be provided to the address calculation module 118 and the execution module 120 for execution in accordance with a program flow.As discussed in more detail with respect to FIG. 2, in one embodiment, the decoding module 116 determines certain program flow characteristics associated with the instructions provided by the prefetch module 114 and stores representations of these program flow characteristics in the decode cache 132 with the corresponding decoded instructions. The program flow characteristics may include, but are not limited to: change-of-flow characteristics (i.e., whether the instruction may cause a branch); an indication of those processor components that are expected to be used to execute the instruction or that are expected to be used as a result of the execution of the instruction; and, if the instruction is a COF instruction, branch prediction characteristics, such as whether the branch is predicted to be selected, the number of times the branch is expected to be selected before the prediction is complemented, and the number of times a given COF instruction has been correctly resolved before the complemented prediction is implemented.Based on the instructions in the decode cache 132 and the associated program flow characteristics, in one embodiment, the suspend controller 130 can estimate the duration before an instruction that is not present in the decode cache 132 is expected to be requested. 
The suspend controller 130 can identify those components of the processor 102 that are expected to be unused during the estimated duration, and then suspend one or more of the identified components for a portion or all of the duration, in order to reduce the power consumed by the processor 102 during this duration. Likewise, the suspend controller 130 can identify one or more peripheral components that are not expected to be used during the duration, and suspend one or more of the peripheral components for some or all of the duration. As discussed below, additional power consumption involved in shutting down and restarting the processor component or peripheral components may be considered when deciding whether to suspend a particular component.In at least one embodiment, one or more processor components are suspended by isolating them from their respective power domains. To illustrate, the prefetch module 114 can be associated with the power domain 140, the integer unit 122 can be associated with the power domain 142, the FPU 124 can be associated with the power domain 144, the MMX unit 126 can be associated with the power domain 146, the L2 cache 134 can be associated with the power domain 148, and the system memory 108 can be associated with the power domain 150. The suspend controller 130 can thus disconnect one or more of the components 114, 122, 124, 126, 134 from the corresponding power domains 140-148 by asserting a suspend signal to one or more switching units 160 to 168 inserted between the power supply lines of the components 114, 122, 124, 126, 134 and the power domains 140-148. Similarly, the suspend signal can be deasserted to reconnect a component to the corresponding power domain. To illustrate, in response to the expectation that the MMX unit 126 will not be used during a particular instruction cycle, the suspend controller 130 can provide a suspend signal to the switch unit 166 to disconnect the MMX unit 126 from the power domain 146. When the instruction cycle comes to a conclusion, the suspend controller 130 can deassert the suspend signal to the switch unit 166 to reconnect the MMX unit 126 to the power domain 146. Likewise, in response to a determination that the system memory 108 is expected to be unused, the suspend controller 130 can provide a suspend signal to the switch unit 170 via, for example, the BC 136, the MBIU 104, and the MC 106 to disconnect the system memory 108 from the power domain 150. The switching units 160-170 can comprise any of a variety of suitable switching mechanisms, such as, for example, transistors.Although, for ease of illustration, FIG. 1 illustrates a one-to-one correspondence between a power domain and a processor component, it will be appreciated that multiple components can be associated with one power domain and/or multiple power domains can be associated with one component. In those instances where all components associated with a particular power domain are expected to be unused during a particular instruction cycle, the components can be suspended by isolating the power domain from its power source. 
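As a rough illustration of the suspend-signal mechanism just described, the sketch below models switching units that connect components to their power domains. The component names and the one-to-one component-to-domain mapping are simplifications of the FIG. 1 example, not a hardware interface:

```python
class SwitchUnit:
    """Models a switching unit (e.g., 160-168) between a component and its
    power domain; asserting the suspend signal disconnects the component."""
    def __init__(self, component: str):
        self.component = component
        self.connected = True

    def assert_suspend(self):
        self.connected = False  # isolate the component from its power domain

    def deassert_suspend(self):
        self.connected = True   # reconnect the component to its power domain

class SuspendController:
    def __init__(self, switch_units: dict[str, SwitchUnit]):
        self.switch_units = switch_units

    def suspend(self, components: set[str]):
        for name in components:
            self.switch_units[name].assert_suspend()

    def resume(self, components: set[str]):
        for name in components:
            self.switch_units[name].deassert_suspend()

# Example: suspend the MMX unit for an instruction cycle that does not use it.
units = {name: SwitchUnit(name) for name in ("prefetch", "int", "fpu", "mmx", "l2")}
ctrl = SuspendController(units)
ctrl.suspend({"mmx"})   # suspend signal asserted to the MMX switch unit
# ... the instruction cycle executes ...
ctrl.resume({"mmx"})    # suspend signal deasserted at the cycle's conclusion
```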
Alternatively, the processor components and peripheral components can be suspended using other techniques (e.g., clock gating) without departing from the spirit or scope of the present invention.Referring now to Figure 2, an illustrative embodiment of the decoding module 116, the decode cache 132, and the suspend controller 130 in accordance with at least one embodiment of the present invention is illustrated. In one embodiment, the decoding module 116 utilizes a program flow reference table 202 to determine the program flow characteristics associated with the instructions that the decoding module 116 stores in the decode cache 132. In the illustrated example, the table 202 includes a plurality of entries for at least a subset of the instruction types that are compatible with the processor 102. Each entry may have a prefix/opcode field 204 to identify a particular instruction type and one or more fields to indicate whether a corresponding processor or peripheral component is expected to be used during execution of an instruction having the corresponding instruction type or as a result of the execution of such an instruction. For example, the fields may include a memory field 206, an integer field 208, an FPU field 210, and an MMX field 212 to indicate whether an instruction having a particular instruction type is expected to use the system memory 108, integer unit 122, FPU 124, or MMX unit 126, respectively. In addition, each entry may include a COF field 213 to indicate whether the instruction type is a possible COF instruction.Upon receiving an instruction from the prefetch module 114, the decoding module 116 decodes the instruction to an appropriate extent and uses an identifier (e.g., an opcode or prefix of the instruction) to identify the corresponding instruction type entry in the table 202. Using the field values of the entries of the table 202, the decoding module 116 adds at least a portion of the decoded instruction (stored in field 214) and the appropriate program flow information to an entry of the decode cache 132. The program flow fields of the decode cache 132 can include a linear instruction pointer (LIP) field 216 to provide an index; a COF field 218 to indicate whether the instruction is a COF instruction; and fields 220 through 226 to indicate whether the instruction is expected to use the system memory 108, integer unit 122, FPU 124, or MMX unit 126, respectively. The entries of the decode cache 132 may further include a prediction field 228 to indicate whether the COF instruction is predicted to be selected; a predict count field 230 to indicate the number of times the COF instruction is expected to be selected before the prediction is complemented; a predict index field 232 to indicate the number of times the execution module 120 has correctly resolved the COF instruction before the complemented prediction is implemented; and a target field 234 to indicate the target of the COF instruction. 
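To make the field layout concrete, here is a minimal sketch of a decode cache entry carrying the program flow fields 216 through 234 described above; the Python types stand in for hardware field widths, which are not specified here:

```python
from dataclasses import dataclass

@dataclass
class DecodeCacheEntry:
    lip: int               # field 216: linear instruction pointer (index)
    decoded: bytes         # field 214: partially/fully decoded instruction
    is_cof: bool           # field 218: change-of-flow instruction?
    uses_memory: bool      # field 220: expected to use system memory 108
    uses_integer: bool     # field 222: expected to use integer unit 122
    uses_fpu: bool         # field 224: expected to use FPU 124
    uses_mmx: bool         # field 226: expected to use MMX unit 126
    predicted_taken: bool  # field 228: COF predicted to be selected?
    predict_count: int     # field 230: times selected before prediction is complemented
    predict_index: int     # field 232: correct resolutions before complementing
    target: int            # field 234: LIP of the COF target
```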
The decoding module 116 can determine an appropriate value for each of these fields by using the information from the table 202 and the branch prediction information provided by the branch prediction logic associated with the prefetch module 114.In one embodiment, the suspend controller 130 analyzes the decode cache 132 to identify one or more instruction cycles, each instruction cycle representing a sequence of instructions present in the decode cache 132 that is expected to be executed in sequence by the execution module 120, where the instruction cycle is terminated by a requirement for an instruction that is not present in the decode cache 132. Note that if the decode cache 132 includes instructions that form a loop within the decode cache 132, the sequence of instructions may include multiple occurrences of one or more instructions. Using the example in Figure 2, the illustrated instruction cycle starting at LIP 1 will include {LIP 1,2,3,4,5,6,7,2,3,4,5,6,7,2,3,4,5,6,7,8,9,10}. It will be appreciated that the sequence of instructions at LIPs 2 through 7 is repeated three times because: the value "1" in the COF field 218 and in the prediction field 228 indicates that the instruction at LIP 7 is a COF instruction and is predicted to be selected; the value "2" in the target field 234 indicates that the instruction will branch to the instruction at LIP 2; and the value "2" in the predict count field 230 indicates that the branch to the instruction at LIP 2 will be taken twice.After identifying the expected instruction cycle, the suspend controller 130 can determine the duration of processing time that is expected to be spent if the processor 102 does actually execute the instructions as predicted in the instruction cycle. In one embodiment, the duration is determined in terms of clock cycles. By using an average number of clock cycles per instruction, the suspend controller 130 can arrive at the total number of clock cycles for the instruction cycle. To illustrate, if each instruction is expected to average 2.4 clock cycles, the suspend controller 130 can determine that a sequence of 30 instructions will take 72 clock cycles (i.e., 2.4 clock cycles/instruction x 30 instructions). Alternatively, the suspend controller 130 can determine the number of clock cycles used by the processor 102 to execute each instruction of the instruction cycle according to its instruction type to arrive at the total number of clock cycles for the instruction cycle.By using fields 220 through 226 of the decode cache 132 entries corresponding to the instructions in the instruction cycle, the suspend controller 130 can identify those components that are expected to be unused during execution of the instruction cycle. To illustrate, the instruction at LIP 2 has a value of "1" in fields 220 and 224, indicating that the system memory 108 and the FPU 124 are used to execute the instruction; however, no instruction in the illustrated instruction cycle provided above has a value of "1" in field 222 or 226, which indicates that neither the integer unit 122 nor the MMX unit 126 is expected to be used during execution of the instructions of the illustrated instruction cycle. Based on the expected duration of the instruction cycle and the components that are not expected to be used during the execution of the instruction cycle, the suspend controller 130 may suspend the appropriate processor components and peripheral components by providing a suspend signal to the appropriate switch units (FIG. 1). 
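The cycle identification, duration estimate, and unused-component identification can be sketched as follows, reusing the DecodeCacheEntry fields from the previous sketch. The traversal policy (decrement a per-branch taken count, then fall through) is one plausible reading of the predict count field, LIPs are modeled as consecutive integers for simplicity, and the 2.4 cycles-per-instruction average is the example figure used above:

```python
def expand_instruction_cycle(cache: dict[int, DecodeCacheEntry], start_lip: int,
                             max_len: int = 1000) -> list[int]:
    """Follow predicted program flow through the decode cache, honoring the
    predict count field, until an instruction not in the cache is required."""
    sequence, lip = [], start_lip
    taken_left: dict[int, int] = {}            # remaining predicted-taken branches
    while lip in cache and len(sequence) < max_len:
        entry = cache[lip]
        sequence.append(lip)
        if entry.is_cof and entry.predicted_taken:
            remaining = taken_left.get(lip, entry.predict_count)
            if remaining > 0:                  # branch predicted to be selected
                taken_left[lip] = remaining - 1
                lip = entry.target
                continue
        lip += 1                               # fall through to the next LIP
    return sequence

def estimate_duration(sequence: list[int], avg_cpi: float = 2.4) -> int:
    """Estimate clock cycles, e.g., 30 instructions x 2.4 CPI = 72 cycles."""
    return round(len(sequence) * avg_cpi)

def unused_components(cache: dict[int, DecodeCacheEntry],
                      sequence: list[int]) -> set[str]:
    """Union the usage fields 220-226 over the cycle; anything never marked
    as used is a candidate for suspension."""
    used: set[str] = set()
    for lip in sequence:
        e = cache[lip]
        used |= {name for name, flag in (("memory", e.uses_memory),
                                         ("integer", e.uses_integer),
                                         ("fpu", e.uses_fpu),
                                         ("mmx", e.uses_mmx)) if flag}
    return {"memory", "integer", "fpu", "mmx"} - used
```

Applied to the FIG. 2 example, an entry at LIP 7 with predicted_taken set, a predict count of 2, and a target of 2 yields the sequence {1,...,7,2,...,7,2,...,7,8,9,10} discussed above.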
Referring now to Figure 3, an exemplary method 300 for identifying and suspending processor components and peripheral components in accordance with at least one embodiment of the present invention is illustrated. The method 300 begins at step 302 where the duration of an instruction cycle identified in the decode cache 132 is determined. As noted above, this duration can be represented by the number of clock cycles that are expected to be used during execution of the sequence of instructions of the instruction cycle.In step 304, the suspend controller 130 identifies one or more components that may be suspended for some or all of the predicted duration. However, it will be appreciated that there may be power and clock cycles wasted on shutting down and subsequently restarting certain components. For example, some components may need to be initialized at startup, and may require tens, hundreds, or thousands of clock cycles and non-negligible power when powered back on. Thus, in step 306, the suspend controller 130 makes a decision as to whether suspending a particular component provides a power saving advantage, taking into account the power and time costs of suspending that component. This evaluation weighs the power saved by suspending the component against the power consumed to shut down and then restart the component. For example, if the expected instruction cycle will be 100 clock cycles long, and a particular component can save 0.001 milliwatts (mW) per cycle (or a total of 0.1 mW for that instruction cycle), shutting down the component saves power for only 90 of those clock cycles (assuming 10 cycles are spent on re-initialization); if the power cost of shutting down and restarting approaches or exceeds the resulting savings, the suspend controller 130 may choose to forgo suspending the component.In another embodiment, the relative value of suspending a component may be determined based on a comparison of the expected duration to one or more thresholds. For example, if the duration is less than a certain threshold (e.g., 100 clock cycles), the suspend controller 130 may choose to forgo suspending the component. Alternatively, the type of suspend operation used by the suspend controller 130 can be determined based on a comparison of the predicted duration to one or more threshold values. To illustrate, if the duration is less than a first threshold (e.g., 50 clock cycles), no suspend operation is performed. If the duration is greater than the first threshold but less than a second threshold (e.g., 200 clock cycles), a clock gating suspend operation can be performed. If the duration is greater than the second threshold, a suspend operation that isolates the component from its power domain can be performed. With a multi-step suspend operation, the power saving/shutdown cost balance can be tailored to a particular instruction cycle based on its predicted duration. It will be appreciated that the thresholds can be set to be specific to the particular component of the processing device to be suspended. For example, if a simple component (such as a multiplexer or adder) is to be suspended, the clock gating threshold can be set quite low (e.g., several clock cycles), whereas if a more complex component (such as the FPU) is to be suspended, the clock gating threshold can be set quite high (e.g., hundreds or thousands of clock cycles). 
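A minimal sketch of this multi-step decision logic, using the example figures above (50- and 200-cycle thresholds, 0.001 mW saved per cycle, 10 re-initialization cycles) as assumed parameters:

```python
def choose_suspend_operation(duration_cycles: int,
                             clock_gate_threshold: int = 50,
                             power_gate_threshold: int = 200) -> str:
    """Multi-step suspend decision: short cycles are left alone, moderate ones
    are clock gated, long ones are isolated from the power domain. Thresholds
    would be tuned per component (low for an adder, high for an FPU)."""
    if duration_cycles <= clock_gate_threshold:
        return "none"
    if duration_cycles <= power_gate_threshold:
        return "clock_gate"
    return "power_gate"

def suspension_worthwhile(duration_cycles: int, saved_mw_per_cycle: float,
                          restart_cycles: int, restart_cost_mw: float) -> bool:
    """Weigh the power saved while suspended against the shutdown/restart
    cost. E.g., a 100-cycle instruction cycle at 0.001 mW/cycle with 10
    re-initialization cycles saves 0.09 mW, so suspension pays off only if
    shutting down and restarting costs less than that."""
    saved_mw = (duration_cycles - restart_cycles) * saved_mw_per_cycle
    return saved_mw > restart_cost_mw
```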
If it is determined that suspending the component for at least a portion of the duration is advantageous, then in step 308, the suspend controller 130 suspends the component for the identified portion of the duration. At the end of the identified portion of the duration, the suspend controller 130 reinitializes and/or restarts the component, unless the component is identified as not being required for the next instruction cycle, in which case the suspend controller 130 can maintain the component in a suspended state. If multiple components are identified as suspendable, the suspend controller 130 may repeat steps 306 and 308 for each identified component in step 310.Referring now to Figure 4, a method 400 is illustrated. The method 400 begins at step 402 where a plurality of instructions are received. In step 404, one or more components of the processor that are expected to be unused during execution of a sequence of instructions of the plurality of instructions are identified. In step 406, one or more of the identified components are suspended during execution of at least a portion of the sequence of instructions by the processor.Referring now to Figure 5, a method 500 is illustrated. The method 500 begins at step 502 where a plurality of decoded instructions are stored in a cache. In step 504, one or more components of the processor that are expected to be unused during execution of a sequence of instructions of the plurality of instructions are identified. In step 506, the duration before the processor is expected to request execution of an instruction that is not in the sequence of instructions is estimated. In step 508, at least one component of the one or more identified components of the processor is suspended during execution of at least a portion of the sequence of instructions based on the estimated duration.The disclosure of the present invention is intended to be illustrative and not restrictive, and the appended claims are intended to cover other embodiments that will be apparent to those skilled in the art. The scope of the invention, therefore, is to be determined by the appended claims and their equivalents. |
A system and method for processing instructions in a computer system comprising a processor and a co-processor communicatively coupled to the processor. Instructions are processed in the processor in an instruction pipeline. In the instruction pipeline, instructions are processed sequentially by an instruction fetch stage, an instruction decode stage, an instruction execute stage, a memory access stage and a result write-back stage. If a co-processor instruction is received by the processor, the co-processor instruction is held in the core processor until the co-processor instruction reaches the memory access stage, at which time the co-processor instruction is transmitted to the co-processor. |
A method of processing instructions in a computer system comprising a processor and a co-processor communicatively coupled to the processor, the method comprising:(a) processing instructions in the processor in an instruction pipeline wherein instructions are processed sequentially by an instruction fetch stage, an instruction decode stage, an instruction execute stage, a memory access stage and a result write-back stage; and(b) if a co-processor instruction is received by the processor, holding the co-processor instruction in the core processor until the co-processor instruction reaches the memory access stage and then transmitting the co-processor instruction to the co-processor.The method of claim 1 further comprising a step (c) of:(c) if an interrupt or exception is received by the processor, canceling an instruction that is in the instruction execute stage when the interrupt is received and reissuing the instruction starting at the instruction fetch stage.The method of claim 2 wherein if an interrupt or exception is received by the processor when a co-processor instruction is in the instruction execute stage, the co-processor instruction is canceled before the co-processor instruction is transmitted to the co-processor and the co-processor instruction is reissued starting at the instruction fetch stage.The method of claim 1 wherein step (b) comprises: if a co-processor instruction is received by the processor, performing steps of:(b)(i) executing the co-processor instruction during the instruction execute stage, but not transmitting the co-processor instruction to the co-processor during the instruction execute stage; and(b)(ii) transmitting the co-processor instruction to the co-processor during the memory access stage.The method of claim 4 wherein the processor includes a co-processor interface communicatively coupled to the co-processor, wherein executing step (b)(i) comprises providing the co-processor instruction to the co-processor interface during the instruction execute stage and wherein transmitting step (b)(ii) comprises transmitting the co-processor instruction from the co-processor interface to the co-processor during the memory access stage.The method of claim 1 wherein the co-processor is a processing element for which sending the same co-processor instruction to the co-processor twice decreases the performance of the co-processor.The method of claim 1 wherein the processor is a reduced instruction set computer (RISC) processor.The method of claim 1 wherein the system is a media decoding system, the processor is a core decoder processor and the co-processor is a decoding accelerator adapted to assist the core processor with a decoding function.A computer system comprising:a processor adapted to process instructions in an instruction pipeline wherein instructions are processed sequentially by an instruction fetch stage, an instruction decode stage, an instruction execute stage, a memory access stage and a result write-back stage; anda co-processor communicatively coupled to the processor and adapted to perform processing tasks in response to co-processor instructions provided by the processor; wherein when the processor processes a co-processor instruction, the processor holds the co-processor instruction until the co-processor instruction reaches the memory access stage and then transmits the co-processor instruction to the co-processor.The system of claim 9 wherein if an interrupt is received by the processor, the processor is adapted to cancel an instruction that is in 
the instruction execute stage when the interrupt is received and to reissue the instruction starting at the instruction fetch stage.The system of claim 10 wherein if an interrupt is received by the processor when a co-processor instruction is in the instruction execute stage, the processor is adapted to cancel the co-processor instruction before the co-processor instruction is transmitted to the co-processor and to reissue the co-processor instruction starting at the instruction fetch stage.The system of claim 9 wherein the processor is adapted to execute a co-processor instruction during the instruction execute stage, but wherein the processor is adapted not to transmit the co-processor instruction to the co-processor until the memory access stage.The system of claim 9 wherein the processor includes a co-processor interface communicatively coupled to the co-processor, wherein the processor is adapted to provide a co-processor instruction to the co-processor interface during the instruction execute stage and wherein the co-processor interface is adapted to transmit the co-processor instruction to the co-processor during the memory access stage.The system of claim 9 wherein the co-processor is a processing element for which sending the same co-processor instruction to the co-processor twice decreases the performance of the co-processor.The system of claim 9 wherein the processor is a reduced instruction set computer (RISC) processor.The system of claim 9 wherein the system is a media decoding system, the processor is a core decoder processor and the co-processor is a decoding accelerator adapted to assist the core processor with a decoding function. |
INCORPORATION BY REFERENCE OF RELATED APPLICATIONS The following U.S. Patent Applications are related to the present application and are hereby specifically incorporated by reference: U.S. Patent Application No. 10/114,679 filed April 1, 2002, entitled "METHOD OF OPERATING A VIDEO DECODING SYSTEM"; U.S. Patent Application No. 10/114,797 filed April 1, 2002, entitled "METHOD OF COMMUNICATING BETWEEN MODULES IN A DECODING SYSTEM"; U.S Patent Application No. 10/114,886 filed April 1, 2002, entitled "MEMORY SYSTEM FOR VIDEO DECODING SYSTEM"; U.S. Patent Application No. 10/114,619 filed April 1, 2002, entitled "INVERSE DISCRETE COSINE TRANSFORM SUPPORTING MULTIPLE DECODING PROCESSES"; and U.S. Patent Application No. 10/114,798 filed April 1, 2002, entitled "VIDEO DECODING SYSTEM SUPPORTING MULTIPLE STANDARDS"; all filed on even date herewith. The following Provisional U.S. Patent Applications are also related to the present application and are hereby specifically incorporated by reference: U.S. Provisional Patent Application No. 60/369,144 filed April 1, 2002, entitled "VIDEO DECODING SYSTEM HAVING A PROGRAMMABLE VARIABLE LENGTH DECODER"; U.S. Provisional Patent Application No. 60/369,014 filed April 1, 2002, entitled "PROGRAMMABLE VARIABLE LENGTH DECODER"; U.S. Provisional Patent Application No. 60/369,210 filed April 1, 2002, entitled "DMA ENGINE HAVING MULTI-LEVEL COMMAND STRUCTURE"; and U.S. Provisional Patent Application No. 60/369,217 filed April 1, 2002, entitled "INVERSE QUANTIZER SUPPORTING MULTIPLE DECODING PROCESSES"; all filed on even date herewith. FIELD OF THE INVENTION The present invention relates generally to media decoding systems and, more particularly, to a core processor for a decoding system. BACKGROUND OF THE INVENTION A typical reduced instruction set computer (RISC) processor processes instructions in an instruction pipeline. In a typical instruction processing pipeline, instructions are processed sequentially in stages. Typical pipelines contain 3-9 stages. One existing pipeline architecture is a five-stage pipeline that includes an instruction fetch stage, during which the instruction is fetched from memory; an instruction decode stage; an instruction execute stage; a memory access stage, during which memory is accessed for a load/store instruction; and a result write-back stage, during which the result is written to a register file in the processor. Some RISC processors include a co-processor interface through which the RISC processor can intimately issue instructions to another processing element. A processing element that is connected to the RISC processor via the co-processor interface is thus referred to as a co-processor. In existing RISC processors, when an instruction that is being processed is a co-processor instruction, the co-processor instruction is transmitted to the co-processor during the instruction execute stage.In existing RISC processors, the instruction executed at each stage can raise exceptions or be interrupted. But in order to maintain a manageable order, the exception or interrupt is raised at a fixed stage, say at the memory access stage. This stage will be called the exception raising stage subsequently. When such an event occurs, all instructions before the write-back stage are canceled, and the processor restarts the execution of the instructions starting with the instruction that was in the memory access stage when the exception/interrupt occurred. 
In such a scheme, if a co-processor instruction is in the instruction execute stage when an interrupt is received, the co-processor instruction will already have been sent to the co-processor when the interrupt is received. As a result of the interrupt, the co-processor instruction will be cancelled and reissued beginning again at the instruction fetch stage. When the reissued co-processor instruction reaches the instruction execute stage, the co-processor instruction will again be transmitted to the co-processor. Thus, the same co-processor instruction will have been transmitted to the co-processor twice. This condition can cause problems in co-processors in which an issued instruction cannot be cancelled or re-issued. One example of such a co-processor is one that has a consumable buffer storage. With such a co-processor, once a co-processor instruction is executed, it consumes a certain number of entries of the buffer.Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings. SUMMARY OF THE INVENTION One aspect of the present invention is directed to a method of processing instructions in a computer system comprising a processor and a co-processor communicatively coupled to the processor. Pursuant to the method, instructions are processed in the processor in an instruction pipeline. In the instruction pipeline, instructions are processed sequentially by an instruction fetch stage, an instruction decode stage, an instruction execute stage, a memory access stage and a result write-back stage. If a co-processor instruction is received by the processor, the co-processor instruction is held in the core processor until the co-processor instruction reaches the exception raising stage, at which time the co-processor instruction is transmitted to the co-processor.Another embodiment of the present invention is directed to a computer system having a processor and a co-processor. The processor processes instructions in an instruction pipeline. In the instruction pipeline, instructions are processed sequentially by an instruction fetch stage, an instruction decode stage, an instruction execute stage, a memory access stage and a result write-back stage. The co-processor is communicatively coupled to the processor and performs processing tasks in response to co-processor instructions provided by the processor. When the processor processes a co-processor instruction, the processor holds the co-processor instruction until the co-processor instruction reaches the exception raising stage, at which time the processor transmits the co-processor instruction to the co-processor.Another embodiment of the present invention is directed to a computer system having a processor and a co-processor. The processor processes instructions in an instruction pipeline. In the instruction pipeline, instructions are processed sequentially by an instruction fetch stage, an instruction decode stage, an instruction execute stage, a memory access stage and a result write-back stage. The co-processor is communicatively coupled to the processor and performs processing tasks in response to co-processor instructions provided by the processor. 
When the processor processes a co-processor instruction, it dispatches the instruction to the co-processor at the decode stage or the execute stage. The co-processor can begin executing the initial part of the co-processor instruction that does not change the state of the co-processor, but the remainder of the execution cannot begin until the co-processor instruction reaches the exception raising stage.It is understood that other embodiments of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein embodiments of the invention are shown and described only by way of illustration of the best modes contemplated for carrying out the invention. As will be realized, the invention is capable of other and different embodiments and its several details are capable of modification in various other respects, all without departing from the spirit and scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive. DESCRIPTION OF THE DRAWINGS These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:FIG. 1 is a functional block diagram of a computer system according to an illustrative embodiment of the present invention.FIG. 2 is a functional block diagram of a computer system according to an illustrative embodiment of the present invention.FIG. 3 is a chart showing a core processor instruction pipeline according to an illustrative embodiment of the present invention.FIG. 4 shows the structure of a co-processor instruction according to an illustrative embodiment of the present invention.FIG. 5 is a flow chart representing a method of processing an instruction in an instruction pipeline according to an illustrative embodiment of the present invention.FIG. 6 is a functional block diagram showing buffers of a co-processor interface and their interactions with a processor core, co-processor and hardware accelerators according to an illustrative embodiment of the present invention. DETAILED DESCRIPTION FIG. 1 is a functional block diagram of a computer system 100 according to an illustrative embodiment of the present invention. In the illustrative computer system 100 shown in FIG. 1, the computer system is a media decoding system. For purposes of illustration, aspects of the present invention will be described relative to such a media decoding system, and in particular, to a video decoding system. However, it is to be understood that aspects of the present invention can be implemented in any of a multitude of computer systems. Decoding system 100 includes a core decoder microprocessor 102, bridge module 104, co-processor 106, two hardware accelerators 108 and 110, decoder memory module 112, register bus 114 and system bus 116. Register bus 114 and system bus 116 communicate with an external host and external memory (not shown). In an illustrative embodiment, the co-processor comprises two independent and identical units. In an illustrative embodiment, the bridge module 104 is a "switch center" that arbitrates between different modules. 
The bridge module illustratively includes direct memory access (DMA) functionality. The acceleration modules 108 and 110 are hardware accelerators that accelerate special decoding tasks that would otherwise be bottlenecks for real-time media decoding if these tasks were handled by the core processor 102 alone. This helps the core processor 102 achieve the required performance. In an illustrative embodiment, the co-processor 106 is also a hardware accelerator that communicates with the core processor 102 via a co-processor interface of the core processor 102. In an illustrative embodiment wherein the decoding system 100 is a video decoding system, the co-processor 106 is a variable-length decoder and the acceleration modules perform one or more video decoding tasks such as inverse quantization, inverse discrete cosine transformation, pixel filtering, motion compensation and deblocking. The system of FIG. 1 is illustrative only. In accordance with the present invention, the decoding system 100 can have any number of hardware accelerators. The core processor 102 is the central control unit of the decoding system 100. In an illustrative embodiment of the present invention, the core processor 102 receives the data units from the bitstream to be decoded. The core processor 102 prepares the data for decoding. In an embodiment wherein the data being decoded is video data, the data unit comprises macroblock coefficient data. The core processor 102 extracts the control information and data for each data unit. In an illustrative embodiment of the present invention, the co-processor unit 106 assists the core processor 102 in decoding the header information. After extracting the control information and data for each data unit, the core processor 102 illustratively deposits the appropriate control information and data in decoder memory 112. In an alternative embodiment, the core processor 102 provides the processed control information and data directly to the co-processor 106 for processing by the co-processor 106. In an illustrative embodiment of the present invention, the core processor 102 also orchestrates a data unit processing pipeline (such as a macroblock processing pipeline) for the acceleration modules 106, 108 and 110 and fetches the required data from external memory via the bridge module 104. The core processor 102 also handles some data processing tasks. Where decoding system 100 is a video decoding system, picture level processing, including sequence headers, GOP headers, picture headers, time stamps, macroblock-level information except the block coefficients, and buffer management, is performed directly and sequentially by the core processor 102, without using the accelerators 106, 108, 110, except for using the variable-length decoder 106 to accelerate general bitstream parsing. The bridge module 104 arbitrates and moves data between decoder memory 112 and external memory. The bridge module 104 illustratively includes an internal bus network that includes arbiters and a direct memory access (DMA) engine. The bridge module 104 serves as an asynchronous interface to the system buses. Decoder memory 112 is used to store data unit data and other time-critical data used during the decoding process. The co-processor 106 and hardware accelerators 108 and 110 use the decoder memory 112 as the source and destination memory for their normal operation. In an illustrative embodiment of the present invention, decoder memory 112 is a static random access memory (SRAM) unit.
The external host has access to decoder memory 112, and the bridge module 104 can transfer data between decoder memory 112 and external memory. The arbiter for decoder memory 112 is in the bridge module 104. In an illustrative embodiment of the present invention, the core processor 102 is a reduced instruction set computer (RISC) processor, such as a MIPS processor, for example. FIG. 2 is a functional block diagram of computer system 100 wherein the core processor 102 is a RISC processor. FIG. 2 shows the interfaces of the core decoder processor 102 to other blocks in decoding system 100 according to an illustrative embodiment of the present invention. In FIG. 2, elements that are equivalent to elements in FIG. 1 are given the same reference numbers as their corresponding elements in FIG. 1. To achieve a higher performance level, module 106 is directly connected to the core processor 102 through a fast co-processor interface 138. Co-processor commands are sent to the co-processor 106 from the processor core 136 via co-processor instructions. Results and status are passed between the core processor 102 and the co-processor 106 through move instructions and copy instructions. The DMA block 104 routes requests between blocks in the decoding system 100. Core processor memory accesses are performed through the bus interface unit (BIU) 144 of the decoder processor 102 and DMA block 104. The core processor 102 is in charge of issuing memory requests to move data between the decoder memory 112 and external memory. Hardware accelerators 108 and 110 receive commands via memory-mapped writes from the core processor 102. In an illustrative embodiment of the present invention, the core 136 employs a MIPS32 instruction set architecture (ISA). The core 136 has a multiply-divide unit (MDU) that performs fast integer multiply, multiply-accumulate, multiply-subtract, and divide operations. The core 136 also includes a memory management unit (MMU) that uses fixed mapping. In an illustrative embodiment, the MMU does not implement a translation look-aside buffer (TLB) for page-based memory management, as is available in typical MIPS32 ISA processors. The core processor also includes a debugging support unit (DSU). In an illustrative embodiment, the DSU interfaces with an external EJTAG block, which in turn interfaces with a host CPU performing the debugging. The core processor 102 includes a load store unit (LSU) 142 that processes all types of load (read) and store (write) requests. The bus interface unit 144 processes all memory accesses. One or two data buffers are installed in BIU 144 for buffering incoming and outgoing data between the core processor 102 and decoder memory 112 and system memory. As an example, a write buffer stages any memory-bound data so that the core processor 102 need not wait until the store data are actually placed in the memory. Without such a buffer, in the case of cache misses and non-cacheable reads, the core processor 102 would be stalled until the data is returned. The core processor 102 also includes instruction and data caches 140. In an illustrative embodiment of the present invention, the core processor 102 is based on an instruction pipeline 300, as shown in FIG. 3. The illustrative instruction pipeline shown in FIG. 3 includes five stages. The five stages of the core processor pipeline are instruction fetch stage 310, instruction decode stage 320, instruction execute stage 330, memory access stage 340 and write-back stage 350.
There can be up to five instructions simultaneously being executed in the five-stage pipeline. In an alternative embodiment of the present invention, the core processor 102 is based on a six-stage pipeline that includes two instruction fetch stages. In the first instruction fetch stage, the instruction is retrieved from the instruction cache. In the second instruction fetch stage, branch handling and hit/miss resolution are performed with respect to the instruction. There can be up to six instructions simultaneously being executed in the six-stage pipeline. Referring again to FIG. 2, the co-processor 106 is directly connected to the core processor 102 through a co-processor interface 138 and the co-processor 106 is architected as a co-processor to the decoder processor 102. That is, the co-processor 106 can operate on a single-command basis where the decoder processor 102 issues a command (via a co-processor instruction) and waits (via a move-from-coprocessor instruction) until it is executed by the co-processor 106, without polling a status register in the co-processor 106 to determine completion of the command. In an illustrative embodiment, the core processor 102 makes available a co-processor usability bit in a system control status register to activate the co-processor 106. The core processor 102 detects co-processor instructions and passes them to the co-processor 106 to execute. The core processor 102 decodes and executes co-processor move instructions to transfer data between the registers in the co-processor interface 138 and the general registers in the processor core 136. The core processor 102 executes co-processor copy instructions to access the status of each block 106, 108, 110 with a general register in the core processor 102. In an illustrative embodiment, for co-processor instructions that move data between the registers in the co-processor 106 and the general registers in the core processor 102, the pipeline control in the core processor 102 will stall the instruction pipeline 300 when the data are not ready in the co-processor 106. The pipeline control in the core processor 102 may need to be synchronous with the co-processor 106 when issuing co-processor instructions. The co-processor interface 138 acts as the front end of the modules 106, 108, 110 to perform this type of synchronization with the core processor 102. In an illustrative embodiment of the present invention, the core processor 102 runs at twice the frequency of the other processing modules 106, 108, 110. In general, there are two types of co-processor instructions: i) instructions issued at the core processor 102 but executed completely at the co-processor 106, and ii) instructions that move data between the core processor 102 and the co-processor 106. Instructions of type i) will be called co-processor commands in this document. The core processor 102 sends co-processor commands to the co-processor 106 directly so that a certain task can be performed. The co-processor 106 decodes individual co-processor commands before execution. Instructions of type ii) include move-to-coprocessor (MTC) instructions, which cause data to be written from the core processor 102 to the co-processor 106, and move-from-coprocessor (MFC) instructions, which cause the core processor 102 to read data from the co-processor 106. In an illustrative embodiment of the present invention, the co-processor 106 includes two co-processor units, Unit0 and Unit1.
In this embodiment, the core processor 102 can only issue commands to one of the co-processor units at a time. The active co-processor unit is determined by the value of a co-processor unit-select register. In an exemplary embodiment, when the control register has a value 0, all co-processor instructions are sent to Unit0, and when the control register has a value 1, all co-processor instructions are sent to Unit1. The value in the control register is changed by a copy-control-to instruction and can be read by a copy-control-from instruction. For the rest of this discussion, the co-processor 106 referred to is the active co-processor unit under the current unit-select register value. In an illustrative embodiment wherein system 100 is a video decoding system, the co-processor 106 is a variable length decoder (VLD) that includes two VLD units, one of which is a programmable unit having a code RAM and the other of which is hard-coded to decode bitstreams according to a particular decoding standard. FIG. 4 shows the structure of a co-processor instruction according to an illustrative embodiment of the present invention wherein the core processor 102 is a 32-bit processor. Bits 26-31 (400) indicate that the instruction is a co-processor instruction. Bit 25 (402) indicates whether the instruction is a command (an instruction to be carried out entirely by the co-processor) or an instruction that moves data between the core processor 102 and the co-processor 106. In an illustrative embodiment, if bit 25 (402) is high, it indicates that the instruction is a co-processor command. Bits 0-24 indicate the function to be performed by the co-processor 106. Referring to the pipeline diagram in FIG. 3, at instruction decode stage 320, the instruction decoder in the core processor 102 decodes the instruction. The instruction decoder recognizes the instruction as a co-processor instruction by examining bits 26-31 (400), and recognizes that the instruction is a co-processor command because bit 25 is set. The core thus passes the co-processor function (bits 0-24 (404)) to the co-processor. To execute a co-processor command, the co-processor decodes this function field.
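The field layout of FIG. 4 can be made concrete with a short sketch. The following Python fragment extracts the three fields described above; the bit positions follow FIG. 4, but the COP_OPCODE constant and the function value are hypothetical placeholders, since the patent does not give the actual opcode encoding.

```python
# Sketch of co-processor instruction field extraction per FIG. 4.
# COP_OPCODE is a hypothetical placeholder; the patent does not
# specify the actual 6-bit encoding carried in bits 26-31.
COP_OPCODE = 0b010010  # hypothetical 6-bit co-processor opcode group

def decode_coprocessor_instruction(word: int):
    """Split a 32-bit instruction word into the fields of FIG. 4."""
    opcode = (word >> 26) & 0x3F      # bits 26-31 (400): instruction class
    is_command = (word >> 25) & 0x1   # bit 25 (402): 1 = co-processor command
    function = word & 0x1FFFFFF       # bits 0-24 (404): co-processor function

    if opcode != COP_OPCODE:
        return None  # not a co-processor instruction
    return {"command": bool(is_command), "function": function}

# Example: build a co-processor command word and decode it.
word = (COP_OPCODE << 26) | (1 << 25) | 0x000123
assert decode_coprocessor_instruction(word) == {"command": True,
                                                "function": 0x123}
```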
In the illustrative embodiment wherein the instruction pipeline of the core processor 102 is a five-stage pipeline, like the one shown in FIG. 3, there can be up to five instructions simultaneously being executed in the instruction pipeline (and up to six instructions in the six-stage pipeline of the alternative embodiment). As in most pipelined processors, an instruction can be cancelled due to interrupts or exceptions in any pipeline stage before the results of the instruction are committed in the write-back stage 350. When an instruction is cancelled, it is restarted from its instruction fetch stage 310. If an interrupt is detected in the execution stage 330, the interrupt is raised in the memory access stage 340, and the instructions in the stages from the fetch stage 310 through the execution stage 330 at the time the interrupt is detected will be cancelled and reissued. Because a co-processor command can change the co-processor state, reissuing a cancelled co-processor command is complicated to support in the co-processor 106. To resolve this problem, according to an illustrative embodiment of the present invention, the co-processor interface 138 of processor 102 holds on to a co-processor instruction until the instruction reaches the memory access stage 340, and only then dispatches the co-processor instruction to the co-processor 106. All co-processor instructions, including co-processor commands, MFC instructions and MTC instructions, are dispatched by the core processor 102 to the co-processor 106 at the memory access stage 340 of the core processor pipeline 300. If there is an interrupt or exception raised before the co-processor command reaches the memory access stage 340, the command is cancelled before it is sent to the co-processor 106. It will be reissued just like all other regular core processor instructions. If no interrupt or exception is raised before the co-processor instruction reaches the memory access stage 340, the co-processor command is sent to the co-processor 106 in the memory access stage 340. This ensures that the co-processor instruction is not cancelled after it is dispatched to the co-processor. As such, a co-processor instruction appears to the core processor 102 like a load or store instruction, in that it is executed in the memory access stage 340 and completed in the write-back stage 350. Holding the co-processor instruction until the memory access stage also avoids the ambiguity that would occur if a later-issued instruction arrived at the co-processor 106 before an earlier one. The data-moving co-processor instructions, such as MFC and MTC, are also dispatched to the co-processor 106 in the memory access stage, and they are interruptible even while they are waiting for the data to be ready. These co-processor instructions should have no side effect even when they are reissued in the core processor 102 and re-executed in the co-processor 106. FIG. 5 is a flow chart representing a method of processing an instruction in an instruction pipeline according to an illustrative embodiment of the present invention. The method of FIG. 5 implements a five-stage instruction pipeline corresponding to the one shown in FIG. 3. The pipeline stages are instruction fetch stage 510, instruction decode stage 520, instruction execute stage 530, memory access stage 540 and result write-back stage 550. In instruction fetch stage 510, the instruction to be processed is fetched by the instruction decoder of the core processor 102, as shown by block 512. As explained previously with respect to FIG. 3, an alternative embodiment of the present invention employs a six-stage pipeline having two instruction fetch stages. In instruction decode stage 520, the instruction is decoded by the instruction decoder of the core processor 102, as shown by block 522. The instruction decoder determines if the instruction is a co-processor instruction, as shown by decision block 524. If the instruction is not a co-processor instruction, in the instruction execute stage 530, the instruction is executed by the core processor 102, as shown by block 532. Then, in memory access stage 540, memory is accessed, as shown by block 542. Finally, in write-back stage 550, the result of the executed instruction is written back to the accessed memory location, as shown by block 552. If, on the other hand, the instruction is a co-processor instruction, in the instruction execute stage 530, the core 136 of the processor 102 provides the co-processor instruction to the co-processor interface 138 of the processor 102, as shown by block 534. But the co-processor interface 138 does not transmit the instruction to the co-processor 106 until memory access stage 540, as shown by block 544. The co-processor 106 executes the instruction during memory access stage 540, as shown by block 546. Then the result of the co-processor instruction is written back to memory in write-back stage 550, as shown by block 552.
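As an illustration of the dispatch rule just described, the following Python sketch models a five-stage pipeline in which a co-processor instruction is held by the co-processor interface until the memory access stage; an interrupt in any earlier stage cancels the instruction before anything reaches the co-processor, so the same instruction can never be transmitted twice. The class and function names are illustrative only, not taken from the patent.

```python
# Minimal sketch: a co-processor instruction is dispatched to the
# co-processor only when it reaches the memory access stage, so an
# interrupt in any earlier stage cancels it before dispatch.
STAGES = ["fetch", "decode", "execute", "mem_access", "write_back"]

class CoprocessorInterface:
    def __init__(self):
        self.dispatched = []

    def dispatch(self, instr):
        # Once sent here, the instruction cannot be cancelled.
        self.dispatched.append(instr)

def run(instr, interface, interrupt_at_stage=None):
    """Advance one co-processor instruction through the pipeline."""
    for stage in STAGES:
        if stage == interrupt_at_stage:
            return "cancelled"            # reissued later from fetch
        if stage == "mem_access":
            interface.dispatch(instr)     # point of no return
    return "completed"

iface = CoprocessorInterface()
# Interrupt during execute: nothing reaches the co-processor.
assert run("vld_cmd", iface, interrupt_at_stage="execute") == "cancelled"
assert iface.dispatched == []
# No interrupt: dispatched exactly once, avoiding double transmission.
assert run("vld_cmd", iface) == "completed"
assert iface.dispatched == ["vld_cmd"]
```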
Additionally, according to an illustrative embodiment of the present invention, a co-processor instruction will not itself generate any exceptions in the core processor 102 after it is decoded. On receiving a co-processor command, the co-processor 106 performs the task the command dictates and sets a command-done signal to indicate the completion of the command by the co-processor 106. The command-done signal can only be cleared by a subsequent co-processor command issued by the core processor 102. In the case where the co-processor 106 is a variable-length decoder, the co-processor 106 is capable of executing a variety of commands issued by the core processor, including, but not limited to, variable-length decode (VLD), get bits, grab bits, start code search, download code table (from main memory), transfer data to main memory, and VLD block decode. During the execution of a co-processor command, no new commands will be accepted by the co-processor 106. Therefore, before issuing new commands, the decoder processor 102 checks to see if an earlier issued command is finished by polling (via an MFC read instruction) a command status register in the co-processor 106 that generates the command-done signal. In an illustrative embodiment of the present invention, the co-processor 106 includes general co-processor registers and co-processor control registers. The general registers are used to hold the data and results of the co-processor commands. The control registers (such as the command status register mentioned above) are used to hold the status and error conditions of the co-processor. In an illustrative embodiment of the present invention, the control registers are also used to hold the status and error conditions of the other functional blocks of the system 100, such as hardware accelerators 108 and 110. The following discussion describes co-processor instructions used to transfer the contents of the co-processor registers to and from the general registers of the core processor 102. The move-to-coprocessor (MTC) instruction is a register write instruction that is used by the core processor 102 to load the contents of a general register residing in the core processor 102 into a general register in the co-processor 106. The MTC instruction includes one or more "set" bits that indicate the set of co-processor registers to copy the data to. The move-from-coprocessor (MFC) instruction is a register read instruction used by the core processor 102 to load the contents of a general register in the co-processor 106 into a general register in the core processor 102. One such co-processor register that the core processor 102 may need to read is the command status register. The MFC instruction includes one or more "set" bits that indicate the set of co-processor registers to copy the data from. The move-from-coprocessor instruction also includes a "wait" bit. The move-from-coprocessor instruction behaves differently with respect to reading a co-processor register depending on the value of the wait bit. In an illustrative embodiment wherein the co-processor 106 runs at half the speed of the core processor 102, a move-from-coprocessor command uses at least two core processor clock cycles for the co-processor to return the read result.
Therefore, in an illustrative embodiment, a move-from-coprocessor instruction stalls the core processor pipeline 300 by two core processor clock cycles. One use of the move-from-coprocessor instruction is the reading of a snapshot value of a register or simply reading back a previously programmed register for verification. In this case, the core processor 102 need not wait for the command to be completed before reading the source register. In such a case, the wait bit will be low, for example. When the wait bit is low, read results are instantly returned to the core processor 102 without considering whether the data being read have been updated, or whether the data are valid. The core processor 102 will get the read data instantly (subject to the fixed one or two clock cycle delay). Another use of the move-from-coprocessor instruction is the reading of results of a previously issued co-processor command or the status of the co-processor 106. In this case, a previously issued command may not have finished, in which case its results would not be valid, and the core processor 102 waits for the command to be completed before reading the source register. Therefore, in an illustrative embodiment, when the wait bit is set, the move-from-coprocessor instruction will not finish its operation, or will wait, until the data to be read are updated and become valid. This is done by checking the command-done flag in the co-processor 106 and finishing the read when the co-processor 106 is done with its current task. The co-processor interface 138 of the core processor 102 is responsible for MFC register decoding. Therefore, the co-processor interface 138 provides the appropriate stall control for the core processor pipeline. MFC instructions can be consecutive, with pipeline stalls between them.
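A minimal sketch of the two wait-bit behaviors follows, using a threading model that stands in for the hardware command-done flag; the names are illustrative assumptions, and only the behavior (immediate return versus stalling until command-done) comes from the description above.

```python
# Sketch of the MFC "wait" bit semantics: with wait=0 the register value
# is returned immediately; with wait=1 the read stalls until the
# co-processor's command-done flag indicates the data are valid.
import threading, time

class Coprocessor:
    def __init__(self):
        self.command_done = threading.Event()
        self.result_reg = 0

    def execute_command(self, delay_s=0.01):
        self.command_done.clear()          # cleared on a new command
        def work():
            time.sleep(delay_s)            # simulate the command running
            self.result_reg = 42
            self.command_done.set()        # completion sets command-done
        threading.Thread(target=work).start()

def mfc_read(cop: Coprocessor, wait_bit: int) -> int:
    if wait_bit:
        cop.command_done.wait()            # stall until data are valid
    return cop.result_reg                  # wait=0: return instantly

cop = Coprocessor()
cop.execute_command()
snapshot = mfc_read(cop, wait_bit=0)       # may read a stale value
final = mfc_read(cop, wait_bit=1)          # guaranteed valid: 42
assert final == 42
```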
At times when the co-processor 106 cannot complete certain tasks or encounters error conditions, it can raise an external interrupt to the core processor 102. This external interrupt can interrupt the core even if the core is stalled due to an outstanding MFC instruction. In an illustrative embodiment, the interrupt will be delayed for all other stall situations, such as a cache miss. The control registers in the co-processor 106 are used to keep the status and configuration settings of the co-processor 106. In an embodiment wherein the co-processor comprises two co-processor units, the co-processor includes a unit-select register to indicate which unit is active. A status register comprising one or more bits indicates the status of the active unit of the co-processor 106. In an illustrative embodiment, global status registers in the co-processor 106 are used to hold the status and error conditions, i.e., the condition code, of other functional blocks in the system 100, such as hardware accelerators 108 and 110. In an illustrative embodiment, a few bits per module are allocated to each hardware accelerator module 108 and 110 to indicate the condition code of the module. In an illustrative embodiment, except for the unit-select register, all of the co-processor control registers are read-only by the core processor 102. Each hardware accelerator resets its condition code bits in the global status registers when it receives commands from the core processor 102, and it sets the condition code bits i) when it completes the commands and is ready to receive another command or ii) when it encounters an error condition. The type of error can be retrieved from a register of the hardware accelerator block by issuing a read of the corresponding memory location. Copy instructions are used to access the control registers of the co-processor 106. A copy-control-from-coprocessor (CFC) instruction copies the contents of a specified control register to a specified general register in the core processor 102. A copy-control-to-coprocessor (CTC) instruction loads the contents of a specified general register in the core processor 102 into a specified control register in the co-processor 106. In addition to passing requests and data between the co-processor 106 and the core 136 of the processor 102, the co-processor interface 138 has buffers for holding the data and status in order to reduce access latency. FIG. 6 is a functional block diagram showing some of the buffers of the co-processor interface 138 and their interactions with the processor core 136, co-processor 106 and hardware accelerators 108 and 110. The buffers in the co-processor interface include command status buffer 150 (cmd), data buffer 152 (rd) and control registers 154 and 156. If a co-processor instruction has data to be returned to the core 136, the data are placed in the data buffer 152 at the interface 138 when the co-processor instruction is completed successfully. The command-done bit stored in command status register 150 is not accessible by the core processor 102. The command-done bit is cleared when a co-processor instruction is issued from the core 136 and is set to 1 when the instruction is completed by the co-processor. This allows the MFC instruction (with "wait" bit = 1) to start copying the data from the co-processor 106 to the target general register in the next cycle. Referring again to FIG. 2, the core processor 102 accesses the registers in the functional blocks such as hardware accelerators 108 and 110 through memory reads and writes. This is achieved by allocating a small sub-segment of memory in a noncacheable memory segment in the address space of the core processor 102. The mapping can be stored in the BIU 144 or the DMA bridge 104. In an illustrative embodiment of the present invention, when the core processor 102 wants to make sure all reads and writes are completed in the system 100, it issues a noncacheable read of a dummy register at a special location. The read is sent out to the DMA bridge 104 when the core processor's write buffer is empty. When the DMA bridge 104 receives the read, it will make sure all of the core processor's requests are completed, and then it will return a piece of dummy data to the core processor 102. The bus interface unit (BIU) 144 is in charge of all memory requests for the core processor 102. The BIU 144 includes a FIFO write buffer to stage outgoing data. The following byte-gathering scheme is implemented on the write buffer to minimize the number of memory store requests. If the core processor 102 performs a copy-back of a data cache line, the dirty line is placed in an entire new entry of the write buffer. If the core processor 102 performs a noncacheable write, the data is placed into the write buffer in one of the following ways. If it is at the beginning of a data entry of predetermined size, the data is placed in the next new entry, which will be referred to as the active entry. If the data are adjacent to the previously written data within a data entry boundary, the two requests are combined into one. Data in an entry are ready to be sent to the data bus if i) the data are the size of one full data entry, ii) the entry is not the active one, iii) an exception has occurred, or iv) the core processor 102 is about to send out a read request to the data bus. The instruction pipeline 300 of the core processor is stalled if a core processor memory store finds that the write buffer is full. The write buffer is flushed, i.e., all valid entries are written to the memory, before i) a core processor memory read request can be sent to the memory or ii) the core processor can complete a synchronize instruction. When the data of an entry are written to the data bus, all following valid entries are shifted down.
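The byte-gathering scheme can be sketched as follows. This Python fragment merges adjacent noncacheable writes into the active entry and flushes all valid entries before a read would go out; the entry size and the data structure are assumptions, as the patent leaves them unspecified.

```python
# Sketch of the byte-gathering write buffer: adjacent noncacheable
# writes within an entry boundary are combined into one request.
ENTRY_SIZE = 32  # hypothetical entry size in bytes

class WriteBuffer:
    def __init__(self):
        self.entries = []  # each entry: [start_addr, bytearray(data)]

    def noncacheable_write(self, addr, data: bytes):
        if self.entries:
            start, buf = self.entries[-1]               # the active entry
            boundary = (start // ENTRY_SIZE + 1) * ENTRY_SIZE
            # Combine the two requests into one if the new data directly
            # follow the previous data and fit within the entry boundary.
            if addr == start + len(buf) and addr + len(data) <= boundary:
                buf.extend(data)
                return
        self.entries.append([addr, bytearray(data)])    # new active entry

    def flush(self, bus):
        # All valid entries are written out, e.g. before a memory read
        # request is sent or a synchronize instruction completes.
        for start, buf in self.entries:
            bus.append((start, bytes(buf)))
        self.entries.clear()

bus = []
wb = WriteBuffer()
wb.noncacheable_write(0x100, b"\x01\x02")
wb.noncacheable_write(0x102, b"\x03\x04")  # adjacent: merged, one request
wb.flush(bus)
assert bus == [(0x100, b"\x01\x02\x03\x04")]
```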
Although a preferred embodiment of the present invention has been described, it should not be construed to limit the scope of the appended claims. For example, the present invention is applicable to any type of computer system employing a co-processor coupled to a main processor through a co-processor interface, including any media decoding system, such as audio and graphics decoding systems, in addition to the video decoding system illustratively described herein. Those skilled in the art will understand that various modifications may be made to the described embodiment. Moreover, to those skilled in the various arts, the invention itself will suggest solutions to other tasks and adaptations for other applications. It is therefore desired that the present embodiments be considered in all respects as illustrative and not restrictive, reference being made to the appended claims rather than the foregoing description to indicate the scope of the invention. |
Techniques are described to transmit commands to a display device. The commands can be transmitted in header byte fields of secondary data packets. The commands can be used to cause a target device to capture a frame, enter or exit self refresh mode, or reduce power use of a connection. In addition, a request to exit main link standby mode can cause the target to enter training mode without an explicit command. |
Claims: What is claimed is: 1. A method comprising: receiving at least one command in a header byte of a secondary data packet, wherein the secondary data packet is in compliance with a DisplayPort specification and the at least one command comprises one of a command to store a frame, a command to enter self refresh mode, and a command to reduce power of a link; and requesting performance of the at least one command. 2. The method of claim 1, wherein when the at least one command comprises a command to store a frame, performance of the at least one command comprises: storing a frame associated with the command into a buffer. 3. The method of claim 1, wherein when the at least one command comprises a command to store a frame, performance of the at least one command comprises: exiting lower power mode; storing a frame associated with the command into a buffer; and entering lower power mode. 4. The method of claim 1, wherein when the at least one command comprises a command to reduce power of a link, performance of the at least one command comprises: reducing power of a main link. 5. The method of claim 1, wherein the at least one command is stored in bits of header byte HB2 as defined in section 2.2.5 of the DisplayPort version 1.1a. 6. A method comprising: requesting transmission of at least one command in a header byte of a secondary data packet, wherein the secondary data packet is in compliance with a DisplayPort specification and the at least one command comprises one of a command to store a frame, a command to enter self refresh mode, and a command to reduce power of a link. 7. The method of claim 6, wherein when the at least one command comprises a command to store a frame, the at least one command requests storing a frame associated with the command into a buffer or the at least one command requests: exiting lower power mode; storing a frame associated with the command into a buffer; and entering lower power mode, and when the at least one command comprises a command to reduce power of a link, the at least one command requests reducing power of a main link. 8. The method of claim 6, wherein the at least one command is requested to be stored in bits of header byte HB2 as defined in section 2.2.5 of the DisplayPort version 1.1a. 9. A method comprising: receiving an indication in a header field HB2 that a current frame is modified compared to a previous frame, self refresh is to be activated, and link standby is to be entered; entering link standby mode after a vertical blanking interval; receiving a request to exit standby mode; and entering training mode without explicit command from a transmitter of the request to exit standby mode. 10. The method of claim 9, wherein the receiving a request to exit standby mode comprises detecting a writing to a register. 11. A system comprising: a display; a memory device; an interface, the interface to receive at least one command in a header byte of a secondary data packet, wherein the secondary data packet is in compliance with a DisplayPort specification; and a controller to perform the at least one command. 12. The system of claim 11, wherein the at least one command comprises one of a command to store a frame, a command to enter self refresh mode, and a command to reduce power of a link. 13. The system of claim 11, wherein when the at least one command comprises a command to store a frame, the controller is to request storing of a frame associated with the at least one command into a buffer. 14.
The system of claim 11, wherein when the at least one command comprises a command to store a frame, the controller is to request: exiting lower power mode; storing a frame associated with the command into the memory device; and entering lower power mode. 15. The system of claim 11, wherein when the at least one command comprises a command to reduce power of a link, the controller is to request: reducing power of a main link. 16. The system of claim 11, wherein the at least one command is stored in bits of header byte HB2 as defined in section 2.2.5 of the DisplayPort version 1.1a. 17. The system of claim 11, wherein the at least one command comprises: an indication whether a current frame is modified compared to a previous frame; an indication whether self refresh is to be activated; and an indication that link standby is to be entered. 18. The system of claim 17, wherein the controller is to enter link standby mode after a vertical blanking interval. 19. The system of claim 17, wherein the interface is to receive a second command, the second command comprising a request to exit standby mode, and in response to the second command, the controller is to enter training mode without explicit command from a transmitter of the request to exit standby mode. 20. A computer-readable medium comprising instructions stored thereon, which when executed by a computer, cause the computer to: request transmission of at least one command in a header byte of a secondary data packet, wherein the secondary data packet is in compliance with a DisplayPort specification and the at least one command comprises one of a command to store a frame, a command to enter self refresh mode, and a command to reduce power of a link. 21. The medium of claim 20, wherein when the at least one command comprises a command to store a frame, the at least one command requests storing a frame associated with the command into a buffer or the at least one command requests: exiting lower power mode, storing a frame associated with the command into a buffer, and entering lower power mode; and when the at least one command comprises a command to reduce power of a link, the at least one command requests reducing power of a main link. |
TECHNIQUES TO TRANSMIT COMMANDS TO A TARGET DEVICE Field The subject matter disclosed herein relates generally to techniques for regulating power consumption. Related Art Multimedia operations in computer systems are very common. For example, personal computers are often used to process and display video. Power consumption by computers is a concern. It is desirable to regulate power consumption by personal computers. Brief Description of the Drawings Embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the drawings, in which like reference numerals refer to similar elements. FIG. 1A depicts a system in accordance with an embodiment. FIG. 1B depicts an example of components of a host system whose power consumption can be controlled, in accordance with an embodiment. FIG. 1C depicts a high level block diagram of a timing controller for a display device in accordance with an embodiment. FIG. 2 depicts an example format of signals transmitted over multiple lanes of a DisplayPort interface. FIG. 3 depicts an example manner of communication of secondary data packets over one or more lanes of a DisplayPort interface. FIG. 4 depicts an example of a sequence of events for entry into main link standby mode. FIG. 5 depicts an example of a sequence of events for exit from main link standby mode. Detailed Description Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase "in one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in one or more embodiments. FIG. 1A depicts a system 100 in accordance with an embodiment. System 100 may include a source device such as a host system 102 and a target device 150. Host system 102 may include a processor 110 with one or more cores, host memory 112, storage 114, and graphics subsystem 115. Chipset 105 may communicatively couple devices in host system 102. Graphics subsystem 115 may process video and audio. System 100 can be implemented in a handheld personal computer, mobile telephone, set top box, or any computing device. Any type of user interface is available, such as a keypad, mouse, and/or touch screen. In accordance with various embodiments, processor 110 may execute a software driver (not depicted) that determines whether to (1) instruct target device 150 to capture an image and repeatedly display the captured image, (2) power down components of graphics subsystem 115, and (3) power down components of target device 150. The driver may determine whether to initiate actions (1), (2), or (3) based at least on: a change in the system timer period, triangle or polygon rendering, whether any processor core is not in low power mode, any mouse activity, whether vertical blanking interrupts are used, and/or whether overlay is enabled. For example, powering down components may involve reducing voltage regulators to the lowest operating voltage level. For example, when the processor 110 executes a Microsoft Windows compatible operating system, the driver may be a kernel mode driver. For example, host system 102 may transmit commands to target device 150 using interface 145.
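A minimal sketch of the driver decision described above, assuming a simple all-idle policy; the input names and the policy of enabling all three actions together are illustrative assumptions, not the patent's algorithm.

```python
# Sketch of the driver heuristic: decide whether to (1) have the target
# self-refresh a captured frame, (2) power down graphics subsystem
# components, and (3) power down target components, based on the idle
# indicators named in the description. The all-idle policy is assumed.
def choose_power_actions(timer_period_changed, rendering_active,
                         any_core_active, mouse_active,
                         vblank_interrupts_used, overlay_enabled):
    display_idle = not any((timer_period_changed, rendering_active,
                            any_core_active, mouse_active,
                            vblank_interrupts_used, overlay_enabled))
    if not display_idle:
        return []  # keep normal operation
    return ["capture_and_self_refresh",   # action (1)
            "power_down_graphics",        # action (2)
            "power_down_target"]          # action (3)

assert choose_power_actions(False, False, False, False, False, False) == [
    "capture_and_self_refresh", "power_down_graphics", "power_down_target"]
```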
In some embodiments, interface 145 may include a Main Link and an AUX channel, both described in the Video Electronics Standards Association (VESA) DisplayPort Standard, Version 1, Revision 1a (2008), as well as revisions and variations thereof. In various embodiments, host system 102 (e.g., graphics subsystem 115) may form and transmit communications to target device 150 at least in a manner described with respect to co-pending U.S. patent application having serial number 12/286,192, entitled "Protocol Extensions in a Display Port Compatible Interface," inventors Kwa et al., filed September 29, 2008 (attorney docket number P27579). Target device 150 may be a display device with capabilities to display visual content and/or render audio content. For example, target device 150 may include control logic such as a timing controller (TCON) that controls writing of pixels as well as a register that directs operation of target device 150. Target device 150 may have access to a memory or frame buffer from which to read frames for display. Various embodiments include the capability to transmit secondary data packets over interface 145 to target device 150. Secondary data packets can be used to command target device 150. FIG. 1B depicts an example of components of host system 102 whose power consumption can be controlled (e.g., decreased or increased), in accordance with an embodiment. The components can be in a chipset, processor, or graphics subsystem. For example, the display phase lock loop (PLL) 160, display plane 162, display pipe 164, and display interface 166 of host 102 can be powered down or up. PLL 160 may be a system clock for the display plane 162, display pipe 164, and/or display interface 166. For example, display plane 162 may include a data buffer and RGB color mapper, which transforms data from the buffer to RGB. Display plane 162 may include an associated memory controller and memory input/output (IO) (not depicted) that could also be power managed. Pipe 164 may include a blender of multiple layers of images into a composite image, an X, Y coordinate rasterizer, and an interface protocol packetizer. The interface protocol packetizer may be compliant at least with DisplayPort or Low-voltage differential signaling (LVDS), described in ANSI/TIA/EIA-644-A (2001), as well as variations thereof. Display interface 166 may include a DisplayPort or LVDS compatible interface and a parallel-in-serial-out (PISO) interface. FIG. 1C depicts a high level block diagram of a timing controller for a display device in accordance with an embodiment. Timing controller 180 has the capability to respond to instructions from a host device to enter a self refresh display (SRD) mode that may include powering down components and/or capturing an image and repeatedly outputting the captured image to a display. In response to signal SRD_ON from a host, the SRD control block activates the frame buffer to capture a frame, and the SRD control block controls the multiplexer (MUX) to transfer the captured frame to the output port. After the frame buffer captures a frame, the host may read a register in the panel that indicates that the capture has taken place and that the timing controller displays a captured image. After the signal SRD_ON is deactivated, the SRD control block deactivates the frame buffer and associated logic and causes the MUX to transfer incoming video from the input port (RX in this case) to the output port (TX).
Timing controller 180 may use less power because the frame buffer is turned off and the logic clock gated when the self refresh display mode is exited. In various embodiments, SRD_ON and SRD_STATUS can be signals or configured in a register. FIG. 2 depicts an example format of signals transmitted over multiple lanes on a DisplayPort compatible interface. In particular, FIG. 2 reproduces Figure 2-14 of the Video Electronics Standards Association (VESA) DisplayPort Standard, Version 1, Revision 1a (2008) (hereafter "DP 1.1a specification"). However, embodiments of the present invention can be used in any version and variation of DisplayPort as well as other standards. DisplayPort specifies the availability of secondary data packets to transmit information at the vendor's discretion. Vendor-specific extension packets are a type of secondary data packet that can be used to control the display self refresh functionality over embedded DisplayPort (eDP). The basic structure of the header information for these secondary data packets is described in table 2-33 of section 2.2.5 of the DP 1.1a specification (Table 1). FIG. 3 depicts an example manner of communication of secondary data packets over one or more lanes of DisplayPort. In particular, FIG. 3 reproduces Figure 2-24 of the DP 1.1a specification. As shown, secondary data packets can include header bytes, parity bytes, and data bytes. In accordance with various embodiments, Table 2 provides an example of commands that can be transmitted in header bytes of secondary data packets. Commands can be performed by a target device such as a display with capability to perform self refresh display. Various embodiments provide controls in bits 0-2 of header byte HB2. Table 3 describes example commands in bits 0, 1, and 2 of header byte HB2.
Table 3:
B0 (Frame Type): B0 = 0 means the current frame is identical to the one previously sent. B0 = 1 means the current frame is different from the previously sent frame.
B1 (Source SRD State): The Source SRD state control field indicates the source's display controller state, which is used as a command by the target device to manage its local controller. B1 = 0 means SRD Off; the source state is such that normal display processing occurs and the eDP link remains active. B1 = 1 means SRD On; the source state is such that normal display processing may be disabled and the eDP link may be placed in standby.
B2 (Link Standby Enable): B2 = 0 means the main link is to remain in the normal active state. B2 = 1 enables the main link to enter the standby state.
Bit B0 indicates whether a frame to be sent to a target device has not changed from a previous frame that was sent to the target device. Bit B0 indicates whether a target device is to store an incoming image in a buffer. The target device can be a display with capability to enter self refresh display mode and display an image from a buffer. Bit B0 can be used where an application is to update an image on a display. An update can be made to wake up a panel and tell the panel that one or more modified frame(s) are to be transmitted to the display and to store the frames. After storing the frames, the display and display system can return to a low power state and the display system can use the updated frame for self refresh display. Bit B1 indicates whether the target device is to enter self refresh display mode or remain in normal operation. Bit B1 also indicates whether normal display processing occurs and the link between the source and target device remains in normal active state. Bit B2 indicates whether to power down a main link. For example, the main link can be a differential pair wire having connectors, d+ and d-. The link can transmit RGB content or other types of content. The link can be powered down or enter lower power mode.
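The Table 3 encoding can be illustrated with a short sketch. The following Python fragment packs and unpacks bits B0-B2 of header byte HB2 as defined above; the treatment of the remaining HB2 bits is an assumption (they are simply left at zero here).

```python
# Sketch of packing/unpacking the three SRD control bits of header
# byte HB2 per Table 3 (B0: frame type, B1: source SRD state,
# B2: link standby enable).
def pack_hb2(frame_modified: bool, srd_on: bool, link_standby: bool) -> int:
    hb2 = 0
    hb2 |= int(frame_modified) << 0   # B0 = 1: frame differs from previous
    hb2 |= int(srd_on) << 1           # B1 = 1: SRD on
    hb2 |= int(link_standby) << 2     # B2 = 1: main link may enter standby
    return hb2

def unpack_hb2(hb2: int):
    return {"frame_modified": bool(hb2 & 0b001),
            "srd_on":         bool(hb2 & 0b010),
            "link_standby":   bool(hb2 & 0b100)}

# The FIG. 4 entry sequence: SRD on and link standby requested.
hb2 = pack_hb2(frame_modified=False, srd_on=True, link_standby=True)
assert unpack_hb2(hb2) == {"frame_modified": False,
                           "srd_on": True, "link_standby": True}
```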
Standard Embedded DisplayPort implementations support two link states: (1) full on ("Normal Operation"), in which video data is transmitted to a panel, and (2) full off ("ML Disabled"), in which a lid is closed on a laptop and the display interface is turned off because video is not required. The standard Embedded DP implementation also supports an intermediate set of training-related transitional states. SRD adds an additional state: "ML Standby." State "ML Standby" enables a receiver to implement additional power management techniques for additional power reductions. For example, receiver bias circuitry and PLLs can be turned off. For example, components described with regard to FIG. 1B can enter a lower power state or turn off. State "ML Standby" can turn off a display interface and display link but use an image stored in the panel for SRD. FIG. 4 depicts an example of a sequence of events for entry into ML standby mode. A DisplayPort main link can be used to transmit signals X, Y, and Z. In some embodiments, header byte HB2 can be used to transmit signals X, Y, and Z. Signal X represents whether the current frame, that is to be transmitted after a VBI, is modified or unmodified relative to a previously transmitted frame. In this example, it does not matter whether the frame is modified or unmodified. Signal Y indicates whether SRD is on or off. In this case, signal Y indicates that the SRD state is ON. Signal Z indicates whether a link standby entry is to occur. In this case, signal Z indicates link standby is to be entered. To transmit X, Y, and Z, the following scheme can be used: bit B0 represents X, bit B1 represents Y, and bit B2 represents Z. Segment "Active" can include RGB color data for transmission to a display. Segment "BS" can indicate a start of a vertical blank interval in the system. Segment "BS to stdby" indicates a delay between the start of a vertical blank interval and the start of standby mode. FIG. 5 depicts an example of a sequence of events for exit from ML standby mode. In particular, states of the main link and auxiliary channel are described. The main link is in state "Standby." The source initiates ML Standby exit using the AUX channel to transmit a write operation. Command WR can be used to write to register address location 00600h to wake up the target device and cause the target device to exit ML standby mode. Other register address locations can be used. The target device monitors location 00600h and wakes up on reading a wake-up command in that location. After some delay, the target device transmits command ACK to the host using the AUX channel to indicate acknowledgement of receipt of the WR command. The length of the delay between receipt of WR and transmission of ACK can be defined by the DisplayPort Specification. On detecting the write event, the target device powers up the main link receiver and re-enters the training state to be ready for link training. Accordingly, as shown, the main link enters the state "Training." Re-entering the training state after exiting standby mode without an explicit command provides faster synchronization. After the source completes sending the write transaction, the source may initiate link training. The transmitter may initiate either full training or Fast Link Training as described in the DP specification. A target device could be turned off and lose awareness of the need to train when it wakes up. Causing the target device to train immediately after exiting standby allows full power down of a DP receiver.
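A minimal sketch of the exit handshake just described, assuming a register write to 00600h is the wake-up trigger; the class names and return values are illustrative, and only the sequence (AUX write, ACK, power-up, re-enter training without an explicit command) comes from the description above.

```python
# Sketch of the ML standby exit handshake of FIG. 5: the source writes
# over the AUX channel to register 00600h; on seeing the write the
# target acknowledges, powers up its main link receiver, and enters
# the training state on its own, without an explicit training command.
WAKEUP_REG = 0x00600

class Target:
    def __init__(self):
        self.link_state = "Standby"

    def aux_write(self, reg, value):
        if reg == WAKEUP_REG and self.link_state == "Standby":
            self.link_state = "Training"   # re-enter training on its own
        return "ACK"                       # acknowledgement over AUX

class Source:
    def exit_standby(self, target: Target):
        reply = target.aux_write(WAKEUP_REG, 1)  # WR to 00600h
        assert reply == "ACK"
        # The source may now initiate full or Fast Link Training.
        return target.link_state

assert Source().exit_standby(Target()) == "Training"
```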
The graphics and/or video processing techniques described herein may be implemented in various hardware architectures. For example, graphics and/or video functionality may be integrated within a chipset. Alternatively, a discrete graphics and/or video processor may be used. As still another embodiment, the graphics and/or video functions may be implemented by a general purpose processor, including a multicore processor. In a further embodiment, the functions may be implemented in a consumer electronics device. Embodiments of the present invention may be implemented as any or a combination of: one or more microchips or integrated circuits interconnected using a motherboard, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA). The term "logic" may include, by way of example, software or hardware and/or combinations of software and hardware. Embodiments of the present invention may be provided, for example, as a computer program product which may include one or more machine-readable media having stored thereon machine-executable instructions that, when executed by one or more machines such as a computer, network of computers, or other electronic devices, may result in the one or more machines carrying out operations in accordance with embodiments of the present invention. A machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs (Compact Disc-Read Only Memories), and magneto-optical disks, ROMs (Read Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read Only Memories), EEPROMs (Electrically Erasable Programmable Read Only Memories), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing machine-executable instructions. The drawings and the foregoing description give examples of the present invention. Although depicted as a number of disparate functional items, those skilled in the art will appreciate that one or more of such elements may well be combined into single functional elements. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of the present invention, however, is by no means limited by these specific examples.
Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of the invention is at least as broad as given by the following claims. |
PROBLEM TO BE SOLVED: To provide a wireless charging system in which initialization is not disrupted even if a received voltage varies. SOLUTION: During an initialization phase, an apparatus for voltage regulation in a wireless power receiver (PRU) 104 generates load modulation signaling by toggling a power switch, which selectively supplies a regulated voltage to a battery at a regulated current. SELECTED DRAWING: Figure 1 |
1. A device for power regulation in a wireless power receiver, comprising: a power switch selectively supplying a regulated voltage to a battery with a regulated current; and means for generating load modulation signaling by toggling the power switch. 2. The apparatus of claim 1, wherein the regulated voltage is preselected based on characteristics of the battery. 3. The apparatus according to claim 1 or 2, wherein the means for generating load modulation signaling is a component of a battery charger circuit. 4. The apparatus of claim 3, wherein the battery charger circuit is configured to receive a rectified voltage and generate the regulated voltage. 5. The apparatus according to claim 1 or 2, wherein the load modulation signaling includes a beacon extension request. 6. The apparatus of claim 5, wherein the beacon extension request extends a time period to complete a wireless handshake between the wireless power receiver and a wireless power transmitter that is inductively coupled to the wireless power receiver. 7. The apparatus according to claim 1 or 2, wherein the means for generating the load modulation signaling detects an available power level at the battery and activates only the power rails associated with components of the wireless handshake if the available power level at the battery is below a predetermined threshold. 8. The apparatus of claim 7, wherein the means for generating load modulation signaling activates a power rail for system boot if the power level available at the battery exceeds the predetermined threshold. 9. The apparatus according to claim 1 or 2, wherein the means for generating load modulation signaling generates the load modulation signaling within a predetermined power range. 10. The apparatus according to claim 1 or 2, wherein the apparatus is a component of a computing device, and the load modulation is performed during an initialization phase of the computing device. 11. A method for power regulation in a wireless power receiver, comprising: selectively providing a regulated voltage at a regulated current from a power switch to a battery; and generating load modulation signaling by toggling the power switch. 12. The method of claim 11, wherein the regulated voltage is preselected based on characteristics of the battery. 13. The method according to claim 11 or 12, wherein the generation of load modulation signaling is performed by components of a battery charger circuit. 14. The method of claim 13, further comprising receiving a rectified voltage by the battery charger circuit and generating the regulated voltage. 15. The method according to claim 11 or 12, wherein generating the load modulation signaling comprises issuing a beacon extension request. 16. The method according to claim 15, wherein the issuance of the beacon extension request generates an extension of a time duration to complete a wireless handshake between the wireless power receiver and a wireless power transmitter that is inductively coupled to the wireless power receiver. 17. The method according to claim 11 or 12, comprising: detecting the power level available at the battery; and activating only the power rail associated with the components of the wireless handshake if the power level available at the battery is below a predetermined threshold. 18. The method according to claim 17, further comprising the step of activating a power rail for system boot if the power level available at the battery exceeds the predetermined threshold. 19. The method according to claim 11 or 12, further comprising the step of generating the load modulation signaling within a predetermined power range. 20. The method according to claim 11 or 12, wherein said load modulation is performed during an initialization phase of a computing device. 21. A system for power regulation in a wireless power receiver, comprising: a battery charger circuit that receives a rectified voltage and generates a regulated voltage; a power switch that selectively provides the regulated voltage to a battery at a regulated current; and load modulation logic that generates load modulation signaling by toggling the power switch. 22. The system of claim 21, wherein the load modulation signaling comprises a beacon extension request. 23. The system of claim 22, wherein the beacon extension request extends a time period to complete a wireless handshake between the wireless power receiver and a wireless power transmitter that is inductively coupled to the wireless power receiver. 24. The system according to any of claims 21 to 23, wherein the load modulation logic: detects a power level available at the battery; activates only a power rail associated with a component of the wireless handshake if the power level available at the battery is below a predetermined threshold; and activates a power rail for a system boot if the power level available at the battery exceeds the predetermined threshold. 25. The system according to any of claims 21 to 23, wherein the load modulation logic generates the load modulation signaling within a predetermined power range. 26. A computer program which causes a computer of an apparatus to execute the method according to any one of claims 11 to 20. 27. A storage medium storing the computer program according to claim 26. |
Power regulation in wireless charging

The present disclosure generally relates to techniques for wireless charging. In particular, the present disclosure relates to the regulation of power in a wireless power system.

A basic wireless charging system includes a wireless power transmission unit (PTU) and a wireless power receiving unit (PRU). For example, the PTU includes a transmit (Tx) coil and the PRU includes a receive (Rx) coil. Magnetic resonance wireless charging exploits the magnetic coupling between the Tx and Rx coils. In some cases, the received voltage may fluctuate and cause problems during initialization of the wireless charging system. In some cases, the voltage fluctuations may violate a wireless charging standard specification.

FIG. 1 is a block diagram of a PTU for powering a PRU, where the PRU includes logic configured to keep power fluctuations within a limited range during load modulation. FIG. 2 illustrates logic configured to regulate power for load modulation signaling. FIG. 3 is a graph showing phase changes of voltage and current. FIG. 4 is a flow chart for performing power regulation during load modulation. FIG. 5 is a block diagram illustrating a method for reducing power fluctuations in a wireless charging device. The same numbers are used throughout the disclosure and figures to refer to similar components and features. Numbers in the 100 series refer to features first appearing in FIG. 1; numbers in the 200 series refer to features first appearing in FIG. 2; and so on.

The present disclosure generally relates to techniques for wireless charging. Specifically, the techniques described herein include an apparatus in a wireless power receiving unit (PRU) having a power switch and load modulation logic. As mentioned above, in addition to making wireless charging inefficient, voltage fluctuations can also cause violations of standard specifications. More specifically, the freedom of spatial placement of a device being charged via its receive (Rx) coil can result in a large, fluctuating rectified receiver voltage (Vrect). The fluctuation of Vrect results in a large fluctuation of the rectified power (Prect).

The load modulation logic referred to herein includes one or more electronic circuit components, modules, or integrated circuit components configured to generate load modulation signaling (or notification) by toggling the power switch. The power switch includes one or more electronic circuit components, modules, or integrated circuit components configured to selectively supply a regulated voltage to the battery at a regulated current. In some cases, the power switch may be a power path switch by which a voltage is selectively provided between the battery and the system load of the device. In some cases, selectively supplying the voltage may include simultaneously supplying the voltage to both the battery and the system load. In some cases, Vrect is received by a battery charging circuit, such as a battery charger integrated circuit (IC). The battery charging circuit regulates Vrect and supplies it to the power switch, and the power switch is configured to deliver the regulated voltage, at the regulated current, to the battery and/or the system load.
Thus, because both the voltage and the current are regulated, toggling the power switch generates load modulation signaling toward the wireless power transmission unit (PTU) that is inductively coupled to the PRU.

The techniques described herein reduce the power loss that occurs when load modulation is performed with resistors. More specifically, rather than draining power through a resistor, which simply dissipates the energy, the techniques described herein harness the energy by charging the battery.

As described in detail below, load modulation by toggling the power switch may be performed during the initialization phase of the device being charged. For example, a computing device having a PRU may be placed on a charging pad having a PTU. In order to properly configure the PTU based on the charge capacities of the PRU and PTU, the PRU needs to broadcast the wireless data associated with a wireless handshake with the PTU. The time period allotted for broadcasting the wireless handshake data may be too short if the computing device, such as a mobile computing device, is turned off or has a dead battery. Thus, with the techniques disclosed herein, load modulation generated by toggling the power switch can request an extension of the time period associated with performing the wireless handshake.

In some cases, the techniques described herein may be implemented using a standard protocol for wireless charging, such as the specification provided by the Alliance For Wireless Power (A4WP), version 1.3, dated November 5, 2014. As described below, the wireless power receive (Rx) coil may be a component of a power receiving unit (PRU), while the wireless power transmit (Tx) coil may be a component of a power transmitting unit (PTU). However, the techniques described herein may be implemented using any other applicable wireless charging standard protocol.

FIG. 1 shows a block diagram of a PTU that powers a PRU, where the PRU includes logic configured to keep power fluctuations within a limited range during load modulation. PTU 102 couples to PRU 104 by magnetic inductive coupling between resonators 106 and 108, as indicated by arrow 110. Resonator 106 may be referred to as Tx coil 106 of PTU 102 in this description. Resonator 108 may be referred to as Rx coil 108 of PRU 104 in this description.

As shown in FIG. 1, PRU 104 includes a logic unit 112. Logic 112 may be referred to as load modulation logic 112 in this description. Load modulation logic 112 may be integrated with a battery charger such as battery charger IC 114, may be a component separate from the battery charger IC or from any of the other components of PRU 104, or may be any combination thereof. In any event, the load modulation logic 112 is configured to keep fluctuations in the power supplied to the battery 116 via the power switch 118 within a limited range during load modulation. In other words, load modulation logic 112 is configured to keep within a limited range the voltage that is received, rectified, and regulated by battery charger IC 114 during load modulation. Because the voltage is regulated and predetermined based on the characteristics of the battery charger IC 114 and of the PRU 104 as a whole, the load modulation logic 112 may toggle the power switch 118 at a predetermined current.

The load modulation logic 112 may be comprised of one or more components, such as electronic circuit components, as described in further detail in connection with FIG. 2.
For example, load modulation logic 112 may be implemented as a state machine, combinational logic, or any combination thereof. Furthermore, as described further in connection with FIG. 3, load modulation may be performed during the initialization phase to extend the time period allocated for the wireless handshake. Further details are described in connection with FIG. 2, as well as throughout the present description, the figures, and the claims.

In FIG. 1, inductive coupling occurs between Tx coil 106 and Rx coil 108. The rectifier 120 may receive an alternating current (AC) voltage from the Rx coil 108, and the rectifier 120 may be configured to generate Vrect as direct current (DC). As shown in FIG. 1, the DC2DC converter 122 provides a DC output to the battery charger IC 114, the load modulation logic 112, the power switch 118, the battery 116, and the system loads described in detail below. However, in some cases, the DC2DC converter 122 may be implemented as a component of the battery charger IC 114, which may eliminate one buck stage, and the potential inefficiencies, that may occur if the DC2DC converter 122 is implemented as a discrete element as shown in FIG. 1.

PRU 104 also includes a controller 124 configured to initiate a wireless broadcast of wireless handshake data. The wireless handshake broadcast may be performed by a wireless data transmission component such as a Bluetooth® Low Energy (BLE) module 126. In some cases, the wireless data transmission component may be integrated as a process of controller 124, load modulation logic 112, the direct-current-to-direct-current (DC2DC) converter 122, or any combination thereof, with the data transmission indicated by a pattern in the load modulation.

PTU 102 includes a BLE module 128 configured to communicate with BLE module 126. PTU 102 may also include a current sensor 130, a controller 132, a power amplifier 134, a DC2DC converter 136, an oscillator 138, and a matching network 140. Current sensor 130 may be an ammeter, a voltmeter, or any other sensor configured to detect load variations caused by inductive coupling between PTU 102 and another object (e.g., PRU 104). The current sensor 130 provides an indication of a load change to the controller 132 of the PTU 102. The controller 132 receives direct current (DC) from the DC2DC converter 136 and activates the power amplifier 134, which is configured to amplify the current. Oscillator 138 oscillates the supplied power at a given frequency, and matching network 140 is used to match the amplified oscillation provided to resonator 106 of PTU 102.

The block diagram of FIG. 1 is not intended to indicate that PTU 102 and/or PRU 104 must include all of the components shown in FIG. 1. Further, PTU 102 and/or PRU 104 may include any number of additional components not shown in FIG. 1, depending on the details of the specific implementation.

FIG. 2 shows logic configured to regulate power for load modulation signaling. As mentioned above, a PRU such as PRU 104 of FIG. 1 may be used to charge a computing device 200. The PRU includes Rx coil 108, rectifier 120, battery charger IC 114, power switch 118, and battery 116. Vrect is provided to the battery charger IC 114, as shown at 202. The load modulation logic 112 may be located in the battery charger IC 114, in the power management IC (PMIC) 204, at any other location in computing device 200, or any combination thereof. The logic may control the power switch 118 by toggling it.
Toggling the power switch 118 generates load modulation to extend the beacon period associated with the wireless handshake process. As shown in FIG. 2, the load modulation logic 112 toggles power switch 118 without the power loss that would otherwise occur if the toggling were performed with a resistor (not shown) dissipating power through the ground connection. Further, toggling the power switch 118 is performed without adding components to the power path system that includes the power switch 118. The power path system generally includes the power switch 118 for providing a regulated battery voltage (VBAT) 206 to the battery and a system load voltage (VSYS) 208 via the PMIC 204.

Furthermore, rather than attempting to perform load modulation on Vrect prior to voltage regulation at the battery charger IC 114, the techniques described herein perform load modulation at the power switch 118, where both voltage and current are regulated and known, so that the load modulation can be performed within a predetermined range. For example, in A4WP, load modulation to extend the time period associated with wireless handshake completion may be required to be between 0.5 Watts (W) and 1.1 W. Because the power is regulated, and therefore known to be within the predetermined range, load modulation can be performed within a given predetermined power range, as shown in Equation 1 below:

Preg = Ireg × Vreg    (Eq. 1)

In Equation 1, Preg is the regulated power, Ireg is the regulated current supplied from the power switch 118, and Vreg is the regulated voltage supplied by the battery charger IC 114. In other words, even though Vrect fluctuates, a predetermined voltage and a predetermined current are supplied at the power switch 118, and load modulation is performed within a limited range. Furthermore, in some cases, the 0.5 W to 1.1 W range is realized at the Vrect node, with the energy transfer efficiency of the battery charger IC 114 taken into account so that an appropriate power level is drawn.

FIG. 3 is a graph showing phase changes of voltage and current. As described with respect to FIG. 1, load modulation may be done at the initialization stage. The initialization phase may occur, for example, if the device to be charged has a low battery or a fully discharged battery, or if the device to be charged is turned off. The initialization phase may be used to extend the time period associated with completing the wireless handshake, as described above. Extending the time period is particularly advantageous, for example, if a battery, such as the battery 116 of FIGS. 1 and 2, is below a threshold or otherwise lacks sufficient power to initiate a wireless handshake.

The first phase proceeds from left to right and ends at dashed line 302: when the device to be powered is placed on a charging mat having a PTU, such as PTU 102 of FIG. 1, Vrect rises. The second phase is shown between dashed line 302 and dashed line 304: as the PMIC 204 and charger IC 114 of FIGS. 1 and 2 turn on, Vrect and Irect stabilize. If the battery 116 is depleted or discharged below a predetermined level, the battery charger IC 114 may start prior to the PMIC 204 in order to provide power to the PMIC 204. The third phase is shown between dashed line 304 and dashed line 306: the PMIC 204 directs a battery charger IC, such as battery charger IC 114, to activate and toggle a power switch, such as power switch 118 of FIGS. 1 and 2, by turning it on and off. In some cases, the toggling in phase 3 may be done at a 100 Hertz frequency.
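To make Equation 1 concrete, the following is a minimal C++ sketch of checking that toggling the power switch at regulated values stays within the 0.5 W to 1.1 W window quoted above for an A4WP beacon extension request. The function names and the sample voltage and current values are illustrative assumptions, not part of the disclosed design.

```cpp
#include <cstdio>

// Illustrative constants taken from the A4WP figures quoted above.
constexpr double kMinModulationW = 0.5;  // lower bound of load-modulation window
constexpr double kMaxModulationW = 1.1;  // upper bound of load-modulation window

// Eq. 1: regulated power is the product of the regulated current supplied
// from the power switch and the regulated voltage supplied by the charger IC.
double regulated_power(double i_reg_amps, double v_reg_volts) {
    return i_reg_amps * v_reg_volts;
}

// Returns true if toggling the power switch at these regulated values would
// produce load modulation inside the predetermined power range.
bool modulation_in_range(double i_reg_amps, double v_reg_volts) {
    const double p_reg = regulated_power(i_reg_amps, v_reg_volts);
    return p_reg >= kMinModulationW && p_reg <= kMaxModulationW;
}

int main() {
    // Example: a 4.2 V regulated battery voltage at a 200 mA regulated
    // current gives Preg = 0.84 W, inside the 0.5-1.1 W window.
    const double v_reg = 4.2, i_reg = 0.2;
    std::printf("Preg = %.2f W, in range: %s\n",
                regulated_power(i_reg, v_reg),
                modulation_in_range(i_reg, v_reg) ? "yes" : "no");
}
```

Because Vreg and Ireg are fixed by the charger IC and power switch, this check can be done once ahead of time; the fluctuating Vrect never enters the calculation.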
In some cases, the battery charger IC 114 may be configured to detect that the PTU 102 is present and that the load modulation performed by toggling the power switch 118 has completed. When the power switch 118 is on, Ireg flows to the PMIC 204 and battery 116, Irect rises, and VBAT also rises. When the power switch 118 is off, current flows only to the PMIC 204, and Prect may be less than the nominal system load (e.g., 200 milliwatts (mW)). In some cases, the PMIC 204 and the battery charger IC 114 may be the only components active (with the exception of upstream components such as the rectifier, the DC2DC converter, and the like), so that the voltage is known; a limited power range is thus available, and the large, variable voltages that could be present if other components were initialized are avoided.

The fourth phase is shown between dashed lines 306 and 308: a wireless data transmission device, such as BLE module 126, is initialized. During the fourth phase, the PRU 104 registers with the PTU 102 via a BLE handshake, and the PTU 102 gives the device being charged permission to draw more power. The fifth phase is shown continuing from dashed line 308 to at least dashed line 310: PMIC 204 configures battery charger IC 114 to remove any limits on current, the power switch 118 is turned on to charge the battery, and VBAT starts to rise as shown in FIG. 3.

FIG. 4 is a flow chart illustrating power regulation during load modulation. At block 400, when a device to be charged having a PRU, such as PRU 104, is placed on or near a charger having a PTU, such as PTU 102, Vrect rises. At block 402, the charger IC limits the power to a predetermined threshold. The power limitation at block 402 enables load modulation to be performed within a predetermined power range, such as the power range associated with a beacon extension request for wireless handshake processing. At block 404, a PMIC, such as PMIC 204 of FIG. 2, is activated. At block 406, a power switch, such as power switch 118 of FIGS. 1 and 2, is toggled to perform the load modulation signaling associated with the beacon extension request. At block 408, VBAT is compared to a voltage threshold (Vthresh). If VBAT is greater than Vthresh, PMIC 204 turns on all power rails associated with the components of PRU 104 at block 410 and boots the BLE module 126 and the system at block 412. Booting both the BLE module 126 and the system at block 412 initiates a wireless handshake at block 414; the power limited at block 402 is raised at block 420 to the higher power limit for normal power consumption, and process 400 ends at block 422. However, if VBAT is not greater than Vthresh at block 408, the PMIC does not power on all of the power rails. Instead, the PMIC may turn on only the rail for the BLE module 126, as shown at block 424. At block 426, the BLE module 126 may be booted, and the wireless handshake may be performed at block 428. At block 430, the power limited at block 402 is raised to the higher power limit associated with powering rails other than that of the BLE module 126; the PMIC turns on the additional power rails at block 432 and boots the system at block 434.
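The branching of FIG. 4 can be restated in code. The sketch below is a hypothetical rendering of blocks 402-434, assuming stub types and helper functions (read_vbat, limit_power, enable_rail, and so on) that are not part of the disclosure; only the decision on VBAT versus Vthresh follows the flow chart, and the power-limit values are illustrative.

```cpp
#include <cstdio>

// Hypothetical hardware-access stubs; names and values are illustrative only.
enum class Rail { BleModule, SystemRails };
static double g_vbat = 3.9;  // simulated battery voltage for the example

double read_vbat() { return g_vbat; }                              // block 408
void limit_power(double w) { std::printf("limit power %.1f W\n", w); }
void enable_rail(Rail r) {
    std::printf("enable %s rail\n", r == Rail::BleModule ? "BLE" : "system");
}
void boot_ble_and_handshake() { std::printf("boot BLE, handshake\n"); }
void boot_system() { std::printf("boot system\n"); }

void initialization_flow(double v_thresh) {
    limit_power(1.1);                    // block 402: clamp near the modulation window
    // Blocks 404-406: the PMIC is activated and the power switch is toggled here.
    if (read_vbat() > v_thresh) {        // block 408
        enable_rail(Rail::SystemRails);  // block 410: all PRU power rails
        boot_ble_and_handshake();        // blocks 412-414: BLE + system boot, handshake
        boot_system();
        limit_power(15.0);               // block 420: raise to normal limit (illustrative)
    } else {
        enable_rail(Rail::BleModule);    // block 424: BLE rail only
        boot_ble_and_handshake();        // blocks 426-428
        limit_power(5.0);                // block 430: raise for remaining rails (illustrative)
        enable_rail(Rail::SystemRails);  // block 432
        boot_system();                   // block 434
    }
}

int main() { initialization_flow(/*v_thresh=*/3.5); }
```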
FIG. 5 is a block diagram illustrating a method of reducing power fluctuations in a wireless charging device. At block 502, the method 500 selectively supplies a regulated voltage at a regulated current from the power switch to the battery, and at block 504, the method generates load modulation signaling by toggling the power switch.

In some cases, the regulated voltage may be predetermined based on the characteristics of the battery. However, the regulated voltage may also be predetermined based on features of a standard specification guideline, such as the A4WP standard, indicating the power range of load modulation during long beacon extension requests. Generating load modulation signaling may be performed by components of the battery charger circuit. Furthermore, in some cases, the energy of a backup battery circuit may be used to perform load modulation. The backup battery may be a battery that is relatively smaller than the battery of FIG. 1 and is configured to maintain a clock signal when the battery 116 is unavailable, is discharged, or is below a predetermined threshold. The battery charger circuit may include a backup battery charger IC configured to regulate the power supplied to the backup battery of the backup battery circuit. Generating load modulation signaling may alternatively be performed by a logic component of the PMIC. In yet another example, generating load modulation signaling may be performed by any combination of components of the PRU.

The method may include receiving the rectified voltage by the battery charger circuit and generating the regulated voltage. Further, as noted above, generating load modulation signaling at block 504 may include issuing a beacon extension request. In this situation, the beacon extension request extends the time period to complete the wireless handshake between the wireless power receiver and the wireless power transmitter that is inductively coupled to the wireless power receiver.

The method 500 may include detecting a power level available at the battery. In that case, method 500 may include activating only the power rails associated with the components of the wireless handshake if the available power level at the battery is below a predetermined threshold. Further, the method 500 may include activating a power rail for booting the system if the available power level at the battery is above the predetermined threshold.

In some cases, the method 500 may include generating the load modulation signaling within a predetermined power range. In some cases, the load modulation is performed at the initialization stage of the computing device, as described above with respect to FIGS. 1 and 3.

Example 1 is an apparatus for power regulation in a wireless power receiver. In this example, the apparatus includes a power switch that selectively supplies a regulated voltage to the battery at a regulated current, and load modulation logic that generates load modulation signaling by toggling the power switch.

Example 2 includes the apparatus of Example 1. In this example, the regulated voltage is preselected based on the characteristics of the battery.

Example 3 includes the apparatus of any combination of Examples 1-2. In this example, the load modulation logic is a component of a battery charger circuit.

Example 4 includes the apparatus of any combination of Examples 1-3. In this example, the battery charger circuit is configured to receive the rectified voltage and generate a regulated voltage.

Example 5 includes the apparatus of any combination of Examples 1-4. In this example, the load modulation signaling may include a beacon extension request.

Example 6 includes the apparatus of any combination of Examples 1-5.
In this example, the beacon extension request extends a time period for completing a wireless handshake between the wireless power receiver and a wireless power transmitter that is inductively coupled to the wireless power receiver.

Example 7 includes the apparatus of any combination of Examples 1-6. In this example, the load modulation logic detects an available power level at the battery and, if the available power level at the battery is below a predetermined threshold, activates only the power rails associated with components of the wireless handshake.

Example 8 includes the apparatus of any combination of Examples 1-7. In this example, the load modulation logic activates a power rail for system boot if the power level available at the battery exceeds the predetermined threshold.

Example 9 includes the apparatus of any combination of Examples 1-8. In this example, the load modulation logic generates the load modulation signaling within a predetermined power range.

Example 10 includes the apparatus of any combination of Examples 1-9. In this example, the apparatus is a component of a computing device, and the load modulation is performed during the initialization phase of the computing device.

Example 11 is a method for power regulation in a wireless power receiver. In this example, the method includes selectively supplying a regulated voltage at a regulated current from the power switch to the battery, and generating load modulation signaling by toggling the power switch.

Example 12 includes the method of Example 11. In this example, the regulated voltage is preselected based on the characteristics of the battery.

Example 13 includes the method of any combination of Examples 11-12. In this example, the generation of the load modulation signaling is performed by components of the battery charger circuit.

Example 14 includes the method of any combination of Examples 11-13. This example includes receiving the rectified voltage by the battery charger circuit and generating a regulated voltage.

Example 15 includes the method of any combination of Examples 11-14. In this example, generating the load modulation signaling includes issuing a beacon extension request.

Example 16 includes the method of any combination of Examples 11-15. In this example, issuing the beacon extension request generates an extension of the time period for completing a wireless handshake between the wireless power receiver and a wireless power transmitter that is inductively coupled to the wireless power receiver.

Example 17 includes the method of any combination of Examples 11-16. This example includes detecting the power level available at the battery and, if the power level available at the battery is below a predetermined threshold, activating only the power rails associated with components of the wireless handshake.

Example 18 includes the method of any combination of Examples 11-17. This example includes activating a power rail for system boot if the power level available at the battery exceeds the predetermined threshold.

Example 19 includes the method of any combination of Examples 11-18. This example includes generating the load modulation signaling within a predetermined power range.

Example 20 includes the method of any combination of Examples 11-19. In this example, the load modulation is done during the initialization phase of the computing device.

Example 21 is a system for power regulation in a wireless power receiver.
In this example, the system includes a battery charger circuit that receives a rectified voltage and generates a regulated voltage, a power switch that selectively supplies the regulated voltage to the battery at a regulated current, and load modulation logic that generates load modulation signaling by toggling the power switch.

Example 22 includes the system of Example 21. In this example, the regulated voltage is preselected based on the characteristics of the battery.

Example 23 includes the system of any combination of Examples 21-22. In this example, the load modulation logic is a component of a battery charger circuit.

Example 24 includes the system of any combination of Examples 21-23. In this example, the rectified current is not regulated before being provided to the battery charger circuit.

Example 25 includes the system of any combination of Examples 21-24. In this example, the load modulation signaling may include a beacon extension request.

Example 26 includes the system of any combination of Examples 21-25. In this example, the beacon extension request extends a time period for completing a wireless handshake between the wireless power receiver and a wireless power transmitter that is inductively coupled to the wireless power receiver.

Example 27 includes the system of any combination of Examples 21-26. In this example, the load modulation logic detects an available power level at the battery and, if the available power level at the battery is below a predetermined threshold, activates only the power rails associated with components of the wireless handshake.

Example 28 includes the system of any combination of Examples 21-27. In this example, the load modulation logic activates a power rail for system boot if the power level available at the battery exceeds the predetermined threshold.

Example 29 includes the system of any combination of Examples 21-28. In this example, the load modulation logic generates the load modulation signaling within a predetermined power range.

Example 30 includes the system of any combination of Examples 21-29. In this example, the system is a component of a computing device, and the load modulation is performed during the initialization phase of the computing device.

Example 31 is an apparatus for power regulation in a wireless power receiver. In this example, the apparatus includes a power switch that selectively supplies a regulated voltage to the battery at a regulated current, and means for generating load modulation signaling by toggling the power switch.

Example 32 includes the apparatus of Example 31. In this example, the regulated voltage is preselected based on the characteristics of the battery.

Example 33 includes the apparatus of any combination of Examples 31-32. In this example, the means for generating the load modulation signaling is a component of a battery charger circuit.

Example 34 includes the apparatus of any combination of Examples 31-33. In this example, the battery charger circuit is configured to receive the rectified voltage and generate a regulated voltage.

Example 35 includes the apparatus of any combination of Examples 31-34. In this example, the load modulation signaling may include a beacon extension request.

Example 36 includes the apparatus of any combination of Examples 31-35.
In this example, the beacon extension request extends a time period for completing a wireless handshake between the wireless power receiver and a wireless power transmitter that is inductively coupled to the wireless power receiver.

Example 37 includes the apparatus of any combination of Examples 31-36. In this example, the means for generating load modulation signaling detects the power level available at the battery and, if the power level available at the battery is below a predetermined threshold, activates only the power rails associated with components of the wireless handshake.

Example 38 includes the apparatus of any combination of Examples 31-37. In this example, the means for generating load modulation signaling activates a power rail for system boot if the power level available at the battery exceeds the predetermined threshold.

Example 39 includes the apparatus of any combination of Examples 31-38. In this example, the means for generating load modulation signaling generates the load modulation signaling within a predetermined power range.

Example 40 includes the apparatus of any combination of Examples 31-39. In this example, the apparatus is a component of a computing device, and the load modulation is performed during the initialization phase of the computing device.

Example 41 is a system for power regulation in a wireless power receiver. In this example, the system includes a battery charger circuit that receives a rectified voltage and generates a regulated voltage, a power switch that selectively supplies the regulated voltage to the battery at a regulated current, and means for generating load modulation signaling by toggling the power switch.

Example 42 includes the system of Example 41. In this example, the regulated voltage is preselected based on the characteristics of the battery.

Example 43 includes the system of any combination of Examples 41-42. In this example, the means for generating the load modulation signaling is a component of a battery charger circuit.

Example 44 includes the system of any combination of Examples 41-43. In this example, the rectified current is not regulated before being provided to the battery charger circuit.

Example 45 includes the system of any combination of Examples 41-44. In this example, the load modulation signaling may include a beacon extension request.

Example 46 includes the system of any combination of Examples 41-45. In this example, the beacon extension request extends a time period for completing a wireless handshake between the wireless power receiver and a wireless power transmitter that is inductively coupled to the wireless power receiver.

Example 47 includes the system of any combination of Examples 41-46. In this example, the means for generating load modulation signaling detects the power level available at the battery and, if the power level available at the battery is below a predetermined threshold, activates only the power rails associated with components of the wireless handshake.

Example 48 includes the system of any combination of Examples 41-47. In this example, the means for generating load modulation signaling activates a power rail for system boot if the power level available at the battery exceeds the predetermined threshold.

Example 49 includes the system of any combination of Examples 41-48.
In this example, the means for generating load modulation signaling generates the load modulation signaling within a predetermined power range.

Example 50 includes the system of any combination of Examples 41-49. In this example, the system is a component of a computing device, and the load modulation is performed during the initialization phase of the computing device.

Not all of the components, features, structures, and characteristics described and illustrated herein need be included in a particular embodiment or embodiments. If the specification states that a component, feature, structure, or characteristic "may", "might", "can", or "could" be included, for example, that particular component, feature, structure, or characteristic is not required to be included. Where the specification or a claim refers to "a" or "an" element, that does not mean there is only one of the element. Where the specification or a claim refers to an "additional" element, that does not exclude there being more than one of the additional element.

It is to be noted that, although some embodiments have been described with reference to particular implementations, other implementations are possible according to some embodiments. Moreover, the arrangement and/or order of circuit elements or other features described herein and/or illustrated in the drawings need not be arranged in the particular way described and illustrated. Many other arrangements are possible according to some embodiments.

In each system shown in a figure, the elements may in some cases each have the same reference number or a different reference number, to suggest that the elements represented could be different and/or similar. However, an element may be flexible enough to have different implementations and work with some or all of the systems described or shown herein. The various elements shown in the figures may be the same or different. Which is referred to as a first element and which is called a second element is arbitrary.

It is to be understood that specifics in the aforementioned examples may be used anywhere in one or more embodiments. For example, all optional features of the computing device described above may be implemented with respect to any of the methods or computer-readable media described herein. Further, although flow charts and/or state diagrams may have been used herein to describe embodiments, the techniques are not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as described and illustrated.

The present techniques are not restricted to the particular details presented herein. Indeed, those skilled in the art having the benefit of this disclosure will appreciate that many other variations from the foregoing description and drawings may be made within the scope of the present techniques. Accordingly, it is the following claims, including any amendments thereto, that define the scope of the present techniques. |
While prefetching data for a second fiber, a hierarchical data structure is traversed using a first fiber after deferring traversal for the second fiber. Then context is switched to the second fiber, and the hierarchical data structure is traversed using the second fiber while prefetching data for another fiber. |
1. A method comprising: prefetching data for a first fiber; using the first fiber to initiate traversal of a hierarchical data structure while prefetching data for a second fiber and deferring traversal for the second fiber; switching context to the second fiber; and using the second fiber to traverse the hierarchical data structure while prefetching data for another fiber.
2. The method of claim 1 including providing a stream of rays and, once said second fiber is terminated, backfilling with another ray.
3. The method of claim 1 including performing a ray traversal of a bounding volume hierarchy using each of the fibers.
4. The method of claim 1 including assigning different channels of a vector register to different fibers.
5. The method of claim 3 including assigning different channels of a vector register to different rays.
6. The method of claim 5 including dividing a vector register into upper and lower halves and storing one ray in one half and another ray in the other half.
7. The method of claim 6 including using a bounding volume hierarchy of N/2 width.
8. The method of claim 7 including performing a box test for the active ray in the half of the register holding the active ray.
9. The method of claim 8 including using a broadcast to replicate a ray across the channels.
10. The method of claim 1 including traversing different hierarchical data structures using said first and second fibers.
11. One or more non-transitory computer readable media storing instructions for performing a sequence comprising: prefetching data for a first fiber; using the first fiber to initiate traversal of a hierarchical data structure while prefetching data for a second fiber and deferring traversal for the second fiber; switching context to the second fiber; and using the second fiber to traverse the hierarchical data structure while prefetching data for another fiber.
12. The media of claim 11 further storing instructions for performing a sequence comprising providing a stream of rays and, once the second fiber terminates, backfilling with another ray.
13. The media of claim 11 further storing instructions for performing a sequence comprising performing a ray traversal of a bounding volume hierarchy using each of the fibers.
14. The media of claim 11 further storing instructions for performing a sequence comprising assigning different channels of a vector register to different fibers.
15. The media of claim 13 further storing instructions for performing a sequence comprising assigning different channels of a vector register to different rays.
16. The media of claim 15 further storing instructions for performing a sequence comprising dividing a vector register into upper and lower halves, storing one ray in one half, and storing another ray in the other half.
17. The media of claim 16 further storing instructions for performing a sequence comprising using a bounding volume hierarchy of N/2 width.
18. The media of claim 17 further storing instructions for performing a sequence comprising performing a box test for the active ray in the half of the register holding the active ray.
19. The media of claim 18 further storing instructions for performing a sequence comprising using a broadcast to replicate a ray across the channels.
20. The media of claim 11 further storing instructions for performing a sequence comprising traversing different hierarchical data structures using the first and second fibers.
21. An apparatus comprising: a processor configured to prefetch data for a first fiber, initiate traversal of a hierarchical data structure using the first fiber while prefetching data for a second fiber and deferring traversal for the second fiber, switch context to the second fiber, and use the second fiber to traverse the hierarchical data structure while prefetching data for another fiber; and a memory coupled to the processor.
22. The apparatus of claim 21, said processor to provide a stream of rays and, once said second fiber is terminated, backfill with another ray.
23. The apparatus of claim 21, said processor to perform a ray traversal of a bounding volume hierarchy using each fiber.
24. The apparatus of claim 21, said processor to assign different channels of a vector register to different fibers.
25. The apparatus of claim 23, said processor to assign different channels of a vector register to different rays. |
Reducing memory access latency during ray traversal

Background

The present disclosure relates to reducing latency from memory accesses during ray traversal.

Ray tracing is used in film production and professional rendering, as well as in graphics-unrelated techniques such as visualization and light-based simulation (ballistics, radar, radio, etc.). Ray tracing finds the nearest (or any) triangle that intersects a given ray. Ray tracing typically works by traversing an acceleration structure, such as a bounding volume hierarchy (BVH). Several techniques have been developed to make this traversal more efficient and to map it to modern central processing unit (CPU) and graphics processing unit (GPU) architectures, in particular the vector single instruction multiple data (SIMD) units found there.

A bounding volume hierarchy is a tree of bounding primitives (e.g., bounding boxes) surrounding the underlying geometry: a box is constructed around each triangle, groups of boxes are enclosed in larger boxes, those in still larger boxes, and so on, forming a representation of the object. In ray tracing, a ray is traversed through the BVH in depth-first or breadth-first fashion, and a ray-box test is performed on each box the ray encounters (if the ray does not intersect the box, the ray can skip that subtree; otherwise it needs to traverse it). If a node that the ray intersects is an internal BVH node, the ray must schedule the child nodes for traversal; if it is a leaf node, the ray is intersected with the primitives stored in that node (to find a ray-primitive intersection).

When using a SIMD unit, one method is to intersect multiple rays with each box (called packet tracing or stream tracing); alternatively, one ray can be intersected with multiple child boxes of the same parent node. A common variant is to use a BVH with a branching factor of 4 (i.e., each internal node has four child nodes, a so-called "quad BVH"), and to perform four box tests in parallel with SIMD (a so-called "quad node" test).

For large models with millions of polygons, ray traversal through a BVH can result in a large number of memory accesses, which often result in cache misses. The depth-first nature of these algorithms makes any kind of prefetching to reduce latency from such memory accesses difficult, if not impossible. Both packet tracing and stream tracing allow the cost of a memory access to be amortized across multiple rays; the cost itself, however, cannot be avoided. When loading BVH nodes (or triangles), a large share of the cost comes from stalls originating from cache misses.
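As an illustration of the quad-node test just described, the following portable C++ sketch tests one ray against the four child boxes of a quad-BVH node using the slab method. A real implementation would evaluate the four lanes with SIMD instructions; the structure-of-arrays layout and per-lane arithmetic shown here map onto that directly. All type and function names are illustrative assumptions.

```cpp
#include <algorithm>

// One ray, with precomputed reciprocal direction for the slab test.
struct Ray { float org[3], rcp_dir[3], tmin, tmax; };

// Four child boxes stored in structure-of-arrays layout, one lane per box,
// so each line of the inner loop maps onto one 4-wide SIMD instruction.
struct QuadNode { float lo[3][4], hi[3][4]; };

// Returns a 4-bit mask of the children the ray overlaps (the "quad node" test).
int quad_box_test(const Ray& ray, const QuadNode& node) {
    int mask = 0;
    for (int lane = 0; lane < 4; ++lane) {   // one SIMD lane per child box
        float t0 = ray.tmin, t1 = ray.tmax;
        for (int axis = 0; axis < 3; ++axis) {
            float tn = (node.lo[axis][lane] - ray.org[axis]) * ray.rcp_dir[axis];
            float tf = (node.hi[axis][lane] - ray.org[axis]) * ray.rcp_dir[axis];
            if (tn > tf) std::swap(tn, tf);  // handle negative direction components
            t0 = std::max(t0, tn);
            t1 = std::min(t1, tf);
        }
        if (t0 <= t1) mask |= 1 << lane;     // slab intervals overlap: a hit
    }
    return mask;
}
```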
Hardware hyperthreading (multiple hardware threads per CPU core) certainly helps to hide some of this latency, but since the number of hardware threads per CPU core is small, it is not sufficient by itself.

Brief Description of the Drawings

Some embodiments are described with reference to the following figures:

Figure 1 is a flow chart for one embodiment;
Figure 2 is a flow chart for another embodiment;
Figure 3 is a depiction of a vector register architecture, in accordance with one embodiment;
Figure 4 is a block diagram of a processing system, in accordance with one embodiment;
Figure 5 is a block diagram of a processor, in accordance with one embodiment;
Figure 6 is a block diagram of a graphics processor, in accordance with one embodiment;
Figure 7 is a block diagram of a graphics processing engine, in accordance with one embodiment;
Figure 8 is a block diagram of another embodiment of a graphics processor;
Figure 9 is a depiction of thread execution logic, in accordance with one embodiment;
Figure 10 is a block diagram of a graphics processor instruction format, in accordance with some embodiments;
Figure 11 is a block diagram of another embodiment of a graphics processor;
Figure 12A is a block diagram of a graphics processor command format, in accordance with some embodiments;
Figure 12B is a block diagram showing a graphics processor command sequence, in accordance with some embodiments;
Figure 13 is a depiction of an exemplary graphics software architecture, in accordance with some embodiments;
Figure 14 is a block diagram showing an IP core development system, in accordance with some embodiments;
Figure 15 is a block diagram showing an exemplary system-on-chip integrated circuit, in accordance with some embodiments;
Figure 16 is a block diagram of a graphics processor in a system on a chip, according to one embodiment; and
Figure 17 is a block diagram of another graphics processor, in accordance with one embodiment.

Detailed Description

Fibering, or software hyperthreading, hides more latency by switching between different traversals within the same hardware thread. (In hyperthreading, multiple threads can be executed by the same processor at one time.)

Multiple fibers are used to perform multiple independent ray traversals in the same logical software thread. In particular, when one of the two fibers wants to perform a memory access, the memory access can be deferred: a prefetch of the necessary data is issued and execution switches to the other fiber, so that the data arrives before the result of the memory access is actually required.

In one embodiment, the two different rays traverse paths through the same underlying data structure (the BVH of the same scene), but are located at different positions within that BVH. In another embodiment, each fiber traverses a different BVH.

Using fibers ("fibering") is advantageous for context switching from ray to ray, as opposed to switching between threads, since one fiber does not interrupt the other. The fibers cooperate, stopping and starting at clearly defined points.

In a ray-traced scene, two (or more) completely independent rays are used, and two (or more) completely independent traversal contexts are maintained (one for each ray). Then, each time a ray wants to access a BVH node (or a triangle in a BVH leaf), a prefetch of the necessary data is issued, but the access is not processed immediately (because the data is not yet available). Instead, the process temporarily switches, via a fiber switch, to perform the next traversal step for the other ray, whose data was previously prefetched.
Once the other ray wants to make its next memory access, the flow returns to the suspended ray.

Switching from one ray to the other can occur in round-robin fashion. If a ray terminates, it can be replaced by a "new" ray from the input ray packet or input ray stream, to maximize the time that two "live" software fibers are active per hardware thread.

One method completely switches the state of the two rays during the "fiber switch". Another method stores the states of both rays in the lower and upper halves of the same set of vector registers and uses a mask to quickly "switch" which half is active in the next step.

These methods are not intended to amortize the memory stall; rather, they avoid memory stalls by interleaving computation from the other fiber while the data for the current fiber is being prefetched.

Instead of traversing one ray, multiple rays (e.g., two) are traversed in parallel, hiding the load latency by alternating between the two rays. After popping a BVH node from the stack, the data for the next step is prefetched, but instead of performing that next step, the process switches to the next step of the other ray, and only returns to the current ray once the other ray has issued its own prefetch.

The sequence 10 shown in Figure 1 can proceed as follows (a code sketch of this loop appears after the list):

1. Initiate traversal for N rays (box 12): initialize N traversal stacks (one for each ray), each starting at the root;
2. Select one of the rays as the active ray and one as the paused ray (box 14);
3. While not done:
a. pop the next traversal node of the active ray from the active ray's node stack (box 16);
b. prefetch all data for the traversal step of that node (box 18);
c. switch the active and background rays (block 20);
d. load the (previously prefetched) child nodes of the new active ray (box 22).

One problem with this sequence is that when one ray terminates, the other ray no longer has a partner behind which to hide latency. This can be solved by operating on an entire ray packet or ray stream: once any ray terminates, the freed fiber slot is immediately backfilled with the next unprocessed ray from the stream, thus keeping multiple active "fibers" per hardware thread.

In its naive form, the sequence performs a "full handoff" (in the software-fiber sense, not in the operating-system context-switch sense). This is conceptually simple but costly. Without hardware support for such switching, the full ray state is reloaded on every traversal step: the states of more than one ray do not all fit in the registers at once, so a ray's data is reloaded each time the context is switched.

In one embodiment, shown in Figure 3, different channels of the vector registers are assigned to different rays/fibers, as indicated by block 32 in sequence 30 of Figure 2. Fibering can be done using a full swap-out/swap-in, where one ray is written from the registers to memory and the other ray is loaded. In a static method, fibering is accomplished by keeping the states of two rays in the same set of registers, some of which remain reserved for the first ray and others for the second ray. This is even something the compiler can do.
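The following is a minimal sketch of the alternating loop of sequence 10, under stated assumptions: the node layout, the stack representation, and the use of the GCC/Clang __builtin_prefetch intrinsic are all illustrative, not the disclosed implementation. Each ray dereferences only a node that was prefetched before the previous fiber switch.

```cpp
#include <cstdint>
#include <vector>

// Illustrative node and per-ray traversal state; layouts are assumptions.
struct BvhNode { uint32_t child[4]; uint32_t child_count; bool leaf; };

struct TraversalState {
    std::vector<uint32_t> stack;  // node indices still to visit
    uint32_t pending = 0;         // node prefetched before the last switch
    bool done = false;
};

// Ray/primitive intersection at a leaf; the actual test is elided here.
inline void intersect_leaf(const BvhNode&, int /*ray_id*/) {}

void traverse_two_rays(const BvhNode* nodes, TraversalState st[2]) {
    for (int r = 0; r < 2; ++r) {                 // box 12: both rays start at
        st[r].pending = 0;                        // the root node
        __builtin_prefetch(&nodes[0]);
    }
    int active = 0;                               // box 14: choose the active ray
    while (!st[0].done || !st[1].done) {
        TraversalState& s = st[active];
        if (s.done) { active ^= 1; continue; }
        const BvhNode& n = nodes[s.pending];      // box 22: prefetched one turn
        if (n.leaf) {                             // ago, so this load is cheap
            intersect_leaf(n, active);
        } else {
            for (uint32_t c = 0; c < n.child_count; ++c)
                s.stack.push_back(n.child[c]);    // schedule surviving children
        }
        if (s.stack.empty()) {
            s.done = true;                        // this fiber slot could now be
        } else {                                  // backfilled from the ray stream
            s.pending = s.stack.back();           // box 16: pop next node
            s.stack.pop_back();
            __builtin_prefetch(&nodes[s.pending]);// box 18: issue its prefetch
        }
        active ^= 1;                              // block 20: fiber switch
    }
}
```

Note that the "fiber switch" is just flipping an index; the per-ray state lives in st[2] throughout, which is what makes the switch essentially free in this sketch.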
Fibering can also be accomplished by manually assigning the states of two (or more) rays to the lower and upper halves of the same registers. Instead of replicating one ray across all N channels of the R N-wide registers (and using them to intersect N boxes), each of these R registers is divided into an upper half and a lower half (box 36), with the ray of one fiber stored in the upper half and the ray of the other fiber in the lower half (box 38). Then, in one embodiment, an N/2-wide BVH (box 40) can be used instead of an N-wide BVH. Using a mask, an N/2-wide (e.g., 4-wide) box test for the active ray is performed in the corresponding valid half of the registers (block 42).

Sequences 10 and 30 can be implemented in hardware, software, and/or firmware. In software and firmware embodiments, computer-executed instructions may be stored in one or more non-transitory computer-readable media, such as magnetic, optical, or semiconductor storage.

In one embodiment, the four box tests of the two different rays are not performed in parallel; the four boxes are simply a set of four boxes that can be processed in parallel in the SIMD channels. In one embodiment, one of the rays is always inactive (its nodes are being loaded in the background), and that ray's SIMD channels are always masked. This means that SIMD efficiency is traded for memory latency. In some embodiments, the box test is performed at half SIMD utilization (half of the channels are always masked, because the inactive ray's boxes are not yet loaded), but the states of the two different rays can easily be kept in the same registers, so switching states has virtually zero cost.

Always masking half of the registers is counterintuitive, because it results in more instructions. In practice, however, it means trading an 8-wide BVH with many memory stalls (nearly one stall per node test) for a 4-wide BVH with significantly fewer memory stalls. Since a 4-wide BVH requires only slightly more box tests per ray than an 8-wide BVH, a significant reduction in memory latency is bought with only slightly more computation.

A swizzle or broadcast can also be used to replicate a ray across multiple channels. Each register can be split into four ray states, with a fast context switch performed by a blend that replicates any of the four rays across all channels. In other respects, the concept is the same: with (up to) four parallel ray states, all ray states are preloaded in pre-partitioned vector registers, there are four distinct traversal stacks, and a "context switch" occurs after any given prefetch.

The same concept can be used for the ray/primitive intersection: the primitive data required for the ray-primitive test is prefetched, and then a context switch is made to another ray (which may be traversing or intersecting).

The fact that half of the vector is always masked can of course be exploited by the hardware to save power or to perform the vector operations in fewer cycles.

Extending the scheme to more than two rays, to hide even more latency, is straightforward.

The same techniques can also be used for various other workloads that involve depth-first traversal or pointer chasing through hierarchical data structures (e.g., picking, nearest-neighbor queries, or searching).
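The half-register scheme of boxes 36-42 can be sketched as follows, with a plain array standing in for one 8-wide vector register; the names and lane layout are illustrative assumptions. Ray A is replicated across lanes 0-3 and ray B across lanes 4-7, so a fiber switch merely changes which half of each register the 4-wide box test reads, and no ray state moves between registers and memory.

```cpp
#include <algorithm>
#include <array>

// One 8-wide "register": lanes 0-3 hold ray A, lanes 4-7 hold ray B (box 38).
// Within its half, each ray's origin/direction values are replicated per lane.
using Vec8 = std::array<float, 8>;

struct TwoRayRegs {             // the R pre-partitioned registers (box 36)
    Vec8 org[3], rcp_dir[3];    // per-lane ray origin and reciprocal direction
    Vec8 tmin, tmax;
};

// Four child boxes of one node of the 4-wide (N/2-wide) BVH (box 40).
struct QuadNode { float lo[3][4], hi[3][4]; };

// Masked 4-wide slab test: only the active ray's half of each register
// participates; the inactive half stays untouched (block 42).
int masked_box_test(const TwoRayRegs& r, const QuadNode& n, int active) {
    const int base = active * 4;           // lane offset selecting the half
    int mask = 0;
    for (int i = 0; i < 4; ++i) {          // 4 SIMD lanes, one per child box
        const int lane = base + i;
        float t0 = r.tmin[lane], t1 = r.tmax[lane];
        for (int axis = 0; axis < 3; ++axis) {
            float a = (n.lo[axis][i] - r.org[axis][lane]) * r.rcp_dir[axis][lane];
            float b = (n.hi[axis][i] - r.org[axis][lane]) * r.rcp_dir[axis][lane];
            if (a > b) std::swap(a, b);
            t0 = std::max(t0, a);
            t1 = std::min(t1, b);
        }
        if (t0 <= t1) mask |= 1 << i;      // hit mask for the active ray's node
    }
    return mask;
}
```

Here a fiber switch is just flipping the active argument between 0 and 1; this illustrates why keeping both ray states resident makes the switch essentially free at the cost of half the SIMD lanes per test.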
FIG. 4 is a block diagram of a processing system 100 in accordance with an embodiment. In various embodiments, system 100 includes one or more processors 102 and one or more graphics processors 108, and can be a single-processor desktop system, a multi-processor workstation system, or a server system having a large number of processors 102 or processor cores 107. In one embodiment, system 100 is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices.

An embodiment of system 100 can include, or be incorporated within, a server-based gaming platform or a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In some embodiments, system 100 is a mobile phone, smart phone, tablet computing device, or mobile internet device. Data processing system 100 can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In some embodiments, data processing system 100 is a television or set-top box device having one or more processors 102 and a graphical interface generated by one or more graphics processors 108.

In some embodiments, the one or more processors 102 each include one or more processor cores 107 to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores 107 is configured to process a specific instruction set 109. In some embodiments, instruction set 109 may facilitate complex instruction set computing (CISC), reduced instruction set computing (RISC), or computing via a very long instruction word (VLIW). Multiple processor cores 107 may each process a different instruction set 109, which may include instructions to facilitate the emulation of other instruction sets. Processor core 107 may also include other processing devices, such as a digital signal processor (DSP).

In some embodiments, processor 102 includes cache memory 104. Depending on the architecture, processor 102 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of processor 102. In some embodiments, processor 102 also uses an external cache (e.g., a Level 3 (L3) cache or last level cache (LLC)) (not shown), which may be shared among processor cores 107 using known cache coherency techniques. A register file 106 is additionally included in processor 102 and may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of processor 102.

In some embodiments, processor 102 is coupled to a processor bus 110 to transmit communication signals such as address, data, or control signals between processor 102 and other components in system 100. In one embodiment, system 100 uses an exemplary "hub" system architecture, including a memory controller hub 116 and an input/output (I/O) controller hub 130. Memory controller hub 116 facilitates communication between a memory device and other components of system 100, while I/O controller hub (ICH) 130 provides connections to I/O devices via a local I/O bus.
In one embodiment, the logic of memory controller hub 116 is integrated within the processor.

Memory device 120 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, a phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment, memory device 120 can operate as system memory for system 100, storing data 122 and instructions 121 for use when the one or more processors 102 execute an application or process. Memory controller hub 116 also couples with an optional external graphics processor 112, which may communicate with the one or more graphics processors 108 in processor 102 to perform graphics and media operations.

In some embodiments, ICH 130 enables peripherals to connect to memory device 120 and processor 102 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 146, a firmware interface 128, a wireless transceiver 126 (e.g., Wi-Fi, Bluetooth), a data storage device 124 (e.g., hard disk drive, flash memory, etc.), and a legacy I/O controller 140 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. One or more universal serial bus (USB) controllers 142 connect input devices, such as keyboard and mouse 144 combinations. A network controller 134 may also couple to ICH 130. In some embodiments, a high-performance network controller (not shown) couples to processor bus 110. It will be appreciated that the system 100 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, I/O controller hub 130 may be integrated within the one or more processors 102, or memory controller hub 116 and I/O controller hub 130 may be integrated into a discrete external graphics processor, such as external graphics processor 112.

FIG. 5 is a block diagram of an embodiment of a processor 200 having one or more processor cores 202A-202N, an integrated memory controller 214, and an integrated graphics processor 208. Those elements of FIG. 5 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. Processor 200 can include additional cores up to and including additional core 202N, represented by the dashed boxes. Each of processor cores 202A-202N includes one or more internal cache units 204A-204N. In some embodiments, each processor core also has access to one or more shared cache units 206.

The internal cache units 204A-204N and shared cache units 206 represent a cache memory hierarchy within processor 200. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other level of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 206 and 204A-204N.

In some embodiments, processor 200 may also include a set of one or more bus controller units 216 and a system agent core 210. The one or more bus controller units 216 manage a set of peripheral buses, such as one or more peripheral component interconnect buses (e.g., PCI, PCI Express).
In some embodiments, processor 200 may also include a set of one or more bus controller units 216 and a system agent core 210. The one or more bus controller units 216 manage a set of peripheral buses, such as one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express). System agent core 210 provides management functionality for the various processor components. In some embodiments, system agent core 210 includes one or more integrated memory controllers 214 to manage access to various external memory devices (not shown).

In some embodiments, one or more of processor cores 202A-202N include support for simultaneous multi-threading. In such an embodiment, system agent core 210 includes components for coordinating and operating cores 202A-202N during multi-threaded processing. System agent core 210 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores 202A-202N and graphics processor 208.

In some embodiments, processor 200 additionally includes graphics processor 208 to execute graphics processing operations. In some embodiments, graphics processor 208 couples with the set of shared cache units 206 and with system agent core 210, including the one or more integrated memory controllers 214. In some embodiments, a display controller 211 is coupled with graphics processor 208 to drive graphics processor output to one or more coupled displays. In some embodiments, display controller 211 may be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within graphics processor 208 or system agent core 210.

In some embodiments, a ring-based interconnect unit 212 is used to couple the internal components of processor 200. However, alternative interconnect units may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some embodiments, graphics processor 208 couples with ring interconnect 212 via an I/O link 213.

The exemplary I/O link 213 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 218, such as an eDRAM module. In some embodiments, each of processor cores 202A-202N and graphics processor 208 use embedded memory module 218 as a shared Last Level Cache.

In some embodiments, processor cores 202A-202N are homogeneous cores executing the same instruction set architecture. In another embodiment, processor cores 202A-202N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 202A-202N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment, processor cores 202A-202N are heterogeneous in terms of microarchitecture, where one or more cores having relatively higher power consumption couple with one or more cores having lower power consumption. Additionally, processor 200 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.

FIG. 6 is a block diagram of a graphics processor 300, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores. In some embodiments, the graphics processor communicates via a memory-mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. In some embodiments, graphics processor 300 includes a memory interface 314 to access memory.
Memory interface 314 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.

In some embodiments, graphics processor 300 also includes a display controller 302 to drive display output data to a display device 320. Display controller 302 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. In some embodiments, graphics processor 300 includes a video codec engine 306 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, the Society of Motion Picture and Television Engineers (SMPTE) 421M/VC-1 format, and Joint Photographic Experts Group (JPEG) formats such as JPEG and Motion JPEG (MJPEG).

In some embodiments, graphics processor 300 includes a block image transfer (BLIT) engine 304 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of graphics processing engine (GPE) 310. In some embodiments, GPE 310 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.

In some embodiments, GPE 310 includes a 3D pipeline 312 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). 3D pipeline 312 includes programmable and fixed-function elements that perform various tasks within the elements and/or spawn execution threads to a 3D/media subsystem 315. While 3D pipeline 312 can be used to perform media operations, an embodiment of GPE 310 also includes a media pipeline 316 that is specifically used to perform media operations, such as video post-processing and image enhancement.

In some embodiments, media pipeline 316 includes fixed-function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration, in place of, or on behalf of, video codec engine 306. In some embodiments, media pipeline 316 additionally includes a thread spawning unit to spawn threads for execution on 3D/media subsystem 315. The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/media subsystem 315.

In some embodiments, 3D/media subsystem 315 includes logic for executing threads spawned by 3D pipeline 312 and media pipeline 316. In one embodiment, the pipelines send thread execution requests to 3D/media subsystem 315, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, 3D/media subsystem 315 includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.
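The arbitration performed by such thread dispatch logic can be illustrated in software. The sketch below is a hypothetical illustration only; the queue, the availability test, and all names are assumptions, not the disclosed hardware.

    #include <deque>
    #include <vector>

    // Hypothetical sketch: arbitrate queued thread execution requests from the
    // 3D and media pipelines onto the first available execution unit.
    struct ThreadRequest { int pipeline_id; int thread_id; };

    struct ExecutionUnitModel {
        bool busy = false;
        void start(const ThreadRequest&) { busy = true; } // begin executing a thread
    };

    void dispatch(std::deque<ThreadRequest>& pending,
                  std::vector<ExecutionUnitModel>& units) {
        while (!pending.empty()) {
            ExecutionUnitModel* free_unit = nullptr;
            for (auto& eu : units) {
                if (!eu.busy) { free_unit = &eu; break; } // find an idle unit
            }
            if (!free_unit) return;          // no resources: requests stay queued
            free_unit->start(pending.front());
            pending.pop_front();             // request granted, remove from queue
        }
    }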
FIG. 7 is a block diagram of a graphics processing engine 410 of a graphics processor in accordance with some embodiments. In one embodiment, graphics processing engine (GPE) 410 is a version of the GPE 310 shown in FIG. 6. Elements of FIG. 7 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. For example, the 3D pipeline 312 and media pipeline 316 of FIG. 6 are illustrated. Media pipeline 316 is optional in some embodiments of GPE 410 and may not be explicitly included within GPE 410. For example, and in at least one embodiment, a separate media and/or image processor is coupled to GPE 410.

In some embodiments, GPE 410 couples with a command streamer 403, which provides a command stream to the GPE 3D pipeline 312 and/or media pipeline 316. In some embodiments, command streamer 403 is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, command streamer 403 receives commands from the memory and sends the commands to 3D pipeline 312 and/or media pipeline 316. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline 312 and media pipeline 316. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for 3D pipeline 312 can also include references to data stored in memory, such as, but not limited to, vertex and geometry data for 3D pipeline 312 and/or image data and memory objects for media pipeline 316. 3D pipeline 312 and media pipeline 316 process the commands by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to a graphics core array 414.

In various embodiments, 3D pipeline 312 can execute one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing the instructions and dispatching execution threads to graphics core array 414. Graphics core array 414 provides a unified block of execution resources. Multi-purpose execution logic (e.g., execution units) within graphics core array 414 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.

In some embodiments, graphics core array 414 also includes execution logic to perform media functions, such as video and/or image processing. In one embodiment, the execution units additionally include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations. The general-purpose logic can perform processing operations in parallel or in conjunction with general-purpose logic within processor core(s) 107 of FIG. 4 or cores 202A-202N of FIG. 5.

Output data generated by threads executing on graphics core array 414 can output the data to memory in a unified return buffer (URB) 418. URB 418 can store data for multiple threads. In some embodiments, URB 418 may be used to send data between different threads executing on graphics core array 414. In some embodiments, URB 418 may additionally be used for synchronization between threads on the graphics core array and fixed-function logic within shared function logic 420.
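The role of such a return buffer can be sketched in software. The per-thread slot layout and every name below are hypothetical; this illustrates the shared output-buffer idea only, not the disclosed design.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Hypothetical sketch of a unified return buffer: producer threads write
    // results into per-thread slots of one shared allocation, and consumers
    // (other threads or fixed-function logic) read them back by thread id.
    class UnifiedReturnBufferModel {
    public:
        UnifiedReturnBufferModel(std::size_t threads, std::size_t slot_words)
            : slot_words_(slot_words), storage_(threads * slot_words, 0) {}

        // A thread publishes one word of its output.
        void write(std::size_t thread_id, std::size_t offset, uint32_t value) {
            storage_[thread_id * slot_words_ + offset] = value;
        }

        // Another thread (or fixed-function logic) consumes the result.
        uint32_t read(std::size_t thread_id, std::size_t offset) const {
            return storage_[thread_id * slot_words_ + offset];
        }

    private:
        std::size_t slot_words_;
        std::vector<uint32_t> storage_;
    };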
In some embodiments, graphics core array 414 is scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of GPE 410. In one embodiment, the execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed.

Graphics core array 414 couples with shared function logic 420 that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within shared function logic 420 are hardware logic units that provide specialized supplemental functionality to graphics core array 414. In various embodiments, shared function logic 420 includes, but is not limited to, sampler 421, math 422, and inter-thread communication (ITC) 423 logic. Additionally, some embodiments implement one or more caches 425 within shared function logic 420. A shared function is implemented where the demand for a given specialized function is insufficient for inclusion within graphics core array 414. Instead, a single instantiation of that specialized function is implemented as a stand-alone entity in shared function logic 420 and shared among the execution resources within graphics core array 414. The precise set of functions that are shared between graphics core array 414 and included within graphics core array 414 varies between embodiments.

FIG. 8 is a block diagram of another embodiment of a graphics processor 500. Elements of FIG. 8 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

In some embodiments, graphics processor 500 includes a ring interconnect 502, a pipeline front-end 504, a media engine 537, and graphics cores 580A-580N. In some embodiments, ring interconnect 502 couples the graphics processor to other processing units, including other graphics processors or one or more general-purpose processor cores. In some embodiments, the graphics processor is one of many processors integrated within a multi-core processing system.

In some embodiments, graphics processor 500 receives batches of commands via ring interconnect 502. The incoming commands are interpreted by a command streamer 503 in pipeline front-end 504. In some embodiments, graphics processor 500 includes scalable execution logic to perform 3D geometry processing and media processing via the graphics core(s) 580A-580N. For 3D geometry processing commands, command streamer 503 supplies commands to geometry pipeline 536. For at least some media processing commands, command streamer 503 supplies the commands to a video front end 534, which couples with a media engine 537. In some embodiments, media engine 537 includes a Video Quality Engine (VQE) 530 for video and image post-processing and a multi-format encode/decode (MFX) 533 engine to provide hardware-accelerated media data encode and decode.
In some embodiments, geometry pipeline 536 and media engine 537 each generate execution threads for the thread execution resources provided by at least one graphics core 580A.

In some embodiments, graphics processor 500 includes scalable thread execution resources featuring modular cores 580A-580N (sometimes referred to as core slices), each having multiple sub-cores 550A-550N, 560A-560N (sometimes referred to as sub-slices). In some embodiments, graphics processor 500 can have any number of graphics cores 580A through 580N. In some embodiments, graphics processor 500 includes a graphics core 580A having at least a first sub-core 550A and a second sub-core 560A. In other embodiments, the graphics processor is a low-power processor with a single sub-core (e.g., 550A). In some embodiments, graphics processor 500 includes multiple graphics cores 580A-580N, each including a set of first sub-cores 550A-550N and a set of second sub-cores 560A-560N. Each sub-core in the set of first sub-cores 550A-550N includes at least a first set of execution units 552A-552N and media/texture samplers 554A-554N. Each sub-core in the set of second sub-cores 560A-560N includes at least a second set of execution units 562A-562N and samplers 564A-564N. In some embodiments, each sub-core 550A-550N, 560A-560N shares a set of shared resources 570A-570N. In some embodiments, the shared resources include shared cache memory and pixel operation logic. Other shared resources may also be included in the various embodiments of the graphics processor.

FIG. 9 illustrates thread execution logic 600 including an array of processing elements employed in some embodiments of a GPE. Elements of FIG. 9 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

In some embodiments, thread execution logic 600 includes a shader processor 602, a thread dispatcher 604, an instruction cache 606, a scalable execution unit array including a plurality of execution units 608A-608N, a sampler 610, a data cache 612, and a data port 614. In one embodiment, the scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of execution units 608A, 608B, 608C, 608D, through 608N-1 and 608N) based on the computational requirements of a workload. In one embodiment, the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, thread execution logic 600 includes one or more connections to memory (such as system memory or cache memory) through one or more of instruction cache 606, data port 614, sampler 610, and execution units 608A-608N. In some embodiments, each execution unit (e.g., 608A) is a stand-alone programmable general-purpose computational unit capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In various embodiments, the array of execution units 608A-608N is scalable to include any number of individual execution units.

In some embodiments, execution units 608A-608N are primarily used to execute shader programs. Shader processor 602 can process the various shader programs and dispatch the execution threads associated with the shader programs via a thread dispatcher 604.
In one embodiment, the thread dispatcher includes logic to arbitrate thread initiation requests from the graphics and media pipelines and to instantiate the requested threads on one or more execution units among execution units 608A-608N. For example, the geometry pipeline (e.g., 536 of FIG. 8) can dispatch vertex, tessellation, or geometry shaders to thread execution logic 600 (FIG. 9) for processing. In some embodiments, thread dispatcher 604 can also process runtime thread spawning requests from the executing shader programs.

In some embodiments, execution units 608A-608N support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders), and general-purpose processing (e.g., compute and media shaders). Each of the execution units 608A-608N is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment in the face of higher-latency memory accesses. Each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread state. Execution is multi-issue per clock to pipelines capable of integer, single- and double-precision floating-point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. While waiting for data from memory or one of the shared functions, dependency logic within execution units 608A-608N causes a waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, a fragment shader, or another type of shader program, including a different vertex shader.

Each execution unit in execution units 608A-608N operates on arrays of data elements. The number of data elements is the 'execution size', or the number of channels for the instruction. An execution channel is a logical unit for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical arithmetic logic units (ALUs) or floating-point units (FPUs) for a particular graphics processor. In some embodiments, execution units 608A-608N support integer and floating-point data types.

The execution unit instruction set includes SIMD instructions. The various data elements can be stored as a packed data type in a register, and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit-wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double-Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.
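The relationship between register width, element size, and channel count described above is simple arithmetic: channels = register bits / element bits. The following minimal C++ sketch is purely illustrative; the function name is a hypothetical placeholder.

    #include <cstdio>

    // Channels available when a register of `register_bits` is treated as a
    // packed vector of elements of `element_bits` (e.g., a 256-bit register
    // holding 32-bit DW elements yields 8 channels), per the packed data
    // types described above.
    constexpr int channels(int register_bits, int element_bits) {
        return register_bits / element_bits;
    }

    int main() {
        static_assert(channels(256, 64) == 4,  "four QW elements");
        static_assert(channels(256, 32) == 8,  "eight DW elements");
        static_assert(channels(256, 16) == 16, "sixteen W elements");
        static_assert(channels(256, 8)  == 32, "thirty-two B elements");
        std::printf("256-bit vector as DW elements: %d channels\n",
                    channels(256, 32));
        return 0;
    }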
One or more internal instruction caches (e.g., 606) are included in thread execution logic 600 to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g., 612) are included to cache thread data during thread execution. In some embodiments, a sampler 610 is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, sampler 610 includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.

During execution, the graphics and media pipelines send thread initiation requests to thread execution logic 600 via thread spawning and dispatch logic. Once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within shader processor 602 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, a pixel shader or fragment shader calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, pixel processor logic within shader processor 602 then executes an application programming interface (API)-supplied pixel or fragment shader program. To execute the shader program, shader processor 602 dispatches threads to an execution unit (e.g., 608A) via thread dispatcher 604. In some embodiments, pixel shader 602 uses texture sampling logic in sampler 610 to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.

In some embodiments, data port 614 provides a memory access mechanism for thread execution logic 600 to output processed data to memory for processing on a graphics processor output pipeline. In some embodiments, data port 614 includes or couples to one or more cache memories (e.g., data cache 612) to cache data for memory access via the data port.

FIG. 10 is a block diagram illustrating a graphics processor instruction format 700 in accordance with some embodiments. In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats. The solid-lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a subset of the instructions. In some embodiments, the instruction format 700 described and illustrated consists of macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed.

In some embodiments, the graphics processor execution units natively support instructions in a 128-bit instruction format 710. A 64-bit compacted instruction format 730 is available for some instructions based on the selected instruction, instruction options, and number of operands.
The native 128-bit instruction format 710 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 730. The native instructions available in the 64-bit format 730 vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field 713. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format 710.

For each format, instruction opcode 712 defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction, the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, instruction control field 714 enables control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For instructions in the 128-bit instruction format 710, an exec-size field 716 limits the number of data channels that will be executed in parallel. In some embodiments, exec-size field 716 is not available for use in the 64-bit compact instruction format 730.

Some execution unit instructions have up to three operands, including two source operands, src0 720 and src1 722, and one destination 718. In some embodiments, the execution units support dual-destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 724), where the instruction opcode 712 determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.

In some embodiments, the 128-bit instruction format 710 includes access/address mode information 726, which specifies, for example, whether direct register addressing mode or indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction.

In some embodiments, the 128-bit instruction format 710 includes an access/address mode field 726, which specifies an address mode and/or an access mode for the instruction. In one embodiment, the access mode is used to define a data access alignment for the instruction. Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction may use byte-aligned addressing for source and destination operands, and when in a second mode, the instruction may use 16-byte-aligned addressing for all source and destination operands.

In one embodiment, the address mode portion of the access/address mode field 726 determines whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used, bits in the instruction 710 directly provide the register address of one or more operands.
When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction.

In some embodiments, instructions are grouped based on opcode 712 bit fields to simplify opcode decode 740. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. In some embodiments, a move and logic opcode group 742 includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, move and logic group 742 shares the five most significant bits (MSBs), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group 744 (e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 746 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 748 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). Parallel math group 748 performs the arithmetic operations in parallel across data channels. A vector math group 750 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic operations, such as dot product calculations, on vector operands.
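Since the group is encoded in the high bits of the opcode, classifying an opcode reduces to masking and comparing those bits. The sketch below is illustrative only; the enum and function names are hypothetical, and the bit patterns are the example groupings given above.

    #include <cstdint>

    // Classify an 8-bit opcode into the example groups above by testing
    // bits 4..6 (0000b/0001b move+logic, 0010b flow control, 0011b
    // miscellaneous, 0100b parallel math, 0101b vector math).
    enum class OpcodeGroup { MoveLogic, FlowControl, Miscellaneous,
                             ParallelMath, VectorMath, Unknown };

    constexpr OpcodeGroup classify(uint8_t opcode) {
        switch ((opcode >> 4) & 0x7) {  // keep bits 4..6, drop the low bits
            case 0x0:                    // 0000xxxxb: mov-style instructions
            case 0x1: return OpcodeGroup::MoveLogic;     // 0001xxxxb: logic
            case 0x2: return OpcodeGroup::FlowControl;   // 0010xxxxb (0x20)
            case 0x3: return OpcodeGroup::Miscellaneous; // 0011xxxxb (0x30)
            case 0x4: return OpcodeGroup::ParallelMath;  // 0100xxxxb (0x40)
            case 0x5: return OpcodeGroup::VectorMath;    // 0101xxxxb (0x50)
            default:  return OpcodeGroup::Unknown;
        }
    }

    static_assert(classify(0x20) == OpcodeGroup::FlowControl, "jmp-style opcode");
    static_assert(classify(0x40) == OpcodeGroup::ParallelMath, "add/mul-style opcode");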
FIG. 11 is a block diagram of another embodiment of a graphics processor 800. Elements of FIG. 11 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

In some embodiments, graphics processor 800 includes a graphics pipeline 820, a media pipeline 830, a display engine 840, thread execution logic 850, and a render output pipeline 870. In some embodiments, graphics processor 800 is a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor 800 over a ring interconnect 802. In some embodiments, ring interconnect 802 couples graphics processor 800 to other processing components, such as other graphics processors or general-purpose processors. Commands from ring interconnect 802 are interpreted by a command streamer 803, which supplies instructions to individual components of graphics pipeline 820 or media pipeline 830.

In some embodiments, command streamer 803 directs the operation of a vertex fetcher 805, which reads vertex data from memory and executes vertex-processing commands provided by command streamer 803. In some embodiments, vertex fetcher 805 provides vertex data to a vertex shader 807, which performs coordinate-space transformation and lighting operations on each vertex. In some embodiments, vertex fetcher 805 and vertex shader 807 execute vertex-processing instructions by dispatching execution threads to execution units 852A-852B via a thread dispatcher 831.

In some embodiments, execution units 852A-852B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, execution units 852A-852B have an attached L1 cache 851 that is specific to each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.

In some embodiments, graphics pipeline 820 includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader 811 configures the tessellation operations. A programmable domain shader 817 provides back-end evaluation of the tessellation output. A tessellator 813 operates at the direction of hull shader 811 and contains special-purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to graphics pipeline 820. In some embodiments, if tessellation is not used, the tessellation components (e.g., hull shader 811, tessellator 813, and domain shader 817) can be bypassed.

In some embodiments, complete geometric objects can be processed by a geometry shader 819 via one or more threads dispatched to execution units 852A-852B, or can proceed directly to the clipper 829. In some embodiments, the geometry shader operates on entire geometric objects, rather than on vertices or patches of vertices as in previous stages of the graphics pipeline. If the tessellation is disabled, geometry shader 819 receives input from vertex shader 807. In some embodiments, geometry shader 819 is programmable by a geometry shader program to perform geometry tessellation when the tessellation units are disabled.

Before rasterization, a clipper 829 processes vertex data. Clipper 829 may be a fixed-function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer and depth test component 873 in render output pipeline 870 dispatches pixel shaders to convert the geometric objects into their per-pixel representations. In some embodiments, pixel shader logic is included in thread execution logic 850. In some embodiments, an application can bypass the rasterizer and depth test component 873 and access un-rasterized vertex data via a stream-out unit 823.

Graphics processor 800 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing among the major components of the processor. In some embodiments, execution units 852A-852B and associated cache(s) 851, texture and media sampler 854, and texture/sampler cache 858 interconnect via a data port 856 to perform memory access and communicate with the render output pipeline components of the processor. In some embodiments, sampler 854, caches 851 and 858, and execution units 852A-852B each have separate memory access paths.

In some embodiments, render output pipeline 870 contains a rasterizer and depth test component 873 that converts vertex-based objects into their associated pixel-based representations. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed-function triangle and line rasterization. An associated render cache 878 and depth cache 879 are also available in some embodiments.
A pixel operations component 877 performs pixel-based operations on the data, though in some instances pixel operations associated with 2D operations (e.g., bit-block image transfers with blending) are performed by the 2D engine 841, or substituted at display time by the display controller 843 using overlay display planes. In some embodiments, a shared L3 cache 875 is available to all graphics components, allowing data to be shared without the use of main system memory.

In some embodiments, graphics processor media pipeline 830 includes a media engine 837 and a video front end 834. In some embodiments, video front end 834 receives pipeline commands from command streamer 803. In some embodiments, media pipeline 830 includes a separate command streamer. In some embodiments, video front end 834 processes media commands before sending the commands to media engine 837. In some embodiments, media engine 837 includes thread spawning functionality to spawn threads for dispatch to thread execution logic 850 via thread dispatcher 831.

In some embodiments, graphics processor 800 includes a display engine 840. In some embodiments, display engine 840 is external to processor 800 and couples with the graphics processor via ring interconnect 802, or some other interconnect bus or fabric. In some embodiments, display engine 840 includes a 2D engine 841 and a display controller 843. In some embodiments, display engine 840 contains special-purpose logic capable of operating independently of the 3D pipeline. In some embodiments, display controller 843 couples with a display device (not shown), which may be a system-integrated display device, as in a laptop computer, or an external display device attached via a display device connector.

In some embodiments, graphics pipeline 820 and media pipeline 830 are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for the Open Graphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute API, all from the Khronos Group. In some embodiments, support may also be provided for the Direct3D library from the Microsoft Corporation. In some embodiments, a combination of these libraries may be supported. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.

FIG. 12A is a block diagram illustrating a graphics processor command format 900 in accordance with some embodiments. FIG. 12B is a block diagram illustrating a graphics processor command sequence 910 in accordance with an embodiment. The solid-lined boxes in FIG. 12A illustrate the components that are generally included in a graphics command, while the dashed lines include components that are optional or that are only included in a subset of the graphics commands. The exemplary graphics processor command format 900 of FIG. 12A includes data fields to identify a target client 902 of the command, a command operation code (opcode) 904, and the relevant data 906 for the command. A sub-opcode 905 and a command size 908 are also included in some commands.

In some embodiments, client 902 specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and to route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode 904 and, if present, the sub-opcode 905 to determine the operation to perform. The client unit performs the command using the information in data field 906. For some commands, an explicit command size 908 is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments, commands are aligned via multiples of a double word.
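The header layout and routing just described can be sketched as a plain data structure plus a dispatch step. This C++ sketch is a hypothetical illustration of the idea behind format 900, not the actual encoding; field widths, client codes, and names are assumptions.

    #include <cstdint>

    // Hypothetical in-memory view of a command header modeled on format 900:
    // a client id, an opcode, an optional sub-opcode, and a size in double words.
    struct CommandHeader {
        uint8_t  client;      // selects the client unit (render, 2D, 3D, media...)
        uint8_t  opcode;      // operation to perform (904)
        uint8_t  sub_opcode;  // refinement of the operation (905), if any
        uint8_t  size_dwords; // explicit command size (908), in double words
    };

    enum class ClientUnit { MemoryInterface, Render, TwoD, ThreeD, Media, Invalid };

    // A parser-style routing step: examine the client field and pick the unit
    // whose pipeline should process the command data.
    ClientUnit route(const CommandHeader& h) {
        switch (h.client) {
            case 0: return ClientUnit::MemoryInterface;
            case 1: return ClientUnit::Render;
            case 2: return ClientUnit::TwoD;
            case 3: return ClientUnit::ThreeD;
            case 4: return ClientUnit::Media;
            default: return ClientUnit::Invalid; // unknown client: reject command
        }
    }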
The flow diagram in FIG. 12B shows an exemplary graphics processor command sequence 910. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only, as embodiments are not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands in an at least partially concurrent manner.

In some embodiments, graphics processor command sequence 910 may begin with a pipeline flush command 912 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, 3D pipeline 922 and media pipeline 924 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked 'dirty' can be flushed to memory. In some embodiments, pipeline flush command 912 can be used for pipeline synchronization or before placing the graphics processor into a low-power state.

In some embodiments, a pipeline select command 913 is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, a pipeline select command 913 is required only once within an execution context before issuing pipeline commands, unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command 912 is required immediately before a pipeline switch via the pipeline select command 913.

In some embodiments, a pipeline control command 914 configures a graphics pipeline for operation and is used to program 3D pipeline 922 and media pipeline 924. In some embodiments, pipeline control command 914 configures the pipeline state for the active pipeline.
In one embodiment, pipeline control command 914 is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.

In some embodiments, a return buffer state command 916 is used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross-thread communication. In some embodiments, the return buffer state 916 includes selecting the size and number of return buffers to use for a set of pipeline operations.

The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination 920, the command sequence is tailored to the 3D pipeline 922 beginning with a 3D pipeline state 930, or to the media pipeline 924 beginning with a media pipeline state 940.

The commands for configuring the 3D pipeline state 930 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, 3D pipeline state 930 commands are also able to selectively disable or bypass certain pipeline elements, if those elements will not be used.

In some embodiments, a 3D primitive 932 command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 932 command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 932 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, the 3D primitive 932 command is used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, 3D pipeline 922 dispatches shader execution threads to the graphics processor execution units.

In some embodiments, 3D pipeline 922 is triggered via an execute 934 command or event. In some embodiments, a register write triggers command execution. In some embodiments, execution is triggered via a 'go' or 'kick' command in the command sequence. In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once the operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back-end operations may also be included for those operations.
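The ordering constraints above (flush before a pipeline switch, state before primitives, and an explicit execute trigger) suggest a builder that emits commands in a fixed order. The sketch below is purely illustrative; the enum values and function names are hypothetical stand-ins, not the actual command encodings.

    #include <vector>

    // Hypothetical emission order for a 3D batch modeled on sequence 910:
    // flush -> select pipeline -> pipeline control -> return buffer state ->
    // 3D pipeline state -> 3D primitive -> execute.
    enum class Cmd { PipelineFlush, PipelineSelect3D, PipelineControl,
                     ReturnBufferState, PipelineState3D, Primitive3D, Execute };

    std::vector<Cmd> build_3d_batch(bool switching_pipelines) {
        std::vector<Cmd> batch;
        if (switching_pipelines) {
            batch.push_back(Cmd::PipelineFlush);    // flush 912 before a switch
            batch.push_back(Cmd::PipelineSelect3D); // select 913, once per context
        }
        batch.push_back(Cmd::PipelineControl);      // control 914: configure pipeline
        batch.push_back(Cmd::ReturnBufferState);    // state 916: where results land
        batch.push_back(Cmd::PipelineState3D);      // state 930: vertex/depth/etc.
        batch.push_back(Cmd::Primitive3D);          // primitive 932: submit geometry
        batch.push_back(Cmd::Execute);              // execute 934: 'go'/'kick' trigger
        return batch;
    }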
In some embodiments, graphics processor command sequence 910 follows the media pipeline 924 path when performing media operations. In general, the specific use and manner of programming for media pipeline 924 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. In some embodiments, the media pipeline can also be bypassed, and media decode can be performed in whole or in part using resources provided by one or more general-purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processor unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives.

In some embodiments, media pipeline 924 is configured in a similar manner as 3D pipeline 922. A set of commands to configure the media pipeline state 940 are dispatched or placed into a command queue before the media object commands 942. In some embodiments, media pipeline state commands 940 include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as an encode or decode format. In some embodiments, media pipeline state commands 940 also support the use of one or more pointers to 'indirect' state elements containing a batch of state settings.

In some embodiments, media object commands 942 supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. In some embodiments, all media pipeline states must be valid before issuing a media object command 942. Once the pipeline state is configured and the media object commands 942 are queued, media pipeline 924 is triggered via an execute command 944 or an equivalent execute event (e.g., a register write). Output from media pipeline 924 may then be post-processed by operations provided by 3D pipeline 922 or media pipeline 924. In some embodiments, GPGPU operations are configured and executed in a similar manner as media operations.

FIG. 13 illustrates an exemplary graphics software architecture for a data processing system 1000 in accordance with some embodiments. In some embodiments, the software architecture includes a 3D graphics application 1010, an operating system 1020, and at least one processor 1030. In some embodiments, processor 1030 includes a graphics processor 1032 and one or more general-purpose processor core(s) 1034. Graphics application 1010 and operating system 1020 each execute in the system memory 1050 of the data processing system.

In some embodiments, 3D graphics application 1010 contains one or more shader programs including shader instructions 1012. The shader language instructions may be in a high-level shader language, such as the High Level Shader Language (HLSL) or the OpenGL Shader Language (GLSL). The application also includes executable instructions 1014 in a machine language suitable for execution by general-purpose processor core 1034. The application also includes graphics objects 1016 defined by vertex data.

In some embodiments, operating system 1020 is a Microsoft Windows operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open-source UNIX-like operating system using a variant of the Linux kernel. Operating system 1020 can support a graphics API 1022, such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, operating system 1020 uses a front-end shader compiler 1024 to compile any shader instructions 1012 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation, or the application may perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application 1010. In some embodiments, shader instructions 1012 are provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API.

In some embodiments, user-mode graphics driver 1026 contains a back-end shader compiler 1027 to convert shader instructions 1012 into a hardware-specific representation. When the OpenGL API is in use, shader instructions 1012 in the GLSL high-level language are passed to a user-mode graphics driver 1026 for compilation. In some embodiments, user-mode graphics driver 1026 uses operating system kernel-mode functions 1028 to communicate with a kernel-mode graphics driver 1029. In some embodiments, kernel-mode graphics driver 1029 communicates with graphics processor 1032 to dispatch commands and instructions.
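The two-stage compilation path described above (a front-end compile from a high-level language to an intermediate form, then a back-end compile in the driver to a hardware-specific representation) can be sketched as a simple pipeline. All types and function names below are hypothetical placeholders, not driver APIs.

    #include <string>

    // Hypothetical two-stage shader compilation modeled on the flow above:
    // HLSL/GLSL source -> intermediate representation (front end, at the
    // API/runtime level) -> hardware-specific binary (back end, in the driver).
    struct IntermediateShader { std::string ir; };   // e.g., a SPIR-like form
    struct HardwareShader     { std::string isa; };  // hardware-specific code

    IntermediateShader front_end_compile(const std::string& source) {
        return {"ir:" + source};     // stand-in for real lowering to an IR
    }

    HardwareShader back_end_compile(const IntermediateShader& shader) {
        return {"isa:" + shader.ir}; // stand-in for the driver's code generation
    }

    HardwareShader compile_for_device(const std::string& hlsl_or_glsl) {
        // JIT path: both stages run at application run time; a pre-compilation
        // path would instead persist the IntermediateShader with the application.
        return back_end_compile(front_end_compile(hlsl_or_glsl));
    }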
One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as 'IP cores', are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs the operations described in association with any of the embodiments described herein.

FIG. 14 is a block diagram illustrating an IP core development system 1100 that may be used to manufacture an integrated circuit to perform operations in accordance with an embodiment. IP core development system 1100 may be used to generate modular, reusable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SoC integrated circuit). A design facility 1130 can generate a software simulation 1110 of an IP core design in a high-level programming language (e.g., C/C++). Software simulation 1110 can be used to design, test, and verify the behavior of the IP core using a simulation model 1112. Simulation model 1112 may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design 1115 can then be created or synthesized from simulation model 1112. RTL design 1115 is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design 1115, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized.
Thus, the particular details of the initial design and simulation may vary.

RTL design 1115 or an equivalent may be further synthesized by the design facility into a hardware model 1120, which may be in a hardware description language (HDL) or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a third-party fabrication facility 1165 using non-volatile memory 1140 (e.g., a hard disk, flash memory, or any other non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection 1150 or a wireless connection 1160. Fabrication facility 1165 may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.

FIGS. 15-17 illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, in accordance with various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores.

FIG. 15 is a block diagram illustrating an exemplary system-on-chip integrated circuit 1200 that may be fabricated using one or more IP cores, in accordance with an embodiment. Exemplary integrated circuit 1200 includes one or more application processor(s) 1205 (e.g., CPUs) and at least one graphics processor 1210, and may additionally include an image processor 1215 and/or a video processor 1220, any of which may be a modular IP core from the same or multiple different design facilities. Integrated circuit 1200 includes peripheral or bus logic including a USB controller 1225, a UART controller 1230, an SPI/SDIO controller 1235, and an I2S/I2C controller 1240. Additionally, the integrated circuit can include a display device 1245 coupled to one or more of a high-definition multimedia interface (HDMI) controller 1250 and a mobile industry processor interface (MIPI) display interface 1255. Storage may be provided by a flash memory subsystem 1260, including flash memory and a flash memory controller. A memory interface may be provided via a memory controller 1265 for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 1270.

FIG. 16 is a block diagram illustrating an exemplary graphics processor 1310 of a system-on-chip integrated circuit that may be fabricated using one or more IP cores, in accordance with an embodiment. Graphics processor 1310 can be a variant of the graphics processor 1210 of FIG. 15. Graphics processor 1310 includes a vertex processor 1305 and one or more fragment processor(s) 1315A-1315N (e.g., 1315A, 1315B, 1315C, 1315D, through 1315N-1 and 1315N). Graphics processor 1310 can execute different shader programs via separate logic, such that vertex processor 1305 is optimized to execute operations for vertex shader programs, while the one or more fragment processor(s) 1315A-1315N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. Vertex processor 1305 performs the vertex processing stage of the 3D graphics pipeline and generates primitives and vertex data.
Fragment processor(s) 1315A-1315N use the primitive and vertex data generated by vertex processor 1305 to produce a frame buffer that is displayed on a display device. In one embodiment, fragment processor(s) 1315A-1315N are optimized to execute fragment shader programs as provided for in the OpenGL API, which may be used to perform similar operations as the pixel shader programs provided for in the Direct 3D API.

Graphics processor 1310 additionally includes one or more memory management units (MMUs) 1320A-1320B, cache(s) 1325A-1325B, and circuit interconnect(s) 1330A-1330B. The one or more MMU(s) 1320A-1320B provide virtual-to-physical address mapping for graphics processor 1310, including for vertex processor 1305 and/or fragment processor(s) 1315A-1315N, which may reference vertex or image/texture data stored in memory in addition to vertex or image/texture data stored in the one or more cache(s) 1325A-1325B. In one embodiment, the one or more MMU(s) 1320A-1320B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s) 1205, image processor 1215, and/or video processor 1220 of FIG. 15, such that each processor 1205-1220 can participate in a shared or unified virtual memory system. According to embodiments, the one or more circuit interconnect(s) 1330A-1330B enable graphics processor 1310 to interface with other IP cores within the SoC, either via an internal bus of the SoC or via a direct connection.

FIG. 17 is a block diagram illustrating an additional exemplary graphics processor 1410 of a system-on-chip integrated circuit that may be fabricated using one or more IP cores, in accordance with an embodiment. Graphics processor 1410 can be a variant of the graphics processor 1210 of FIG. 15. Graphics processor 1410 includes the one or more MMU(s) 1320A-1320B, cache(s) 1325A-1325B, and circuit interconnect(s) 1330A-1330B of the graphics processor 1310 of FIG. 16.

Graphics processor 1410 includes one or more shader core(s) 1415A-1415N (e.g., 1415A, 1415B, 1415C, 1415D, 1415E, 1415F, through 1415N-1 and 1415N), which provide a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. The exact number of shader cores present can vary among embodiments and implementations. Additionally, graphics processor 1410 includes an inter-core task manager 1405, which acts as a thread dispatcher to dispatch execution threads to the one or more shader core(s) 1415A-1415N, and a tiling unit 1418 to accelerate tiling operations for tile-based rendering, in which the rendering operations for a scene are subdivided in image space, for example, to exploit local spatial coherence within a scene or to optimize the use of internal caches.
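Tile-based rendering of this kind is commonly implemented by binning primitives into screen-space tiles and then rendering one tile at a time. The sketch below is a generic, hypothetical illustration of such binning, not the operation of tiling unit 1418; all names are assumptions.

    #include <cstddef>
    #include <vector>

    // Hypothetical screen-space binning for tile-based rendering: assign each
    // primitive's bounding box to the tiles it overlaps, so each tile can
    // later be rendered independently out of cache-friendly local data.
    struct Box { int x0, y0, x1, y1; }; // pixel-space bounds, pre-clamped to
                                        // the viewport (no negative coords)

    std::vector<std::vector<std::size_t>> bin(const std::vector<Box>& prims,
                                              int width, int height, int tile) {
        const int tiles_x = (width + tile - 1) / tile;
        const int tiles_y = (height + tile - 1) / tile;
        std::vector<std::vector<std::size_t>> bins(tiles_x * tiles_y);
        for (std::size_t i = 0; i < prims.size(); ++i) {
            const Box& b = prims[i];
            for (int ty = b.y0 / tile; ty <= b.y1 / tile && ty < tiles_y; ++ty)
                for (int tx = b.x0 / tile; tx <= b.x1 / tile && tx < tiles_x; ++tx)
                    bins[ty * tiles_x + tx].push_back(i); // primitive touches tile
        }
        return bins; // per-tile primitive index lists, rendered tile by tile
    }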
The method may also include providing a fiber stream and, once the second fiber terminates, backfilling with another fiber. The method may also include performing a ray traversal of a bounding volume hierarchy with each of the fibers. The method may also include assigning different channels of a vector register to different fibers. The method may also include assigning different channels of a vector register to different rays. The method may also include dividing a vector register into an upper half and a lower half, and storing one ray in one half and another ray in the other half. The method may also include using an N/2-wide bounding volume hierarchy. The method may also include performing a box test on an active ray in a register half holding an active ray. The method may also include broadcasting to replicate a ray across the channels. The method may also include traversing different hierarchical data structures using the first and second fibers. In another example embodiment, there may be one or more non-transitory computer readable media storing instructions to perform a sequence comprising: prefetching data for a first fiber; initiating traversal of a hierarchical data structure using the first fiber while prefetching data for a second fiber and deferring traversal for the second fiber; switching context to the second fiber; and traversing the hierarchical data structure using the second fiber while prefetching data for another fiber. The media may further store instructions to perform a sequence including providing a fiber stream and, once the second fiber terminates, backfilling with another fiber. The media may further store instructions to perform a sequence including performing a ray traversal of a bounding volume hierarchy with each fiber. The media may further store instructions to perform a sequence including assigning different channels of a vector register to different fibers. The media may further store instructions to perform a sequence including assigning different channels of a vector register to different rays. The media may further store instructions to perform a sequence including dividing a vector register into an upper half and a lower half, and storing one ray in one half and another ray in the other half. The media may further store instructions to perform a sequence including using an N/2-wide bounding volume hierarchy. The media may further store instructions to perform a sequence including performing a box test on an active ray in a register half holding an active ray. The media may further store instructions to perform a sequence including broadcasting to replicate a ray across the channels. The media may further store instructions to perform a sequence including traversing different hierarchical data structures with the first and second fibers.
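The register-packing features recited in these embodiments (one ray per register half, N/2-wide nodes, broadcasting a ray across channels, box tests on halves holding active rays) can be pictured with a small NumPy sketch. This is an editorial illustration only: the layout is assumed, and the one-dimensional containment comparison merely stands in for a full ray/box slab test.

```python
# Sketch of packing two rays into one SIMD register, with an N/2-wide BVH
# node (here N = 8 channels, so each node stores 4 child boxes). The layout
# is an illustrative assumption, not the patent's implementation.
import numpy as np

N = 8                                   # vector width ("channels")
half = N // 2

# Four child-box slabs per node, one per channel of each register half
# (x-axis only, for brevity; y/z would be handled alike).
box_min_x = np.array([0., 1., 2., 3.])
box_max_x = np.array([1., 2., 3., 4.])

# Broadcast replicates each ray across the channels of its half.
ray_ox = np.concatenate([np.full(half, 0.5),   # ray 0 -> lower half
                         np.full(half, 2.5)])  # ray 1 -> upper half
node_min = np.tile(box_min_x, 2)        # same node tested against both rays
node_max = np.tile(box_max_x, 2)

active = np.ones(N, dtype=bool)         # channels holding active rays
# Point-in-slab stand-in for the real ray/box test, masked to active rays.
hits = active & (ray_ox >= node_min) & (ray_ox <= node_max)

print(hits[:half])   # box-test results for ray 0
print(hits[half:])   # box-test results for ray 1
```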
In another example embodiment, there may be an apparatus comprising: a processor configured to prefetch data for a first fiber, initiate traversal of a hierarchical data structure using the first fiber while prefetching data for a second fiber and deferring traversal for the second fiber, switch context to the second fiber, and traverse the hierarchical data structure using the second fiber while prefetching data for another fiber; and a memory coupled to the processor. The apparatus may also include the processor providing a fiber stream and, once the second fiber terminates, backfilling with another fiber. The apparatus may also include the processor performing a ray traversal of a bounding volume hierarchy with each of the fibers. The apparatus may also include the processor assigning different channels of a vector register to different fibers. The apparatus may also include the processor assigning different channels of a vector register to different rays. The apparatus may also include the processor dividing a vector register into an upper half and a lower half, and storing one ray in one half and another ray in the other half. The apparatus may also include the processor using an N/2-wide bounding volume hierarchy. The apparatus may also include the processor performing a box test on an active ray in a register half holding an active ray. The apparatus may include the processor broadcasting to replicate a ray across the channels. The apparatus may include the processor traversing different hierarchical data structures using the first and second fibers. The graphics processing techniques described herein can be implemented in a variety of hardware architectures. For example, graphics functionality can be integrated into a chipset. Alternatively, a discrete graphics processor can be used. As still another embodiment, the graphics functions may be implemented by a general-purpose processor, including a multi-core processor. References in this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the present disclosure. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be implemented in other suitable forms different from the specific embodiments shown, and all such forms may be encompassed within the claims of the present application. While a limited number of embodiments have been described, those skilled in the art will recognize many modifications and variations. The appended claims are intended to cover all such modifications and variations. |
Techniques are disclosed for forming transistor devices having reduced parasitic contact resistance relative to conventional devices. The techniques can be implemented, for example, using a standard contact stack such as a series of metals on, for example, silicon or silicon germanium (SiGe) source/drain regions. In accordance with one example such embodiment, an intermediate boron doped germanium layer is provided between the source/drain and contact metals to significantly reduce contact resistance. Numerous transistor configurations and suitable fabrication processes will be apparent in light of this disclosure, including both planar and non-planar transistor structures (e.g., FinFETs), as well as strained and unstrained channel structures. Graded buffering can be used to reduce misfit dislocations. The techniques are particularly well-suited for implementing p-type devices, but can be used for n-type devices if so desired. |
CLAIMS What is claimed is: 1. A transistor device, comprising: a substrate having a channel region; a gate electrode above the channel region, wherein a gate dielectric layer is provided between the gate electrode and the channel region; p-type and n-type source/drain regions in the substrate and adjacent to the channel region; a boron doped germanium layer on at least a portion of the p-type source/drain region, and comprising a germanium concentration in excess of 90 atomic % and a boron concentration in excess of 1E20 cm-3; and a metal-germanide source/drain contact on the boron doped germanium layer. 2. The device of claim 1 wherein the device is one of a planar or FinFET transistor. 3. The device of claims 1 or 2 wherein the boron doped germanium layer is only on p-type source/drain regions of the device. 4. The device of any of the preceding claims further comprising an interlayer dielectric. 5. The device of any of the preceding claims, further comprising at least one of: a graded buffer between the substrate and at least one of the p-type and n-type source/drain regions; and a graded buffer between at least one of the p-type and n-type source/drain regions and the boron doped germanium layer. 6. The device of claim 5 wherein the graded buffer between at least one of the p-type and n-type source/drain regions and the boron doped germanium layer has a germanium concentration that is graded from a base level concentration compatible with the at least one of the p-type and n-type source/drain regions to a high concentration in excess of 95 atomic %. 7. The device of claim 6 wherein the high concentration reflects pure germanium. 8. The device of claims 5, 6, or 7 wherein the graded buffer between the at least one of the p-type and n-type source/drain regions and the boron doped germanium layer has a boron concentration that is graded from a base level concentration compatible with the at least one of the p-type and n-type source/drain regions to a high concentration in excess of 1E20 cm-3. 9. The device of any of the preceding claims wherein the boron doped germanium layer has a graded concentration of at least one of germanium and boron. 10. The device of any of the preceding claims wherein the p-type and n-type source/drain regions comprise silicon germanium having a germanium concentration that is graded from a base level concentration compatible with the substrate to a high concentration in excess of 50 atomic %, and the boron doped germanium layer has a germanium concentration in excess of 95 atomic %. 11. The device of any of the preceding claims wherein the p-type and n-type source/drain regions comprise boron doped silicon germanium having a boron concentration that is graded from a base level concentration compatible with the substrate to a high concentration in excess of 1E20 cm-3. 12. The device of any of claims 1 through 4 wherein the p-type and n-type source/drain regions comprise silicon or silicon germanium, and the device further comprises a buffer between at least one of the p-type and n-type source/drain regions and the boron doped germanium layer, the buffer having a germanium concentration that is graded from a base level concentration compatible with the at least one of the p-type and n-type source/drain regions to a high concentration in excess of 50 atomic %, and a boron concentration that is graded from a base level concentration compatible with the at least one of the p-type and n-type source/drain regions to a high concentration in excess of 1E20 cm-3. 13.
The device of any of the preceding claims wherein the boron doped germanium layer comprises a germanium concentration in excess of 98 atomic %, and a boron concentration in excess of 2E20 cm-3. 14. An electronic device comprising: a printed circuit board having one or more integrated circuits, wherein at least one of the one or more integrated circuits comprises one or more transistor devices as defined in any of the preceding claims. 15. The electronic device of claim 14 wherein the one or more integrated circuits includes at least one of a communication chip and/or a processor, and at least one of the communication chip and/or processor comprises the one or more transistor devices. 16. The electronic device of claims 14 or 15 wherein the device is a computing device. 17. A transistor device, comprising: a substrate having a channel region; a gate electrode above the channel region, wherein a gate dielectric layer is provided between the gate electrode and the channel region and spacers are provided on sides of the gate electrode; p-type and n-type source/drain regions in the substrate and adjacent to the channel region, each of the p-type and n-type source/drain regions including a tip region that extends under the gate dielectric layer and/or a corresponding one of the spacers; a boron doped germanium layer on at least a portion of the p-type source/drain region, and comprising a germanium concentration in excess of 95 atomic % and a boron concentration in excess of 2E20 cm-3; and a metal-germanide source/drain contact on the boron doped germanium layer; wherein the device is one of a planar or FinFET transistor. 18. The device of claim 17, the device further comprising: a buffer between at least one of the p-type and n-type source/drain regions and the boron doped germanium layer, wherein the buffer has a germanium concentration that is graded from a base level concentration compatible with the at least one of the p-type and n-type source/drain regions to a high concentration in excess of 95 atomic %, and a boron concentration that is graded from a base level concentration compatible with the at least one of the p-type and n-type source/drain regions to a high concentration in excess of 2E20 cm-3. 19. The device of claim 17 wherein the boron doped germanium layer has a graded concentration of at least one of germanium and boron. 20. The device of claim 17 wherein the p-type and n-type source/drain regions comprise silicon germanium having a germanium concentration that is graded from a base level concentration compatible with the substrate to a high concentration in excess of 50 atomic %, and the boron doped germanium layer has a germanium concentration in excess of 98 atomic %. 21. The device of claim 20 wherein the p-type and n-type source/drain regions have a boron concentration that is graded from a base level concentration compatible with the substrate to a high concentration in excess of 2E20 cm-3. 22.
The device of claim 17 wherein the p-type and n-type source/drain regions comprise silicon germanium having a fixed germanium concentration, and the device further comprises a buffer between the p-type and n-type source/drain regions and the boron doped germanium layer, the buffer having a germanium concentration that is graded from a base level concentration compatible with the p-type and n-type source/drain regions to a high concentration in excess of 50 atomic %, and a boron concentration that is graded from a base level concentration compatible with the p-type and n-type source/drain regions to a high concentration in excess of 2E20 cm-3, the buffer having a thickness of less than 100 Angstroms. 23. A method for forming a transistor device, comprising: providing a substrate having a channel region; providing a gate electrode above the channel region, wherein a gate dielectric layer is provided between the gate electrode and the channel region; providing p-type and n-type source/drain regions in the substrate and adjacent to the channel region; providing a boron doped germanium layer on at least a portion of the p-type source/drain region, the boron doped germanium layer comprising a germanium concentration in excess of 90 atomic % and a boron concentration in excess of 1E20 cm-3; and providing metal-germanide source/drain contacts on the boron doped germanium layer. |
SELECTIVE GERMANIUM P-CONTACT METALIZATION THROUGH TRENCH Inventors: Glenn A. Glass Anand S. Murthy Tahir Ghani RELATED APPLICATION [0001] This application is a continuation-in-part of U.S. Application No. 12/975,278 filed December 21, 2010. BACKGROUND [0002] Increased performance of circuit devices including transistors, diodes, resistors, capacitors, and other passive and active electronic devices formed on a semiconductor substrate is typically a major factor considered during design, manufacture, and operation of those devices. For example, during design and manufacture or forming of metal oxide semiconductor (MOS) transistor semiconductor devices, such as those used in a complementary metal oxide semiconductor (CMOS), it is often desired to minimize the parasitic resistance associated with contacts, otherwise known as the external resistance Rext. Decreased Rext enables higher current from an equal transistor design. BRIEF DESCRIPTION OF THE DRAWINGS [0003] Figure 1A illustrates a MOS device configured with a boron doped germanium layer between the source/drain layer and contact metals, in accordance with an embodiment of the present invention. [0004] Figure 1B illustrates a MOS device configured with a boron doped germanium layer between the source/drain layer and contact metals, in accordance with another embodiment of the present invention. [0005] Figure 1C illustrates a MOS device configured with a boron doped germanium layer between the source/drain layer and contact metals, in accordance with another embodiment of the present invention. [0006] Figure 2 is a method for forming a transistor structure with low contact resistance in accordance with an embodiment of the present invention. [0007] Figures 3A to 3I illustrate structures that are formed when carrying out the method of Figure 2, in accordance with various embodiments of the present invention. [0008] Figure 4 is a method for forming a transistor structure with low contact resistance in accordance with another embodiment of the present invention. [0009] Figures 5A to 5F illustrate structures that are formed when carrying out the method of Figure 4, in accordance with various embodiments of the present invention. [0010] Figure 6 shows a perspective view of a FinFET transistor architecture, configured in accordance with one embodiment of the present invention. [0011] Figure 7 shows a plot of a split lot showing contact resistance for transistor structures configured in accordance with embodiments of the present invention and standard transistor structures configured with no cap. [0012] Figure 8 illustrates a computing system implemented with one or more transistor structures in accordance with an example embodiment of the present invention. [0013] As will be appreciated, the figures are not necessarily drawn to scale or intended to limit the claimed invention to the specific configurations shown. For instance, while some figures generally indicate straight lines, right angles, and smooth surfaces, an actual implementation of a transistor structure may have less than perfect straight lines and right angles, and some features may have surface topology or otherwise be non-smooth, given real world limitations of the processing equipment and techniques used. In short, the figures are provided merely to show example structures. DETAILED DESCRIPTION [0014] Techniques are disclosed for forming transistor devices having reduced parasitic contact resistance relative to conventional devices.
The techniques can be implemented, for example, using a standard contact stack such as a series of metals on silicon or silicon germanium (SiGe) source/drain regions. In accordance with one example such embodiment, an intermediate boron doped germanium layer is provided between the source/drain and contact metals to significantly reduce contact resistance. Numerous transistor configurations and suitable fabrication processes will be apparent in light of this disclosure, including both planar and non-planar transistor structures (e.g., FinFETs), as well as strained and unstrained channel structures. The techniques are particularly well-suited for implementing p-type devices, but can be used for n-type devices if so desired. General Overview [0015] As previously explained, increased drive current in transistors can be achieved by reducing device resistance. Contact resistance is one component of a device's overall resistance. A standard transistor contact stack typically includes, for example, a silicon or SiGe source/drain layer, a nickel silicide layer, a titanium nitride adhesion layer, and a tungsten contact pad. In such configurations, the contact resistance is effectively limited by the silicon or SiGe valence band alignment to the pinning level in the metal. Typically, using industry standard silicides such as nickel (or other suitable silicides, such as titanium, cobalt, or platinum), this results in a band misalignment of about 0.5 eV. Thus, and in accordance with an example embodiment of the present invention, an intermediate boron doped germanium layer is provided between the source/drain and contact metals to significantly reduce the band misalignment value and contact resistance. [0016] In one specific example embodiment, contacts configured with the intermediate boron doped germanium layer exhibit a reduction in the band misalignment value to less than 0.2 eV and a corresponding reduction in contact resistance of about 3X (relative to a conventional contact stack similarly configured, but without the intermediate boron doped germanium layer between the source/drain regions and contact metal). A transmission electron microscopy (TEM) cross section or secondary ion mass spectrometry (SIMS) profile can be used to show the germanium concentration throughout the vertical stack of the film structure, as profiles of epitaxial alloys of silicon and SiGe can readily be distinguished from germanium concentration profiles. [0017] Thus, transistor structures configured in accordance with embodiments of the present invention provide an improvement over conventional structures with respect to lower contact resistance. Some such embodiments effectively marry the superior contact properties of germanium with the superior semiconductor transistor properties of Si and SiGe to provide next generation low resistance contacts. Selectivity can be achieved in various ways. In one embodiment, for instance, selectivity to n-type MOS (NMOS) source/drain locations can be provided by having the NMOS regions masked off during p-type MOS device (PMOS) deposition. In another embodiment, both NMOS and PMOS regions can be open simultaneously, but deposition only occurs in the PMOS regions by way of a trench. An advantage here is that the low melting point germanium is absent during the relatively high thermal budget steps typical in the front end of a MOS flow.
After trench processing and germanium deposition, and in accordance with one specific such example embodiment, the structure sees no temperatures above 500°C, and therefore the germanium overlayer is not in jeopardy of melting and/or otherwise degrading performance. As will be further appreciated in light of this disclosure, selectivity may include natural selectivity. For instance, while boron doped germanium grows on p-type SiGe (or silicon) source/drain regions, it does not grow on dielectric surfaces such as silicon dioxide (SiO2) or silicon nitride (SiN); nor does it grow on, for instance, exposed heavily phosphorous doped silicon in n-type regions. [0018] Numerous transistor configurations and suitable fabrication processes will be apparent in light of this disclosure, including both planar and non-planar transistor structures (e.g., such as double-gate and trigate transistor structures), as well as strained and unstrained channel structures. Any number of such structural features and material systems can be used in conjunction with a germanium overlayer as described herein. The transistor structure may include p-type source/drain regions, n-type source/drain regions, or both n-type and p-type source/drain regions. In some example embodiments, the transistor structure includes dopant-implanted source/drain regions or epitaxial (or poly) replacement source/drain regions of silicon, SiGe alloys, or nominally pure germanium films (e.g., such as those with less than 10% silicon) in a MOS structure. In any such implementations, an overlayer or cap of boron doped germanium can be formed directly over the source/drain regions, in accordance with an embodiment of the present invention. A contact metal (or series of metals) can then be deposited and a subsequent reaction (annealing) can be carried out to form metal-germanide source and drain contacts. As will be appreciated, the contact may be implemented as a stack including one or more of a silicide layer, an adhesion layer, and/or a metal pad layer. The boron doped germanium overlayer can be formed directly over other parts of the transistor structure as well, such as the poly gate and/or grounding tap regions, if so desired. [0019] As is known, a MOS transistor may include source and drain tip regions that are designed to decrease the overall resistance of the transistor while improving short channel effects (SCE). Conventionally, these tip regions are portions of the substrate where a dopant such as boron or carbon is implanted using an implant and diffusion technique. The source tip region is formed in the area between the source region and the channel region. Likewise, the drain tip region is formed in the area between the drain region and the channel region. Some embodiments of the present invention are configured with such conventionally formed tip regions. In other example embodiments, fabrication techniques are employed to extend self-aligned epitaxial tip (SET) transistors to achieve very near to the theoretical limit of uniaxial strain. This can be accomplished, for instance, by selective epitaxial deposition in the source and drain regions as well as their corresponding tip regions to form a bilayer construction of boron doped silicon or SiGe (for the source/drain regions) capped with an overlayer of a boron doped germanium layer in the source/drain and respective tip regions.
The germanium and boron concentrations can vary, but in some example embodiments, the germanium concentration is in the range of 20 atomic % to 100 atomic %, and the boron concentration is in the range of 1E20 cm-3 to 2E21 cm-3 (e.g., germanium concentration in excess of 50 atomic % and boron concentration in excess of 2E20 cm-3). Note that the boron doped germanium layer may be provided in the tip regions, but in other embodiments is just provided over the source/drain regions (and not in the tip regions). [0020] In still other example embodiments, an optional thin buffer with graded germanium concentration and/or boron concentration can be used as an interfacial layer between the underlying substrate and the source/drain layer (e.g., silicon or SiGe). Likewise, a thin buffer with graded germanium concentration and/or boron concentration can be used as an interfacial layer between the source/drain layer and the boron doped germanium cap. In still other embodiments, the boron doped germanium overlayer or the source/drain layer itself can have a graded germanium and/or boron concentration in a similar fashion to the optional buffers. In any such case, since boron diffusion is suppressed in germanium (the higher the concentration, the greater the relative suppression), a high concentration of boron can be doped into the germanium, which in turn results in lower parasitic resistance without degrading tip abruptness. In addition, the contact resistance is reduced from lowering of the Schottky-barrier height. Architecture and Methodology [0021] Figure 1A illustrates a MOS device 100A formed on a substrate 102 and configured with a boron doped germanium layer between the source/drain layer and contact metals, in accordance with an embodiment of the present invention. In particular, boron doped germanium layer 117 is provided between the source layer 110 and contact metals 125, and boron doped germanium layer 119 is provided between the drain layer 112 and contact metals 127. The source region 110 and the drain region 112 can be formed using any number of conventional techniques. In this example embodiment, for instance, the source region 110 and the drain region 112 are formed by etching the substrate and then epitaxially depositing a silicon or silicon germanium material (e.g., with a germanium concentration in the range of, for instance, 10 to 70 atomic %). [0022] A gate stack 122 is formed over a channel region 120 of the transistor 100A. As can further be seen, the gate stack 122 includes a gate dielectric layer 106 and a gate electrode 104, and spacers 108 are formed adjacent to the gate stack 122. In some example cases, and depending on the technology node, the spacers 108 create a distance of about 10 to 20 nanometers (nm) between the edges of the gate dielectric layer 106 and the edges of each of the source and drain regions 110/112. It is within this space that a source tip region 110A and a drain tip region 112A can be formed. In this example embodiment, the tip regions 110A/112A are formed via a typical implantation-diffusion based process, and overlap the spacers 108 and may also overlap or underdiffuse the gate dielectric layer 106 by a distance of, for instance, less than 10 nm. In forming the implantation-diffusion based tip regions 110A/112A, a dopant such as boron or carbon is implanted into the source region 110 and the drain region 112. The transistor 100A is then annealed to cause the dopant to diffuse towards the channel region 120.
Angled ion implantation techniques may also be used to further implant dopants into those areas between the gate dielectric layer 106 and the source/drain regions 110/112. Such implantation-diffusion-based tip formation processes generally do not induce a strain on the channel region. [0023] In any case, and as will be appreciated in light of this disclosure, whether a transistor structure has a strained or unstrained channel, or source-drain tip regions or no source-drain tip regions, is not particularly relevant to various embodiments of the present invention, and such embodiments are not intended to be limited to any particular such structural features. Rather, any number of transistor structures and types can benefit from employing a boron doped germanium overlayer as described herein. The techniques provided herein are compatible, for instance, with conventional dopant-implanted silicon, raised source/drain, strained SiGe (or other suitable materials), and any deposited epitaxial tip (sometimes referred to as source-drain extensions) that extends below the gate electrode dielectric or is spaced away from the vertical line defined by the gate electrode dielectric. [0024] The germanium overlayer 117/119 is generally provided after formation of the source/drain regions 110/112 and prior to formation of the contacts 125/127. The thickness of this overlayer 117/119 can vary from one embodiment to the next, but in one example embodiment is in the range of 50 to 250 Angstroms (Å). The boron concentration of overlayer 117/119 can also vary, but in one example embodiment is in the range of 1E20 cm-3 to 2E21 cm-3 (e.g., in excess of 2E20 cm-3). The overlayer 117/119 can be selectively deposited over the source/drain regions 110/112 (and/or other regions as desired, such as the poly gate or grounding tap regions). Any number of suitable deposition techniques can be used to provide the overlayer 117/119 (e.g., chemical vapor deposition, molecular beam epitaxy, etc.). In accordance with one example embodiment, the contact metals 125 and 127 each comprise a stack of a nickel silicide layer, a titanium nitride adhesion layer, and a tungsten contact pad, although any number of contact metal configurations can be used, as will be appreciated in light of this disclosure. Standard deposition techniques can be used in providing the contact metals 125/127. [0025] Figure 1B illustrates an example MOS device 100B formed on a substrate 102 and configured with a boron doped germanium layer 117/119 between the source/drain layer 110/112 and contact metals 125/127, in accordance with another embodiment of the present invention. This example configuration includes source and drain epitaxial tips (generally referred to herein as epi-tips). In more detail, the MOS transistor 100B uses an undercut etch to allow the source region 110 and the drain region 112 to extend below the spacers 108, and in some cases, below the gate dielectric layer 106. The portions of the source/drain regions 110/112 that extend below the spacers 108 (and possibly the gate dielectric layer 106) are generally referred to as the source epi-tip 110B and the drain epi-tip 112B, respectively. The source and drain epi-tips 110B/112B replace the implantation/diffusion based tip regions 110A/112A described with regard to Figure 1A.
In accordance with one embodiment, the source/drain regions 110/112 and the source/drain epi-tips 110B/112B can be formed, for example, by etching the substrate 102, which includes undercutting the spacers 108 (and possibly the gate dielectric layer 106), and then using selective epitaxial deposition to provide, for instance, an in situ doped silicon, germanium, or SiGe to fill the source/drain regions 110/112 and the source/drain epi-tips 110B/112B, as shown in Figure 1B. Note that the epitaxial fill may be raised relative to the surface of substrate 102, as further shown in Figure 1B, although non-raised configurations can be used as well. The germanium overlayer 117/119 and the contact metals 125/127 can be implemented, for instance, as previously described with respect to Figure 1A. [0026] Figure 1C illustrates a MOS device 100C formed on a substrate 102 and configured with boron doped germanium layers 117/119 between the respective source/drain layers 110/112 and contact metals 125/127, in accordance with another embodiment of the present invention. The source region 110 and the drain region 112 in this example embodiment are formed by implanting dopants such as boron into the substrate. The gate stack 122 is formed over a channel region 120 of the transistor 100C and in this example case does not include spacers 108. Nor does this example transistor structure include an undercut or tip regions like the embodiments shown in Figures 1A and 1B. The germanium overlayer 117/119 and the contact metals 125/127 can be implemented, for instance, as previously described with respect to Figure 1A. [0027] Numerous other variations and features can be implemented for transistor structures configured in accordance with an embodiment of the present invention. For example, a graded buffer may be used in one or more locations of the structure. For instance, the substrate 102 can be a silicon substrate, or a silicon film of a silicon on insulator (SOI) substrate, or a multi-layered substrate comprising silicon, silicon germanium, germanium, and/or III-V compound semiconductors. Thus, and by way of example, in an embodiment having a silicon or silicon germanium substrate 102 and an in situ boron doped SiGe fill in the source/drain regions 110/112 and the source/drain epi-tips 110B/112B, a buffer can be provided between the underlying substrate 102 and the source/drain material. In one such embodiment, the buffer can be a graded boron doped (or intrinsic) silicon germanium layer with the germanium concentration graded from a base level compatible with the underlying substrate up to 100 atomic % (or near 100 atomic %, such as in excess of 90 atomic % or 95 atomic % or 98 atomic %). The boron concentration within this buffer can be either fixed (e.g., at a high level) or graded, for instance, from a base concentration at or otherwise compatible with the underlying substrate up to a desired high concentration (e.g., in excess of 2E20 cm-3). Note that 'compatibility' as used herein does not necessitate an overlap in concentration levels (for instance, the germanium concentration of the underlying substrate can be 0 to 20 atomic % and the initial germanium concentration of the buffer can be 30 to 40 atomic %). In addition, as used herein, the term 'fixed' with respect to a concentration level is intended to indicate a relatively constant concentration level (e.g., the lowest concentration level in the layer is within 10% of the highest concentration level within that layer).
In a more general sense, a fixed concentration level is intended to indicate the lack of an intentionally graded concentration level. The thickness of the buffer can vary depending on factors such as the range of concentrations being buffered, but in some embodiments is in the range of 30 to 120 Å, such as 50 to 100 Å (e.g., 60 Å or 65 Å). As will be further appreciated in light of this disclosure, such a graded buffer beneficially lowers the Schottky-barrier height. [0028] Alternatively, rather than using a thin buffer between the substrate 102 and the source/drain regions 110/112 and the source/drain epi-tips 110B/112B, the source/drain material itself can be graded in a similar fashion. For example, and in accordance with one example embodiment, boron doped SiGe source/drain regions 110/112 and source/drain epi-tips 110B/112B can be configured with a germanium concentration graded from a base level concentration compatible with the underlying substrate (e.g., in the range of 30 to 70 atomic %) up to 100 atomic %. In some such embodiments, the boron concentration within this boron doped germanium layer can range, for example, from a base concentration at or otherwise compatible with the underlying substrate up to a desired high concentration (e.g., in excess of 2E20 cm-3). [0029] In other embodiments, a buffer can be provided between the source/drain material and the boron doped germanium overlayer 117/119. In one such embodiment, the source/drain material is a boron doped SiGe layer having a fixed concentration of germanium (e.g., in the range of 30 to 70 atomic %) and the buffer can be a thin SiGe layer (e.g., 30 to 120 Å, such as 50 to 100 Å) having a germanium concentration graded from a base level concentration compatible with the underlying boron doped SiGe layer up to 100 atomic % (or near 100 atomic %, such as in excess of 90 atomic % or 95 atomic % or 98 atomic %). In some such cases, the boron concentration within this buffer can be fixed at a desired high level or can range, for example, from a base concentration at or otherwise compatible with the underlying SiGe layer up to the desired high concentration (e.g., in excess of 1E20 cm-3, 2E20 cm-3, or 3E20 cm-3). Alternatively, rather than using a buffer between the source/drain material and the boron doped germanium overlayer 117/119, the overlayer 117/119 itself can be graded in a similar fashion. For example, and in accordance with one example embodiment, the boron doped overlayer 117/119 can be configured with a germanium concentration graded from a base level concentration compatible with the underlying substrate and/or source/drain regions (e.g., in the range of 30 to 70 atomic %) up to 100 atomic % (or near 100 atomic %). The boron concentration within this overlayer 117/119 can be fixed at a desired high level or can range, for example, from a base concentration at or otherwise compatible with the underlying substrate and/or source/drain regions up to the desired high level (e.g., in excess of 2E20 cm-3).
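To make the grading schemes just described concrete, the short sketch below tabulates one possible buffer profile. The linear ramp, the 60 Å thickness, and the endpoint concentrations are illustrative choices within the ranges stated above, not values prescribed by this disclosure.

```python
# Illustrative graded-buffer profile consistent with the ranges above:
# a ~60 Angstrom buffer grading germanium from a 40 atomic % base
# (SiGe-compatible) up toward 100 atomic %, and boron from 5e19 up to
# 2e20 cm^-3. The linear ramp is an editorial assumption.

def graded_profile(thickness_a=60.0, steps=6,
                   ge_base=40.0, ge_top=100.0,
                   b_base=5e19, b_top=2e20):
    profile = []
    for i in range(steps):
        frac = i / (steps - 1)                    # 0 at substrate side, 1 at cap side
        depth = frac * thickness_a                # Angstroms into the buffer
        ge = ge_base + frac * (ge_top - ge_base)  # atomic %
        b = b_base + frac * (b_top - b_base)      # cm^-3
        profile.append((depth, ge, b))
    return profile

for depth, ge, b in graded_profile():
    print(f"{depth:5.1f} A  Ge {ge:5.1f} at.%  B {b:.2e} cm^-3")
```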
[0030] Thus, a low contact resistance architecture for numerous transistor devices is provided. The devices may be formed in part using any number of conventional processes such as, for example, by gate oxide, poly gate electrode, thin spacer, and an isotropic undercut etch in the source/drain regions (or an ammonia etch to form a faceted fin recess in a monocrystalline substrate, or other suitable etch to form a fin recess). In accordance with some embodiments, selective epitaxial deposition can be used to provide in situ doped silicon or, alternatively, a fully strained silicon germanium layer to form source/drain regions with or without tips. Optional buffers may be used as previously explained. Any suitable high-k replacement metal gate (RMG) process flow can also be used, where a high-k dielectric replaces the conventional gate oxide. Silicidation with, for example, nickel, nickel-platinum, or titanium, with or without germanium pre-amorphization implants, can be used to form a low resistance germanide. The techniques provided herein can be applied, for example, to benefit any technology nodes (e.g., 90 nm, 65 nm, 45 nm, 32 nm, 22 nm, 14 nm, and 10 nm transistors, and lower), and the claimed invention is not intended to be limited to any particular such nodes or range of device geometries. Other advantages will be apparent in light of this disclosure. [0031] Figure 2 is a method for forming a transistor structure with low contact resistance in accordance with an embodiment of the present invention. Figures 3A through 3I illustrate example structures that are formed as the method is carried out, in accordance with some embodiments. [0032] As can be seen, the method begins with forming 202 a gate stack on a semiconductor substrate upon which a MOS device, such as a PMOS transistor, may be formed. The semiconductor substrate may be implemented, for example, with a bulk silicon or a silicon-on-insulator configuration. In other implementations, the semiconductor substrate may be formed using alternate materials, which may or may not be combined with silicon, such as germanium, silicon germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, or gallium antimonide. In a more general sense, any material that may serve as a foundation upon which a semiconductor device may be built can be used in accordance with embodiments of the present invention. The gate stack can be formed as conventionally done or using any suitable custom techniques. In some embodiments of the present invention, the gate stack may be formed by depositing and then patterning a gate dielectric layer and a gate electrode layer. For instance, in one example case, a gate dielectric layer may be blanket deposited onto the semiconductor substrate using conventional deposition processes such as chemical vapor deposition (CVD), atomic layer deposition (ALD), spin-on deposition (SOD), or physical vapor deposition (PVD). Alternate deposition techniques may be used as well; for instance, the gate dielectric layer may be thermally grown. The gate dielectric material may be formed, for example, from materials such as silicon dioxide or high-k dielectric materials. Examples of high-k gate dielectric materials include, for instance, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some specific example embodiments, the high-k gate dielectric layer may be between around 5 Å to around 200 Å thick (e.g., 20 Å to 50 Å). In general, the thickness of the gate dielectric layer should be sufficient to electrically isolate the gate electrode from the neighboring source and drain contacts.
In further embodiments, additional processing may be performed on the high-k gate dielectric layer, such as an annealing process to improve the quality of the high-k material. Next, a gate electrode material may be deposited on the gate dielectric layer using similar deposition techniques such as ALD, CVD, or PVD. In some such specific embodiments, the gate electrode material is polysilicon or a metal layer, although other suitable gate electrode materials can be used as well. The gate electrode material, which may be a sacrificial material that is later removed for a replacement metal gate (RMG) process, has a thickness in the range of 50 Å to 500 Å (e.g., 100 Å), in some example embodiments. A conventional patterning process may then be carried out to etch away portions of the gate electrode layer and the gate dielectric layer to form the gate stack, as shown in Figure 3A. As can be seen, Figure 3A illustrates a substrate 300 upon which a gate stack is formed. In this example embodiment, the gate stack includes a gate dielectric layer 302 (which may be a high-k gate dielectric material) and a sacrificial gate electrode 304. In one specific example case, the gate stack includes a silicon dioxide gate dielectric layer 302 and a polysilicon gate electrode 304. The gate stack may also include a gate hard mask layer 306 that provides certain benefits or uses during processing, such as protecting the gate electrode 304 from subsequent ion implantation processes. The hard mask layer 306 may be formed using typical hard mask materials, such as silicon dioxide, silicon nitride, and/or other conventional dielectric materials. Figure 3A further illustrates spacers 310 formed on either side of the gate stack. The spacers 310 may be formed, for example, using conventional materials such as silicon oxide, silicon nitride, or other suitable spacer materials. The width of the spacers 310 may generally be chosen based on design requirements for the transistor being formed. In accordance with some embodiments, however, the width of the spacers 310 is not subject to design constraints imposed by the formation of the source and drain epi-tips, given sufficiently high boron doped germanium content in the source/drain tip regions, as described herein (boron won't diffuse into the channel). [0033] With further reference to Figure 2, after the gate stack is formed, the method continues with defining 204 the source/drain regions of the transistor structure. As previously explained, the source/drain regions can be implemented with any number of suitable processes and configurations. For example, the source/drain regions may be implanted, etched and epi filled, raised, silicon or SiGe alloy, p-type and/or n-type, and have a planar or fin shaped diffusion region. In the example embodiment shown in Figure 3A, substrate 300 has been etched to provide cavities 312/314 as well as respective tip areas 312A/314A which undercut the gate dielectric 302. Figure 3B illustrates the substrate 300 after cavities 312/314 and tip areas 312A/314A have been filled to provide the source/drain regions 318/320 and tip regions 318A/320A. In accordance with some example embodiments, the source and drain region cavities 312/314 along with their respective tip areas 312A/314A are filled with in situ doped silicon or SiGe, thereby forming source region 318 (along with source epi-tip 318A) and drain region 320 (along with drain epi-tip 320A).
Any number of source/drain layer configurations can be used here, with respect to materials (e.g., silicon, SiGe, III-V materials), dopant (e.g., boron in excess of 2E21 cm-3, or other suitable dopant concentration), and dimension (e.g., the thickness of the source/drain layer may range, for instance, from 50 to 500 nm so as to provide a flush or raised source/drain region). [0034] As previously explained, some such embodiments may include a thin buffer between the source/drain layer and the substrate, or between the source/drain layer and the boron doped germanium overlayer. For instance, and as can further be seen in the example embodiment shown in Figure 3B, a source buffer 313 and a drain buffer 315 are deposited prior to depositing the source/drain materials. In some embodiments, the buffers 313 and 315 can be a graded boron doped silicon germanium layer with the germanium composition graded from a base level concentration compatible with the underlying substrate 300 material up to 100 atomic % (or near to 100 atomic %, as previously described). The boron concentration can be appropriately graded as well. Numerous buffer schemes will be apparent in light of this disclosure. [0035] With further reference to Figure 2, after the source/drain regions are defined, the method continues with depositing 206 boron doped germanium on the source/drain regions of the transistor structure. Figure 3C shows the boron doped germanium layer 317/319. In some example embodiments, the boron doped germanium layer 317/319, which may be epitaxially deposited in one or more layers, has a germanium concentration in excess of 90 atomic %, although other suitable concentration levels can be used, as will be appreciated in light of this disclosure (e.g., in excess of 91 atomic %, or 92 atomic %, or 98 atomic %, or 99 atomic %, or truly pure germanium). As previously explained, this germanium concentration may be fixed or graded so as to increase from a base level (near substrate 300) to a high level (e.g., in excess of 90 atomic %). The boron concentration in some such embodiments can be in excess of 1E20 cm-3, such as higher than 2E20 cm-3 or 2E21 cm-3, and may also be graded so as to increase from a base level near substrate 300 to a high level (e.g., in excess of 1E20 cm-3, 2E20 cm-3, 3E20 cm-3, or 2E21 cm-3). In embodiments where the germanium concentration of the underlying source/drain regions 318/320 is fixed or otherwise relatively low, a graded buffer may be used to better interface the source/drain regions 318/320 with the boron doped germanium layer 317/319, as previously explained. The boron doped germanium cap 317/319 may have a thickness in the range, for example, of 50 to 250 Å, in accordance with some specific example embodiments, although alternative embodiments may have other layer thicknesses, as will be apparent in light of this disclosure. [0036] In some embodiments, a CVD process or other suitable deposition technique may be used for depositing 206 or otherwise forming the boron doped germanium layer 317/319. For example, the depositing 206 may be carried out in a CVD, rapid thermal CVD (RT-CVD), low pressure CVD (LP-CVD), ultra-high vacuum CVD (UHV-CVD), or gas source molecular beam epitaxy (GS-MBE) tool using germanium and boron containing precursors such as germane (GeH4) or digermane (Ge2H6) and diborane (B2H6) or boron difluoride (BF2).
In some such embodiments, there may be a carrier gas such as, for instance, hydrogen, nitrogen, or a noble gas (e.g., the precursor is diluted at 1-5% concentration in the carrier gas). There may also be an etchant gas such as, for example, a halogen-based gas such as hydrogen chloride (HCl), chlorine (Cl2), or hydrogen bromide (HBr). The basic deposition of germanium and also boron doped germanium is possible over a wide range of conditions using a deposition temperature in the range, for example, of 300°C to 800°C (e.g., 300-500°C) and a reactor pressure, for instance, in the range of 1 Torr to 760 Torr. Germanium is naturally selective in that it deposits on silicon or silicon-germanium alloy, and does not deposit on other materials such as silicon dioxide and silicon nitride. Since this natural selectivity is not entirely perfect, a small flow of etchant can be used to increase the selectivity of the deposition, as previously noted. Each of the carrier and etchant gases can have a flow in the range of 10 to 300 SCCM (typically, no more than 100 SCCM of flow is required, but some embodiments may require higher flow rates). In one specific example embodiment, the deposition 206 is carried out using GeH4 that is diluted in hydrogen at a 1% concentration and at a flow rate that ranges between 100 and 1000 SCCM. For in situ doping of boron, diluted B2H6 may be used (e.g., the B2H6 may be diluted in H2 at 3% concentration and at a flow rate that ranges between 100 and 600 SCCM). In some such specific example cases, an etching agent of HCl or Cl2 is added at a flow rate that ranges, for example, between 10 and 100 SCCM, to increase the selectivity of the deposition. [0037] As will be appreciated in light of this disclosure, the selectivity at which the boron doped germanium layer 317/319 is deposited can vary as desired. In some cases, for instance, the boron doped germanium layer 317/319 is deposited only on the source/drain regions 318/320 or a portion of the source/drain regions 318/320 (rather than across the entire structure). Any number of masking/patterning techniques can be used to selectively deposit layer 317/319. Moreover, other embodiments may benefit from layer 317/319 covering, for example, poly gate regions or grounding tap regions. As will further be appreciated in light of this disclosure, the combination of high germanium concentration (e.g., in excess of 90 atomic % and up to pure germanium) and high boron concentration (e.g., in excess of 2E20 cm-3) can be used to realize significantly lower contact resistance in the source and drain regions (and other areas where low contact resistance is desirable, such as ground tap regions), in accordance with some example embodiments. Further, and as previously explained, since boron diffusion is sufficiently suppressed by pure germanium, no adverse SCE degradation is realized with subsequent thermal anneals despite any high boron concentration proximate the channel (if applicable). Barrier height lowering is also enabled from the higher concentration of germanium at the contact surface. In some example embodiments, a germanium concentration in excess of 95 atomic % and up to pure germanium (100 atomic %) can be used to achieve such benefits.
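For orientation, the example deposition window described above can be collected into a single checklist. The dictionary layout and the range-checking helper below are editorial conveniences under assumed names; they restate ranges from the text and do not represent a real tool interface.

```python
# Editorial restatement of the example deposition window above; the dict
# keys and the checking helper are illustrative, not a real tool API.

RECIPE_WINDOW = {
    "precursor_ge": ("GeH4 (1% in H2)", (100, 1000)),   # flow, SCCM
    "dopant_b":     ("B2H6 (3% in H2)", (100, 600)),    # flow, SCCM
    "etchant":      ("HCl or Cl2",      (10, 100)),     # flow, SCCM
    "temperature_c": (300, 800),    # e.g., 300-500 C in some embodiments
    "pressure_torr": (1, 760),
}

def in_window(name, value):
    """Check a single setpoint against the example window above."""
    spec = RECIPE_WINDOW[name]
    lo, hi = spec if isinstance(spec[0], (int, float)) else spec[1]
    return lo <= value <= hi

print(in_window("temperature_c", 400))   # True: inside 300-800 C
print(in_window("pressure_torr", 900))   # False: outside 1-760 Torr
```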
[0038] With further reference to Figure 2, after the boron doped germanium layer 317/319 is provided, the method continues with depositing 208 a dielectric over layer 317/319. Figure 3D shows dielectric 322 as being flush with the hard mask 306 of the gate stack, but it need not be. The dielectric can be configured in a number of ways. In some embodiments, dielectric 322 is implemented with SiO2 or other low-k dielectric materials. In other embodiments, dielectric 322 is implemented with a SiN liner followed by one or more layers of SiO2, or any combination of nitride, oxide, oxynitride, carbide, oxycarbide, or other suitable dielectric materials. The dielectric 322, which may be referred to as an interlayer dielectric (ILD), may be planarized as commonly done. Other example dielectric materials include, for instance, carbon doped oxide (CDO), organic polymers such as perfluorocyclobutane or polytetrafluoroethylene, fluorosilicate glass (FSG), and organosilicates such as silsesquioxane, siloxane, or organosilicate glass. In some example configurations, the ILD layer may include pores or other voids to further reduce its dielectric constant. [0039] Next, in some embodiments of the present invention where a replacement metal gate (RMG) process is used, and as best shown in Figure 3E, the method may further include removing the gate stack (including the high-k gate dielectric layer 302, the sacrificial gate electrode 304, and the hard mask layer 306) using an etching process as conventionally done. In alternate implementations, only the sacrificial gate 304 and hard mask layer 306 are removed. Figure 3E illustrates the trench opening that is formed when the gate stack is etched away, in accordance with one such embodiment. If the gate dielectric layer is removed, the method may continue with depositing a new gate dielectric layer into the trench opening (designated as 324 in Figure 3F). Any suitable high-k dielectric materials such as those previously described may be used here, such as hafnium oxide. The same deposition processes may also be used. Replacement of the gate dielectric layer may be used, for example, to address any damage that may have occurred to the original gate dielectric layer during application of the dry and wet etch processes, and/or to replace a low-k or sacrificial dielectric material with a high-k or otherwise desired gate dielectric material. As further shown in Figure 3F, the method may further continue with depositing the metal gate electrode layer 326 into the trench and over the gate dielectric layer 324. Conventional metal deposition processes may be used to form the metal gate electrode layer, such as CVD, ALD, PVD, electroless plating, or electroplating. The metal gate electrode layer may include, for example, a p-type workfunction metal, such as ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, e.g., ruthenium oxide. In some example configurations, two or more metal gate electrode layers may be deposited. For instance, a workfunction metal may be deposited in the gate trench followed by a suitable metal gate electrode fill metal such as aluminum or silver. [0040] With further reference to Figure 2, after dielectric layer 322 is provided over layer 317/319 (and any desired RMG process), the method continues with etching 210 to form the source/drain contact trenches. Any suitable dry and/or wet etch processes can be used. Figure 3G shows the source/drain contact trenches after etching is complete, in accordance with one example embodiment. The method then continues with depositing 212 contact resistance reducing metal and annealing to form silicide/germanide, and then depositing 214 the source/drain contact plugs.
Figure 3H shows the contact metals 325/327, which in some embodiments include the silicide/germanide, although other embodiments may include additional layers (e.g., an adhesion layer). Figure 3I shows the contact plug metal 329/331, which in some embodiments includes aluminum, although any suitably conductive contact metal or alloy can be used for the contact plugs 329/331, such as silver, nickel-platinum or nickel-aluminum or other alloys of nickel and aluminum, or titanium, using conventional deposition processes. The germanide/metalization 212 of the source and drain contacts can be carried out, for instance, by silicidation with nickel, aluminum, nickel-platinum or nickel-aluminum or other alloys of nickel and aluminum, or titanium with or without germanium pre-amorphization implants, to form a low resistance germanide. The boron doped germanium layer 317/319 allows for metal-germanide formation (e.g., nickel-germanium). The germanide allows for a significantly lower Schottky-barrier height and improved contact resistance (including Rext) over that in conventional metal-silicide systems. For instance, conventional transistors typically use a source/drain SiGe epi process, with a germanium concentration in the range of 30-40 atomic %. Such conventional systems exhibit Rext values of about 140 Ohm*um, limited by epi/silicide interfacial resistance, which is high and may impede future gate pitch scaling. Some embodiments of the present invention allow for a significant improvement in Rext in PMOS devices (e.g., about a 2x improvement or better, such as a Rext of about 70 Ohm*um), which can better support PMOS device scaling. Thus, transistors having a source/drain configured with a boron doped germanium cap 317/319 in accordance with an embodiment of the present invention, with a boron concentration in excess of 1E20 cm-3 and a germanium concentration in excess of 90 atomic % and up to or otherwise near pure germanium (100 atomic %) at the interface between the source/drain regions 318/320 and the contact metals 325/327, can exhibit Rext values of less than 100 Ohm*um, and in some cases less than 90 Ohm*um, and in some cases less than 80 Ohm*um, and in some cases less than 75 Ohm*um, or lower.
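The improvement quoted here is straightforward to sanity-check; the calculation below simply restates the figures from this paragraph.

```python
# Sanity check on the Rext figures quoted above (values restate the text).
rext_conventional = 140.0   # Ohm*um, conventional SiGe epi + silicide stack
rext_ge_cap = 70.0          # Ohm*um, with the boron doped germanium cap

print(f"Improvement: {rext_conventional / rext_ge_cap:.1f}x")   # -> 2.0x
```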
The method continues with etching 408 to form the p-S/D and n-S/D region contact trenches, and then selectively depositing 410 the boron doped germanium layer 317/319 into the trench and onto the p-S/D regions (of which there may be one or more depending on the desired function and application of the transistor structure), as best shown in Figures 5C and 5D. Depositing 410 can be carried out using any suitable deposition process, such as selective epitaxy. Once layer 317/319 is provided, the method continues with depositing 412 contact metals 325/327 on top of the layer 317/319 as well as on top of any exposed n-S/D regions, and then depositing 414 the source/drain contact plugs 329/331, as shown in Figures 5E and 5F. This alternate methodology provides the same benefit of improved contact resistance, but is more selective in where the boron doped germanium is deposited. Other such selective deposition processes will be apparent in light of this disclosure, using any suitable combination of masking/patterning and selective deposition techniques. As will be further appreciated, the previous relevant discussion with respect to similar parts of the method is equally applicable here. FinFET Configuration [0043] As is known, a FinFET is a transistor built around a thin strip of semiconductor material (generally referred to as the fin). The transistor includes the standard field effect transistor (FET) nodes, including a gate, a gate dielectric, a source region, and a drain region. The conductive channel of the device resides on the outer sides of the fin beneath the gate dielectric. Specifically, current runs along both sidewalls of the fin (sides perpendicular to the substrate surface) as well as along the top of the fin (side parallel to the substrate surface). Because the conductive channel of such configurations essentially resides along the three different outer, planar regions of the fin, such a FinFET design is sometimes referred to as a tri-gate FinFET. Other types of FinFET configurations are also available, such as so-called double-gate FinFETs, in which the conductive channel principally resides only along the two sidewalls of the fin (and not along the top of the fin). [0044] Figure 6 shows a perspective view of an example tri-gate architecture, configured in accordance with one embodiment of the present invention. As can be seen, the tri-gate device includes a substrate 600 having a semiconductor body or fin 660 (represented by dashed lines) extending from the substrate 600 through isolation regions 610, 620. A gate electrode 640 is formed over 3 surfaces of the fin 660 to form 3 gates. A hard mask 690 is formed on top of the gate electrode 640. Gate spacers 670, 680 are formed at opposite sidewalls of the gate electrode 640. [0045] A source region comprises the epitaxial region 631 formed on a recessed source interface 650 and on one fin 660 sidewall, and a drain region comprises the epitaxial region 631 formed on a recessed source interface 650 and on the opposing fin 660 sidewall (not shown). A cap layer 641 is deposited over the epitaxial regions 631. Note that the boron cap layer 641 may be provided in the recessed (tip) regions, but in other embodiments is just provided over the source/drain regions (and not in the recessed regions).
In one embodiment, the isolation regions 610, 620 are shallow trench isolation (STI) regions formed using conventional techniques, such as etching the substrate 600 to form trenches, and then depositing oxide material onto the trenches to form the STI regions. The isolation regions 610, 620 can be made from any suitable dielectric/insulative material, such as SiO2. The previous discussion with respect to the substrate 102 is equally applicable here (e.g., substrate 600 may be a silicon substrate, or SOI substrate, or a multi-layered substrate). [0046] As will be appreciated in light of this disclosure, conventional processes and forming techniques can be used to fabricate the FinFET transistor structure. However, and in accordance with one example embodiment of the present invention, the bilayer structure of the epitaxial region 631 and cap layer 641 can be implemented, for instance, using an in situ doped silicon or SiGe (for 631) capped with a boron doped germanium (for 641), with an optional germanium and/or boron graded buffer between the two bilayers. As previously explained, such a buffer may be used to transition from a base level germanium/boron concentration compatible with the epitaxial region 631 to the boron doped germanium cap 641. Alternatively, germanium and/or boron concentration grading can be implemented directly in the epitaxial region 631 and/or the cap 641, rather than in an intervening graded buffer arrangement. As will further be appreciated, note that an alternative to the tri-gate configuration is a double-gate architecture, which includes a dielectric isolation layer on top of the fin 660. [0047] Figure 7 shows a plot of a split lot showing contact resistance for transistor structures configured in accordance with embodiments of the present invention and standard transistor structures configured with no cap. The transistor structures associated with the high resistance numbers in excess of 0.18 are all implemented with standard SiGe alloy raised PMOS source/drain regions with contact metal deposited directly thereon. The transistor structures associated with the resistance numbers of 0.107 and lower are all similarly implemented but with the addition of a boron doped germanium cap between the source/drain regions and contact metal, in accordance with various embodiments of the present invention. Table 1 shows the raw data quantiles resulting from testing of the example structures with and without a boron doped germanium cap as described herein. As can be seen, this example lot actually shows an improvement (reduction) in contact resistance of about three to six times (3X to 6X) over conventional transistor structures. The units are Ohms per arbitrary area. [0048] Other improvements enabled by using a boron doped germanium cap in accordance with an embodiment of the present invention will be apparent in light of this disclosure. In particular, the resulting germanide materials and Schottky barrier height improvement enable more than a 2x Rext improvement over that in conventional SiGe source/drain PMOS devices, in accordance with some example embodiments of the present invention. As is known, the Schottky barrier height is the barrier for electrical conduction across a semiconductor-metal junction. The magnitude of the Schottky barrier height reflects a mismatch in the energy position of the metal's Fermi level and the majority carrier band edge of the semiconductor across the semiconductor-metal interface. For a p-type semiconductor-metal interface, the Schottky barrier height is the difference between the metal Fermi level and the valence band maximum of the semiconductor.
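To make the barrier-height dependence concrete, the idealized Schottky-Mott rule (an approximation assumed here for illustration; it neglects the Fermi-level pinning present at real metal-germanide junctions) gives the hole barrier at a p-type contact as:

```latex
% Idealized Schottky-Mott hole barrier at a p-type semiconductor-metal contact
% (illustrative approximation; neglects Fermi-level pinning)
\phi_{Bp} = \frac{E_g}{q} + \chi - \Phi_M
```

where E_g is the semiconductor band gap, \chi its electron affinity, and \Phi_M the metal work function. A smaller \phi_{Bp} lowers the contact component of Rext, consistent with the lower-barrier germanide contacts described above.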
Example System [0049] Figure 8 illustrates a computing device 1000 configured in accordance with one embodiment of the invention. As can be seen, the computing device 1000 houses a motherboard 1002. The motherboard 1002 may include a number of components, including but not limited to a processor 1004 and at least one communication chip 1006, each of which can be physically and electrically coupled to the motherboard 1002, or otherwise integrated therein. As will be appreciated, the motherboard 1002 may be, for example, any printed circuit board, whether a main board, a daughterboard mounted on a main board, or the only board of device 1000, etc. Depending on its applications, computing device 1000 may include one or more other components that may or may not be physically and electrically coupled to the motherboard 1002. These other components may include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). Any of the components included in computing device 1000 may include one or more transistor structures as described herein. In some embodiments, multiple functions can be integrated into one or more chips (for instance, the communication chip 1006 can be part of or otherwise integrated into the processor 1004). [0050] The communication chip 1006 enables wireless communications for the transfer of data to and from the computing device 1000. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 1006 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 1000 may include a plurality of communication chips 1006. For instance, a first communication chip 1006 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 1006 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others. [0051] The processor 1004 of the computing device 1000 includes an integrated circuit die packaged within the processor 1004.
In some embodiments of the present invention, the integrated circuit die of the processor includes an onboard non-volatile memory or cache, and/or is otherwise communicatively coupled to off-chip memory that is implemented with one or more transistor structures as described herein. The term "processor" may refer to any device or portion of a device that processes, for instance, electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. [0052] The communication chip 1006 may also include an integrated circuit die packaged within the communication chip 1006. In accordance with some such example embodiments, the integrated circuit die of the communication chip includes one or more devices implemented with one or more transistor structures as described herein. As will be appreciated in light of this disclosure, note that multi-standard wireless capability may be integrated directly into the processor 1004 (e.g., where functionality of any chips 1006 is integrated into processor 1004, rather than having separate communication chips). Further note that processor 1004 may be a chip set having such wireless capability. In short, any number of processor 1004 and/or communication chips 1006 can be used. Likewise, any one chip or chip set can have multiple functions integrated therein. [0053] In various implementations, the computing device 1000 may be a laptop, a netbook, a notebook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the device 1000 may be any other electronic device that processes data or employs transistors. [0054] Numerous embodiments will be apparent in light of this disclosure, and features described herein can be combined in any number of configurations. One example embodiment of the present invention provides a transistor device. The device includes a substrate having a channel region, and a gate electrode above the channel region. A gate dielectric layer is provided between the gate electrode and the channel region, and p-type and n-type source/drain regions are provided in the substrate and adjacent to the channel region. The device further includes a boron doped germanium layer on at least a portion of the p-type source/drain region. This boron doped germanium layer comprises a germanium concentration in excess of 90 atomic % and a boron concentration in excess of 1E20 cm-3. The device further includes a metal-germanide source/drain contact on the boron doped germanium layer. In one such example, the boron doped germanium layer is only on p-type source/drain regions of the device. In another example case, the device further includes an interlayer dielectric. In another example case, the device further includes a graded buffer between the substrate and at least one of the p-type and n-type source/drain regions, and/or a graded buffer between at least one of the p-type and n-type source/drain regions and the boron doped germanium layer.
In one such case, the graded buffer between the at least one of the p-type and n-type source/drain regions and the boron doped germanium layer has a germanium concentration that is graded from a base level concentration compatible with the at least one of the p-type and n-type source/drain regions to a high concentration in excess of 95 atomic %. In one such specific example case, the high concentration reflects pure germanium. In another example case, the graded buffer between the at least one of the p-type and n-type source/drain regions and the boron doped germanium layer has a boron concentration that is graded from a base level concentration compatible with the at least one of the p-type and n-type source/drain regions to a high concentration in excess of 1E20 cm-3. In another example case, the boron doped germanium layer has a graded concentration of at least one of germanium and boron. In another example case, the p-type and n-type source/drain regions comprise silicon germanium having a germanium concentration that is graded from a base level concentration compatible with the substrate to a high concentration in excess of 50 atomic %, and the boron doped germanium layer has a germanium concentration in excess of 95 atomic %. In another example case, the p-type and n-type source/drain regions comprise boron doped silicon germanium having a boron concentration that is graded from a base level concentration compatible with the substrate to a high concentration in excess of 1E20 cm-3. In another example case, the p-type and n-type source/drain regions comprise silicon or silicon germanium, and the device further comprises a buffer between at least one of the p-type and n-type source/drain regions and the boron doped germanium layer, the buffer having a germanium concentration that is graded from a base level concentration compatible with the at least one of the p-type and n-type source/drain regions to a high concentration in excess of 50 atomic %, and a boron concentration that is graded from a base level concentration compatible with the at least one of the p-type and n-type source/drain regions to a high concentration in excess of 1E20 cm-3. In another example case, the boron doped germanium layer comprises a germanium concentration in excess of 98 atomic %, and a boron concentration in excess of 2E20 cm-3. Another embodiment provides an electronic device that includes a printed circuit board having one or more integrated circuits, wherein at least one of the one or more integrated circuits comprises one or more transistor devices as variously defined in this paragraph. In one such case, the one or more integrated circuits includes at least one of a communication chip and/or a processor, and at least one of the communication chip and/or processor comprises the one or more transistor devices. In another such case, the device is a computing device (e.g., mobile telephone or smartphone, laptop, tablet computer, etc.). [0055] Another embodiment of the present invention provides a transistor device. In this example case, the device includes a substrate having a channel region, and a gate electrode above the channel region, wherein a gate dielectric layer is provided between the gate electrode and the channel region and spacers are provided on sides of the gate electrode.
The device further includes p-type and n-type source/drain regions in the substrate and adjacent to the channel region, each of the p-type and n-type source/drain regions including a tip region that extends under the gate dielectric layer and/or a corresponding one of the spacers. The device further includes a boron doped germanium layer on at least a portion of the p-type source/drain region, and comprising a germanium concentration in excess of 95 atomic % and a boron concentration in excess of 2E20 cm-3. The device further includes metal-germanide source/drain contacts on the boron doped germanium layer. The device is one of a planar or FinFET transistor. In one such example case, the device further includes a buffer between at least one of the p-type and n-type source/drain regions and the boron doped germanium layer, wherein the buffer has a germanium concentration that is graded from a base level concentration compatible with the at least one of the p-type and n-type source/drain regions to a high concentration in excess of 95 atomic %, and a boron concentration that is graded from a base level concentration compatible with the at least one of the p-type and n-type source/drain regions to a high concentration in excess of 2E20 cm-3. In another example case, the boron doped germanium layer has a graded concentration of at least one of germanium and boron. In another example case, the p-type and n-type source/drain regions comprise silicon germanium having a germanium concentration that is graded from a base level concentration compatible with the substrate to a high concentration in excess of 50 atomic %, and the boron doped germanium layer has a germanium concentration in excess of 98 atomic %. In another example case, the p-type and n-type source/drain regions have a boron concentration that is graded from a base level concentration compatible with the substrate to a high concentration in excess of 2E20 cm-3. In another example case, the p-type and n-type source/drain regions comprise silicon germanium having a fixed germanium concentration, and the device further comprises a buffer between the p-type and n-type source/drain regions and the boron doped germanium layer, the buffer having a germanium concentration that is graded from a base level concentration compatible with the p-type and n-type source/drain regions to a high concentration in excess of 50 atomic %, and a boron concentration that is graded from a base level concentration compatible with the p-type and n-type source/drain regions to a high concentration in excess of 2E20 cm-3, the buffer having a thickness of less than 100 Angstroms. Another embodiment provides a computing device (e.g., desktop or portable computer, etc.) that includes a printed circuit board having a communication chip and/or a processor, wherein at least one of the communication chip and/or processor comprises one or more transistor devices as variously defined in this paragraph. [0056] Another embodiment of the present invention provides a method for forming a transistor device. The method includes providing a substrate having a channel region, and providing a gate electrode above the channel region, wherein a gate dielectric layer is provided between the gate electrode and the channel region. The method continues with providing p-type and n-type source/drain regions in the substrate and adjacent to the channel region, and providing a boron doped germanium layer on at least a portion of the p-type source/drain region.
The boron doped germanium layer comprises a germanium concentration in excess of 90 atomic % and a boron concentration in excess of 1E20 cm-3. The method continues with providing metal-germanide source/drain contacts on the boron doped germanium layer. In some example such cases, the method further includes providing a graded buffer between the substrate and at least one of the p-type and n-type source/drain regions, and/or providing a graded buffer between at least one of the p-type and n-type source/drain regions and the boron doped germanium layer. In another example case, the boron doped germanium layer has a graded concentration of at least one of germanium and boron (which may be used with or without graded buffers). This method may be employed, for example, in the fabrication of any electronic devices such as a computing device. [0057] The foregoing description of example embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of this disclosure. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto. |
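The graded-buffer variations enumerated above lend themselves to a simple numerical illustration. The Python sketch below assumes a linear grade across a sub-100 Angstrom buffer; the grading shape and the endpoint values are assumptions chosen for illustration, not limits taken from the embodiments.

```python
# Illustrative only: a linearly graded buffer between a SiGe source/drain
# region and a boron doped germanium cap. Linear grading and the endpoint
# values are assumptions for illustration, not a process specification.
def graded_buffer_profile(thickness_A=100, n_points=5,
                          ge_base=40.0, ge_top=98.0,   # germanium, atomic %
                          b_base=5e19, b_top=2e20):    # boron, atoms/cm^3
    """Return (depth in Angstroms, Ge atomic %, B cm^-3) across the buffer."""
    profile = []
    for i in range(n_points):
        frac = i / (n_points - 1)
        depth = frac * thickness_A
        ge = ge_base + frac * (ge_top - ge_base)
        b = b_base + frac * (b_top - b_base)
        profile.append((depth, ge, b))
    return profile

for depth, ge, b in graded_buffer_profile():
    print(f"{depth:5.1f} A  Ge {ge:5.1f} at.%  B {b:.2e} cm^-3")
```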
A patterned organic masking layer is formed outwardly of a feature layer to be etched. The masking layer has at least one feature pattern having a minimum feature dimension of less than or equal to 0.3 micron. The feature layer has a thickness which is to be etched to form the one feature pattern in the feature layer. The feature pattern is plasma etched into the feature layer using the masking layer as a mask. The plasma etching comprises at least one etching segment where at least 30% of said thickness of the feature layer is etched using an etching gas comprising one gas compound comprising carbon, hydrogen and at least one halogen present at greater than or equal to 70% concentration by volume as compared to all carbon, hydrogen and halogen containing gas compounds in the etching gas. Such plasma etching is conducted under conditions effective to produce at least that portion of the one feature pattern in the feature layer formed during the one etching segment to have a sidewall taper, if any, of less than or equal to 5° and an organic masking layer top outer surface roughness proximate the feature pattern at a conclusion of the etching segment which is characterizable by an average value less than 100 Angstroms. Such value is determinable by scanning electron microscopy as an average maximum size of all surface discernible objects of the patterned masking layer as measured and averaged along any 0.3 micron length of top outer surface from the one feature pattern. Other implementations are also contemplated. |
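To make the recited roughness criterion concrete, the following minimal Python sketch evaluates it over hypothetical SEM-derived data (object position in microns, maximum discernible size in Angstroms). The sliding-window reading of "any 0.3 micron length" is an assumption of this sketch, not a definition from the specification.

```python
# Sketch of the roughness criterion: within ANY 0.3 micron stretch of the
# mask's top outer surface, the average of the maximum sizes of the surface-
# discernible objects must stay below 100 Angstroms. The object list below
# is hypothetical SEM data, for illustration only.
objects = [(0.05, 40), (0.12, 65), (0.21, 30), (0.33, 80), (0.41, 55)]

def meets_roughness_spec(objects, window_um=0.3, limit_A=100.0, step_um=0.01):
    positions = [p for p, _ in objects]
    start, stop = min(positions), max(positions)
    x = start
    while x + window_um <= stop + 1e-9:
        # Average the maximum object sizes falling inside this window.
        in_win = [s for p, s in objects if x <= p <= x + window_um]
        if in_win and sum(in_win) / len(in_win) >= limit_A:
            return False
        x += step_um
    return True

print(meets_roughness_spec(objects))  # True for this hypothetical data
```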
What is claimed is: 1. A plasma etching method comprising:forming a patterned organic masking layer outwardly of a feature layer to be etched, the patterned organic masking layer having at least one feature pattern having a minimum feature dimension of less than or equal to 0.3 micron, the feature layer having a thickness; and plasma etching the at least one feature pattern into the feature layer using the organic masking layer as a mask, the plasma etching comprising at least one etching segment where at least 30% of the thickness of the feature layer is etched using an etching gas comprising one gas compound comprising carbon, hydrogen and at least one halogen present in the etching gas at greater than or equal to 70% concentration by volume as compared to all carbon, hydrogen and halogen containing gas compounds in the etching gas effective to produce at least that portion of the one feature pattern in the feature layer formed during the one etching segment to have a sidewall taper, if any, of less than or equal to 5° and an organic masking layer top outer surface roughness proximate the feature pattern at a conclusion of the one etching segment which is characterizable by an average value less than 100 Angstroms as is determinable by scanning electron microscopy as an average maximum size of all surface discernible objects of the patterned masking layer as measured and averaged along any 0.3 micron length of top outer surface from the one feature pattern, wherein the organic masking layer comprises photoresist, and another organic masking material layer forming thereover during the plasma etching, the average maximum value of the organic layer top outer surface roughness being that of the another organic masking material top outer surface and not that of the photoresist. 2. The method of claim 1 wherein the plasma etching comprises only one etching segment where 100% of the thickness of the feature layer is etched using said etching gas.3. The method of claim 1 wherein the organic masking layer comprises photoresist, the average maximum value of the organic layer top outer surface roughness being that of the photoresist top outer surface.4. The method of claim 1 wherein the one feature pattern comprises a contact opening.5. The method of claim 1 wherein the feature layer comprises SiO2.6. The method of claim 1 wherein the feature layer comprises Si3N4.7. The method of claim 1 wherein the one gas compound is present in the etching gas at greater than or equal to 80% concentration by volume as compared to any other carbon, hydrogen and halogen containing gas compound(s) in the etching gas during the one etching segment.8. The method of claim 1 wherein the one gas compound is present in the etching gas at greater than or equal to 90% concentration by volume as compared to any other carbon, hydrogen and halogen containing gas compound(s) in the etching gas during the one etching segment.9. The method of claim 1 wherein the one gas compound is present in the etching gas at greater than or equal to 95% concentration by volume as compared to any other carbon, hydrogen and halogen containing gas compound(s) in the etching gas during the one etching segment.10. The method of claim 1 wherein the one gas compound is present in the etching gas at 100% concentration by volume as compared to any other carbon, hydrogen and halogen containing gas compound(s) in the etching gas during the one etching segment.11. The method of claim 1 wherein the one gas compound is CH2F2.12.
The method of claim 1 wherein the average value of roughness is less than or equal to 50 Angstroms.13. The method of claim 1 wherein the one etching segment comprises high density plasma.14. The method of claim 1 wherein the one etching segment is conducted at the start of said plasma etching and is not conducted throughout all of said plasma etching.15. The method of claim 1 wherein the one etching segment is conducted at the end of said plasma etching and is not conducted throughout all of said plasma etching.16. The method of claim 1 wherein the one etching segment is conducted between the start and the end of said plasma etching, and not conducted at the start or end of said plasma etching.17. The method of claim 1 wherein the plasma etching during the one segment is void of any etching gases having carbon-nitrogen bonds.18. The method of claim 1 wherein the plasma etching during the one segment is void of any etching gases having carbon-oxygen bonds.19. The method of claim 1 wherein the plasma etching during the one segment is void of any etching gases having oxygen-oxygen bonds.20. A plasma etching method comprising:forming a patterned organic masking layer outwardly of a feature layer to be etched, the patterned organic masking layer having at least one feature pattern having a minimum feature dimension of less than or equal to 0.3 micron, the feature layer having a thickness; and plasma etching the at least one feature pattern into the feature layer using the organic masking layer as a mask, the plasma etching comprising at least one etching segment where at least 30% of the thickness of the feature layer is etched using an etching gas comprising one gas compound comprising carbon, hydrogen and at least one halogen present in the etching gas at greater than or equal to 70% concentration by volume as compared to all carbon, hydrogen and halogen containing gas compounds in the etching gas effective to produce at least that portion of the one feature pattern in the feature layer formed during the one etching segment to have a sidewall taper, if any, of less than or equal to 5° and an organic masking layer top outer surface roughness proximate the feature pattern at a conclusion of the one etching segment which is characterizable by an average value less than 100 Angstroms as is determinable by scanning electron microscopy as an average maximum size of all surface discernible objects of the patterned masking layer as measured and averaged along any 0.3 micron length of top outer surface from the one feature pattern, wherein the feature layer comprises polysilicon. 21.
A plasma etching method comprising:forming a patterned inorganic masking layer outwardly of a feature layer to be etched, the patterned inorganic masking layer having at least one feature pattern having a minimum feature dimension of less than or equal to 0.3 micron; and plasma etching the at least one feature pattern into the feature layer using the inorganic masking layer as a mask, the plasma etching comprising at least one etching segment where at least 30% of the thickness of the feature layer is etched using an etching gas comprising one gas compound comprising carbon, hydrogen and at least one halogen present in the etching gas at greater than or equal to 70% concentration by volume as compared to all carbon, hydrogen and halogen containing gas compounds in the etching gas effective to produce at least that portion of the one feature pattern in the feature layer formed during the one etching segment to have a sidewall taper, if any, of less than or equal to 5°; the at least one etching segment forming an organic masking layer over the inorganic masking layer, the organic masking layer having a top outer surface roughness proximate the feature pattern at a conclusion of the one etching segment which is characterizable by an average value less than 100 Angstroms as is determinable by scanning electron microscopy as an average maximum size of all surface discernible objects of the patterned masking layer as measured and averaged along any 0.3 micron length of top outer surface from the one feature pattern. 22. The method of claim 21 wherein the average value of roughness is less than or equal to 50 Angstroms.23. The method of claim 21 wherein the one etching segment comprises high density plasma.24. The method of claim 21 wherein the one etching segment is conducted at the start of said plasma etching and is not conducted throughout all of said plasma etching.25. The method of claim 21 wherein the one etching segment is conducted at the end of said plasma etching and is not conducted throughout all of said plasma etching.26. The method of claim 21 wherein the one etching segment is conducted between the start and the end of said plasma etching, and not conducted at the start or end of said plasma etching.27. The method of claim 21 wherein the plasma etching during the one segment is void of any etching gases having carbon-nitrogen bonds.28. The method of claim 21 wherein the plasma etching during the one segment is void of any etching gases having carbon-oxygen bonds.29. The method of claim 21 wherein the plasma etching during the one segment is void of any etching gases having oxygen-oxygen bonds.30.
A plasma etching method comprising:forming a patterned organic masking layer outwardly of a feature layer to be etched, the patterned organic masking layer having at least one feature pattern having a minimum feature dimension of less than or equal to 0.3 micron, the feature layer having a thickness and comprising polysilicon; and plasma etching the at least one feature pattern into the feature layer using the organic masking layer as a mask, the plasma etching comprising at least one etching segment where at least 30% of the thickness of the feature layer is etched using an etching gas comprising one gas compound comprising carbon, hydrogen and at least one halogen present in the etching gas at greater than or equal to 70% concentration by volume as compared to all carbon, hydrogen and halogen containing gas compounds in the etching gas effective to produce at least that portion of the one feature pattern in the feature layer formed during the one etching segment to have a sidewall taper, if any, of less than or equal to 5° and an organic masking layer top outer surface roughness proximate the feature pattern at a conclusion of the one etching segment which is characterizable by an average value less than 100 Angstroms as an average maximum size of all surface discernible objects of the patterned masking layer as averaged along any 0.3 micron length of top outer surface from the one feature pattern. 31. The method of claim 30 wherein the plasma etching comprises only one etching segment where 100% of the thickness of the feature layer is etched using said etching gas.32. The method of claim 30 wherein the organic masking layer comprises photoresist, the average maximum value of the organic layer top outer surface roughness being that of the photoresist top outer surface as is determinable by scanning electron microscopy.33. The method of claim 30 wherein the organic masking layer comprises photoresist, and another organic masking material layer forming thereover during the plasma etching, the average maximum value of the organic layer top outer surface roughness being that of the another organic masking material top outer surface and not that of the photoresist as is determinable by scanning electron microscopy.34. The method of claim 30 wherein the one feature pattern comprises a contact opening.35. A plasma etching method comprising:forming a patterned organic masking layer outwardly of a feature layer to be etched, the patterned organic masking layer having at least one feature pattern having a minimum feature dimension of less than or equal to 0.3 micron, the feature layer having a thickness; and plasma etching the at least one feature pattern into the feature layer using the organic masking layer as a mask, the plasma etching comprising at least one etching segment where at least 30% of the thickness of the feature layer is etched using an etching gas comprising one gas compound comprising carbon, hydrogen and at least one halogen present in the etching gas at greater than or equal to 70% concentration by volume as compared to all carbon, hydrogen and halogen containing gas compounds in the etching gas effective to produce at least that portion of the one feature pattern in the feature layer formed during the one etching segment to have a sidewall taper, if any, of less than or equal to 5°
and an organic masking layer top outer surface roughness proximate the feature pattern at a conclusion of the one etching segment which is characterizable by an average value less than 100 Angstroms as an average maximum size of all surface discernible objects of the patterned masking layer as averaged along any 0.3 micron length of top outer surface from the one feature pattern, wherein the plasma etching during the one etching segment is void of any etching gases having carbon-oxygen bonds, wherein the organic masking layer comprises photoresist, and another organic masking material layer forming thereover during the plasma etching, the average maximum value of the organic layer top outer surface roughness being that of the another organic masking material top outer surface and not that of the photoresist as is determinable by scanning electron microscopy. 36. The method of claim 35 wherein the plasma etching comprises only one etching segment where 100% of the thickness of the feature layer is etched using said etching gas.37. The method of claim 35 wherein the average maximum value of the organic layer top outer surface roughness is that of the photoresist top outer surface as is determinable by scanning electron microscopy.38. The method of claim 35 wherein the one feature pattern comprises a contact opening. |
TECHNICAL FIELD This invention relates to plasma etching methods. BACKGROUND OF THE INVENTION Integrated circuitry density continues to increase and feature dimensions continue to get smaller. One aspect of semiconductor integrated circuitry fabrication is the etching of contact openings through insulating layers, such as borophosphosilicate glass (BPSG), to expose inward circuit regions to which electrical connection is desired. Contact openings are typically presently formed by depositing an organic masking layer (photoresist being one example) outwardly of the layer within which the opening is to be formed. The masking layer is patterned to leave desired contact openings therethrough while leaving other areas of the layer covered (i.e., masked) such that etching will not there occur. The insulating layer is thereafter etched through the organic masking layer openings, preferably highly selectively to remove the insulating layer at a substantially greater rate than any etching of the masking layer. The ultimate goal is to outwardly expose a desired region of the underlying substrate. Forming such openings is preferably conducted using a highly anisotropic etch, such as a plasma etch. One such prior art etch employs an Applied Materials IPS Dielectric Etcher using reactive gas flows of CHF3 and CH2F2 at a volumetric ratio of 11:9, respectively. It was discovered using such chemistry that as the minimum feature dimension of the contact opening fell to 0.3 micron and below, the etched sidewalls of the feature layer being etched were becoming striated or otherwise roughened to a degree sufficient to impact critical dimension (CD) of the feature and overall yield. Such roughening apparently resulted from formation of striations or other roughenings in the opening sidewalls of the photoresist, which were being mask transferred to the feature layer. Such roughening was more prone to occur in useful processing windows in high density deposition tools, namely in processing windows where acceptable uniformity across the substrate could be achieved. Such sidewall striations might also have occurred in etching of larger contact openings, but were not problematic due to the larger opening dimensions. However, at the 0.3 micron level and below, roughened or otherwise striated sidewalls within a feature opening (i.e., a damascene trough, a contact opening or other feature) can adversely affect CD and yield. The invention was motivated in addressing and overcoming this particular problem, yet is not so limited. Aspects of the invention are seen to have applicability to other aspects of plasma etching, with the invention only being limited by the accompanying claims, appropriately interpreted in accordance with the Doctrine of Equivalents. SUMMARY The invention comprises plasma etching methods. In one implementation, a patterned organic masking layer is formed outwardly of a feature layer to be etched. The patterned organic masking layer has at least one feature pattern having a minimum feature dimension of less than or equal to 0.3 micron. The feature layer has a thickness inwardly of the one feature pattern which is to be etched to form the one feature pattern in the feature layer. The at least one feature pattern is plasma etched into the feature layer using the organic masking layer as a mask.
The plasma etching comprises at least one etching segment where at least 30% of said thickness of the feature layer is etched using an etching gas comprising one gas compound comprising carbon, hydrogen and at least one halogen present in the etching gas at greater than or equal to 70% concentration by volume as compared to all carbon, hydrogen and halogen containing gas compounds in the etching gas. Such plasma etching is conducted under conditions effective to produce at least that portion of the one feature pattern in the feature layer formed during the one etching segment to have a sidewall taper of less than or equal to 5° and an organic masking layer top outer surface roughness proximate the feature pattern at a conclusion of the etching segment which is characterizable by an average value less than 100 Angstroms. Such average value is determinable by scanning electron microscopy as an average maximum size of all surface discernible objects of the patterned masking layer as measured and averaged along any 0.3 micron length of top outer surface from the one feature pattern. Other implementations are also contemplated. BRIEF DESCRIPTION OF THE DRAWINGS Preferred embodiments of the invention are described below with reference to the following accompanying drawings. FIG. 1 is a diagrammatic sectional view of a semiconductor wafer fragment in process in accordance with an aspect of the invention. FIG. 2 is a view of the FIG. 1 wafer fragment at a processing step subsequent to that depicted by FIG. 1. FIG. 3 is a view of the FIG. 1 wafer fragment at a processing step subsequent to that depicted by FIG. 2. FIG. 4 is a diagrammatic sectional view of an alternate embodiment semiconductor wafer fragment at a processing step in accordance with an aspect of the invention. FIG. 5 is a diagrammatic sectional view of an example high density plasma etcher usable in accordance with an aspect of the invention. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS This disclosure of the invention is submitted in furtherance of the constitutional purposes of the U.S. Patent Laws "to promote the progress of science and useful arts" (Article 1, Section 8). FIG. 1 illustrates a wafer fragment to be etched indicated generally with reference numeral 10. Such comprises a bulk monocrystalline silicon substrate 12 having an exemplary diffusion region 14 formed therein. In the context of this document, the term "semiconductor substrate" or "semiconductive substrate" is defined to mean any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductive substrates described above. Alternate substrates from substrate 12 are of course usable in the invention. A feature layer 16 to be plasma etched is formed outwardly of substrate 12. In the preferred and reduction-to-practice examples, the feature to be etched within layer 16 is in the form of a contact opening, with layer 16 predominately comprising silicon dioxide, such as BPSG. The invention is not, however, in any way limited to contact opening formation nor to etching into predominately silicon dioxide comprising layers.
Aspects of the invention are applicable to etching other features, by way of example only, damascene trough lines in insulative materials, polysilicon conductive features, and etching of other materials (i.e., Si3N4) to produce features in the form of openings or projections, whether conductive or not conductive. An organic masking layer 18 is formed outwardly of feature layer 16, and is patterned to form the desired feature patterns therethrough. One example organic masking layer is a photoresist, such as SEPR 402 available from Shin-Etsu of Tokyo, Japan. Masking layer 18 has a top outer surface 17. Exemplary thicknesses for layers 16 and 18 are 21,000 Angstroms and 8,300 Angstroms, respectively. An exemplary feature pattern in the form of a contact opening 20 is formed in layer 18, and in preferred implementations has some minimum feature dimension A which is less than or equal to 0.3 micron. Also for purposes of the continuing discussion, feature layer 16 has some thickness B inwardly of feature pattern 20 which is to be etched to form the one feature pattern in feature layer 16. Of course almost universally, identical and/or other features are being etched elsewhere in layer 16, with only a single feature 20 being shown for example. Referring to FIGS. 2 and 3, the at least one feature pattern 20 in organic masking layer 18 is plasma etched into feature layer 16 using organic masking layer 18 as a mask to form a feature 22. The plasma etching comprises at least one etching segment C where at least 30% of thickness B (FIG. 1) of feature layer 16 is etched using an etching gas comprising one gas compound comprising carbon, hydrogen and at least one halogen present in the etching gas at greater than or equal to 70% concentration by volume as compared to all carbon, hydrogen and halogen containing gas compounds in the etching gas. The one etching segment preferably comprises high density plasma, which in the context of this document is defined to mean any plasma etching achieving a density of at least 10⁹ ions/cm³. An example reactor is a dual source, high density plasma etcher such as an IPS Dielectric Etcher from Applied Materials, Inc., of Santa Clara, Calif. Other types of etching tools are also of course contemplated such as, by way of example only, parallel plate etchers that have one or more power supplies and/or etchers that use magnetic fields to affect the motion of charged species inside the chamber. FIG. 2 illustrates approximately 80% of thickness "B" of layer 16 being etched in the one etching segment, with only a little reduction in the thickness of organic masking layer 18 occurring during the etch as typically occurs with such layer during high density plasma etching. The invention also of course contemplates other percentages of thickness being etched. Further, and by way of example only, the plasma etching can comprise only one etching segment where 100% of the thickness of the feature layer is etched using said etching gas. Preferably, the one gas compound is present in the etching gas at greater than or equal to 80% concentration by volume as compared to any other carbon, hydrogen and halogen containing gas compound(s) in the etching gas during the one etching segment. Even more preferably, such gas is present at a 90% concentration by volume, as compared to any other carbon, hydrogen and halogen containing gas compound in the etching gas during the one etching segment.
Even more preferably, such gas is present at a 95% concentration by volume, as compared to any other carbon, hydrogen and halogen containing gas compound in the etching gas during the one etching segment. Even more preferably, such gas is present at a 100% concentration by volume, as compared to any other carbon, hydrogen and halogen containing gas compound(s) in the etching gas during the one etching segment. An example preferred gas compound is CH2F2. An example additional gas compound comprising carbon, hydrogen and at least one halogen present in the etching gas at less than 30% concentration with the CH2F2 is CHF3. The plasma etching during the one segment is preferably void of any etching gases having carbon-nitrogen bonds, carbon-oxygen bonds, and oxygen-oxygen bonds. Plasma etching during the one segment is most preferably effective to produce at least that portion of feature pattern 22 in feature layer 16 formed during the one plasma etching segment to have a sidewall taper, if any, of less than or equal to 5°, with a preferred lack of taper essentially being depicted in FIGS. 2 and 3. Further most preferably, and in accordance with what motivated the invention, top outer surface 17 of organic masking layer 18 will have a roughness proximate feature pattern 20 at a conclusion of the one etching segment which is characterizable by an average value less than 100 Angstroms. This average value is determinable by scanning electron microscopy as an average maximum size of all surface discernible objects of patterned masking layer 18 as measured and averaged along any 0.3 micron length D (FIG. 2) of top outer surface 17 from feature pattern 20. More preferably, the average surface roughness value is less than or equal to 50 Angstroms. Top outer surface roughness created by the plasma etching has been determined to be of some significance in the sidewall roughness of masking layer 18 within feature pattern opening 20, particularly in the implementations using an etching gas comprising one gas compound comprising carbon, hydrogen and at least one halogen present in the etching gas at greater than or equal to 70% concentration by volume as compared to all carbon, hydrogen and halogen containing gas compounds. Regardless, and although somewhat undesirable, the combination of a rough top outer masking layer surface and smooth masking layer feature sidewalls was not observed in reduction-to-practice examples, and was also not observed when operating below the above stated 70% concentration. At and above the above stated minimum 70% concentration, power parameters can be readily selected, if desired, by a person of skill in the art to arrive at a sidewall roughness which matches or shadows that of the top surface roughness. Further, it was observed in reduction-to-practice examples that the masking layer sidewall roughness which was the determining factor in etched feature layer sidewall roughness/striations (and attendant CD change) was that closest to the feature layer. Roughness or striations formed by the etching in the masking layer adjacent the top outer surface, but not where the masking layer joins the feature layer, did not mask transfer into the feature layer.
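The concentration thresholds above can be checked directly from gas flows. The Python sketch below assumes, for illustration, that concentration by volume tracks the sccm flow rates, and that CF4 is counted in the carbon/hydrogen/halogen basis set (the claim wording leaves such membership questions to interpretation); the Ar flow is a hypothetical diluent included only to show that non-counted gases are excluded.

```python
# Sketch: check the volumetric-concentration criterion for the one gas
# compound (e.g., CH2F2) against ALL carbon-, hydrogen-, and halogen-
# containing compounds in the etching gas. Assumes (as an illustration)
# that concentration by volume is proportional to the sccm flow rates.
flows_sccm = {"CH2F2": 50.0, "CHF3": 1.0, "CF4": 1.0, "Ar": 100.0}

# Compounds counted against the criterion; membership of CF4 (no hydrogen)
# is an interpretation made for this sketch. Ar is a hypothetical diluent.
basis = {"CH2F2", "CHF3", "CF4"}

def concentration_pct(gas, flows, basis):
    total = sum(f for g, f in flows.items() if g in basis)
    return 100.0 * flows[gas] / total

pct = concentration_pct("CH2F2", flows_sccm, basis)
print(f"CH2F2 is {pct:.1f}% of the C/H/halogen gases")  # ~96.2%, above 70%
```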
Accordingly, such further etching may or may not increase roughness in the sidewalls of materials 16 and 18. Most preferably, the degree of further etching (i.e., conditions and time of etch) is not so great that the sidewall smoothness of layer 16 created by etching segment C is lost; rather, that smoothness is maintained at the conclusion of the final illustrated etching and also occurs in etching segment E. In the described embodiment, an example additional etching segment E would use an etching gas comprising CH2F2 and CHF3, with the CHF3 being present at greater than 30% by volume of a total of the CH2F2 and CHF3 gases. The above-described first embodiment had the organic masking layer comprising photoresist with the average maximum value of the organic layer top outer surface roughness being that of the photoresist top outer surface at the conclusion of the one etching segment. FIG. 4 illustrates an alternate embodiment 10a. Like objects from the first described embodiment are depicted with the same numerals, with differences being depicted by the suffix "a" or with different numerals. In certain etching applications, the plasma etching can result in another organic masking material layer forming over the depleting original organic masking layer 18 during etching. In the context of this document, "organic" defines any material containing carbon bonded with at least some other elements which are not carbon. FIG. 4 depicts the etching proceeding whereby another organic material layer 19 forms over masking layer 18 during the etching. This layer is formed and removed during the etching and can achieve a nearly constant steady state thickness, or it may be formed and removed as the etching proceeds. This layer is shown in FIG. 4 as forming on the surface of the underlying mask layer and on the mask sidewalls. Some taper (less than or equal to 5 degrees) usually accompanies the growth of the layer on the mask sidewall. Both the formation on the surface of the mask and on the sidewalls of the mask can eliminate or postpone the eventual reformation of rough mask sidewalls and the transfer of this roughness into feature layer 16. In the context of this example, the average maximum value of the organic layer top outer surface roughness will be that of surface 17a of the another organic masking material 19, and not that of photoresist where layer 18 comprises photoresist. Further in the FIG. 4 example, masking layer 18 might constitute an inorganic material, with organic material 19 forming thereover at least during the one etching segment. By way of example only, preferred inorganic masking materials include polysilicon, silicides and metals. The FIGS. 1-3 example also depicts the one etching segment C as being conducted at the start of the plasma etching, and not conducted throughout all of the plasma etching. Alternately by way of example only, the one etching segment could be conducted from the start throughout all of the plasma etching. Further alternately by way of example only, the one etching segment could be conducted at the end of the plasma etching, and not otherwise conducted throughout all of the plasma etching. Further alternately by way of example only, the one etching segment could be conducted between the start and the end of the plasma etching, and not conducted at either the start or the end of the plasma etching.
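The segment C / segment E sequencing just described can be expressed as a minimal recipe structure. In the Python sketch below, the flow values are hypothetical placeholders chosen only so that segment C satisfies the at-least-70% CH2F2 condition and segment E has CHF3 above 30% of the CH2F2/CHF3 total; they are not the patent's stated process window.

```python
# Illustrative two-segment sequence per the FIGS. 2-3 discussion. Segment C
# keeps CH2F2 >= 70% of the C/H/halogen gases; segment E adds CHF3 at more
# than 30% by volume of the CH2F2 + CHF3 total. Flows are hypothetical.
segments = [
    {"name": "C", "flows_sccm": {"CH2F2": 50.0, "CHF3": 5.0}},   # ~91% CH2F2
    {"name": "E", "flows_sccm": {"CH2F2": 30.0, "CHF3": 20.0}},  # 40% CHF3
]

for seg in segments:
    flows = seg["flows_sccm"]
    total = sum(flows.values())
    shares = {g: f"{100.0 * f / total:.0f}%" for g, f in flows.items()}
    print(seg["name"], shares)
```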
In one implementation, it has been discovered that conducting a later-in-time etching in accordance with the above-described preferred 70% or greater concentration results in smoothing of sidewall roughness in the feature layer occurring from an earlier-in-time plasma etching having less than 70% concentration of the subject gas. Accordingly, one aspect of the invention contemplates plasma etching at least one feature pattern into the feature layer using the organic masking layer as a mask comprising first-in-time and second-in-time etching segments. In a first-in-time of the etching segments, an etching gas is utilized which comprises at least two gas compounds, with each comprising carbon, hydrogen and at least one halogen, and each being present in the etching gas at greater than 30% concentration by volume as compared to all carbon, hydrogen and halogen containing gas compounds in the etching gas. The first etching segment produces a first degree of sidewall roughness along a sidewall portion of the one feature pattern being formed in the feature layer. A second etching segment is conducted after the first etching segment, with the second etching segment comprising etching at least 30% of the thickness of the feature layer using an etching gas comprising at least one gas compound present at greater than or equal to 70% concentration by volume as compared to all carbon, hydrogen and halogen containing compounds in the etching gas effective to smooth the sidewall roughness of the first degree to a smoother second degree, for example to less than 250 Angstroms or less than 100 Angstroms. Typically and preferably, the one gas compound in the second etching segment is one of the at least two gas compounds utilized in the first etching segment. In one implementation, the invention contemplates plasma etching the at least one feature pattern into the feature layer using the organic masking layer as a mask comprising first-in-time and second-in-time etching segments. In a first-in-time of the etching segments, an etching gas is utilized which comprises at least two gas compounds, with each comprising carbon, hydrogen and at least one halogen and each being present in the etching gas at greater than 30% concentration by volume as compared to all carbon, hydrogen and halogen containing gas compounds in the etching gas. The first etching segment produces a first degree of top surface roughness of the organic masking layer. A second etching segment is conducted after the first etching segment, with the second etching segment comprising etching at least 30% of said thickness of the feature layer using an etching gas comprising at least one gas compound present at greater than or equal to 70% concentration by volume as compared to all carbon, hydrogen and halogen containing gas compounds in the etching gas effective to smooth the organic masking layer top surface roughness of the first degree to a smoother second degree. In a preferred aspect of this implementation, the first segment effectively produces a rough top, and also rough sidewalls but only proximate the top in the masking layer. The second segment then preferably smooths the top and largely precludes the sidewall roughness from being transferred into the feature layer by stopping masking layer sidewall roughness from migrating downward to adjacent the feature layer. In one implementation, the plasma etching comprises a plurality of etching segments which total at least 30% of the thickness of the feature layer being etched.
The plurality of etching segments use an etching gas comprising one gas compound comprising carbon, hydrogen and at least one halogen present in the etching gas at greater than or equal to 70% concentrations (i.e., not necessarily the same concentration in each segment) by volume as compared to all carbon, hydrogen and halogen containing gas compounds in the etching gas. Preferably, each etching segment of the plurality removes at least 1000 Angstroms of feature layer thickness. The plasma etching also comprises at least one intervening etching segment which is not one of the plurality. The intervening etching segment comprises using an etching gas comprising at least two gas compounds, each comprising carbon, hydrogen and at least one halogen, and each being present in the etching gas at greater than 30% concentration by volume as compared to all carbon, hydrogen and halogen containing gas compounds in the etching gas. The plurality of etching segments, with the intervening segment(s), is effective to produce at least that portion of the one feature pattern in the feature layer to have a sidewall taper, if any, of less than or equal to 5[deg.] and an organic masking layer top outer surface roughness proximate the feature pattern at a conclusion of said plurality of etching segments which is characterizable by an average value less than 100 Angstroms, as is determinable by scanning electron microscopy as an average maximum size of all surface discernible objects of the patterned masking layer as measured and averaged along any 0.3 micron length of top outer surface from the one feature pattern. FIG. 5 is a cross-sectional schematic view of one form of a plasma etcher 200, particularly an IPS Dielectric Etcher from Applied Materials, Inc., of Santa Clara, Calif. The illustrated plasma etcher 200 includes a chamber 203 defined by an RF window 205, an enclosure 207, a hot ring 209, and a substrate assembly chuck 211. The substrate assembly chuck 211 includes a collar 213 and a ceramic base 215 to support a substrate 217, such as a silicon wafer or other substrate. Exhaust ports 219 are defined by gaps between the enclosure 207 and the hot ring 209, and connect to exhaust chambers 221. The RF window 205 and the hot ring 209 are maintained at selected temperatures with respective temperature controllers 261, 263. The temperatures of the RF window 205 and the hot ring 209 are typically maintained between 120-200[deg.] C. and 150-300[deg.] C., respectively. The RF window 205 and the enclosure 207 may be made of either silicon (Si) or silicon carbide (SiC) or a combination thereof, the hot ring 209 may be made of quartz, and the collar 213 may be made of silicon carbide. Silicon, especially when heated, can remove or "getter" fluorine from the chamber 203 and thus can alter the composition of a fluorine containing gas mixture if included in the chamber 203. In this etcher, a first set of induction coils 233 and a second set of induction coils 235 are coaxially placed in proximity to the RF window 205, with the second set 235 placed within the first set 233. RF generators 239, 237 connect to the first and second set of induction coils 233, 235, respectively. An RF bias generator 241 is provided that connects to the substrate assembly chuck 211. RF excitations (RF voltages or currents) from the RF generators 239, 237 are applied to the first and second sets of induction coils 233, 235, respectively, and produce oscillating electric and magnetic fields at the RF window 205.
The RF window 205 and the chamber walls 207 in this example are grounded. Because the RF window 205 is at least partially electrically conducting, the RF window 205 shields the chamber 203 from the oscillating electric fields produced by the coils 233, 235. The oscillating electric fields are either attenuated by or, in some cases, totally blocked by the RF window 205. As a result of the shielding effect of the RF window 205, the oscillating magnetic field produced by the coils 233, 235 is primarily responsible for the generation of a plasma in the chamber 203. The RF generators 237, 239 in the illustrated etcher provide RF excitations at typical frequencies of between about 1.0-3.0 MHz. A gas inlet 251 is connected to a gas supply manifold 253. Gases, which may be gas mixtures, for the chamber 203 are mixed at the gas manifold 253 and supplied to the chamber 203 through the gas inlet 251. A vacuum pump 255 is situated to evacuate the chamber 203 and is connected to the chamber 203 via a valve 256. During etching, the pressure in the chamber may generally be maintained in the range of from about 2 mTorr to 50 mTorr. Example specific parameters utilizing this reactor and CH2F2 and CHF3 gases for the one etching segment are as follows. CH2F2 flow is preferably at from about 45 to about 55 sccm, with CHF3 flow preferably being from 0 to about 15 sccm. Outer power is preferably kept at from 620 to 760 watts, with inner power ranging from 105 to 140 watts. Substrate bias is preferably kept at between 600 and 740 watts. The temperature of the reactor roof is preferably kept at from 130[deg.] to 150[deg.] C., while that of the ring is kept at from 180[deg.] to 220[deg.] C. The backside of the substrate is preferably cooled to from between -20[deg.] C. and +10[deg.] C. Reactor pressure during etching is preferably at or about 25 mTorr. In a first specific reduction-to-practice example, outer power was 725 Watts, inner power was 125 Watts, and bias power was 700 Watts. Gas flow was 100% CH2F2 at 35 sccm. Chuck temperature was -10[deg.] C., window temperature 140[deg.] C., and ring temperature 200[deg.] C. Reactor pressure was 25 mTorr. Time of etch was 100 seconds, and the depth of the etch was 1.2 micron. The top outer surface value for smoothness/roughness was less than 10 Angstroms. The material etched was BPSG. In a second specific reduction-to-practice example, outer power was 900 Watts, inner power was 100 Watts, and bias power was 665 Watts. Gas flow was CH2F2 at 50 sccm, CF4 at 1 sccm and CHF3 at 1 sccm. Chuck temperature was -10[deg.] C., window temperature 140[deg.] C., and ring temperature 200[deg.] C. Reactor pressure was 25 mTorr. Time of etch was 100 seconds, and the depth of the etch was 1.1 micron. The top outer surface smoothness/roughness value was less than 10 Angstroms. The material etched was BPSG. In compliance with the statute, the invention has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the invention is not limited to the specific features shown and described, since the means herein disclosed comprise preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims appropriately interpreted in accordance with the doctrine of equivalents. |
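By way of annotation only (not part of the patent text above): the concentration-by-volume criteria recited in the etch description are straightforward to check from the recited flow rates, under the assumption that sccm flows are proportional to volumetric concentration among the carbon/hydrogen/halogen containing gases. A minimal Python sketch using the second reduction-to-practice example:

```python
# Hypothetical helper, not from the patent: treats sccm flow rates as
# proportional to concentration by volume among the carbon, hydrogen
# and halogen containing gas compounds in the etching gas.

def concentrations(flows_sccm):
    total = sum(flows_sccm.values())
    return {gas: flow / total for gas, flow in flows_sccm.items()}

# Second reduction-to-practice example: CH2F2 at 50 sccm,
# CF4 at 1 sccm and CHF3 at 1 sccm.
mix = concentrations({"CH2F2": 50.0, "CF4": 1.0, "CHF3": 1.0})
print({gas: round(c, 3) for gas, c in mix.items()})

# CH2F2 is ~96% of the C/H/halogen gases, satisfying the >= 70%
# criterion for a smoothing segment.
assert mix["CH2F2"] >= 0.70
```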
Techniques for decimating a first periodic signal to generate a second periodic signal. In an exemplary embodiment, the first periodic signal is divided by a configurable integer ratio divider, and the output of the divider is delayed by a configurable fractional delay. The configurable fractional delay may be noise-shaped using, e.g., sigma-delta modulation techniques to spread the quantization noise of the fractional delay over a wide bandwidth. In an exemplary embodiment, the first and second periodic signals may be used to generate the transmit (TX) and receive (RX) local oscillator (LO) signals for a communications transceiver from a single phase-locked loop (PLL) output. |
CLAIMS 1. A method comprising: decimating a first periodic signal to generate a second periodic signal, the decimating comprising: dividing the frequency of the first periodic signal by a configurable integer ratio to generate an intermediate signal; and delaying the intermediate signal by a configurable delay to generate the second periodic signal. 2. The method of claim 1, a first ratio comprising a ratio of the frequency of the first periodic signal to the frequency of the second periodic signal, the method further comprising varying the configurable integer ratio if the first ratio has a non-zero fractional portion. 3. The method of claim 2, a first ratio comprising a ratio of the frequency of the first periodic signal to the frequency of the second periodic signal, the method further comprising storing an incrementing cycle index, the varying the configurable integer ratio comprising: subtracting a second coefficient from a first coefficient, the second coefficient comprising the floor function of the first ratio times one less than the cycle index, the first coefficient comprising the floor function of the first ratio times the cycle index. 4. The method of claim 1, the delaying by the configurable delay comprising: delaying by less than one period of the first periodic signal. 5. The method of claim 4, a first ratio comprising a ratio of the frequency of the first periodic signal to the frequency of the second periodic signal, the method further comprising storing an incrementing cycle index, the delaying by less than one period comprising delaying the intermediate signal by one period of the first periodic signal times the fractional portion of the first ratio times the cycle index. 6. The method of claim 1, further comprising: calculating a first ratio comprising a ratio of the frequency of the first periodic signal to the frequency of the second periodic signal; storing an incrementing cycle index; accumulating the first ratio with a delayed signal once per cycle index; computing the floor function of the output of the accumulating to generate the configurable integer ratio; subtracting the output of the floor function from the output of the accumulating to generate the configurable delay; and delaying the configurable delay to generate the delayed signal. 7. The method of claim 1, further comprising: calculating a first ratio comprising a ratio of the frequency of the first periodic signal to the frequency of the second periodic signal; storing an incrementing cycle index; accumulating the first ratio with a delayed signal once per cycle index; computing the floor function of the output of the accumulating to generate the configurable integer ratio; subtracting the output of the floor function from the output of the accumulating to generate a first delay; delaying the first delay to generate the delayed signal; and noise-shaping the first delay to generate the configurable delay. 8. The method of claim 7, the noise-shaping comprising applying a first-order sigma-delta modulation to the first delay. 9. The method of claim 1, further comprising: mixing a received signal with a product of the first and second periodic signals. 10. The method of claim 9, further comprising: mixing a signal to be transmitted with the first periodic signal. 11. The method of claim 1, further comprising: mixing a signal to be transmitted with a product of the first and second periodic signals; and mixing a received signal with the first periodic signal. 12.
The method of claim 1, further comprising: mixing a received signal with the first periodic signal; processing the output of the mixing with the first periodic signal; and mixing the output of the processing with the second periodic signal. 13. The method of claim 1, further comprising: decimating the first periodic signal to generate a second quadrature periodic signal, the decimating to generate the second quadrature signal comprising: dividing the frequency of the first periodic signal by a configurable quadrature integer ratio to generate an intermediate quadrature signal; and delaying the intermediate quadrature signal by a configurable quadrature delay to generate the second quadrature periodic signal. 14. An apparatus comprising: an integer division block configured to divide the frequency of a first periodic signal by a configurable integer ratio to generate an intermediate signal; and a delay block configured to delay the intermediate signal by a configurable delay to generate the second periodic signal. 15. The apparatus of claim 14, a first ratio comprising a ratio of the frequency of the first periodic signal to the frequency of the second periodic signal, the configurable integer ratio being varied when the first ratio has a non-zero fractional portion. 16. The apparatus of claim 14, a first ratio comprising a ratio of the frequency of the first periodic signal to the frequency of the second periodic signal, the apparatus configured to store an incrementing cycle index, the apparatus further comprising a ratio generation block configured to subtract a second coefficient from a first coefficient, the second coefficient comprising the floor function of the first ratio times one less than the cycle index, the first coefficient comprising the floor function of the first ratio times the cycle index. 17. The apparatus of claim 14, the delay block configured to delay the intermediate signal by a configurable delay less than one period of the first periodic signal. 18. The apparatus of claim 17, a first ratio comprising a ratio of the frequency of the first periodic signal to the frequency of the second periodic signal, the apparatus configured to store an incrementing cycle index, the delay block configured to delay the intermediate signal by one period of the first periodic signal times the fractional portion of the first ratio times the cycle index. 19. The apparatus of claim 14, a first ratio comprising a ratio of the frequency of the first periodic signal to the frequency of the second periodic signal, the apparatus configured to store an incrementing cycle index, the apparatus further comprising: a clocked summer configured to accumulate the first ratio with a delayed signal once per cycle index; a floor function block configured to compute the floor function of the output of the clocked summer to generate the configurable integer ratio; a summer configured to subtract the output of the floor function block from the output of the clocked summer to generate the configurable delay; and a delay block configured to delay the configurable delay to generate the delayed signal. 20.
The apparatus of claim 14, a first ratio comprising a ratio of the frequency of the first periodic signal to the frequency of the second periodic signal, the apparatus configured to store an incrementing cycle index, the apparatus further comprising: a clocked summer configured to accumulate the first ratio with a delayed signal once per cycle index; a floor function block configured to compute the floor function of the output of the clocked summer to generate the configurable integer ratio; a summer configured to subtract the output of the floor function block from the output of the clocked summer to generate a first delay; a delay block configured to delay the first delay to generate the delayed signal; and a noise-shaping block configured to noise-shape the first delay to generate the configurable delay. 21. The apparatus of claim 20, the noise-shaping block comprising a first-order sigma-delta modulator. 22. The apparatus of claim 14, further comprising: a mixer configured to mix a received signal with a product of the first and second periodic signals. 23. The apparatus of claim 22, further comprising: a mixer configured to mix a signal to be transmitted with the first periodic signal. 24. The apparatus of claim 14, further comprising: a mixer configured to mix a signal to be transmitted with a product of the first and second periodic signals; and a mixer configured to mix a received signal with the first periodic signal. 25. The apparatus of claim 14, further comprising: a first mixer configured to mix a received signal with the first periodic signal; and a second mixer configured to mix a processed output of the first mixer with the second periodic signal. 26. The apparatus of claim 14, further comprising: a quadrature integer division block configured to divide the frequency of the first periodic signal by a configurable quadrature integer ratio to generate an intermediate quadrature signal; and a quadrature delay block configured to delay the intermediate quadrature signal by a configurable quadrature delay to generate a second quadrature periodic signal. 27. An apparatus comprising means for decimating a first periodic signal to generate a second periodic signal. 28. The apparatus of claim 27, the means for decimating comprising means for delaying a signal by a configurable delay to generate the second periodic signal, the means for delaying comprising a means for noise-shaping the delay. 29. A device for wireless communications, the device comprising at least one baseband TX amplifier for amplifying an analog TX signal, an LO signal generator comprising a TX LO signal generator and an RX LO signal generator, an upconverter coupled to the TX LO signal generator and the at least one baseband TX amplifier, a TX filter coupled to the output of the upconverter, a power amplifier (PA) coupled to the TX filter, an RX filter, a low-noise amplifier (LNA) coupled to the RX filter, a downconverter coupled to the RX LO signal generator and the RX filter, and at least one low-pass filter coupled to the output of the downconverter, the LO signal generator comprising: an integer division block configured to divide the frequency of a first periodic signal by a configurable integer ratio to generate an intermediate signal; and a delay block configured to delay the intermediate signal by a configurable delay to generate the second periodic signal; at least one of the TX LO signal generator and the RX LO signal generator configured to buffer the first periodic signal as the LO signal. 30.
The device of claim 29, the LO signal generator further comprising a mixer for mixing the first and second periodic signals, at least one of the TX LO signal generator and the RX LO signal generator configured to buffer an output product of the mixer as the LO signal. 31. The device of claim 29, the LO signal generator further comprising a quadrature integer division block configured to divide the frequency of the first periodic signal by a configurable quadrature integer ratio to generate an intermediate quadrature signal; and a quadrature delay block configured to delay the intermediate quadrature signal by a configurable quadrature delay to generate a second quadrature periodic signal. 32. A computer program product storing code for causing a computer to decimate a first periodic signal to generate a second periodic signal, the code comprising: code for causing a computer to divide the frequency of the first periodic signal by a configurable integer ratio to generate an intermediate signal; and code for causing a computer to delay the intermediate signal by a configurable delay to generate the second periodic signal. |
SIGNAL DECIMATION TECHNIQUES BACKGROUND Field [0001] The disclosure relates to circuit design, and in particular, to techniques for decimating periodic signals such as local oscillator signals. Background [0002] Modern communications devices are often required to process two or more signals having different carrier frequencies. For example, a communications transceiver may simultaneously transmit TX signals on one or more TX carrier frequencies, and receive RX signals on one or more RX carrier frequencies. The TX and RX frequency bands may be separated from each other by a duplex offset frequency. [0003] To accommodate the multiple carrier frequencies, a single communications device may employ multiple phase-locked loops (PLL's) to simultaneously generate the desired frequencies. However, multiple PLL's may consume considerable die area on an integrated circuit, leading to higher cost. [0004] It would be desirable to provide techniques for generating multiple carrier frequencies from a single PLL output by, e.g., decimating the signal generated by the PLL, and mixing the component signals to produce the desired carrier frequencies. It would be further desirable to generally apply such techniques to decimating an arbitrary periodic signal to generate another periodic signal of lower frequency. SUMMARY [0005] An aspect of the present disclosure provides a method comprising decimating a first periodic signal to generate a second periodic signal, the decimating comprising dividing the first periodic signal by a configurable integer ratio to generate an intermediate signal; and delaying the intermediate signal by a configurable delay to generate the second periodic signal. [0006] Another aspect of the present disclosure provides an apparatus comprising: an integer division block configured to divide the frequency of a first periodic signal by a configurable integer ratio to generate an intermediate signal; and a delay block configured to delay the intermediate signal by a configurable delay to generate the second periodic signal. [0007] Yet another aspect of the present disclosure provides an apparatus comprising means for decimating a first periodic signal to generate a second periodic signal. [0008] Yet another aspect of the present disclosure provides a device for wireless communications, the device comprising at least one baseband TX amplifier for amplifying an analog TX signal, an LO signal generator comprising a TX LO signal generator and an RX LO signal generator, an upconverter coupled to the TX LO signal generator and the at least one baseband TX amplifier, a TX filter coupled to the output of the upconverter, a power amplifier (PA) coupled to the TX filter, an RX filter, a low-noise amplifier (LNA) coupled to the RX filter, a downconverter coupled to the RX LO signal generator and the RX filter, and at least one low-pass filter coupled to the output of the downconverter, the LO signal generator comprising: an integer division block configured to divide the frequency of a first periodic signal by a configurable integer ratio to generate an intermediate signal; and a delay block configured to delay the intermediate signal by a configurable delay to generate the second periodic signal; at least one of the TX LO signal generator and the RX LO signal generator configured to buffer the first periodic signal as the LO signal.
BRIEF DESCRIPTION OF DRAWINGS [0009] FIG 1 illustrates an exemplary embodiment of a decimation block according to the present disclosure; [0010] FIG 2 illustrates an exemplary embodiment of a decimation block according to the present disclosure; [0011] FIG 3 illustrates an example of the operation of the decimation block for the values shown in Table 1, wherein f1 / f2 = 2.25; [0012] FIG 4 illustrates an exemplary embodiment of an architecture to compute both Δ(k) and δ(k); [0013] FIG 5 illustrates an exemplary embodiment of a noise shaping block for processing δ(k) to generate a noise-shaped signal δs(k); [0014] FIG 6 illustrates an example of the operation of the decimation block for generating a decimated signal y2Q having a quadrature phase relationship to the signal y2 illustrated in FIG 3; [0015] FIG 7A illustrates an exemplary embodiment of a communications transceiver employing the signal y1 and the decimated signal y2; [0016] FIG 7B illustrates an alternative exemplary embodiment of a communications transceiver employing the signal y1 and the decimated signal y2; [0017] FIG 8 illustrates an exemplary embodiment of a method according to the present disclosure; and [0018] FIG 9 illustrates a block diagram of a design of a wireless communication device in which the techniques of the present disclosure may be implemented. DETAILED DESCRIPTION [0019] The detailed description set forth below in connection with the appended drawings is intended as a description of exemplary embodiments of the present invention and is not intended to represent the only embodiments in which the present invention can be practiced. The term "exemplary" used throughout this description means "serving as an example, instance, or illustration," and should not necessarily be construed as preferred or advantageous over other exemplary embodiments. The detailed description includes specific details for the purpose of providing a thorough understanding of the exemplary embodiments of the invention. It will be apparent to those skilled in the art that the exemplary embodiments of the invention may be practiced without these specific details. In some instances, well known structures and devices are shown in block diagram form in order to avoid obscuring the novelty of the exemplary embodiments presented herein. [0020] FIG 1 illustrates an exemplary embodiment of a decimation block 110 according to the present disclosure. In FIG 1, block 110 accepts an input, or first, periodic signal y1 having frequency f1. In an exemplary embodiment, the input signal y1 may be generated by, e.g., a PLL for a communications device. Alternatively, the input signal y1 need not correspond to the output of a PLL, and may correspond instead, e.g., to another reference signal, e.g., a crystal oscillator output signal, etc. From the input signal y1, block 110 generates an output, or second, periodic signal y2 having frequency f2, wherein f2 is lower than f1. The relationship between f1 and f2 may further be specified as f2 = f1 / d, wherein d is a division factor greater than 1. The function performed by block 110 may be understood as decimation, wherein a higher-frequency signal y1 is decimated to generate a lower-frequency signal y2. [0021] FIG 2 illustrates an exemplary embodiment 200 of the decimation block 110 according to the present disclosure. In FIG 2, the input signal y1 is provided to an integer division block 210 to generate a divided, or intermediate, signal x.
The signal x has a frequency n or n+1 times less than the frequency of the signal y1, depending on the configuration of the division ratio signal 210a. The signal x is further provided to a digital-to-time converter (DTC) 220, which introduces a time delay to the signal x based on the configuration of a digital delay control signal 220a. [0022] In FIG 2, the division ratio signal 210a is generated by a ratio generation block 230. The division ratio signal 210a output by block 230 is also denoted herein as Δ(k), wherein k represents a discrete incrementing cycle index. The delay control signal 220a is generated by a delay generation block 240. The delay signal 220a output by block 240 is also denoted herein as δ(k). In the exemplary embodiment shown, blocks 230 and 240 both accept the signal x output by the integer division block 210 as an input. It will be appreciated that the cycle index k may be incremented by trigger events, e.g., rising edges, in the signal x output by block 210. [0023] In an exemplary embodiment, the division ratio signal 210a at a cycle k may be calculated according to the following equation (Equation 1): Δ(k) = ⌊(f1 / f2) · k⌋ − ⌊(f1 / f2) · (k − 1)⌋; wherein the notation ⌊a⌋ denotes the floor function applied to a, or the greatest integer less than or equal to a. Furthermore, the delay at a cycle k may be generated according to the following equation (Equation 2): δ(k) = frac[(f1 / f2) · k]; wherein the notation frac[b] denotes the fractional portion of the number b, and b may generally be a mixed fraction. [0024] From Equations 1 and 2, it will be appreciated that the integer division block 210 decimates the signal y1 by the integer division ratio Δ(k), while the DTC introduces a delay δ(k) that compensates for instantaneous phase error resulting from division by an integer (e.g., as opposed to division by an exact number) at each cycle k. The following table shows exemplary values of Δ(k) and δ(k) versus k for an exemplary embodiment wherein f1 / f2 = 2.25, as computed according to Equations 1 and 2 (Table 1):

k:     1     2     3     4     5     6     7     8
Δ(k):  2     2     2     3     2     2     2     3
δ(k):  0.25  0.5   0.75  0     0.25  0.5   0.75  0

[0025] FIG 3 illustrates an example of the operation of the decimation block 200 for the values shown in Table 1, wherein f1 / f2 = 2.25. Note FIG 3 is shown for illustrative purposes only, and is not meant to limit the scope of the present disclosure to any particular values shown. [0026] In FIG 3, a signal y1 is shown at 310. Cycles k are enumerated at 301. At 320, the division ratio Δ(k) as computed from Equation 1 is shown versus k. To generate x, the signal y1 is seen to be divided by a ratio of 2 for k equals 1, 2, and 3, and by a ratio of 3 for k equals 4, etc. At 330, the delay δ(k) as computed from Equation 2 is shown versus k. To generate y2, the signal x is seen to be delayed by corresponding amounts 0.25, 0.5, 0.75, 0, etc. At 340, the signal edges of y2 are shown. It will be appreciated that y2 has a frequency that is approximately 2.25 times less than the frequency of y1, according to the example shown. [0027] One of ordinary skill in the art will appreciate that there are various techniques for computing Equations 1 and 2 to arrive at Δ(k) and δ(k), respectively, e.g., by programming in hardware, firmware, or software. FIG 4 illustrates an exemplary embodiment 400 of an architecture to compute both Δ(k) and δ(k). Note FIG 4 is shown for illustrative purposes only, and is not meant to limit the scope of the present disclosure.
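By way of annotation (ours, not part of the disclosure): Equations 1 and 2 are direct to evaluate in software, as paragraph [0027] suggests. A minimal Python sketch that reproduces Table 1, where `ratio` stands for the first ratio f1 / f2:

```python
import math

def div_ratio(k, ratio):
    # Equation 1: Delta(k) = floor(ratio * k) - floor(ratio * (k - 1))
    return math.floor(ratio * k) - math.floor(ratio * (k - 1))

def frac_delay(k, ratio):
    # Equation 2: delta(k) = frac(ratio * k), in units of one period of y1
    return (ratio * k) % 1.0

ratio = 2.25  # f1 / f2, as in Table 1
for k in range(1, 9):
    print(k, div_ratio(k, ratio), frac_delay(k, ratio))
# Prints Delta(k), delta(k) pairs (2, 0.25), (2, 0.5), (2, 0.75), (3, 0.0),
# after which the pattern repeats, matching Table 1.
```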
One of ordinary skill in the art may readily derive alternative architectures for computing Δ(k) and δ(k), and such alternative exemplary embodiments are contemplated to be within the scope of the present disclosure. [0028] In FIG 4, a first ratio f1 / f2 (which ratio is expected to be greater than 1) is input to a clocked summer 410, which also accepts a signal 440a as input. The clocked summer 410 adds f1 / f2 to 440a once every cycle k to generate a signal 410a. The signal 410a is provided to a floor function block 420, which outputs a signal 420a corresponding to the greatest integer less than or equal to the value of signal 410a. The signal 420a may also correspond to Δ(k), as computed according to Equation 1. [0029] Further shown in FIG 4 is a summer 430, which subtracts the signal 420a from the signal 410a to generate a signal 430a. Signal 430a may correspond to δ(k), as computed according to Equation 2. Furthermore, signal 430a is delayed by a delay element 440 to generate the signal 440a, which is provided to the clocked summer 410 as earlier described.
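Again as annotation rather than disclosure text: the FIG 4 loop translates almost line-for-line into code. A behavioral Python sketch, with variables named after the reference numerals above; note that it avoids forming the growing product (f1 / f2) · k of Equations 1 and 2 while producing the same Δ(k), δ(k) sequence, which is presumably why an accumulator structure is attractive in hardware:

```python
import math

def fig4_stream(ratio, n_cycles):
    """Behavioral model of the FIG 4 architecture: a clocked summer
    accumulates f1/f2 with the delayed fractional residue; the floor
    gives the division ratio and the residue gives the delay."""
    delayed = 0.0                 # signal 440a (output of delay element 440)
    out = []
    for _ in range(n_cycles):
        acc = ratio + delayed     # signal 410a (clocked summer 410)
        d = math.floor(acc)       # signal 420a -> division ratio Delta(k)
        frac = acc - d            # signal 430a -> delay delta(k)
        delayed = frac            # fed back through delay element 440
        out.append((d, frac))
    return out

print(fig4_stream(2.25, 4))       # [(2, 0.25), (2, 0.5), (2, 0.75), (3, 0.0)]
```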
[0030] From the description of FIGs 2 and 3, it will be appreciated that the digital-to-time converter (DTC) 220 is designed to convert the digital delay δ(k) into a continuous-time delay for delaying the signal x. In certain situations, quantization error may be present in the digital-to-time conversion, e.g., when the value of the delay computed according to Equation 2 is not precisely represented by the digital precision of either δ(k) or the DTC 220. In an aspect of the present disclosure, δ(k) may be further processed using noise-shaping techniques to advantageously spread any such quantization noise over a wider bandwidth, thereby also reducing the effect of spurs in δ(k). [0031] FIG 5 illustrates an exemplary embodiment 500.1 of a noise shaping block for processing δ(k) to generate a noise-shaped signal δs(k). Note the noise shaping block 500.1 is shown for illustrative purposes only, and is not meant to limit the scope of the present disclosure to any particular techniques for noise shaping. [0032] In FIG 5, δ(k) is provided to a clocked summer 510, which also accepts a signal 550a as input. The clocked summer 510 adds δ(k) to 550a once every cycle k to generate a signal 510a. The signal 510a is provided to a summer 520, which adds a dithering signal 520b to the signal 510a. In an exemplary embodiment, the dithering signal 520b may be, e.g., a pseudorandom signal having amplitude less than a quantization step size of the following quantizer 530. In an exemplary embodiment, the amplitude of the dithering signal is uniformly distributed over a range -q/2 to q/2, wherein q is the quantization step size of the following quantizer 530. It will be appreciated that the addition of the dithering signal 520b may serve to spread the quantization noise in δ(k) over a wider bandwidth, as well as reduce spurious components present in the dithered signal δs(k). [0033] The output 520a of the summer 520 is provided to a quantizer 530, which quantizes the signal 520a with a finite quantization step size. The quantizer 530 may correspond, e.g., to a function performed by the DTC 220 shown in FIG 2. The output 530a of the quantizer may correspond to the noise-shaped delay δs(k). In an exemplary embodiment, the noise-shaped delay δs(k) may be used in place of the delay δ(k) in FIG 2 for delaying the intermediate signal x. The signal 530a is also provided to a summer 540, which subtracts 530a from 510a to generate a signal 540a. Signal 540a is provided to a delay unit 550, which generates a delayed signal 550a to be accumulated with δ(k) using clocked summer 510. [0034] It will be appreciated that the noise-shaping scheme 500.1 is an example of a first-order sigma-delta modulation scheme. One of ordinary skill in the art will appreciate that in alternative exemplary embodiments, this scheme may readily be replaced by other sigma-delta modulation schemes, e.g., second- or third-order sigma-delta modulation schemes. Furthermore, it will be appreciated that architectures known as "error feedback" architectures for delta-sigma modulation may be employed in the design of blocks 400 and 500.1 described herein, and techniques known in the art for designing such architectures are contemplated to be within the scope of the present disclosure. Delta-sigma modulation schemes are further described in, e.g., Schreier, Richard, et al., Understanding Delta-Sigma Data Converters, IEEE Press (2005). Alternative exemplary embodiments incorporating sigma-delta modulation schemes known in the art are also contemplated to be within the scope of the present disclosure.
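As a further annotation (ours): the error-feedback loop of FIG 5 can be sketched behaviorally as below. The dither uniform over -q/2 to q/2 follows paragraph [0032]; the specific step size q = 1/16 is an assumption for illustration only:

```python
import random

def noise_shape(delays, q):
    """First-order sigma-delta shaping of the delay sequence delta(k),
    modeling blocks 510-550 of FIG 5. Returns delta_s(k)."""
    delayed = 0.0                                        # signal 550a
    shaped = []
    for d in delays:
        acc = d + delayed                                # summer 510 -> 510a
        dithered = acc + random.uniform(-q / 2, q / 2)   # summer 520 -> 520a
        q_out = round(dithered / q) * q                  # quantizer 530 -> delta_s(k)
        delayed = acc - q_out                            # summer 540 + delay 550
        shaped.append(q_out)
    return shaped

random.seed(0)  # reproducible dither for the example
print(noise_shape([0.25, 0.5, 0.75, 0.0] * 2, q=1.0 / 16))
```

In this sketch the accumulated quantization error is fed back and subtracted on the next cycle, so the error is first-order high-pass shaped while the dither occasionally tips the quantizer between adjacent steps, which is the spur-spreading behavior described above.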
[0035] In an exemplary embodiment, a decimated signal having a quadrature phase relationship to the decimated signal y2 may be generated according to the present disclosure. For example, for the exemplary embodiment wherein f1 / f2 = 2.25, the division ratio at a cycle k for a quadrature signal y2Q may be generated according to the following equation (Equation 3): ΔQ(k) = Δ(k + 2); and the delay at a cycle k may be generated according to the following equation (Equation 4): δQ(k) = frac[δ(k) + 9/16]. [0036] In light of the present disclosure, one of ordinary skill in the art may readily derive corresponding equations for generating a quadrature decimated signal for other ratios of f1 / f2, and such alternative exemplary embodiments are contemplated to be within the scope of the present disclosure. [0037] FIG 6 illustrates an example of the operation of the decimation block 200 for generating a decimated signal y2Q having a quadrature phase relationship to the signal y2 illustrated in FIG 3. In FIG 6, a signal y1 is shown at 610. Cycles k of y1 are enumerated at 601. At 620, the division ratio ΔQ(k) as computed from Equation 3 is shown versus k. The signal y1 is seen to be divided by a ratio of 2 for k equals 1, by a ratio of 3 for k equals 2, and again by a ratio of 2 for k equals 3 and 4, etc. At 630, the delay δQ(k) as computed from Equation 4 is shown versus k. To generate y2Q, the version of y1 divided by ΔQ(k) is seen to be delayed by corresponding amounts 0.8125, 0.0625, 0.3125, 0.5625, etc. At 640, the signal edges of y2Q are shown. [0038] FIG 7A illustrates an exemplary embodiment 700A of a communications transceiver employing the signal y1 and the decimated signal y2. Note FIG 7A is shown for illustrative purposes only, and is not meant to limit the scope of the present disclosure. [0039] In FIG 7A, a baseband signal to be transmitted 750a is provided to a mixer 740A. The mixer 740A mixes the signal 750a with the signal y1 generated by the TX-RX LO generator 701A, whose frequency f1 is chosen to correspond to the desired RF carrier frequency for the signal to be transmitted. The output of the mixer 740A may be transmitted as signal t1. [0040] The signal y1 is further mixed using a mixer 730A with the decimated signal y2 generated by the TX-RX LO generator 701A. The output of the mixer 730A is filtered by a filter 720A to extract a carrier signal having frequency f1 + f2. In an exemplary embodiment, the frequency f1 + f2 may be chosen to correspond to the desired RF carrier frequency for the received signal, e.g., f2 may be chosen to correspond to the frequency offset between the TX and RX carrier frequencies for the transceiver 700A. [0041] It will be appreciated that mixing with quadrature signals may be readily incorporated into the architecture shown in FIG 7A. Furthermore, in alternative systems, the TX and RX carrier frequencies, and corresponding TX and RX LO's, may readily be interchanged. In yet alternative systems, the frequency f1 of signal y1 need not correspond to either of the TX or RX carrier frequencies, and may instead be another frequency. For example, f2 may be chosen such that f1 + f2 corresponds to the TX carrier frequency, and f1 - f2 corresponds to the RX carrier frequency, or vice versa. Such alternative exemplary embodiments are contemplated to be within the scope of the present disclosure. [0042] FIG 7B illustrates an alternative exemplary embodiment 700B of a communications transceiver employing the signal y1 and the decimated signal y2. Note FIG 7B is shown for illustrative purposes only, and is not meant to limit the scope of the present disclosure. [0043] In FIG 7B, a baseband signal to be transmitted 750b is provided to a mixer 730B. The mixer 730B mixes the signal 750b with the signal y1, whose frequency f1 is chosen to correspond to the desired RF carrier frequency for the signal to be transmitted. The output of the mixer 730B may be transmitted as signal t2. [0044] The signal y1 is further provided to a mixer 710B, which mixes y1 with a received signal r2. The output of the mixer 710B is provided to a second mixer 720B, which mixes the output of the mixer 710B with the decimated signal y2. In an exemplary embodiment, the frequency f1 may be chosen to place the output of mixer 710B at a first intermediate frequency (IF) corresponding to f2, to be subsequently down-converted by the decimated signal y2. [0045] FIG 8 illustrates an exemplary embodiment of a method 800 according to the present disclosure. Note FIG 8 is shown for illustrative purposes only, and is not meant to limit the scope of the present disclosure to any particular method. [0046] In FIG 8, at block 810, the method includes decimating a first periodic signal to generate a second periodic signal. [0047] At block 812, the method includes dividing the frequency of the first periodic signal by a configurable integer ratio to generate an intermediate signal. [0048] At block 814, the method includes delaying the intermediate signal by a configurable delay to generate the second periodic signal. [0049] FIG 9 illustrates a block diagram of a design of a wireless communication device 900 in which the techniques of the present disclosure may be implemented. FIG 9 shows an example transceiver design. In general, the conditioning of the signals in a transmitter and a receiver may be performed by one or more stages of amplifier, filter, upconverter, downconverter, etc. These circuit blocks may be arranged differently from the configuration shown in FIG 9. Furthermore, other circuit blocks not shown in FIG 9 may also be used to condition the signals in the transmitter and receiver. Some circuit blocks in FIG 9 may also be omitted. [0050] In the design shown in FIG 9, wireless device 900 includes a transceiver 920 and a data processor 910. The data processor 910 may include a memory (not shown) to store data and program codes.
Transceiver 920 includes a transmitter 930 and a receiver 950 that support bi-directional communication. In general, wireless device 900 may include any number of transmitters and any number of receivers for any number of communication systems and frequency bands. All or a portion of transceiver 920 may be implemented on one or more analog integrated circuits (ICs), RF ICs (RFICs), mixed-signal ICs, etc. [0051] A transmitter or a receiver may be implemented with a super-heterodyne architecture or a direct-conversion architecture. In the super-heterodyne architecture, a signal is frequency converted between radio frequency (RF) and baseband in multiple stages, e.g., from RF to an intermediate frequency (IF) in one stage, and then from IF to baseband in another stage for a receiver. In the direct-conversion architecture, a signal is frequency converted between RF and baseband in one stage. The super-heterodyne and direct-conversion architectures may use different circuit blocks and/or have different requirements. In the design shown in FIG 9, transmitter 930 and receiver 950 are implemented with the direct-conversion architecture. [0052] In the transmit path, data processor 910 processes data to be transmitted and provides I and Q analog output signals to transmitter 930. In the exemplary embodiment shown, the data processor 910 includes digital-to-analog-converters (DAC's) 914a and 914b for converting digital signals generated by the data processor 910 into the I and Q analog output signals. The DAC's 914a and 914b may each be provided with a clock signal 915a generated by a clock signal generator 915. [0053] Within transmitter 930, lowpass filters 932a and 932b filter the I and Q analog output signals, respectively, to remove undesired images caused by the prior digital-to-analog conversion. Amplifiers (Amp) 934a and 934b amplify the signals from lowpass filters 932a and 932b, respectively, and provide I and Q baseband signals. An upconverter 940 upconverts the I and Q baseband signals with I and Q transmit (TX) local oscillator (LO) signals from a TX LO signal generator 970 and provides an upconverted signal. A filter 942 filters the upconverted signal to remove undesired images caused by the frequency upconversion as well as noise in a receive frequency band. A power amplifier (PA) 944 amplifies the signal from filter 942 to obtain the desired output power level and provides a transmit RF signal. The transmit RF signal is routed through a duplexer or switch 946 and transmitted via an antenna 948. [0054] In the receive path, antenna 948 receives signals transmitted by base stations and provides a received RF signal, which is routed through duplexer or switch 946 and provided to a low noise amplifier (LNA) 952. The received RF signal is amplified by LNA 952 and filtered by a filter 954 to obtain a desirable RF input signal. A downconverter 960 downconverts the RF input signal with I and Q receive (RX) LO signals from an RX LO signal generator 980 and provides I and Q baseband signals. The I and Q baseband signals are amplified by amplifiers 962a and 962b and further filtered by lowpass filters 964a and 964b to obtain I and Q analog input signals, which are provided to data processor 910. In the exemplary embodiment shown, the data processor 910 includes analog-to-digital-converters (ADC's) 916a and 916b for converting the analog input signals into digital signals to be further processed by the data processor 910.
The ADC's 916a and 916b may each be provided with a clock signal 915b generated by the clock signal generator 915. [0055] The LO signal generator 974 includes TX LO signal generator 970 and RX LO signal generator 980. TX LO signal generator 970 generates the I and Q TX LO signals used for frequency upconversion. RX LO signal generator 980 generates the I and Q RX LO signals used for frequency downconversion. Each LO signal is a periodic signal with a particular fundamental frequency. A PLL 972 receives timing information from data processor 910 and generates a signal used to adjust the frequency and/or phase of the RX and TX LO signals generated by 970 and 980. In an exemplary embodiment, the PLL 972, TX LO signal generator 970, and RX LO signal generator 980 may incorporate the techniques of the present disclosure. [0056] Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. [0057] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the exemplary embodiments of the invention. [0058] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. [0059] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two.
A software module may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal. In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. The previous description of the disclosed exemplary embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these exemplary embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
An embodiment of a system for avoiding race conditions when using edge-triggered interrupts includes a processor that asserts an interrupt pending signal in response to the receipt of an edge-triggered interrupt. A power management device receives the interrupt pending signal. If the processor is in a low power state when it asserts the interrupt pending signal, then the power management device causes the processor to enter a high power state to allow the processor to service the pending interrupt. |
Claims What is claimed is: 1. A method, comprising: asserting an edge-triggered interrupt signal to a processor; and delivering an interrupt pending signal from the processor to a power management device. 2. The method of claim 1, further comprising the power management device causing the processor to enter a high power state if the processor is in a low power state when the processor delivers the interrupt pending signal to the power management device. 3. The method of claim 2, wherein delivering an interrupt pending signal includes delivering the interrupt pending signal from the processor to the power management device over a single signal line coupled between a single processor pin and the power management device. 4. The method of claim 3, wherein causing the processor to enter a high power state includes the power management device deasserting a stop clock signal. 5. A method, comprising: asserting an edge-triggered interrupt signal to a processor; setting a bit within the processor indicating that an interrupt is pending; and polling the processor to determine if an interrupt is pending. 6. The method of claim 5, wherein polling the processor to determine if an interrupt is pending includes polling the processor to determine if an interrupt is pending only if the processor is in a low power state. 7. The method of claim 6, further comprising causing the processor to enter a high power state if the polling of the processor reveals that an interrupt is pending. 8. The method of claim 7, wherein causing the processor to enter a high power state includes deasserting a stop clock signal delivered from a power management device to the processor. 9. A system, comprising: a processor including a local interrupt controller and an interrupt pending signal output; an input/output interrupt controller coupled to the processor, the input/output interrupt controller to deliver an edge-triggered interrupt signal to the processor; and a power management device including an interrupt pending signal input coupled to the interrupt pending signal output of the processor, the processor to assert the interrupt pending signal in response to the delivery of the edge-triggered interrupt signal. 10. The system of claim 9, wherein the processor further includes a stop clock signal input, the processor to cease executing instructions in response to an assertion of the stop clock signal by the power management device. 11. The system of claim 10, the power management device to cause the processor to enter a high power state if the processor is in a low power state when it asserts the interrupt pending signal. 12. The system of claim 11, wherein the power management device causes the processor to enter the high power state by deasserting the stop clock signal. 13. A power management device, comprising: an interrupt pending signal input, an assertion of the interrupt pending signal to indicate that a processor has an interrupt pending; and a processor power management signal output, the power management device to cause the processor to enter a high power state by signaling to the processor to enter the high power state via the processor power management signal if the processor is in a low power state when it asserts the interrupt pending signal. 14. The power management device of claim 13, wherein the processor power management signal is a stop clock signal, and further wherein the power management device causes the processor to enter a high power state by deasserting the stop clock signal. 15.
A processor, comprising: a local interrupt controller to receive an edge-triggered interrupt signal; and an interrupt pending signal output, the processor to assert the interrupt pending signal in response to the receipt of the edge-triggered interrupt signal. 16. The processor of claim 15, further comprising a stop clock signal input, the processor to cease executing instructions in response to an assertion of the stop clock signal, the processor further to recommence execution of instructions in response to a deassertion of the stop clock signal. |
METHOD AND APPARATUS FOR AVOIDING RACE CONDITION WITH EDGE-TRIGGERED INTERRUPTS FIELD OF THE INVENTION The present invention pertains to the field of computer systems. More particularly, this invention pertains to the field of avoiding race conditions when using edge-triggered interrupts. BACKGROUND OF THE INVENTION Many of today's microprocessors (referred to as "processors" herein) support a protocol in which the computer system interrupt controller is split between the processor and one or more external interrupt controllers. The portion included in the processor is typically referred to as a "local" interrupt controller and the portions maintained in external devices are typically referred to as "input/output" interrupt controllers. These interrupt controllers may support both level-triggered and edge-triggered interrupt signaling. In addition, some external devices may be capable of delivering edge-triggered or level-triggered interrupt indications to the processor's local interrupt controller without any intervening external input/output interrupt controller. When a level-triggered interrupt signal is delivered from the input/output interrupt controller to the local interrupt controller, the interrupt remains pending in the input/output interrupt controller until an explicit acknowledgement is received from the processor. However, when edge-triggered interrupt signaling is used, the input/output interrupt controller does not need to "remember" that the interrupt is pending because with edge-triggered interrupt signaling the processor does not acknowledge the interrupt. Edge-triggered interrupt signaling has some advantages over level-triggered interrupts. The primary advantage is that the processor can avoid the acknowledge cycles and status reads that are required with level-triggered interrupts, thus improving overall system performance. Edge-triggered interrupts cause a problem, however, in the area of power management. In particular, if an edge-triggered interrupt is delivered from the input/output interrupt controller to the local interrupt controller at about the same time that the processor is entering a low-power state, the interrupt will not be serviced (because the processor is not currently executing instructions due to the low power state) and the processor will remain in the low power state because the system's power management logic has no knowledge that an interrupt is pending (the input/output interrupt controller does not "remember" the pending edge-triggered interrupts). Thus, the interrupt remains pending and unserviced until the power management logic causes the processor to enter a high power state due to some other system event. This latency that results from edge-triggered interrupts arriving at the processor at about the same time the processor is entering a low power state results in lower overall system performance and lost interrupts that may result in functional failures. A separate problem occurs when a level-triggered interrupt is directly delivered by a peripheral to the processor without any visibility to the input/output interrupt controller, or if another input/output interrupt controller is used that does not have a connection to the power management logic. As with the edge-triggered case described above, the power management logic has no mechanism to detect the pending interrupt in the CPU.
The processor may remain in a low power state for too long, resulting in lower overall system performance, lost interrupts, and functional failures. BRIEF DESCRIPTION OF THE DRAWINGS The invention will be understood more fully from the detailed description given below and from the accompanying drawings of embodiments of the invention which, however, should not be taken to limit the invention to the specific embodiments described, but are for explanation and understanding only. Figure 1 is a block diagram of one embodiment of a system including an interrupt pending signal delivered by a processor to a power management unit. Figure 2 is a flow diagram of one embodiment of a method for avoiding race conditions when using edge-triggered interrupts. DETAILED DESCRIPTION Figure 1 is a block diagram of one embodiment of a system 100 for avoiding race conditions when using edge-triggered interrupts. The system 100 includes a processor 110. The processor includes a local interrupt controller 112. The system 100 also includes a system logic device 120 that includes a power management unit 124 and an input/output interrupt controller 122. Other embodiments are possible that include other devices that can directly indicate interrupts to the local interrupt controller 112. These devices may include a peripheral device or another input/output interrupt controller. The input/output interrupt controller 122 asserts a variety of interrupts to the local interrupt controller 112. Interrupts may be asserted for a wide range of reasons. Some of these interrupts may be edge-triggered and some may be level-triggered. As interrupts are asserted by the input/output interrupt controller 122, the power management unit 124 receives notification of the asserted interrupts. The power management unit 124 controls whether the processor 110 is in a low power state or a high power state. The power management unit 124 places the processor 110 in a low power state by asserting a stop clock signal 113. Other embodiments are possible using other techniques for controlling power consumption in processors. The processor 110 ceases to execute instructions in response to an assertion of the stop clock signal 113. The power management unit 124 places the processor 110 in a high power state by deasserting the stop clock signal 113, thereby allowing the processor 110 to resume execution of instructions. In addition to asserting the stop clock signal 113, the power management unit may take additional action to reduce power consumption while placing the processor 110 into a low power state, including blocking clock signals and reducing voltage levels. In order to avoid the race condition that can occur when the power management unit 124 places the processor 110 into a low power state before the processor 110 has an opportunity to service an interrupt recently received by the local interrupt controller 112, the processor asserts an interrupt pending signal 111. The interrupt pending signal 111 alerts the power management unit 124 that an interrupt is still pending in the processor 110. In response to the assertion of the interrupt pending signal 111, the power management unit 124 deasserts the stop clock signal 113, thereby allowing the processor 110 to resume executing instructions and to service the pending interrupt.
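The handshake just described lends itself to a compact illustration. The following C sketch models the Figure 1 signals as struct fields; the struct layout and function names are invented for illustration and are not part of the described embodiment.

```c
#include <stdbool.h>

/* Illustrative model of the Figure 1 handshake. The struct fields stand in
 * for the physical signal lines; names follow the reference numerals above. */
struct system_model {
    bool stop_clk;       /* stop clock signal 113, driven by PMU 124 */
    bool intr_pending;   /* interrupt pending signal 111, driven by CPU 110 */
    bool local_apic_irq; /* edge-triggered interrupt latched in controller 112 */
};

/* Processor side: latch an incoming edge and raise the pending signal. */
void cpu_receive_edge_interrupt(struct system_model *s)
{
    s->local_apic_irq = true;
    s->intr_pending = true;  /* exposes the pending interrupt to the PMU */
}

/* Power management unit side: on seeing the pending signal while the stop
 * clock is asserted, release the stop clock so the processor can wake. */
void pmu_evaluate(struct system_model *s)
{
    if (s->intr_pending && s->stop_clk)
        s->stop_clk = false; /* deassert 113 -> processor resumes execution */
}
```

In hardware the same behavior would be combinational or clocked logic; the evaluate function here merely stands in for the power management unit's response to the interrupt pending signal 111.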
If the power management unit 124 has taken additional action to reduce power consumption while the processor 110 is in the low power state, such as blocking clock signals or reducing voltages, then the power management unit 124 reverses those actions in further response to the assertion of the interrupt pending signal 111. In embodiments including peripheral devices or other input/output interrupt controllers that communicate edge-triggered or level-triggered interrupts directly to the processor 110 without delivering a notification of the interrupts to the power management unit 124, the processor 110 asserts the interrupt pending signal 111 to indicate to the power management unit 124 that an interrupt is pending and the system should be brought to a high power state. In one embodiment, the processor 110 uses a dedicated pin for the interrupt pending signal 111. Other embodiments are possible where the interrupt pending signal is multiplexed on a pin with another signal. For example, the interrupt pending signal may share a pin with a floating point error signal. The processor 110 can use a select bit within the processor 110 to indicate whether an assertion of the interrupt pending/floating point error signal was used to indicate a floating point error or a pending interrupt. The system logic device 120 may likewise use a select bit to indicate whether the assertion of the interrupt pending/floating point error signal was used to indicate a floating point error or a pending interrupt. Further, although the discussion above describes an interrupt pending signal that has only two states (either asserted or not asserted), other embodiments are possible where more than one state can be communicated over the interrupt pending signal. Also, although the system 100 includes a single signal line for the interrupt pending signal 111, other embodiments are possible using more than one signal line. The system 100 described above uses an interrupt pending signal 111 delivered from the processor 110 to the power management unit 124. Other embodiments are possible where, instead of the processor delivering a signal to the power management unit, the system logic device or other system component may periodically poll the processor to determine whether an interrupt is pending or not. The system 100 described above includes only one processor 110. However, other embodiments are possible where more than one processor may be included in the system. The pending interrupt indications from the separate processors may be logically combined to form one pending interrupt indication to the power management unit 124, or the power management unit 124 may receive a separate indication from each of the separate processors. Figure 2 is a flow diagram of one embodiment of a method for avoiding race conditions when using edge-triggered interrupts. At block 210, an edge-triggered interrupt is asserted to a processor. An interrupt pending signal is asserted from the processor to a power management device at block 220. The interrupt pending signal exposes to the power management device that an interrupt is pending; the power management device would not otherwise have this information. At block 230, a determination is made as to whether the processor is in a low power state or not. If the processor is not in a low power state, then block 240 indicates that normal system operation continues and no action is required by the power management device.
If, however, the processor is in a low power state, then at block 250 the power management device causes the processor to enter a high power state to allow the processor to service the pending interrupt. The method described above in connection with Figure 2 is not limited to indication of pending edge-triggered interrupts. The interrupt pending indication can be utilized for both edge-triggered and level-triggered interrupts. In the foregoing specification the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense. Reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the invention. The various appearances of "an embodiment," "one embodiment," or "some embodiments" are not necessarily all referring to the same embodiments. |
In some embodiments, the invention involves protecting a platform using location-based data and, more specifically, using the location-based data to ensure that the platform has not been stolen or subjected to unauthorized access. In some embodiments, a second level of security, such as a key fob, badge, or other source device having an identifying RFID, is used for added security. Other embodiments are described and claimed. |
1. A system for protecting a computing platform from unauthorized access, comprising: a host processor coupled to a first wireless communications device to receive location-based information from a positioning device; a firmware service to operate during boot to verify that the computing platform is authorized for operation based at least on the location-based information received from the positioning device and a predefined platform policy; and a runtime service to run after booting, the runtime service to verify that the computing platform is authorized for operation based at least on the location-based information received from the positioning device and the predefined platform policy. 2. The system of claim 1, wherein the runtime service is executed on a second processor different from the host processor, the second processor being coupled to the platform and in communication with the host processor. 3. The system of claim 1, wherein the runtime service is executed on the host processor. 4. The system of claim 1, wherein the firmware service is to allow normal operation when the platform is within the range of locations defined by the platform policy and an identifier associated with the platform is authenticated. 5. The system of claim 4, wherein the firmware service is to prohibit the platform from completing booting when the platform is outside the range of locations defined by the platform policy or if the identifier associated with the platform is not authenticated. 6. The system of claim 1, wherein the runtime service is to allow normal operation when the platform is within the range of locations defined by the platform policy and an identifier associated with the platform is authenticated. 7. The system of claim 6, wherein, when the platform is outside the range of locations defined by the platform policy or if the identifier associated with the platform is not authenticated, the firmware service is to perform at least one of locking the platform, forcing the platform to run a screensaver, or shutting down the platform. 8. The system of claim 1, wherein the platform is configured to send an alert when the platform fails to be authorized for operation. 9. The system of claim 8, wherein the alert is sent by at least one of a network device coupled to the host processor or a network device coupled to a second processor on the platform. 10. The system of claim 1, wherein the authorization is further based on detecting that the platform is within a predefined range of at least one security device. 11. The system of claim 10, wherein the security device comprises a physical device having a passive or active radio frequency identifier known to the platform policy. 12. The system of claim 1, wherein the location-based information includes an indicator of whether the platform is within range of an authorized network. 13. The system of claim 1, wherein the positioning device is a global positioning satellite system. 14. The system of claim 1, wherein the positioning device is a local positioning system within range of the platform. 15. A method for protecting a computing platform from unauthorized access, comprising: receiving location-based information from a positioning device during boot and at runtime; determining, based on the received location-based information and a platform policy, whether the platform is within a predefined range of locations; sending a platform identifier to an ID authenticator on a web server; receiving one of an authentication confirmation or an authentication failure from the ID authenticator; determining whether the platform is authorized to operate at the determined current location based on the received location-based information, the authentication confirmation/failure of the platform identifier, and the platform policy; when the platform is authorized to operate, allowing normal boot and runtime operations; and when the platform is not authorized to operate, performing at least one of the following operations based on the platform policy and whether the platform is in a boot mode or a runtime mode: prohibiting the platform from booting; locking the platform, when at runtime; shutting down the platform, when at runtime; and sending an alert identifying the failure to authorize normal operation of the platform. 16. The method of claim 15, wherein determining whether the platform is authorized to operate is performed by a firmware service during boot and by a system service at runtime. 17. The method of claim 16, wherein the system service at runtime is executed on one of the host processor or a second processor on the platform, wherein the host processor and the second processor are coupled to separate network devices that are capable of communicating independently with at least one network device. 18. The method of claim 15, further comprising allowing normal boot operations when the platform is within the range of locations defined by the platform policy and the platform ID is validated. 19. The method of claim 18, further comprising prohibiting the platform from completing booting when the platform is out of the range of locations defined by the platform policy or if an authentication failure is received from the ID authenticator. 20. The method of claim 15, further comprising allowing normal runtime operation when the platform is within the range of locations defined by the platform policy and the platform ID authentication is validated. 21. The method of claim 20, further comprising performing at least one of the following operations when the platform is outside the range of locations defined by the platform policy or when an authentication failure is received from the ID authenticator: locking the platform, forcing the platform to run a screensaver, or shutting down the platform. 22. The method of claim 15, further comprising sending an alert to a network device when the platform fails to be authorized for operation. 23. The method of claim 15, wherein determining whether the platform is authorized to operate at the current location further comprises: detecting the proximity of at least one security device, wherein the authorization is revoked or stopped when the at least one security device fails to remain in the vicinity of the platform, and wherein the platform policy defines a proximity threshold and the authorized security devices. 24. The method of claim 23, wherein the security device comprises a physical device having a passive or active radio frequency identifier known to the platform policy. 25. The method of claim 15, wherein the location-based information includes an indicator of whether the platform is within range of an authorized network. 26. The method of claim 15, wherein the positioning device is a global positioning satellite system. 27. The method of claim 15, wherein the positioning device is a local positioning system within range of the platform. 28. A computer-readable medium storing instructions for protecting a computing platform from unauthorized access that, when executed on at least one processor on the platform, cause the platform to: receive location-based information from a positioning device during boot and at runtime; determine, based on the received location-based information and a platform policy, whether the platform is within a predefined range of locations; send a platform identifier to an ID authenticator on a web server; receive one of an authentication confirmation or an authentication failure from the ID authenticator; determine whether the platform is authorized to operate at the determined current location based on the received location-based information, the authentication confirmation/failure of the platform identifier, and the platform policy; when the platform is authorized to operate, allow normal boot and runtime operations; and when the platform is not authorized to operate, perform at least one of the following operations based on the platform policy and whether the platform is in a boot mode or a runtime mode: prohibit the platform from booting; lock the platform, when at runtime; shut down the platform, when at runtime; and send an alert identifying the failure to authorize normal operation of the platform. 29. The medium of claim 28, wherein determining whether the platform is authorized to operate is performed by a firmware service during boot and by a system service at runtime. 30. The medium of claim 29, wherein the system service at runtime is executed on one of the host processor or a second processor on the platform, wherein the host processor and the second processor are coupled to separate network devices that are capable of communicating independently with at least one network device. 31. The medium of claim 28, further comprising instructions for allowing normal boot operations when the platform is within the range of locations defined by the platform policy and the platform ID is validated. 32. The medium of claim 31, further comprising instructions for prohibiting the platform from completing booting when the platform is out of the range of locations defined by the platform policy or if an authentication failure is received from the ID authenticator. 33. The medium of claim 28, further comprising instructions for allowing normal runtime operation when the platform is within the range of locations defined by the platform policy and the platform ID authentication is validated. 34. The medium of claim 33, further comprising instructions for performing at least one of the following when the platform is outside the range of locations defined by the platform policy or when an authentication failure is received from the ID authenticator: locking the platform, forcing the platform to run a screensaver, or shutting down the platform. 35. The medium of claim 28, further comprising instructions for sending an alert to a network device when the platform fails to be authorized for operation. 36. The medium of claim 28, wherein determining whether the platform is authorized to operate at the current location further comprises instructions for: detecting the proximity of at least one security device, wherein the authorization is revoked or stopped when the at least one security device fails to remain in the vicinity of the platform, and wherein the platform policy defines a proximity threshold and the authorized security devices. 37. The medium of claim 36, wherein the security device comprises a physical device having a passive or active radio frequency identifier known to the platform policy. 38. The medium of claim 28, wherein the location-based information includes an indicator of whether the platform is within range of an authorizing network. 39. The medium of claim 28, wherein the positioning device is a global positioning satellite system. 40. The medium of claim 28, wherein the positioning device is a local positioning system within range of the platform. |
Systems and methods for providing additional security to a platform with location-based data. This application is a divisional application of a patent application with the same title, with a filing date of Dec. 25, 2009 and application number 200911000240.7. TECHNICAL FIELD Embodiments of the present invention generally relate to securing a platform using location-based data and, more particularly, to using location-based data to ensure that the platform is not stolen or subjected to unauthorized access. In some embodiments, a second level of security, such as a key fob, is used for additional security. BACKGROUND Various mechanisms exist to protect mobile computing devices from theft or unauthorized access. Password protection for hard drive and operating system logins is commonly used in existing systems. While existing platforms are protected by power-on passwords, screensaver passwords, and network login passwords, they cannot protect the platform from inadvertent failures of user judgment. A password may be stolen or captured, allowing unauthorized people to access the device. An example of such a judgment failure may relate to the theft of an unattended laptop computer, or to a user's failure to choose reasonable password protection. BRIEF DESCRIPTION OF THE DRAWINGS The features and advantages of the present invention will become apparent from the following detailed description of the invention, in which: Figure 1 is a block diagram illustrating a very high level system according to an embodiment of the invention comprising a platform having location-based components; Figure 2 illustrates the basic logic for a position sensor according to an embodiment of the invention; Figure 3 shows an exemplary formula for calculating the received power Powerr according to an embodiment of the present invention; Figure 4 is a flowchart illustrating an exemplary method for securing a platform using location-based information in accordance with an embodiment of the present invention; and Figure 5 is a block diagram illustrating an example platform for implementing the features disclosed herein according to an embodiment of the invention. DETAILED DESCRIPTION Embodiments of the present invention relate to systems and methods that use location-based data to protect a platform from unauthorized access or use. For platforms built as "connected" devices, these devices can be tuned, via the setting of policy variables, to accommodate the presence of a familiar network and the presence of authorized users. If any of these policy variables is not satisfied, an alternative boot path can be started (during power-up). If the platform is already up and the operating system (OS) is running, the platform can immediately enter a "locked" mode when the policy variables change to indicate unauthorized use. Examples of these situations may be when a platform detects no familiar network, detects a physical intrusion, or, in a location-enabled platform, when there is no suitable radio frequency (RF) transceiver or RFID (RF identifier) near the platform. Reference in the specification to "one embodiment" or "an embodiment" of the present invention means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention.
Thus, the appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. For purposes of explanation, specific structures and details are set forth to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that embodiments of the present invention may be practiced without the specific details set forth herein. In addition, well-known features may be omitted or simplified so as not to obscure the present invention. Various examples may be given throughout this specification. These are merely descriptions of specific embodiments of the invention; the scope of the invention is not limited to the examples given. With the advent of platform components that enhance the manageability of remote platforms, for example using manageability technologies available from Intel Corporation such as "Active Management Technology" (IAMT), one can take advantage of the out-of-band nature of the communications environment to report potential security breaches to the appropriate authorities. For more information about IAMT, see the public Internet URL www * intel * .com / technology / manage / iamt /, where some of the periods in URLs in this document are replaced with asterisks to avoid unintentional hyperlinks. Figure 1 is a block diagram illustrating a very high level system in accordance with an embodiment of the present invention, wherein the system includes a platform having location-based components. In an embodiment, the platform 101 is coupled to an RF device 103. The RF device may be used to receive location information from a global positioning satellite (GPS) system 107. In another embodiment, it may be important that the platform be used only in a subset of rooms within a building, for example when the platform is located within a secure building; the location is still important in this case. Here, the RF device 103 may be in communication with a more localized positioning device (not shown) located within the building, the positioning device having a more limited range than the satellite 107. In any case, the RF device 103 receives the location-based information from the positioning device. In an embodiment, the platform 101 may detect whether it is in range of, or connected to, a known network 109. One mechanism may ensure that the platform is booted near a known network 109; the lack of such a known/familiar environment may prompt the platform 101 to take an alternative boot path, which may require additional authentication of the user 105. In an embodiment, the platform 101 may be equipped with a manageability engine or microprocessor capable of out-of-band communication 111, such as "Active Management Technology" (IAMT). This out-of-band network connection can report to remote authorities the potential security issues associated with attempts to boot the system without authorization. By using platform-based heuristics (eg, intrusion detection without password confirmation, location-based operations, etc.), the platform 101 may change its behavior to automatically run an alternative boot path to deter use of the platform, and may report to a central agency that the platform is being used in an unauthorized manner.
The heuristics may include detection of the network 109; the location from the location source 107; or a security device with an RFID 120, such as a key fob 121, a token card 123, or a biometric reader 125 for detecting an authorized user. Other passive or active security devices can be used. Based on the detected parameters, the platform policy indicates which alternative boot path to launch, whether to send a warning, and whether to ask the user for a password. Embodiments of the present invention differ from existing systems in that they do not simply rely on shared secret information that may be forgotten, such as pass-phrases. Using location information (eg, a familiar network) or RFID data (eg, data embedded in a person's corporate identity badge) requires no extra thought from the user, or relies on something most users must already carry in order to enter, for example, a physical building. Using relatively low-cost techniques (eg, RF transceivers), the presence or absence of a security device (eg, an object such as a company identification badge with an RFID, or a fob, etc.) can be detected. The platform policy directs a service that runs continuously on the platform to detect the security device and verify that the object is in the vicinity of the transceiver, thus authorizing the use of the given system. This object sensing also allows the platform to detect when the authorized user has left the detection radius of the transceiver or is beyond the range threshold. When the proximity test fails, the service can automatically lock the platform out of service. This same solution can be applied to various types of platforms. For mobile systems, it is common for laptop computers to be stolen (eg, from cars, airports, etc.), but the system will not boot when a thief attempts to use it, due to the lack of the proper proximity identification. For desktop systems, platform policies may indicate that the user is authorized to operate the computer in the user's cubicle or office, or in a visitor workstation area, but not in another user's work area. For servers, more complicated situations can be expected. For example, two IDs can be used: the first is the maintainer's ID, which allows the server to be operated locally (eg, keyboard interaction, etc.), and the second is an ID located in the room itself, which allows the server to operate at all. In the first server ID example, if the maintainer leaves, the server can automatically lock and disable user input but otherwise continue working. Another strategy is to reboot the server to another boot path when the maintainer leaves the vicinity. In the second ID example, if the server detects that it has been moved to an unauthorized location, it can stop working.
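As a rough illustration of the continuously running proximity service described above, the C sketch below polls the transceiver and locks the session when the security device leaves range or presents an unauthorized identifier. The threshold value and every helper function are assumptions made for this sketch, not part of the disclosed platform.

```c
#include <stdbool.h>

#define POWER_THRESHOLD 1.0e-9  /* assumed received-power threshold, in watts */

/* Invented helpers standing in for the platform's RF transceiver and
 * policy facilities; names and signatures are illustrative only. */
extern double read_received_power(void); /* detected strength of the security device */
extern bool   rfid_matches_policy(void); /* identifier check against the platform policy */
extern void   lock_platform(void);       /* e.g., block input/output or force a screensaver */
extern void   unlock_platform(void);

/* One tick of the runtime proximity service: normal operation is allowed
 * only while the device is in range and its identifier is authorized. */
void proximity_service_tick(void)
{
    if (read_received_power() > POWER_THRESHOLD && rfid_matches_policy())
        unlock_platform();
    else
        lock_platform();
}
```

This anticipates the Figure 2 decision logic described next, where the detected power is compared against a threshold and combined with the identifier check.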
As shown in row 203, if Powerr is higher than the threshold and the identifier is not correct (eg, an unauthorized identifier on a token or fob) or the identifier is missing and the platform attempts to boot, then booting may be disabled or only allowed Use alternative boot paths. As shown in row 205, if Powerr is higher than the threshold and the identifier is incorrect, and the platform is not in the pre-boot phase, the policy may indicate that the session is locked, such as disallowing user input or output, or forcing the screensaver to run. In some cases, you can perform a shutdown.As shown in row 207, if Powerr is lower than a threshold, such as out of range or in an unauthorized location, and the platform attempts to pre-boot phase, the policy may indicate that platform boot is disabled. As shown at line 209, if Powerr is lower than a threshold, such as out of range or in an unauthorized location, and the platform is not in the pre-boot phase, the policy may indicate that the session will be locked, eg, not allowing user input or output, or forcing Screensaver. In some cases, you can perform a shutdown.FIG. 3 shows an exemplary formula for calculating Powerr according to an embodiment of the present invention. Equation 200 is a common formula for calculating the received power by multiplying and dividing the power / gain / wavelength (λ) by (4 * Pi * radius) 2. This formula is usually used to determine the distance of an object. Therefore, it is used to determine whether the device is within a certain radius of the machine.4 is a flowchart illustrating an example method for securing a platform using location-based information in accordance with an embodiment of the present invention. At block 401, the platform initiates initialization during the boot process. The point at which the policy engine runs during boot can change as long as the process runs before the operating system (OS) runs. In other words, this process, discussed below, can be run by the firmware or the BIOS before the OS is running. At block 403, it is determined if the platform enables location-based security features. If the security feature is not enabled, then at block 405, the platform boots to the target as usual.When security feature features are enabled, at block 407, it is determined whether intrusion detection has been initiated. The detected intrusions are usually when the platform enclosure is opened and / or some changes have occurred to the hardware of the platform since the last boot. It is well known in the art to detect when the cabinet is opened. If such intrusion is detected, then at block 409, the user may be forced to challenge to ensure that the user is authorized. At block 411, it is determined if the challenge failed. At block 451, based on the platform policy, the failure of the challenge may result in the platform locking and / or the alert may be sent via out-of-band communication and the platform will not boot. It is also possible to send a position change or status in the alarm. If the challenge is successful, the platform may be allowed to boot and continue to block 417.For example, if someone steals a platform and attempts to circumvent security by opening the enclosure and changing the hardware, embodiments of the present invention may still protect the device from unauthorized use as long as the firmware remains intact. When thieves try to guide the platform, it will force the challenge. One challenge can be simple password requirements. 
Another, more passive, challenge may be to determine whether a known network is within range or whether the platform is within range of an authorized location. If the challenge fails, the system will not boot and the thief's attempt will fail. Based on the platform policy, the required challenges can differ. In some embodiments, the platform may connect to the network during booting. When the network is available, the firmware can send an alert message to the appropriate agency describing the security breach. If the platform is able to determine its location, the location can be sent in the alert message. In an embodiment, this communication is possible using a manageability engine or IAMT network connection. In another embodiment, the host network driver is already running and the alert is sent via the host processor's network connection. If no intrusion was detected at block 407, a location-based query may be initiated at block 413. The query may be sent to a positioning device, such as the GPS system 107, or to a local device (not shown). The platform's location can be returned to the platform in various formats. It is understood that there may be a delay in receiving location-based information from the positioning device. Thus, in an embodiment, at block 415, a timer t may be set so that a loop may be initiated to await a response. Once the time has elapsed and location-based information has been received, the process continues to block 417. At block 417, it is determined whether a mandatory challenge-response security measure is enabled on the platform. If this security measure is not enabled, at block 419 the boot target can run the OS or application. If security is enabled, processing continues to block 421. In an embodiment, each platform has a unique identifier known to the authorizing network. At block 421, the platform identifier may be sent to the network in a request packet, which simply awaits a network response. At block 423, if it is determined that a response has been received, then at block 427 the web server can verify the ID. If it is determined at block 429 that the ID is valid, at block 419 the boot target may be executed to run the OS or application.
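Condensing the boot-time portion of Figure 4 (blocks 401 through 429), a firmware policy engine might be organized as in the sketch below. The helper routines are hypothetical stand-ins for the blocks they reference, the retry/timer loop of blocks 415 and 423-425 is omitted for brevity, and the failure branch is collapsed into a single lock-and-alert action for illustration.

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the Figure 4 blocks; none of these names
 * come from the disclosure itself. */
extern bool location_security_enabled(void);  /* block 403 */
extern bool intrusion_detected(void);         /* block 407 */
extern bool challenge_passed(void);           /* blocks 409/411 */
extern bool challenge_response_enabled(void); /* block 417 */
extern bool network_validates_id(void);       /* blocks 421-429 */
extern void send_oob_alert(void);             /* block 451, e.g., via IAMT */
extern void boot_target(void);                /* blocks 405/419 */
extern void lock_and_halt_boot(void);         /* block 451 */

/* Pre-OS policy check run by firmware/BIOS before the OS loads. */
void firmware_boot_policy(void)
{
    if (!location_security_enabled()) {
        boot_target();                        /* block 405: boot as usual */
        return;
    }
    if (intrusion_detected() && !challenge_passed()) {
        send_oob_alert();                     /* report the breach out of band */
        lock_and_halt_boot();
        return;
    }
    /* block 413: the location-based query would run here */
    if (!challenge_response_enabled() || network_validates_id()) {
        boot_target();                        /* block 419: run the OS or application */
    } else {
        send_oob_alert();
        lock_and_halt_boot();
    }
}
```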
If it is determined at block 441 that the ID is not valid for the location and / or network, at block 443, based on the platform policy, the platform may be forced to enter lock mode, screen saver mode, or shutdown. If the platform policy indicates a locked, standby or screensaver mode instead of a full shutdown, then it would be obvious to one skilled in the art that various ways could be chosen to continue the process. In an embodiment, it may be necessary to physically reboot the platform, for example by forcing a power button to force a reboot, and possibly restarting the location-based check from the boot phase. In another embodiment, polling of location information and / or ID requests may be performed periodically until the platform enters into an authorized state. In another embodiment, the user may press a predefined sequence of keys to elicit a challenge response in an attempt to unlock. It can be understood that the method shown in FIG. 4 is exemplary, and may be performed by polling, interrupting, or periodically setting in various chronological order according to a platform policy, a platform architecture, or an implementation manner. Query and identifier verification.If the ID is valid after the first attempt, or when already in lock mode, the platform may be allowed to continue processing as usual at block 431. This loop reflects a periodic check to ensure that the platform identifier is authorized at the current location. In another embodiment, the process may continue at block 461 to perform another location-based query.However, if the position sensor fails to obtain a valid pre-programmed position after n retries, then at block 445 various platform strategy drive activities may be performed. For example, a platform may be outside of the allowed range of locations even if the ID is authenticated on the network. Therefore, the platform can be forced to lock, standby, shutdown and other state, and can send alerts to network managers. In an embodiment, subsequent processing may continue to wait for more location-based information at block 461.In another embodiment, in addition to location-based policies, additional checks may be performed to require that the user also have known security devices in the vicinity of the platform at boot and runtime. The security device will have an RF function, either passive or active, to send an RFID or other identifier to the RF receiving platform (103 or another RF receiver / transceiver) or to allow it to access the RFID or other Identifier. In an embodiment, the presence of an authorized security device within the scope of the platform may replace or avoid a challenge response such as a password or question / answer challenge. In another embodiment, even when the security devices are in close proximity, challenges may be required if the platform can not access the authorized network or is not in an authorized location.Referring again to FIG. 4, testing for enabling the RFID security device may precede or override the mandatory challenge (409, 417, and 431) prior to the location-based query (413 or 461), and / or compulsory.A platform configured to use both location-based information and RFID security devices can be selectively turned on or off based on platform policies.FIG. 5 is a block diagram illustrating an example platform implementing the features disclosed herein in accordance with an embodiment of the present invention. FIG. Platform 500 includes a processor 501. 
The processor 501 may be connected to the random access memory 505 via the memory controller hub 503. Processor 501 may be any type of processor capable of executing software, such as a microprocessor, digital signal processor, microcontroller, or the like. Although only one such processor 501 is shown in FIG. 5, there may be one or more processors in the platform 500, and the one or more processors may include multiple threads, multiple cores, or the like. The processor 501 may also be connected to I/O devices via an input/output controller hub (ICH) 507. The ICH may be coupled to various devices, such as a super I/O controller (SIO), a keyboard controller (KBC), or a Trusted Platform Module (TPM), via a low pin count (LPC) bus 502. The SIO, for example, can access floppy disk drives or industry standard architecture (ISA) devices. In an embodiment, the ICH is coupled to non-volatile memory via a Serial Peripheral Interface (SPI) bus 504. The non-volatile memory may be flash memory or static random access memory (SRAM), or the like. An out-of-band (OOB) microcontroller 510n (shown in FIG. 5) may be present on the platform 500. The OOB microcontroller 510n may be connected to the ICH via a bus 512, typically a Peripheral Component Interconnect (PCI) or PCI Express bus. The OOB microcontroller may also be coupled with the non-volatile memory storage (NV storage) 517 via the SPI bus 504. The NV storage 517 may be flash memory or static RAM (SRAM), or the like; in many existing systems, the NV storage is flash memory. It is understood that various architectures can be used. For example, the memory controller may be directly coupled to the processor, and/or the platform may have an IOH (input/output hub) instead of an ICH, and so on. The OOB microcontroller 510n can be thought of as a "miniature" processor. Like a full-featured processor, the OOB microcontroller has a processor unit 511 that may be operatively coupled to a cache memory 515 and to RAM and ROM memory 513. The OOB microcontroller may have a built-in network interface 527 and a separate connection to a power supply 525 to enable out-of-band communication even when the in-band processor 501 is inactive. In an embodiment, the processor has a basic input output system (BIOS) 519 in the NV storage 517. In other embodiments, the processor boots from a remote device (not shown), and a boot vector (pointer) is located in the BIOS portion 519 of the NV storage 517. The OOB microcontroller 510n can access all of the contents of the NV storage 517, including the BIOS portion 519 and a protected portion 521 of the non-volatile memory. In some embodiments, "Active Management Technology" (IAMT) may be utilized to secure the protected portion 521 of the memory. In an embodiment, the portion 521 of the NV storage is protected from firmware access based on a chipset selection in a base address register (BAR). Because the BIOS portion 519 of the non-volatile memory can be modified by the OS or by an application running within the OS, it is susceptible to malicious tampering. The protected area 521 of the memory is available only to the OOB microcontroller and can be used to store critical boot vector information without risk of tampering. The only way to access the OOB microcontroller side of the NV storage 517 is through the OOB microcontroller, using proxy authentication, ie, signature verification and the like.
The embodiments of the present invention use the hardware-protected area 521 of the non-volatile memory 517 and make the protected area inaccessible to the OS. Many existing systems use the Extensible Firmware Interface (EFI) and its associated flash variables. EFI is a specification that defines a new model for the interface between the operating system and the platform firmware, commonly referred to as the basic input output system (BIOS). Version 1.10 of the specification, dated December 1, 2002, is available on the public Internet at www * intel * com / technology / efi / main_specification.htm. The Unified EFI (UEFI) architecture may also be used; more information about UEFI can be found at www * uefi * org / specs /. In the EFI boot location specification, instead of relying entirely on a pointer to a single boot location, a series of boot variables can be used. The boot variables specify from where the platform should boot. EFI systems store boot variables in non-volatile memory, usually flash memory. Because of the well-defined location of the boot variables, this standard architecture facilitates the implementation of embodiments of the present invention. In an embodiment, a "mailbox" for transferring messages and data between the in-band (host) processor and the out-of-band processor is implemented according to the techniques discussed in U.S. Patent Application Serial No. 10/964,355 (Attorney Docket No. P19896), entitled "BUS COMMUNICATION EMULATION," filed on October 12, 2004, by Rothman et al. The OOB microcontroller 510n may be operable to store "messages" containing commands in a memory shared by the OOB microcontroller 510n and a processor of the computer system (eg, the processor 501 of the host computer 500). In the illustrated embodiment, the host computer 500 includes a shared memory 552 that can be accessed by the processor 501 and the OOB microcontroller 510n. The shared memory 552 may be located in a reserved area 552a of RAM or in a separate non-volatile memory storage device 552b, or the like. The shared memory can serve as a mailbox for these messages. Thus, in one aspect, the OOB microcontroller 510n may store or retrieve messages from the shared memory 552 independent of the state of the host computer 500, including the state of the processor 501, the operating system, and the programs. Thus, in the illustrated embodiment, the OOB microcontroller 510n may store or retrieve messages in the shared memory 552 whether the processor 501 is being initialized or turned off, and whether the operating system is booting, running, crashed, or in some other state. To facilitate this independent operation, in this example the controller 510n, the shared memory 552, the local bus 512, and other suitable components may be powered independently of the main components of the host computer 500, including the processor 501 and the host memory 505. The shared memory 552 may be a non-volatile (NV) memory such as flash memory or static random access memory (SRAM). In embodiments described in greater detail below, the OOB microcontroller 510n operates independently of the operating system and the system start-up routines, and may have its own dedicated control circuitry, firmware, operating system, etc., to control its operation independently of the state of the rest of the host computer 500.
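A software view of the shared-memory mailbox described above might look like the following sketch. The message layout, ownership protocol, and function names are invented for illustration; the disclosure only requires that both the processor 501 and the OOB microcontroller 510n can read and write the shared memory 552 independently of each other's state.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative mailbox layout placed in the shared memory 552. The 'owner'
 * flag arbitrates the single message slot between the host and OOB sides. */
enum mbox_owner { MBOX_EMPTY = 0, MBOX_HOST = 1, MBOX_OOB = 2 };

struct mailbox {
    volatile uint32_t owner;       /* which side wrote the current message */
    volatile uint32_t command;     /* command code carried by the message */
    volatile uint8_t  payload[56]; /* optional message data */
};

/* OOB side: post a message regardless of host/OS state. Returns false if
 * the previous message has not yet been consumed. */
bool oob_post_message(struct mailbox *mb, uint32_t command)
{
    if (mb->owner != MBOX_EMPTY)
        return false;        /* slot still occupied */
    mb->command = command;
    mb->owner = MBOX_OOB;    /* publish only after the body is written */
    return true;
}

/* Host side: consume a message posted by the OOB microcontroller. */
bool host_fetch_message(struct mailbox *mb, uint32_t *command_out)
{
    if (mb->owner != MBOX_OOB)
        return false;
    *command_out = mb->command;
    mb->owner = MBOX_EMPTY;  /* release the slot */
    return true;
}
```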
It should be understood that the degree of operational independence (if any) of the controller 510n and other components may vary depending on the particular application. In an embodiment, security is enforced on the host processor 501 during pre-boot, while security and location-based checking after booting may be performed on the OOB microcontroller 510n, for example using IAMT. This division of tasks allows the host processor to run more efficiently at runtime, without having to keep firmware drivers and services running to check location-based information. In this case, the microcontroller 510n may send a message to the host BIOS to shut down or lock the platform when a security check fails. The techniques described herein are not limited to any specific hardware or software configuration; they may find applicability in any computing, consumer electronics, or processing environment. The techniques may be implemented in hardware, software, or a combination of the two. For simulation, program code may represent hardware using a hardware description language or another functional description language that essentially provides a model of how the intended hardware is expected to perform. Program code may be assembly or machine language, or data that may be compiled and/or interpreted. Furthermore, it is common in the art to speak of software, in one form or another, as taking an action or causing a result. Such expressions are merely a shorthand way of stating that execution of program code by a processing system causes a processor to perform an action or produce a result. Each program may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. However, programs may be implemented in assembly or machine language, if desired. In any case, the language may be compiled or interpreted. Program instructions may be used to cause a general-purpose or special-purpose processing system that is programmed with the instructions to perform the operations described herein. Alternatively, the operations may be performed by specific hardware components that contain hardwired logic for performing the operations, or by any combination of programmed computer components and custom hardware components. The methods described herein may be provided as a computer program product that may include a machine-accessible medium having stored thereon instructions that may be used to program a processing system or other electronic device to perform the methods. The program code or instructions may be stored in, for example, volatile and/or non-volatile memory, such as storage devices and/or an associated machine-readable or machine-accessible medium, including solid-state memory, hard disk drives, floppy disks, optical disk storage, magnetic tape, flash memory, memory sticks, digital video disks, digital versatile disks (DVDs), etc., as well as more exotic media such as machine-accessible biological state preserving storage. A machine-readable medium may include any mechanism for storing, transmitting, or receiving information in a form readable by a machine, and the medium may include a tangible medium through which electrical, optical, acoustical, or other forms of propagated signals or carrier waves encoding the program code may pass, such as antennas, optical fibers, communication interfaces, and the like.
Program code may be transmitted in the form of packets, serial data, parallel data, propagated signals, and the like, and may be used in a compressed or encrypted format. Program code may be implemented in programs executing on programmable machines such as mobile or stationary computers, personal digital assistants, set-top boxes, cellular telephones and pagers, consumer electronics devices (including DVD players, personal video recorders, personal video players, satellite receivers, stereo receivers, cable TV receivers), and other electronic devices, each including a processor, volatile and/or non-volatile memory readable by the processor, at least one input device, and/or one or more output devices. Program code may be applied to the data entered using the input device to perform the described embodiments and to generate output information. The output information may be applied to one or more output devices. One of ordinary skill in the art may appreciate that embodiments of the disclosed subject matter can be practiced with various computer system configurations, including multiprocessor or multiple-core processor systems, minicomputers, mainframe computers, as well as pervasive or miniature computers or processors that may be embedded into virtually any device. Embodiments of the disclosed subject matter can also be practiced in distributed computing environments where tasks, or portions thereof, may be performed by remote processing devices that are linked through a communications network. Although the operations may be described as a sequential process, some of the operations may in fact be performed in parallel, concurrently, and/or in a distributed environment, with program code stored locally and/or remotely for access by single- or multi-processor machines. In addition, in some embodiments the order of operations may be rearranged without departing from the spirit of the disclosed subject matter. Program code may be used by or in conjunction with embedded controllers. While the present invention has been described with reference to the illustrated embodiments, this description is not intended to be construed in a limiting sense. Various modifications of the illustrative embodiments, as well as other embodiments of the invention that are apparent to persons skilled in the art to which the invention pertains, are deemed to lie within the spirit and scope of the invention. |
Methods, systems, and devices for reconfigurable channel interfaces for memory devices are described. A memory device may be split into multiple logical channels, where each logical channel is associated with a memory array and a command/address (CA) interface. In some cases, the memory device may configure a first CA interface associated with a first channel to forward commands to a first memory array associated with the first channel and a second memory array associated with a second channel. The configuring may include isolating a second CA interface associated with the second channel from the second array and coupling the first CA interface with the second memory array. |
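To make the reconfiguration concrete, the following C sketch models the selection logic as a per-array multiplexer select, as in the latch-and-multiplexer arrangement recited in the claims that follow. The two-channel scope, all names, and the notion of a software-visible select latch are assumptions made for this illustration only.

```c
#include <stdbool.h>

/* Illustrative software model of the reconfigurable CA routing: each memory
 * array receives commands either from its own CA interface or from CA0. */
enum ca_source { FROM_OWN_CA, FROM_CA0 };

struct channel_mux {
    enum ca_source select[2]; /* per-array mux select latch, arrays 0 and 1 */
};

/* Configure single-CA mode: isolate CA1 from array 1 and couple CA0 to it,
 * so commands on CA0 fan out to both arrays while data stays per-channel. */
void configure_single_ca_mode(struct channel_mux *m)
{
    m->select[0] = FROM_OWN_CA; /* CA0 keeps driving array 0 */
    m->select[1] = FROM_CA0;    /* array 1 now listens to CA0; CA1 is isolated */
}

/* Restore independent-channel mode: each array listens to its own CA interface. */
void configure_dual_ca_mode(struct channel_mux *m)
{
    m->select[0] = FROM_OWN_CA;
    m->select[1] = FROM_OWN_CA;
}
```

The key design point this models is that only the command/address routing is switched; each array keeps its own data channel, so data bandwidth is unchanged while control pins are shared.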
1. A method for operating a memory device, comprising: receiving, at a first command/address interface coupled to a first memory array, a command indicating a configuration of a plurality of command/address interfaces comprising the first command/address interface, wherein each command/address interface of the plurality of command/address interfaces is coupled to a corresponding memory array of a plurality of memory arrays comprising at least the first memory array; isolating a second command/address interface of the plurality of command/address interfaces from a second memory array of the plurality of memory arrays based at least in part on receiving the command; and coupling the first command/address interface to the second memory array based at least in part on isolating the second command/address interface from the second memory array, wherein the first command/address interface is coupled with both the first memory array and the second memory array based at least in part on the configuration. 2. The method of claim 1, further comprising: deactivating the second command/address interface based at least in part on receiving the command at the first command/address interface, wherein coupling the first command/address interface with the second memory array is based at least in part on deactivating the second command/address interface. 3. The method of claim 2, wherein deactivating the second command/address interface comprises: deactivating a clock associated with the second command/address interface based at least in part on receiving the command at the first command/address interface. 4. The method of claim 1, further comprising: receiving a read command for the second memory array at the first command/address interface based at least in part on coupling the first command/address interface with the second memory array; and retrieving data from the second memory array based at least in part on receiving the read command at the first command/address interface. 5. The method of claim 4, further comprising: retrieving additional data from the first memory array based at least in part on receiving the read command at the first command/address interface. 6. The method of claim 5, further comprising: forwarding the read command from the first command/address interface to the first memory array and the second memory array, wherein retrieving the data is based at least in part on forwarding the read command to the second memory array, and wherein retrieving the additional data is based at least in part on forwarding the read command to the first memory array. 7. The method of claim 1, further comprising: receiving, at the first command/address interface, a second command for associating the second command/address interface with the second memory array; isolating the first command/address interface from the second memory array based at least in part on receiving the second command; and coupling the second command/address interface with the second memory array based at least in part on isolating the first command/address interface from the second memory array. 8. The method of claim 7, further comprising: activating the second command/address interface based at least in part on receiving the second command at the first command/address interface. 9. The method of claim 1, wherein the command is received after a memory device including the first command/address interface has performed a boot process. 10.
10. The method of claim 1, wherein the command is received as part of a boot process of a memory device comprising the first command/address interface.
11. A memory device comprising: a first memory array coupled with a first data channel configured to communicate first data between the first memory array and a memory controller; a second memory array distinct from the first memory array and coupled with a second data channel configured to communicate second data between the second memory array and the memory controller; a first command/address interface coupled with a first control channel and associated with the first memory array; a second command/address interface coupled with a second control channel and associated with the second memory array; and a selection component coupled with the first command/address interface and the second command/address interface and configured to selectively couple the second memory array with the first command/address interface at a first time and to selectively couple the second memory array with the second command/address interface at a second time, wherein the first command/address interface is coupled with the first memory array at the first time and the second time.
12. The memory device of claim 11, wherein: a command for the second memory array is received over the first control channel based at least in part on the second memory array being coupled with the first command/address interface.
13. The memory device of claim 11, further comprising: a third memory array coupled with a third data channel; a fourth memory array coupled with a fourth data channel; a third command/address interface coupled with a third control channel and associated with the third memory array; and a fourth command/address interface coupled with a fourth control channel and associated with the fourth memory array, wherein the selection component is coupled with the third command/address interface and the fourth command/address interface.
14. The memory device of claim 13, wherein the first command/address interface is coupled, using the selection component, with the first memory array, the second memory array, the third memory array, and the fourth memory array.
15. The memory device of claim 13, wherein the selection component is further configured to selectively couple the fourth memory array with the third command/address interface or the fourth command/address interface.
16. The memory device of claim 13, wherein the selection component is further configured to selectively couple the third memory array with the first command/address interface or the third command/address interface and to selectively couple the fourth memory array with the first command/address interface or the fourth command/address interface.
17. The memory device of claim 13, wherein the first memory array, the second memory array, the third memory array, the fourth memory array, or a combination thereof comprises dynamic random access memory (DRAM) memory cells.
18. The memory device of claim 11, wherein the selection component comprises a multiplexer coupled with the first command/address interface and the second command/address interface.
19. The memory device of claim 18, wherein the selection component further comprises a latch component configured to transmit a selection signal to the multiplexer.
20. The memory device of claim 19, wherein the selection component further comprises a second multiplexer coupled with a third command/address interface and a fourth command/address interface, wherein the third command/address interface is coupled with a third control channel and associated with a third memory array, and the fourth command/address interface is coupled with a fourth control channel and associated with a fourth memory array, and wherein the latch component is further configured to transmit the selection signal to the second multiplexer.
21. The memory device of claim 20, wherein the selection component further comprises: a third multiplexer coupled with the first command/address interface and the third command/address interface; a fourth multiplexer coupled with the first command/address interface and the fourth command/address interface; and a second latch component configured to transmit a second selection signal to the third multiplexer and the fourth multiplexer.
22. A method for operating a memory device, comprising: receiving a read command at a command/address interface coupled with a first control channel, a first memory array, and a second memory array, the first memory array being coupled with a first data channel and the second memory array being coupled with a second data channel different from the first data channel; identifying that the read command is for the second memory array; retrieving a set of data from the second memory array based at least in part on receiving the read command; and transmitting the set of data over the second data channel based at least in part on retrieving the set of data from the second memory array.
23. The method of claim 22, further comprising: receiving a write command at the command/address interface over the first control channel; receiving a second set of data over the second data channel based at least in part on receiving the write command at the command/address interface; and writing the second set of data to the second memory array based at least in part on receiving the write command and the second set of data.
24. The method of claim 22, further comprising: retrieving a second set of data from the first memory array based at least in part on receiving the read command; and transmitting the second set of data over the first data channel based at least in part on retrieving the second set of data from the first memory array.
25. The method of claim 22, further comprising: retrieving a second set of data from a third memory array coupled with a third data channel based at least in part on receiving the read command at the command/address interface; and transmitting the second set of data over the third data channel based at least in part on retrieving the second set of data from the third memory array.
26. A memory device comprising: a first memory array coupled with a first data channel and a second memory array coupled with a second data channel; and a command/address interface coupled with a first control channel, the first memory array, and the second memory array and configured to receive a write command and identify that the write command is for the second memory array, wherein: the command/address interface is configured to forward the write command to the second memory array over an internal channel; and the second memory array is configured to receive a set of data over the second data channel based at least in part on the write command and to write the set of data to the second memory array based at least in part on receiving the set of data.
27. The memory device of claim 26, wherein: the command/address interface is configured to receive a read command and identify that the read command is for the second memory array; and the second memory array is configured to retrieve a second set of data based at least in part on the command/address interface receiving the read command, and to transmit the second set of data over the second data channel coupled with the second memory array based at least in part on retrieving the second set of data.
28. The memory device of claim 26, further comprising: a second command/address interface coupled with a second control channel and configured to be isolated from the second memory array when the command/address interface receives the write command for the second memory array.
29. The memory device of claim 28, further comprising: a third memory array coupled with a third data channel and configured to receive a third set of data over the third data channel based at least in part on the command/address interface coupled with the first control channel receiving the write command, and to write the third set of data to the third memory array based at least in part on receiving the third set of data; and a fourth memory array coupled with a fourth data channel and configured to receive a fourth set of data over the fourth data channel based at least in part on the command/address interface coupled with the first control channel receiving the write command, and to write the fourth set of data to the fourth memory array based at least in part on receiving the fourth set of data.
30. The memory device of claim 29, further comprising: a third command/address interface coupled with a third control channel and configured to be isolated from the third memory array when the command/address interface coupled with the first control channel receives the write command; and a fourth command/address interface coupled with a fourth control channel and configured to be isolated from the fourth memory array when the command/address interface coupled with the first control channel receives the write command.
31. The memory device of claim 26, wherein the first memory array is configured to receive a second set of data over the first data channel based at least in part on the command/address interface coupled with the first control channel receiving the write command.
32. A method for operating a memory device, comprising: determining, by a host device, a size of information associated with an access command executed by the memory device; determining a configuration of at least one of a plurality of command/address interfaces of the memory device based at least in part on determining the size of the information associated with the access command; and transmitting a command indicating the configuration to a command/address interface of the plurality of command/address interfaces.
33. The method of claim 32, wherein the configuration indicates that the command/address interface is coupled with the first memory array and the second memory array and that the second command/address interface is deactivated.
34. The method of claim 32, further comprising: identifying a number of reconfigurable command/address interfaces of the memory device based at least in part on determining the size of the information associated with the access command, wherein determining the configuration is based at least in part on identifying the number of reconfigurable command/address interfaces.
35. The method of claim 32, further comprising: transmitting data having the size over a data channel based at least in part on transmitting the command indicating the configuration to the memory device. |
Reconfigurable channel interfaces for memory devices

CROSS REFERENCE

This patent application claims priority to PCT Application No. PCT/US2020/030099, entitled "RECONFIGURABLE CHANNEL INTERFACES FOR MEMORY DEVICES," filed April 27, 2020 by RICHTER. The aforementioned PCT application claims priority to U.S. Patent Application No. 16/858,286, entitled "RECONFIGURABLE CHANNEL INTERFACES FOR MEMORY DEVICES," filed April 24, 2020 by RICHTER, and to U.S. Provisional Patent Application No. 62/855,305, entitled "RECONFIGURABLE CHANNEL INTERFACES FOR MEMORY DEVICES," filed May 31, 2019 by RICHTER. Each of the foregoing applications is assigned to the present assignee, and each is expressly incorporated herein by reference in its entirety.

TECHNICAL FIELD

The technical field relates to reconfigurable channel interfaces for memory devices.

BACKGROUND

The following relates generally to systems including at least one memory device and, more particularly, to reconfigurable channel interfaces for memory devices.

Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, digital displays, and the like. Information is stored by programming different states of a memory device. For example, binary devices most often store one of two states, often denoted by a logic 1 or a logic 0. In other devices, more than two states may be stored. To access the stored information, a component of the device may read, or sense, at least one stored state in the memory device. To store information, a component of the device may write, or program, the state in the memory device.

Various types of memory devices exist, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others. Memory devices may be volatile or non-volatile. Non-volatile memory, e.g., FeRAM, may maintain its stored logic state for extended periods of time even in the absence of an external power source. Volatile memory devices, e.g., DRAM, may lose their stored state when disconnected from an external power source.

In some cases, a host device may transmit a read command or a write command to the memory device. If the memory device receives a read command, the memory device may decode the read command and may transmit a corresponding set of data to the host device over a data channel. If the memory device receives a write command, the memory device may decode the write command and may receive a corresponding set of data from the host device over a data channel.

SUMMARY

A method is described. In some examples, the method may include receiving, at a first command/address (CA) interface coupled with a first memory array, a command indicating a configuration of a plurality of CA interfaces including the first CA interface; isolating a second CA interface of the plurality of CA interfaces from a second memory array based at least in part on receiving the command; and coupling the first CA interface with the second memory array based at least in part on isolating the second CA interface from the second memory array.

An apparatus is described.
In some examples, the apparatus may include: a first memory array coupled with a first data channel; a second memory array coupled with a second data channel; a first command/address (CA) interface coupled with a first control channel and associated with the first memory array; a second CA interface coupled with a second control channel and associated with the second memory array; and a selection component coupled with the first CA interface and the second CA interface and configured to selectively couple the second memory array with the first CA interface at a first time and to selectively couple the second memory array with the second CA interface at a second time.

A method is described. In some examples, the method may include receiving a read command at a command/address (CA) interface coupled with a first control channel, a first memory array, and a second memory array, the first memory array being coupled with a first data channel and the second memory array being coupled with a second data channel; identifying that the read command is for the second memory array; retrieving a set of data from the second memory array based at least in part on receiving the read command; and transmitting the set of data over the second data channel based at least in part on retrieving the set of data from the second memory array.

An apparatus is described. In some examples, the apparatus may include: a first memory array coupled with a first data channel and a second memory array coupled with a second data channel; and a command/address (CA) interface coupled with a first control channel, the first memory array, and the second memory array, and configured to receive a write command and identify that the write command is for the second memory array, wherein the second memory array is configured to receive a set of data over the second data channel based at least in part on the CA interface receiving the write command and to write the set of data to the second memory array based at least in part on receiving the set of data.

A method is described.
In some examples, the method may include determining, by a host device, a size of information associated with an access command executed by a memory device; determining a configuration of at least one of a plurality of command/address (CA) interfaces of the memory device based at least in part on determining the size of the information associated with the access command; and transmitting a command indicating the configuration to a CA interface of the plurality of CA interfaces.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates an example of a system that supports reconfigurable channel interfaces for memory devices according to examples disclosed herein.

FIG. 2 shows an example of a memory die that supports reconfigurable channel interfaces for memory devices according to examples disclosed herein.

FIGS. 3A and 3B illustrate examples of channel configurations of a memory device that supports reconfigurable channel interfaces for memory devices according to examples disclosed herein.

FIGS. 4A, 4B, and 4C illustrate examples of channel configurations of a memory device that supports reconfigurable channel interfaces for memory devices according to examples disclosed herein.

FIG. 5 shows an example of a flow diagram that supports reconfigurable channel interfaces for memory devices according to examples disclosed herein.

FIGS. 6A and 6B illustrate examples of routing schemes that support reconfigurable channel interfaces for memory devices according to examples disclosed herein.

FIG. 7 shows a block diagram of a memory device that supports reconfigurable channel interfaces for memory devices according to examples disclosed herein.

FIG. 8 shows a block diagram of a host device that supports reconfigurable channel interfaces for memory devices according to examples disclosed herein.

FIGS. 9-11 show flowcharts illustrating one or more methods that support reconfigurable channel interfaces for memory devices according to examples disclosed herein.

DETAILED DESCRIPTION

In general, the architecture of a memory device, such as a dynamic random access memory (DRAM) device, may have a predefined access granularity, where access granularity may refer to the number of bits or bytes written or read by a single access operation. A memory device may be divided into multiple logical channels. Each logical channel may be associated with a memory array having a predefined access granularity, and each logical channel may be accessed in parallel. Thus, if a write or read command is sent over multiple logical channels, the memory array of each of the multiple logical channels may write or read data in parallel, yielding a higher effective granularity. For example, each memory array of a memory device may have an access granularity of thirty-two (32) bytes. If a read or write command is sent over two logical channels, the memory array of the first logical channel may read or write 32 bytes and the memory array of the second logical channel may read or write 32 bytes. Thus, even though each individual memory array reads or writes 32 bytes, the two memory arrays together may read or write sixty-four (64) bytes for a given command.

While splitting the memory device may achieve an effectively increased access granularity, splitting the memory device may also increase the pin overhead associated with operating the device (e.g., from fifteen (15) pins to twenty-four (24) pins). For example, each logical channel may be associated with a command/address (CA) interface for controlling the logical channel's data interface, and each CA interface may be associated with a predefined number of pins. Therefore, increasing the number of logical channels may correspondingly increase the number of pins.
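The trade-off above can be made concrete with a little arithmetic: the effective access granularity grows with the number of logical channels operated in parallel, while the pin count grows with the number of active CA interfaces. The following is a minimal sketch of that arithmetic; the split between shared pins and per-interface pins is an assumption chosen only so the totals reproduce the 15-pin and 24-pin figures above, not a decomposition given in this disclosure.

```python
# Illustrative arithmetic only; the pin-count decomposition is assumed.

BYTES_PER_ARRAY = 32  # per-array access granularity from the example above


def effective_granularity(num_logical_channels: int) -> int:
    """Bytes moved by one command issued across all logical channels in parallel."""
    return num_logical_channels * BYTES_PER_ARRAY


def pin_overhead(active_ca_interfaces: int, pins_per_ca: int = 9, shared_pins: int = 6) -> int:
    """Assumed model: a shared base of pins plus one pin group per active CA interface."""
    return shared_pins + active_ca_interfaces * pins_per_ca


assert effective_granularity(1) == 32   # one channel: 32 bytes per access
assert effective_granularity(2) == 64   # two channels: 64 bytes per access
assert pin_overhead(1) == 15            # one active CA interface
assert pin_overhead(2) == 24            # two active CA interfaces
```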
The memory device may contain reconfigurable CA interfaces, such that the memory device can be configured to use a number of CA interfaces that is less than the total number of logical channels. For example, a single CA interface may be used to control the data interfaces of one or more channels other than the channel associated with that CA interface. Configuring a single CA interface for this purpose may involve coupling the CA interface with another channel's data interface. Additionally, the configuration may involve isolating the other channel's CA interface from its data interface.

Features of the present disclosure are first described in the context of the memory system and memory die described with reference to FIGS. 1 and 2. Features of the present disclosure are then described in the context of the channel configurations, flow diagrams, and routing schemes of memory devices described with reference to FIGS. 3A-6B. These and other features of the present disclosure are further illustrated by and described with reference to the apparatus diagrams and flowcharts related to reconfigurable channel interfaces for memory devices described with reference to FIGS. 7-11.

FIG. 1 shows an example of a system 100 that utilizes one or more memory devices according to examples disclosed herein. System 100 may include an external memory controller 105, a memory device 110, and a plurality of channels 115 coupling external memory controller 105 with memory device 110. System 100 may include one or more memory devices, but for ease of description the one or more memory devices may be described as a single memory device 110.

System 100 may include portions of an electronic device, such as a computing device, a mobile computing device, a wireless device, or a graphics processing device. System 100 may be an example of a portable electronic device. System 100 may be an example of a computer, a laptop, a tablet, a smartphone, a cellular phone, a wearable device, an Internet-connected device, and the like. Memory device 110 may be a component of the system configured to store data for one or more other components of system 100.

At least portions of system 100 may be examples of a host device. Such a host device may be an example of a device that uses memory to execute processes, such as a computing device, a mobile computing device, a wireless device, a graphics processing device, a computer, a laptop, a tablet computer, a smartphone, a cellular phone, a wearable device, an Internet-connected device, some other stationary or portable electronic device, a graphics processing unit (GPU), or the like. In some cases, a host device may refer to the hardware, firmware, software, or a combination thereof that implements the functions of external memory controller 105. In some cases, external memory controller 105 may be referred to as a host or a host device. In some examples, system 100 is a graphics card.

In some cases, memory device 110 may be an independent device or component configured to communicate with other components of system 100 and provide physical memory addresses/space that may be used or referenced by system 100. In some examples, memory device 110 may be configured to work with at least one or more different types of systems 100.
Signaling between the components of system 100 and memory device 110 may be operable to support one or more of: modulation schemes used to modulate the signals, various pin designs for communicating the signals, distinct packaging of system 100 and memory device 110, clock signaling and synchronization between system 100 and memory device 110, timing conventions, and/or other factors.

Memory device 110 may be configured to store data for the components of system 100. In some cases, memory device 110 may act as a slave-type device to system 100 (e.g., responding to and executing commands provided by system 100 through external memory controller 105). Such commands may include access commands for access operations, such as write commands for write operations, read commands for read operations, refresh commands for refresh operations, or other commands. Memory device 110 may include two or more memory dies 160 (e.g., memory chips) to support a desired or specified capacity for data storage. A memory device 110 that includes two or more memory dies may be referred to as a multi-die memory or package (also referred to as a multi-chip memory or package).

System 100 may further include a processor 120, a basic input/output system (BIOS) component 125, one or more peripheral components 130, and an input/output (I/O) controller 135. The components of system 100 may be in electronic communication with one another using a bus 140.

Processor 120 may be configured to control at least portions of system 100. Processor 120 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or a combination of these types of components. In such cases, processor 120 may be an example of a central processing unit (CPU), a GPU, a general-purpose graphics processing unit (GPGPU), or a system on a chip (SoC), among other examples.

BIOS component 125 may be a software component that includes a BIOS operated as firmware, which may initialize and run various hardware components of system 100. BIOS component 125 may also manage data flow between processor 120 and the various components of system 100, e.g., peripheral components 130, I/O controller 135, and the like. BIOS component 125 may include a program or software stored in read-only memory (ROM), flash memory, or any other non-volatile memory.

Peripheral components 130 may be any input device or output device, or an interface for such devices, that may be integrated into or with system 100. Examples may include disk controllers, sound controllers, graphics controllers, Ethernet controllers, modems, universal serial bus (USB) controllers, serial or parallel ports, or slots for peripheral cards such as peripheral component interconnect (PCI) or specialized graphics ports. A peripheral component 130 may be any other component understood by a person of ordinary skill in the art as a peripheral.

I/O controller 135 may manage data communication between processor 120 and peripheral components 130, input device 145, or output device 150. I/O controller 135 may manage peripherals that are not integrated into or with system 100. In some cases, I/O controller 135 may represent a physical connection or port to external peripheral components.

Input 145 may represent a device or signal external to system 100 that provides information, signals, or data to system 100 or its components.
This may include a user interface or an interface with or between other devices. In some cases, input 145 may be a peripheral that interfaces with system 100 via one or more peripheral components 130, or it may be managed by I/O controller 135.

Output 150 may represent a device or signal external to system 100 configured to receive an output from system 100 or any of its components. Examples of output 150 may include a display, audio speakers, a printing device, or another processor on a printed circuit board, and so forth. In some cases, output 150 may be a peripheral that interfaces with system 100 via one or more peripheral components 130, or it may be managed by I/O controller 135.

The components of system 100 may be made up of general-purpose or special-purpose circuitry designed to carry out their functions. This may include various circuit elements, such as conductive lines, transistors, capacitors, inductors, resistors, amplifiers, or other active or passive elements, configured to carry out the functions described herein.

Memory device 110 may include a device memory controller 155 and one or more memory dies 160. Each memory die 160 may include a local memory controller 165 (e.g., local memory controller 165-a, local memory controller 165-b, and/or local memory controller 165-N) and a memory array 170 (e.g., memory array 170-a, memory array 170-b, and/or memory array 170-N). A memory array 170 may be a collection (e.g., a grid) of memory cells, with each memory cell being configured to store at least one bit of digital data. Features of memory arrays 170 and/or memory cells are described in more detail with reference to FIG. 2.

Memory device 110 may be an example of a two-dimensional (2D) array of memory cells or may be an example of a three-dimensional (3D) array of memory cells. For example, a 2D memory device may include a single memory die 160. A 3D memory device may include two or more memory dies 160 (e.g., memory die 160-a, memory die 160-b, and/or any number of memory dies 160-N). In a 3D memory device, a plurality of memory dies 160-N may be stacked on top of one another. In some cases, memory dies 160-N in a 3D memory device may be referred to as decks, levels, layers, or dies. A 3D memory device may include any quantity of stacked memory dies 160-N (e.g., two, three, four, five, six, seven, or eight dies high). This may increase the number of memory cells that may be positioned on a substrate as compared with a single 2D memory device, which in turn may reduce production costs, or increase the performance of the memory array, or both. In some 3D memory devices, different decks may share at least one common access line, such that some decks may share at least one of a word line, a digit line, and/or a plate line.

Device memory controller 155 may include circuits or components configured to control the operation of memory device 110. As such, device memory controller 155 may include the hardware, firmware, and software that enable memory device 110 to perform commands, and it may be configured to receive, transmit, or execute commands, data, or control information related to memory device 110. Device memory controller 155 may be configured to communicate with external memory controller 105, the one or more memory dies 160, or processor 120. In some cases, memory device 110 may receive data and/or commands from external memory controller 105.
For example, memory device 110 may receive a write command indicating that memory device 110 is to store data on behalf of a component of system 100 (e.g., processor 120), or a read command indicating that memory device 110 is to provide data stored in a memory die 160 to a component of system 100 (e.g., processor 120). In some cases, device memory controller 155 may control the operations of memory device 110 described herein in conjunction with the local memory controller 165 of a memory die 160. Examples of components that may be included in device memory controller 155 and/or local memory controllers 165 may include receivers for demodulating signals received from external memory controller 105, transmitters for modulating and transmitting signals to external memory controller 105, decoders, logic, amplifiers, filters, and the like.

A local memory controller 165 (e.g., local to a memory die 160) may be configured to control the operation of the memory die 160. Also, local memory controller 165 may be configured to communicate (e.g., receive and transmit data and/or commands) with device memory controller 155. Local memory controller 165 may support device memory controller 155 in controlling the operation of memory device 110 as described herein. In some cases, memory device 110 does not include a device memory controller 155, and local memory controller 165 or external memory controller 105 may perform the various functions described herein. As such, local memory controller 165 may be configured to communicate with device memory controller 155, with other local memory controllers 165, or directly with external memory controller 105 or processor 120.

External memory controller 105 may be configured to enable communication of information, data, and/or commands between components of system 100 (e.g., processor 120) and memory device 110. External memory controller 105 may act as a liaison between the components of system 100 and memory device 110 so that the components of system 100 may not need to know the details of the memory device's operation. The components of system 100 may present to external memory controller 105 requests (e.g., read commands or write commands) that external memory controller 105 satisfies. External memory controller 105 may convert or translate communications exchanged between the components of system 100 and memory device 110. In some cases, external memory controller 105 may include a system clock that generates a common (source) system clock signal. In some cases, external memory controller 105 may include a common data clock that generates a common (source) data clock signal.

In some cases, external memory controller 105, or other components of system 100, or the functions described herein, may be implemented by processor 120. For example, external memory controller 105 may be hardware, firmware, or software, or some combination thereof, implemented by processor 120 or other components of system 100. Although external memory controller 105 is depicted as being external to memory device 110, in some cases external memory controller 105, or the functions described herein, may be implemented by memory device 110. For example, external memory controller 105 may be hardware, firmware, or software, or some combination thereof, implemented by device memory controller 155 or one or more local memory controllers 165.
In some cases, external memory controller 105 may be distributed across processor 120 and memory device 110 such that portions of external memory controller 105 are implemented by processor 120 and other portions are implemented by device memory controller 155 or a local memory controller 165. Likewise, in some cases, one or more functions ascribed herein to device memory controller 155 or local memory controller 165 may be performed by external memory controller 105 (either separate from or included in processor 120).

The components of system 100 may exchange information with memory device 110 using a plurality of channels 115. In some examples, channels 115 may enable communication between external memory controller 105 and memory device 110. Each channel 115 may include one or more signal paths or transmission media (e.g., conductors) between terminals associated with the components of system 100. For example, a channel 115 may include a first terminal including one or more pins or pads at external memory controller 105 and one or more pins or pads at memory device 110. A pin may be an example of a conductive input or output point of a device of system 100, and a pin may be configured to act as part of a channel.

In some cases, a pin or pad of a terminal may be part of the signal path of a channel 115. Additional signal paths may be coupled with a terminal of a channel for routing signals within a component of system 100. For example, memory device 110 may include signal paths (e.g., signal paths internal to memory device 110 or its components, such as internal to a memory die 160) that route a signal from a terminal of a channel 115 to the various components of memory device 110 (e.g., device memory controller 155, memory dies 160, local memory controllers 165, memory arrays 170).

Channels 115 (and associated signal paths and terminals) may be dedicated to communicating specific types of information. In some cases, a channel 115 may be an aggregated channel and thus may include multiple individual channels. For example, a data channel 190 may be x4 (e.g., including four signal paths), x8 (e.g., including eight signal paths), x16 (e.g., including sixteen signal paths), and so forth. Signals communicated over the channels may use a double data rate (DDR) timing scheme. For example, some symbols of a signal may be registered on a rising edge of a clock signal and other symbols of the signal may be registered on a falling edge of the clock signal. Signals communicated over the channels may use single data rate (SDR) signaling. For example, one symbol of the signal may be registered for each clock cycle.

In some cases, channels 115 may include one or more command and address (CA) channels 186. A CA channel 186 may be configured to communicate commands between external memory controller 105 (e.g., a host device or a component thereof) and memory device 110, including control information associated with the commands (e.g., address information). For example, a CA channel 186 may include a read command with an address of the desired data. In some cases, a CA channel 186 may be registered on a rising clock signal edge and/or a falling clock signal edge. In some cases, a CA channel 186 may include any number of signal paths to communicate address and command data (e.g., eight or nine signal paths).
In general, a CA channel 186 may be coupled with a CA interface that may forward commands communicated over CA channel 186 to the local memory controller 165 controlling a memory array 170, or directly to the memory array 170.

In some cases, channels 115 may include one or more clock signal (CK) channels 188. A CK channel 188 may be configured to communicate one or more common clock signals between external memory controller 105 and memory device 110. Each clock signal may be configured to oscillate between a high state and a low state and coordinate the actions of external memory controller 105 and memory device 110. In some cases, the clock signal may be a differential output (e.g., a CK_t signal and a CK_c signal), and the signal paths of CK channel 188 may be configured accordingly. In some cases, the clock signal may be single-ended. A CK channel 188 may include any number of signal paths. In some cases, the clock signal CK (e.g., the CK_t signal and the CK_c signal) may provide a timing reference for command and addressing operations of memory device 110 or other system-wide operations of memory device 110. The clock signal CK may therefore variously be referred to as a control clock signal CK, a command clock signal CK, or a system clock signal CK. The system clock signal CK may be generated by a system clock, which may include one or more hardware components (e.g., oscillators, crystals, logic gates, transistors, and so forth). There may be a CK channel 188 for each logical channel.

In some cases, channels 115 may include one or more data (DQ) channels 190. A data channel 190 may be configured to communicate data and/or control information between external memory controller 105 and memory device 110. For example, a data channel 190 may communicate (e.g., bi-directionally) information to be written to memory device 110 or information read from memory device 110. In some cases, a data channel 190 may be coupled with a memory array 170.

In some cases, a data channel 190 and an associated CA channel 186 may be an example of a logical channel. If memory device 110 is divided into logical channels, memory device 110 may have multiple data channels 190 and multiple CA channels 186. Each data channel 190 may be coupled with a memory array 170, and each CA channel 186 may be coupled with a CA interface. In some cases, the CA interface associated with a first CA channel 186 and a first data channel 190 may gain control of a second data channel 190. Additionally, the CA interface associated with the second CA channel 186 and the second data channel 190 may be deactivated or may otherwise relinquish control of the second data channel 190. Accordingly, gaining control of a data channel 190 may involve coupling the CA interface with the memory array 170 coupled with that data channel 190, and relinquishing control of a data channel 190 may involve decoupling the CA interface from the memory array 170 coupled with that data channel 190.

In some cases, a CA interface may receive a command (e.g., a read or write command) over its associated CA channel 186. If the CA interface is associated with multiple data channels 190, the CA interface may forward the command to the memory arrays 170 coupled with the multiple data channels 190. If the command is a read command, a first memory array 170 receiving the command may transmit a first set of data over the first data channel 190, and a second memory array 170 receiving the command may transmit a second set of data over the second data channel 190. If the command is a write command, the first memory array 170 receiving the command may write the first set of data, received over the first data channel 190, to memory, and the second memory array 170 receiving the command may write the second set of data, received over the second data channel 190, to memory.
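The forwarding behavior just described can be sketched as a small behavioral model: a CA interface fans a received command out over internal channels to every memory array it currently controls, and each array services the command on its own data channel. The class and method names below are invented for illustration and do not come from this disclosure; in the device, each array would receive or return a distinct set of data over its own data channel.

```python
# Hypothetical behavioral model of command forwarding by a CA interface.


class MemoryArray:
    """Stands in for a memory array 170 and its data channel 190."""

    def __init__(self, name: str):
        self.name = name
        self.storage = {}

    def execute(self, command: dict):
        # Service the forwarded command; data moves over this array's data channel.
        if command["op"] == "read":
            return self.storage.get(command["addr"])
        if command["op"] == "write":
            self.storage[command["addr"]] = command["data"]


class CAInterface:
    """Stands in for a CA interface coupled with a CA channel 186."""

    def __init__(self, name: str, array: MemoryArray):
        self.name = name
        self.active = True
        self.arrays = [array]  # initially controls only its own logical channel

    def attach(self, array: MemoryArray):
        # Gaining control of another channel's data interface.
        self.arrays.append(array)

    def receive(self, command: dict):
        # Forward the command over internal channels to every controlled array.
        return [array.execute(command) for array in self.arrays]


# One CA interface controlling two arrays services a single command in parallel.
array_a, array_b = MemoryArray("170-a"), MemoryArray("170-b")
ca = CAInterface("CA-0", array_a)
ca.attach(array_b)
ca.receive({"op": "write", "addr": 0x10, "data": 0xAB})
print(ca.receive({"op": "read", "addr": 0x10}))  # one result per data channel
```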
In some cases, channels 115 may include one or more other channels 192 that may be dedicated to other purposes. These other channels 192 may include any number of signal paths.

Channels 115 may couple external memory controller 105 with memory device 110 using a variety of different architectures. Examples of the various architectures may include a bus, a point-to-point connection, a crossbar, a high-density interposer such as a silicon interposer, or channels formed in an organic substrate, or some combination thereof. For example, in some cases the signal paths may at least partially include a high-density interposer, such as a silicon interposer or a glass interposer.

Signals communicated over channels 115 may be modulated using a variety of different modulation schemes. In some cases, a binary-symbol (or binary-level) modulation scheme may be used to modulate signals communicated between external memory controller 105 and memory device 110. A binary-symbol modulation scheme may be an example of an M-ary modulation scheme where M is equal to two. Each symbol of a binary-symbol modulation scheme may be configured to represent one bit of digital data (e.g., a symbol may represent a logic 1 or a logic 0). Examples of binary-symbol modulation schemes include, but are not limited to, non-return-to-zero (NRZ), unipolar encoding, bipolar encoding, Manchester encoding, pulse amplitude modulation (PAM) having two symbols (e.g., PAM2), and others.

In some cases, a multi-symbol (or multi-level) modulation scheme may be used to modulate signals communicated between external memory controller 105 and memory device 110. A multi-symbol modulation scheme may be an example of an M-ary modulation scheme where M is greater than or equal to three. Each symbol of a multi-symbol modulation scheme may be configured to represent more than one bit of digital data (e.g., a symbol may represent a logic 00, a logic 01, a logic 10, or a logic 11). Examples of multi-symbol modulation schemes include, but are not limited to, PAM3, PAM4, PAM8, and the like, quadrature amplitude modulation (QAM), quadrature phase shift keying (QPSK), and others. A multi-symbol signal (e.g., a PAM3 signal or a PAM4 signal) may be a signal that is modulated using a modulation scheme that includes at least three levels to encode more than one bit of information. Multi-symbol modulation schemes and symbols may alternatively be referred to as non-binary, multi-bit, or higher-order modulation schemes and symbols.
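The number of information bits carried by each symbol follows directly from the definitions above: an M-ary scheme encodes log2(M) bits per symbol. A short sketch of that relationship, using only the schemes named above:

```python
import math


def bits_per_symbol(levels: int) -> float:
    """Bits of information encoded per symbol of an M-ary modulation scheme."""
    return math.log2(levels)


print(bits_per_symbol(2))  # NRZ / PAM2: 1 bit per symbol
print(bits_per_symbol(4))  # PAM4: 2 bits per symbol
print(bits_per_symbol(8))  # PAM8: 3 bits per symbol
# PAM3 uses three levels, so each symbol carries log2(3) ~= 1.58 bits,
# i.e., more than one bit of information per symbol.
print(bits_per_symbol(3))
```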
Additionally, using the CA interface to control multiple data channels 190 can reduce operating power. For example, when each CA interface of memory device 110 controls one data channel 190, each CA interface may be active and may consume a certain amount of power. However, when one CA interface controls multiple data channels 190, the CA interfaces associated with other data channels 190 may be deactivated. Deactivating these CA interfaces can reduce the overall amount of power consumption associated with operating the CA interfaces because fewer pins can be driven (eg, only the pins of the CA interfaces used to control multiple data channels can be driven).When the CA interface is configured to control multiple data channels 190 or data interfaces at startup, an overall smaller amount of hardware or physical circuit board (PCB) material may be used. For example, a system using a 2-channel memory array may be designed to always operate such that the CA interface controls multiple data channels 190 . Accordingly, no pins may be provided for the CA interface that might otherwise control one of the plurality of data channels 190 . When the CA interface is configured to control multiple data channels 190 via commands sent by the host device (e.g., mode register set commands), the configuration can be performed dynamically (e.g., in real time) at the same time, which can enable the host device to Controls whether the CA interface is configured to control multiple data channels or a single data channel. In this way, memory device 110 is able to temporarily save operating power.In general, memory devices that are divided into multiple logical channels, such as synchronous graphics RAM (SGRAM), can maintain access granularity. A memory device with one (1) logical channel that has 32 pins and outputs data in eight (8) bursts along the 32 pins can output 256 bits or 32 bytes of data for a given operation . A memory device with two (2) logical channels, sixteen (16) pins each, and outputs data in 16 bursts along 16 pins can output 2x256 bits or 2x32 bytes of data for a given operation. data. A memory device that has four (4) logical channels, 8 pins per channel, and outputs data along the pins in 32 bursts can output 4x256 bits or 4x32 bytes of data. In each of these cases, an access granularity (eg, of 32 bytes) may be preserved.FIG. 2 shows an example of a memory die 200 according to examples disclosed herein. Memory die 200 may be an example of memory die 160 described with reference to FIG. 1 . In some cases, memory die 200 may be referred to as a memory chip, a memory device, or an electronic memory device. Memory die 200 may include one or more memory cells 205 that are programmable to store different logic states. Each memory cell 205 may be programmable to store two or more states. For example, memory unit 205 may be configured to store digital logic (eg, logical zero and logical one) one bit at a time. In some cases, a single memory cell 205 (eg, a multi-level memory cell) may be configured to store more than one bit of digital logic (eg, logical 00, logical 01 , logical 10, or logical 11) at a time.Memory cell 205 may store charge representing a programmable state in the capacitor. DRAM architectures may contain capacitors containing dielectric material to store charges representing programmable states. In other memory architectures, other memory devices and components are also possible. 
For example, non-linear dielectric materials may be used.Operations such as reading and writing can be performed on memory cell 205 by activating or selecting access lines such as word line 210 and/or digit line 215 . In some cases, digit lines 215 may also be referred to as bit lines. References to access lines, word lines and digit lines or the like may be interchanged without affecting understanding or operation. Activating or selecting word line 210 or digit line 215 may include applying a voltage to the respective line.Memory die 200 may include access lines (eg, word lines 210 and digit lines 215 ) arranged in a grid-like pattern. Memory cell 205 may be positioned at the intersection of word line 210 and digit line 215 . By biasing word line 210 and digit line 215 (eg, applying a voltage to word line 210 or digit line 215 ), a single memory cell 205 can be accessed at their intersection.Access to memory cells 205 may be controlled by row decoder 220 or column decoder 225 . For example, row decoder 220 may receive a row address from local memory controller 260 and activate word line 210 based on the received row address. Column decoder 225 may receive a column address from local memory controller 260 and may activate digit line 215 based on the received column address. For example, memory die 200 may contain a number of word lines 210 labeled WL_1 through WL_M and a number of digit lines 215 labeled DL_1 through DL_N, where M and N depend on the size of the memory array. Thus, by activating the word line 210 and the digit line 215, such as WL_1 and DL_3, the memory cell 205 at their intersection can be accessed. The intersection of the word line 210 and the digit line 215 in a two-dimensional or three-dimensional configuration may be referred to as an address of the memory cell 205 .Memory unit 205 may contain logic storage components such as capacitor 230 and switch component 235 . Capacitor 230 may be an example of a dielectric capacitor or a ferroelectric capacitor. A first node of capacitor 230 may be coupled with switch assembly 235 and a second node of capacitor 230 may be coupled with voltage source 240 . In some cases, voltage source 240 may be a cell board reference voltage, such as Vpl, or may be grounded, such as Vss. In some cases, voltage source 240 may be an instance of a plate line coupled with a plate line driver. Switch component 235 may be an example of a transistor or any other type of switching device that selectively establishes or deestablishes electrical communication between two components.Selecting or deselecting memory cells 205 may be accomplished by activating or deactivating switch assembly 235 . Capacitor 230 may be in electronic communication with digit line 215 using switch assembly 235 . For example, capacitor 230 may be isolated from digit line 215 when switch component 235 is deactivated, and capacitor 230 may be coupled to digit line 215 when switch component 235 is activated. In some cases, switching component 235 is a transistor, and its operation can be controlled by applying a voltage to the transistor gate, where the voltage difference between the transistor gate and the transistor source can be greater or less than the threshold voltage of the transistor. In some cases, switch component 235 may be a p-type transistor or an n-type transistor. 
The word line 210 may be in electronic communication with the gate of the switching element 235 , and the switching element 235 may be activated/deactivated based on a voltage applied to the word line 210 .Word line 210 may be a conductive line in electronic communication with memory cell 205 for performing access operations on memory cell 205 . In some architectures, the word line 210 can be in electronic communication with the gate of the switching component 235 of the memory cell 205 and can be configured to control the switching component 235 of the memory cell. In some architectures, word line 210 may be in electronic communication with a node of a capacitor of memory cell 205, and memory cell 205 may not include a switching component.Digit line 215 may be a wire that connects memory cell 205 and sense component 245 . In some architectures, memory cell 205 may be selectively coupled to digit line 215 during a portion of an access operation. For example, the word line 210 and the switch component 235 of the memory cell 205 may be configured to couple and/or isolate the capacitor 230 of the memory cell 205 and the digit line 215 . In some architectures, the memory cell 205 may be in electronic communication (eg, constant) with a digit line 215 .The sensing component 245 may be configured to detect a state (eg, charge) stored on the capacitor 230 of the memory cell 205 and determine a logic state of the memory cell 205 based on the stored state. In some cases, the charge stored by memory cell 205 may be extremely small. Accordingly, the sense component 245 may include one or more sense amplifiers to amplify the signal output by the memory unit 205 . The sense amplifier can detect small changes in charge of digit line 215 during a read operation and can generate a signal corresponding to logic state 0 or logic state 1 based on the detected charge. During a read operation, the capacitor 230 of the memory cell 205 may output a signal (eg, discharge a charge) to its corresponding digit line 215 . The signal may cause the voltage on digit line 215 to change. The sensing component 245 may be configured to compare the signal received from the memory cell 205 across the digit line 215 to a reference signal 250 (eg, a reference voltage). The sensing component 245 can determine the stored state of the memory cell 205 based on the comparison. For example, in binary signaling, if the digit line 215 has a higher voltage than the reference signal 250, the sensing component 245 can determine that the stored state of the memory cell 205 is a logic 1, and if the digit line 215 has a lower voltage than the reference signal 250. , the sense component 245 can determine that the stored state of the memory cell 205 is a logic zero. The sensing component 245 may contain various transistors or amplifiers to detect and amplify the difference in signals. The detected logic state of memory cell 205 may be provided as an output of sensing component 245 (e.g., to input/output 255), and may be communicated to another component of memory device 110 including memory die 200, such as device memory controller 155. A component indicates the detected logical state (eg, directly or using the local memory controller 260).Local memory controller 260 may control the operation of memory unit 205 through various components (eg, row decoder 220 , column decoder 225 , and sensing component 245 ). Local memory controller 260 may be an example of local memory controller 165 described with reference to FIG. 1 . 
In some cases, one or more of row decoder 220 , column decoder 225 , and sense component 245 may be co-located with local memory controller 260 . Local memory controller 260 may be configured to receive commands and/or data from external memory controller 105 (or device memory controller 155 as described with reference to FIG. 1 ), convert the commands and/or data into information usable by memory die 200 , perform one or more operations on memory die 200, and transfer data from memory die 200 to external memory controller 105 (or device memory controller 155) in response to performing the one or more operations. Local memory controller 260 can generate row and column address signals to activate target word line 210 and target digit line 215 . Local memory controller 260 may also generate and control various voltages or currents used during operation of memory die 200 . In general, the amplitude, shape, or duration of applied voltages or currents discussed herein may be adjusted or varied, and may be different for the various operations discussed in operating memory die 200 .Local memory controller 260 may contain a CA interface configured to receive and command over CA channel 186 . In some cases, the selection component can isolate the CA interface from one or more memory arrays of memory die 200 associated with logical channels corresponding to the CA interface. Additionally or alternatively, the selection component can couple a CA interface with one or more memory arrays of memory die 200 associated with logical channels corresponding to different CA interfaces. In some cases, the CA interface may be deactivated when isolated from one or more memory arrays, and may be activated or reactivated when coupled to one or more memory arrays.In some cases, local memory controller 260 may be configured to perform write operations (eg, program operations) to one or more memory cells 205 of memory die 200 . During a write operation, memory cells 205 of memory die 200 can be programmed to store a desired logic state. In some cases, multiple memory cells 205 may be programmed during a single write operation. The local memory controller 260 may identify (eg, via a command received from the CA interface over an internal channel) a target memory unit 205 to perform a write operation on. The local memory controller 260 can identify the target word line 210 and the target number line 215 in electronic communication with the target memory cell 205 (eg, the address of the target memory cell 205 ). The local memory controller 260 can activate the target wordline 210 and the target digit line 215 (eg, apply a voltage to the wordline 210 or the digit line 215 ) to access the target memory cell 205 . Local memory controller 260 may apply a particular signal (e.g., a voltage) to digit line 215 during a write operation to store a particular state (e.g., charge) in capacitor 230 of memory cell 205, which state (e.g., charge) Indicates the desired logic state. In general, local memory controller 260 may receive an indication of what to write to memory unit 205 from data channel 190 coupled to local memory controller 260 .In some cases, local memory controller 260 may be configured to perform read operations (eg, sense operations) on one or more memory cells 205 of memory die 200 . During a read operation, the logic state stored in memory cells 205 of memory die 200 may be determined. In some cases, multiple memory cells 205 may be sensed during a single read operation. 
In some cases, local memory controller 260 may be configured to perform read operations (e.g., sense operations) on one or more memory cells 205 of memory die 200. During a read operation, the logic state stored in a memory cell 205 of memory die 200 may be determined. In some cases, multiple memory cells 205 may be sensed during a single read operation. Local memory controller 260 may identify (e.g., via a command received from the CA interface over an internal channel) a target memory cell 205 on which to perform the read operation. Local memory controller 260 may identify a target word line 210 and a target digit line 215 in electronic communication with the target memory cell 205 (e.g., the address of the target memory cell 205). Local memory controller 260 may activate (e.g., apply a voltage to) the target word line 210 and the target digit line 215 to access the target memory cell 205. The target memory cell 205 may transmit a signal to sense component 245 in response to the biased access lines. Sense component 245 may amplify the signal. Local memory controller 260 may trigger sense component 245 (e.g., latch the sense component) and thereby compare the signal received from memory cell 205 to reference signal 250. Based on the comparison, sense component 245 may determine the logic state stored on memory cell 205. As part of the read operation, local memory controller 260 may communicate the logic state stored on memory cell 205 to external memory controller 105 (or device memory controller 155) over data channel 190.

In some memory architectures, accessing memory cell 205 may degrade or corrupt the logic state stored in memory cell 205. For example, a read operation performed in a DRAM architecture may partially or fully discharge the capacitor of the target memory cell. Local memory controller 260 may perform a rewrite operation or a refresh operation to return the memory cell to its original logic state. Local memory controller 260 may rewrite the logic state to the target memory cell after a read operation. In some cases, the rewrite operation may be considered part of the read operation. Additionally, activating a single access line (e.g., word line 210) may disturb the state stored in some memory cells in electronic communication with that access line. Accordingly, a rewrite or refresh operation may be performed on one or more memory cells that may not have been accessed.
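The destructive read-and-rewrite behavior described above can be illustrated with a short sketch (a hypothetical model; the discharge is represented by clearing the stored value before it is restored):

    def read_with_restore(cells, addr):
        """Model of a DRAM-style read: sensing discharges capacitor 230, so the
        detected state is rewritten as part of the read operation."""
        state = cells[addr]   # sense component 245 detects the stored state
        cells[addr] = None    # the read has discharged the capacitor
        cells[addr] = state   # rewrite operation restores the original state
        return state

    cells = {(3, 7): 1}
    assert read_with_restore(cells, (3, 7)) == 1
    assert cells[(3, 7)] == 1  # state preserved despite the destructive read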
FIG. 3A illustrates an example of a channel configuration 300-a of a memory device 110-a. Channel configuration 300-a may be an example of a 2-CA interface configuration. In other examples, channel configuration 300-a may be used for any number of CA interfaces. In channel configuration 300-a, memory device 110-a may have two logical channels 305, where each logical channel 305 may have an associated CA channel 186 and data channel 190. For example, logical channel 305-a may have CA channel 186-a and data channel 190-a, and logical channel 305-b may have CA channel 186-b and data channel 190-b. Additionally, memory device 110-a may have a CK channel 188-a common to both logical channels 305. In some cases, memory device 110-a may have a CK channel 188 for each logical channel 305.

Each CA channel 186 may communicate commands between a host device and a CA interface 310. Each data channel 190 may communicate data between the host device and a memory array 315. For example, CA channel 186-a may carry commands to or from CA interface 310-a, and data channel 190-a may carry data to or from memory array 315-a. Additionally, CA channel 186-b may carry commands to or from CA interface 310-b, and data channel 190-b may carry data to or from memory array 315-b. In general, commands and data may come from a host device. Additionally, each CA interface 310 may forward commands to a memory array 315 via an internal channel 320. For example, CA interface 310-a may forward commands to memory array 315-a via internal channel 320-a, and CA interface 310-b may forward commands to memory array 315-b via internal channel 320-b.

To perform a read operation, one of CA interfaces 310-a and 310-b may receive a read command from a host device over CA channel 186-a or 186-b, respectively. If CA interface 310-a receives the read command, CA interface 310-a may forward the read command to memory array 315-a via internal channel 320-a. If CA interface 310-b receives the read command, CA interface 310-b may forward the read command to memory array 315-b via internal channel 320-b. Upon receiving the forwarded read command, memory array 315-a or 315-b may transmit data corresponding to the read command via data channel 190-a or 190-b, respectively. The host device may receive the transmitted data.

To perform a write operation, one of CA interfaces 310-a and 310-b may receive a write command from a host device. If CA interface 310-a receives the write command, CA interface 310-a may forward the write command to memory array 315-a, and if CA interface 310-b receives the write command, CA interface 310-b may forward the write command to memory array 315-b. Upon receiving the forwarded write command, memory array 315-a or 315-b may write and store data received from the host device via data channel 190-a or 190-b, respectively.

FIG. 3B illustrates an example of a different channel configuration 300-b for memory device 110-a. Channel configuration 300-b may be an example of a 2-CA interface configuration. In other examples, channel configuration 300-b may be used for any number of CA interfaces. In channel configuration 300-b, CA interface 310-b may be deactivated (e.g., disabled) and decoupled (e.g., disconnected, isolated) from memory array 315-b. Additionally or alternatively, CA interface 310-b may be ignored by memory device 110-a when memory device 110-a receives commands from the host device. Additionally, CA interface 310-a may be coupled with memory array 315-b via internal channel 320-c. This coupling and decoupling process may be accomplished based at least in part on selection component 325-a coupled with CA interface 310-a and CA interface 310-b. Selection component 325-a may additionally be configured to deactivate CA interface 310-b. The coupling and decoupling process is described in more detail with reference to FIG. 6A.

To transition from channel configuration 300-a to channel configuration 300-b, CA interface 310-a may receive a configuration command (e.g., a mode register set command) from the host device over CA channel 186-a. Alternatively, the configuration command may be received by CA interface 310-b over CA channel 186-b. In either case, upon receiving the configuration command, selection component 325-a may deactivate CA interface 310-b (e.g., put CA interface 310-b to sleep) and may disconnect internal channel 320-b. Additionally, selection component 325-a may connect internal channel 320-c between CA interface 310-a and memory array 315-b. To switch back from channel configuration 300-b to channel configuration 300-a, CA interface 310-a may receive another configuration command from the host device over CA channel 186-a. After CA interface 310-a receives the other configuration command, selection component 325-a may activate CA interface 310-b and may connect internal channel 320-b. Additionally, selection component 325-a may disconnect internal channel 320-c between CA interface 310-a and memory array 315-b.
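For illustration, the transition between channel configurations 300-a and 300-b can be modeled as follows; the attribute names are hypothetical and do not reflect an actual register map.

    class TwoChannelDeviceSketch:
        """Behavioral model of selection component 325-a switching between
        channel configurations 300-a and 300-b."""

        def __init__(self):
            self.ca_b_active = True
            self.route_to_array_b = "ca_310-b"  # internal channel 320-b connected

        def apply_config_command(self, merge):
            if merge:  # enter channel configuration 300-b
                self.ca_b_active = False            # put CA interface 310-b to sleep
                self.route_to_array_b = "ca_310-a"  # connect internal channel 320-c
            else:      # return to channel configuration 300-a
                self.ca_b_active = True             # reactivate CA interface 310-b
                self.route_to_array_b = "ca_310-b"  # reconnect internal channel 320-b

    dev = TwoChannelDeviceSketch()
    dev.apply_config_command(merge=True)
    assert not dev.ca_b_active and dev.route_to_array_b == "ca_310-a"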
Receiving configuration commands is discussed in more detail with respect to FIG. 5.

Alternatively, channel configuration 300-a or 300-b may be set at start-up. In such cases, one or more pins of at least some CA interfaces 310 may be pulled to a predefined logic level (e.g., a high logic level or a low logic level). Memory device 110-a may latch this logic level with an inactive edge of the RESET input and may store this value. Memory device 110-a may store this value until memory device 110-a is powered down or until a subsequent reset is issued. In some cases, memory device 110-a may be in a static configuration (e.g., always in channel configuration 300-b) when set at start-up. In such cases, CA interface 310-b and/or selection component 325-a may not be included.

In some cases, channel configuration 300 may not be set by a configuration command. For example, channel configuration 300-a or 300-b may be hardwired on the printed circuit board (PCB) making up memory device 110-a (e.g., memory device 110-a may have one or more dedicated pins).

Once in channel configuration 300-b, CA interface 310-b may not react to external signals received at CA interface 310-b and may refrain from decoding commands. Instead, commands for the second memory array 315-b may be received through CA interface 310-a. In such cases, commands received at CA interface 310-a may be used to control the flow of information through data channel 190-b. Additionally, the CK channel 188 associated with logical channel 305-b may not be used if each logical channel 305 has a corresponding clock (e.g., a corresponding CK channel 188). In general, the configuration choice (e.g., channel configuration 300-a or channel configuration 300-b) may have no effect on the data channels 190 of memory device 110-a. However, the configuration may affect the CA interfaces 310 and their associated command decoders.

In channel configuration 300-b, executing a read command may involve CA interface 310-a receiving the read command from a host device over CA channel 186-a. In some cases, CA interface 310-a may forward the read command to memory arrays 315-a and 315-b (e.g., via internal channels 320-a and 320-c, respectively). Upon receiving the read command, memory array 315-a may retrieve a first data set corresponding to the read command and transmit it over data channel 190-a, and memory array 315-b may retrieve a second data set corresponding to the read command and transmit it over data channel 190-b. The host device may receive both data sets. In general, data channels 190-a and 190-b may operate synchronously in channel configuration 300-b and may transmit or retrieve their corresponding data sets at approximately the same time.

In channel configuration 300-b, executing a write command may involve CA interface 310-a receiving the write command from a host device over CA channel 186-a. In some cases, CA interface 310-a may forward the write command to memory arrays 315-a and 315-b (e.g., via internal channels 320-a and 320-c, respectively). Upon receiving the write command, memory array 315-a may write to memory a first data set received from the host device over data channel 190-a, and memory array 315-b may write to memory a second data set received from the host device over data channel 190-b.
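A minimal sketch of the fanned-out read in channel configuration 300-b follows (the fetch callables are hypothetical stand-ins for memory arrays 315-a and 315-b):

    def merged_read(read_command, fetch_a, fetch_b):
        """One command received at CA interface 310-a is served by both arrays;
        data channels 190-a and 190-b return their data sets at about the same time."""
        data_a = fetch_a(read_command)  # transmitted over data channel 190-a
        data_b = fetch_b(read_command)  # transmitted over data channel 190-b
        return data_a, data_b

    # Toy 32-byte arrays: together they provide the 64-byte access granularity.
    data = merged_read("READ 0x0", lambda c: b"A" * 32, lambda c: b"B" * 32)
    assert len(data[0] + data[1]) == 64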
In general, data channels 190-a and 190-b may operate synchronously in channel configuration 300-b and may receive or write their corresponding data sets at approximately the same time.

In channel configuration 300-b, the access granularity associated with data received by or transmitted from memory device 110-a may be double the access granularity associated with channel configuration 300-a. For example, channel configuration 300-b may be associated with a 64-byte access granularity and channel configuration 300-a may be associated with a 32-byte access granularity.

In general, a device with configurable access granularity may select the granularity based on the amount of data an application requests to read or store. Selecting a granularity based on the amount of data may enable the device to transfer data more efficiently. For example, if an application requests 64 bytes of data and a memory device 110 with a static access granularity has a granularity of 32 bytes, memory device 110 may perform two read operations to retrieve the data. However, if memory device 110 has a configurable granularity, memory device 110 may use a granularity of 64 bytes and may perform a single read operation, which may be associated with less latency than two read operations. On the other hand, if an application requests 32 bytes of data and a memory device 110 with a static access granularity has a granularity of 64 bytes, memory device 110 may retrieve more data than requested by the application. The additional data may be discarded, but additional time may be required to transmit that data. However, if memory device 110 has a configurable granularity, memory device 110 may use a granularity of 32 bytes and may retrieve the requested data without the extra 32 bytes.
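The granularity trade-off discussed above reduces to a ceiling division, sketched below for concreteness (byte counts taken from the example):

    def read_operations_needed(request_bytes, granularity_bytes):
        """Number of read operations needed at a fixed access granularity."""
        return -(-request_bytes // granularity_bytes)  # ceiling division

    # A 64-byte request: two reads at 32-byte granularity, one read at 64 bytes.
    assert read_operations_needed(64, 32) == 2
    assert read_operations_needed(64, 64) == 1
    # A 32-byte request at 64-byte granularity fetches 32 bytes that are discarded.
    assert 64 - 32 == 32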
FIG. 4A illustrates an example of a channel configuration 400-a of a memory device 110-b. Channel configuration 400-a may be an example of a 4-CA interface configuration. In other examples, channel configuration 400-a may be used for any number of CA interfaces. In channel configuration 400-a, memory device 110-b may have four logical channels 305, where each logical channel 305 may have an associated CA channel 186 and data channel 190. For example, logical channel 305-c may have CA channel 186-c and data channel 190-c; logical channel 305-d may have CA channel 186-d and data channel 190-d; logical channel 305-e may have CA channel 186-e and data channel 190-e; and logical channel 305-f may have CA channel 186-f and data channel 190-f. Additionally, memory device 110-b may have a CK channel 188-b common to the four logical channels 305. In some cases, memory device 110-b may have a CK channel 188 for each logical channel 305.

Each CA channel 186 may communicate commands between a host device and a CA interface 310. Each data channel 190 may communicate data between the host device and a memory array 315. For example, CA channel 186-c may carry commands to or from CA interface 310-c, and data channel 190-c may carry data to or from memory array 315-c. Additionally, CA channel 186-d may carry commands to or from CA interface 310-d, and data channel 190-d may carry data to or from memory array 315-d. Additionally, CA channel 186-e may carry commands to or from CA interface 310-e, and data channel 190-e may carry data to or from memory array 315-e. In addition, CA channel 186-f may carry commands to or from CA interface 310-f, and data channel 190-f may carry data to or from memory array 315-f. In general, commands and data may come from a host device. Additionally, each CA interface 310 may forward commands to a memory array 315 via an internal channel 320. For example, CA interface 310-c may forward commands to memory array 315-c via internal channel 320-d; CA interface 310-d may forward commands to memory array 315-d via internal channel 320-e; CA interface 310-e may forward commands to memory array 315-e via internal channel 320-f; and CA interface 310-f may forward commands to memory array 315-f via internal channel 320-g.

To perform a read operation, one of CA interfaces 310-c, 310-d, 310-e, or 310-f may receive a read command from a host device. If CA interface 310-c receives the read command, CA interface 310-c may forward the read command (e.g., via internal channel 320-d) to memory array 315-c. If CA interface 310-d receives the read command, CA interface 310-d may forward the read command (e.g., via internal channel 320-e) to memory array 315-d. If CA interface 310-e receives the read command, CA interface 310-e may forward the read command (e.g., via internal channel 320-f) to memory array 315-e. If CA interface 310-f receives the read command, CA interface 310-f may forward the read command (e.g., via internal channel 320-g) to memory array 315-f. Upon receiving the forwarded read command, memory array 315-c, 315-d, 315-e, or 315-f may transmit data corresponding to the read command via data channel 190-c, 190-d, 190-e, or 190-f, respectively. The host device may receive the transmitted data.

To perform a write operation, one of CA interfaces 310-c, 310-d, 310-e, or 310-f may receive a write command from a host device. If CA interface 310-c receives the write command, CA interface 310-c may forward the write command (e.g., via internal channel 320-d) to memory array 315-c. If CA interface 310-d receives the write command, CA interface 310-d may forward the write command (e.g., via internal channel 320-e) to memory array 315-d. If CA interface 310-e receives the write command, CA interface 310-e may forward the write command (e.g., via internal channel 320-f) to memory array 315-e. If CA interface 310-f receives the write command, CA interface 310-f may forward the write command (e.g., via internal channel 320-g) to memory array 315-f. Upon receiving the forwarded write command, memory array 315-c, 315-d, 315-e, or 315-f may write and store data received from the host device via data channel 190-c, 190-d, 190-e, or 190-f, respectively.

FIG. 4B illustrates an example of a different channel configuration 400-b for memory device 110-b. Channel configuration 400-b may be an example of a 4-CA interface configuration. In other examples, channel configuration 400-b may be used for any number of CA interfaces. In channel configuration 400-b, CA interface 310-d may be deactivated and decoupled from memory array 315-d, and CA interface 310-f may be deactivated and decoupled from memory array 315-f. Additionally or alternatively, CA interfaces 310-d and 310-f may be ignored by memory device 110-b when memory device 110-b receives commands from a host device. Additionally, CA interface 310-c may be coupled with memory array 315-d via internal channel 320-h, and CA interface 310-e may be coupled with memory array 315-f via internal channel 320-i. This coupling and decoupling process may be implemented based at least in part on selection component 325-b coupled with CA interfaces 310-c, 310-d, 310-e, and 310-f. The coupling and decoupling process is described in more detail with reference to FIG. 6B.
To transition from channel configuration 400-a to channel configuration 400-b, CA interface 310-c or 310-e may receive a configuration command (e.g., a mode register set command) from a host device over CA channel 186-c or 186-e, respectively. Alternatively, the configuration command may be received by CA interface 310-d or 310-f over CA channel 186-d or 186-f, respectively. In either case, after receiving the configuration command, selection component 325-b may deactivate CA interfaces 310-d and 310-f (e.g., put CA interfaces 310-d and 310-f to sleep) and may disconnect internal channels 320-e and 320-g. Additionally, selection component 325-b may connect internal channel 320-h between CA interface 310-c and memory array 315-d and internal channel 320-i between CA interface 310-e and memory array 315-f. Receiving configuration commands is discussed in more detail with respect to FIG. 5.

To switch back from channel configuration 400-b to channel configuration 400-a, CA interface 310-c or 310-e may receive another configuration command (e.g., a mode register set command) from the host device over CA channel 186-c or 186-e, respectively. After receiving the configuration command, selection component 325-b may activate CA interface 310-d and may connect internal channel 320-e. In addition, selection component 325-b may activate CA interface 310-f and may connect internal channel 320-g. In addition, selection component 325-b may disconnect internal channel 320-h between CA interface 310-c and memory array 315-d and internal channel 320-i between CA interface 310-e and memory array 315-f.

Once in channel configuration 400-b, CA interfaces 310-d and 310-f may not respond to external signals and may stop receiving and decoding commands. Instead, commands for CA interface 310-d may be received via CA interface 310-c, and commands for CA interface 310-f may be received via CA interface 310-e. Additionally, the CK channels 188 associated with logical channels 305-d and 305-f may not be used if each logical channel 305 has a corresponding clock (e.g., a corresponding CK channel 188).

In channel configuration 400-b, executing a read command may involve CA interface 310-c or 310-e receiving the read command from a host device over CA channel 186-c or 186-e, respectively. In some cases, CA interface 310-c may forward a received read command to memory arrays 315-c and 315-d (e.g., via internal channels 320-d and 320-h, respectively), and CA interface 310-e may forward a received read command to memory arrays 315-e and 315-f (e.g., via internal channels 320-f and 320-i, respectively). Upon receiving a read command from CA interface 310-c, memory array 315-c may retrieve a data set corresponding to the received read command and transmit it over data channel 190-c, and memory array 315-d may retrieve a data set corresponding to the received read command and transmit it over data channel 190-d. Likewise, upon receiving a read command from CA interface 310-e, memory array 315-e may retrieve a data set corresponding to the received read command and transmit it over data channel 190-e, and memory array 315-f may retrieve a data set corresponding to the received read command and transmit it over data channel 190-f.
In general, in channel configuration 400-b, data channel 190-c may operate synchronously with data channel 190-d, data channel 190-e may operate synchronously with data channel 190-f, and their corresponding memory arrays 315 may transmit or retrieve their corresponding data sets at approximately the same time.

In channel configuration 400-b, executing a write command may involve CA interface 310-c or 310-e receiving the write command from a host device over CA channel 186-c or 186-e, respectively. In some cases, CA interface 310-c may forward a received write command to memory arrays 315-c and 315-d (e.g., via internal channels 320-d and 320-h, respectively), and CA interface 310-e may forward a received write command to memory arrays 315-e and 315-f (e.g., via internal channels 320-f and 320-i, respectively). Upon receiving a write command from CA interface 310-c, memory array 315-c may write to memory the data set received from the host device over data channel 190-c, and memory array 315-d may write to memory the data set received from the host device over data channel 190-d. Likewise, upon receiving a write command from CA interface 310-e, memory array 315-e may write to memory the data set received from the host device over data channel 190-e, and memory array 315-f may write to memory the data set received from the host device over data channel 190-f. In general, in channel configuration 400-b, data channel 190-c may operate synchronously with data channel 190-d, data channel 190-e may operate synchronously with data channel 190-f, and their corresponding memory arrays 315 may receive or write their corresponding data sets at approximately the same time.

FIG. 4C illustrates an example of a different channel configuration 400-c for memory device 110-b. Channel configuration 400-c may be an example of a 4-CA interface configuration. In other examples, channel configuration 400-c may be used for any number of CA interfaces. In channel configuration 400-c, CA interface 310-d may be deactivated and decoupled from memory array 315-d; CA interface 310-e may be deactivated and decoupled from memory array 315-e; and CA interface 310-f may be deactivated and decoupled from memory array 315-f. Additionally or alternatively, CA interfaces 310-d, 310-e, and 310-f may be ignored by memory device 110-b when memory device 110-b receives commands from a host device. Additionally, CA interface 310-c may be coupled with memory array 315-d via internal channel 320-h, with memory array 315-e via internal channel 320-j, and with memory array 315-f via internal channel 320-k. This coupling and decoupling process may be implemented based at least in part on selection component 325-b coupled with CA interfaces 310-c, 310-d, 310-e, and 310-f. The coupling and decoupling process is described in more detail with reference to FIG. 6B.

To transition from channel configuration 400-a to channel configuration 400-c, CA interface 310-c may receive a configuration command (e.g., a mode register set command) from a host device over CA channel 186-c. Alternatively, the configuration command may be received by one of CA interfaces 310-d, 310-e, or 310-f over CA channel 186-d, 186-e, or 186-f, respectively.
In either case, after receiving the configuration command, selection component 325-b may deactivate CA interfaces 310-d, 310-e, and 310-f (e.g., put CA interfaces 310-d, 310-e, and 310-f to sleep) and may disconnect internal channels 320-e, 320-f, and 320-g. In addition, selection component 325-b may connect internal channel 320-h between CA interface 310-c and memory array 315-d, internal channel 320-j between CA interface 310-c and memory array 315-e, and internal channel 320-k between CA interface 310-c and memory array 315-f.

To switch back from channel configuration 400-c to channel configuration 400-a, CA interface 310-c may receive another configuration command from the host device over CA channel 186-c. After CA interface 310-c receives the other configuration command, selection component 325-b may activate CA interfaces 310-d, 310-e, and 310-f and may connect internal channels 320-e, 320-f, and 320-g. Additionally, selection component 325-b may disconnect internal channel 320-h between CA interface 310-c and memory array 315-d, internal channel 320-j between CA interface 310-c and memory array 315-e, and internal channel 320-k between CA interface 310-c and memory array 315-f.

In a similar manner, to transition from channel configuration 400-b to channel configuration 400-c, CA interface 310-c or 310-e may receive a configuration command from a host device over CA channel 186-c or 186-e. After receiving the configuration command, selection component 325-b may deactivate CA interface 310-e (e.g., put CA interface 310-e to sleep) and may disconnect internal channel 320-i. Additionally, selection component 325-b may connect internal channel 320-j between CA interface 310-c and memory array 315-e and internal channel 320-k between CA interface 310-c and memory array 315-f.

To switch back from channel configuration 400-c to channel configuration 400-b, CA interface 310-c may receive another configuration command from the host device over CA channel 186-c. After CA interface 310-c receives the other configuration command, selection component 325-b may activate CA interface 310-e and may connect internal channel 320-i. In addition, selection component 325-b may disconnect internal channel 320-j between CA interface 310-c and memory array 315-e and internal channel 320-k between CA interface 310-c and memory array 315-f. Receiving configuration commands is discussed in more detail with respect to FIG. 5.

Once in channel configuration 400-c, CA interfaces 310-d, 310-e, and 310-f may not respond to external signals and may stop receiving and decoding commands. Instead, commands may be received via CA interface 310-c, and control of data channels 190-d, 190-e, and 190-f may derive from CA interface 310-c. Additionally, the CK channels 188 associated with logical channels 305-d, 305-e, and 305-f may not be used if each logical channel 305 has a corresponding clock (e.g., a corresponding CK channel 188).

In channel configuration 400-c, executing a read command may involve CA interface 310-c receiving the read command from a host device over CA channel 186-c. In some cases, CA interface 310-c may forward the read command to memory arrays 315-c, 315-d, 315-e, and 315-f.
After receiving the read command, memory array 315-c may retrieve a first data set corresponding to the read command and transmit it over data channel 190-c; memory array 315-d may retrieve a second data set corresponding to the read command and transmit it over data channel 190-d; memory array 315-e may retrieve a third data set corresponding to the read command and transmit it over data channel 190-e; and memory array 315-f may retrieve a fourth data set corresponding to the read command and transmit it over data channel 190-f. The host device may receive all four data sets. In general, data channels 190-c, 190-d, 190-e, and 190-f may operate synchronously in channel configuration 400-c, and their corresponding memory arrays 315 may transmit or retrieve their corresponding data sets at approximately the same time.

In channel configuration 400-c, executing a write command may involve CA interface 310-c receiving the write command from a host device over CA channel 186-c. In some cases, CA interface 310-c may forward the write command to memory arrays 315-c, 315-d, 315-e, and 315-f. After receiving the write command, memory array 315-c may write a first data set received from the host device over data channel 190-c; memory array 315-d may write a second data set received from the host device over data channel 190-d; memory array 315-e may write a third data set received from the host device over data channel 190-e; and memory array 315-f may write a fourth data set received from the host device over data channel 190-f. In general, data channels 190-c, 190-d, 190-e, and 190-f may operate synchronously in channel configuration 400-c, and their corresponding memory arrays 315 may receive or write their corresponding data sets at approximately the same time.

In some cases, channel configuration 400-a, 400-b, or 400-c may be set at start-up. In such cases, one or more pins of at least some CA interfaces 310 may be pulled to a predefined logic level (e.g., a high logic level or a low logic level). Memory device 110-b may latch this logic level with an inactive edge of the RESET input and may store this value. Memory device 110-b may store this value until memory device 110-b is powered down or until a subsequent reset is issued. In some cases, when set at start-up, memory device 110-b may be in a static configuration (e.g., always in channel configuration 400-b or 400-c). If this is the case for channel configuration 400-b, selection component 325-b, CA interface 310-d, and CA interface 310-f may not be included. If this is the case for channel configuration 400-c, selection component 325-b and CA interfaces 310-d, 310-e, and 310-f may not be included.

In some cases, channel configuration 400 may not be set by a configuration command. For example, channel configuration 400-a, 400-b, or 400-c may be hardwired on the PCB making up memory device 110-b. In general, the configuration choice (e.g., between channel configurations 400-a, 400-b, and 400-c) may have no effect on the data channels 190 of memory device 110-b. However, the configuration may affect the CA interfaces 310 and their associated command decoders.

The access granularity associated with channel configuration 400-b may be twice the granularity associated with channel configuration 400-a.
Additionally, the access granularity associated with channel configuration 400-c may be twice the granularity associated with channel configuration 400-b and/or four times the granularity associated with channel configuration 400-a. For example, channel configuration 400-c may be associated with a 128-byte granularity, channel configuration 400-b may be associated with a 64-byte granularity, and channel configuration 400-a may be associated with a 32-byte granularity.

FIG. 5 shows an example of a flowchart 500 supporting a reconfigurable channel interface for a memory device according to examples disclosed herein. In some cases, flowchart 500 may be implemented by a memory device 110 as described with reference to FIGS. 1 and 3A-4C.

At 505, the memory device 110 may receive a configuration command from the host device at a first CA interface 310 over a CA channel 186. The first CA interface 310 may be associated with a first logical channel 305 of the memory device 110.

At 510, memory device 110 may decode the configuration command. If memory device 110 decodes the configuration command as a "merge" command, memory device 110 may proceed to 515. If memory device 110 decodes the configuration command as an "unmerge" command, memory device 110 may proceed to 525.

At 515, memory device 110 may disable one or more other CA interfaces 310 in other logical channels 305 (e.g., the logical channel 305 next to the first logical channel 305). Additionally, the memory device may establish a connection between the first CA interface 310 and a memory array 315 associated with the other logical channel 305.

At 520, the first CA interface 310 may forward subsequent commands to the other logical channels. For example, memory device 110 may receive a command and may forward the command to the memory array 315 in the other logical channel 305 via the connection.

At 525, memory device 110 may enable one or more other CA interfaces in other logical channels 305 (e.g., the logical channel 305 next to the first logical channel 305). Additionally, the memory device may disconnect the first CA interface 310 from the memory array 315 in the other logical channel 305.

At 530, the first CA interface 310 may not forward subsequent commands received at the first CA interface to the other logical channels. For example, memory device 110 may receive a command and may not forward the command to the memory array 315 in the other logical channel 305. Instead, memory device 110 may forward the command to the memory array 315 within the first logical channel 305.
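For illustration only, the merge/unmerge decode of flowchart 500 (FIG. 5) can be sketched as follows; the DeviceSketch class is a hypothetical stand-in for memory device 110.

    class DeviceSketch:
        """Minimal stand-in for memory device 110 in flowchart 500."""
        def __init__(self):
            self.other_ca_enabled = True   # CA interface of the adjacent channel
            self.forward_to_other = False  # whether the first CA reaches its array

    def handle_config_command(dev, command):
        if command == "MERGE":            # 510 -> 515
            dev.other_ca_enabled = False  # disable the other CA interface
            dev.forward_to_other = True   # 520: forward subsequent commands
        elif command == "UNMERGE":        # 510 -> 525
            dev.other_ca_enabled = True   # re-enable the other CA interface
            dev.forward_to_other = False  # 530: stop forwarding

    dev = DeviceSketch()
    handle_config_command(dev, "MERGE")
    assert dev.forward_to_other and not dev.other_ca_enabled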
FIG. 6A illustrates an example of a routing scheme 600-a showing a selection component 325-c configured to route commands between CA interfaces 310 and memory arrays 315 according to examples disclosed herein. In some examples, selection component 325-c may be an example of the selection component 325-a described with reference to FIGS. 3A and 3B. Routing scheme 600-a can represent how selection component 325-c may route commands between CA interfaces 310 and memory arrays 315. The paths shown in routing scheme 600-a may represent at least a portion of the internal channels 320 described with reference to FIGS. 3A-3B.

Selection component 325-c can be configured in various configurations. For example, selection component 325-c may be in a channel A configuration, in which signals received from internal channel 605-a are routed to memory arrays 315-g and 315-h. Alternatively, selection component 325-c may be in a channel AB configuration, in which signals received from internal channel 605-a are routed to memory array 315-g and signals received from internal channel 605-b are routed to memory array 315-h. In general, selection component 325-c in the channel A configuration may correspond to memory device 110 in channel configuration 300-b, and selection component 325-c in the channel AB configuration may correspond to memory device 110 in channel configuration 300-a.

To enable such routing, selection component 325-c may include a multiplexer 610-a and a latch component 615-a. Multiplexer 610-a may be coupled with the first CA interface 310-g, the second CA interface 310-h, and memory array 315-h. Multiplexer 610-a may also be coupled with latch component 615-a via select signal path 620-a. Latch component 615-a may output a select signal via select signal path 620-a that causes multiplexer 610-a to selectively couple memory array 315-h with either the first CA interface 310-g or the second CA interface 310-h based on the value of the select signal.

The configuration multiplexer 610-a is in may depend on the value of the select signal received from latch component 615-a. For example, if the select signal is at a low value (e.g., a logic 0), multiplexer 610-a may be configured to couple the first CA interface 310-g with memory array 315-h (e.g., via internal channel 605-a and internal channel 635-a). In such cases, selection component 325-c may be in the channel A configuration. If the select signal is at a high value (e.g., a logic 1), multiplexer 610-a may be configured to couple the second CA interface 310-h with memory array 315-h (e.g., via internal channel 605-b and internal channel 635-a). In such cases, selection component 325-c may be in the channel AB configuration. Latch component 615-a may determine the value of the select signal by sampling the level of the CONFIG input 625-a with a rising edge of the RESET_n input 630-a.

In some examples, commands may be received by CA interface 310-g over CA channel 186-g. CA interface 310-g may forward commands to memory array 315-g over internal channel 605-a. CA interface 310-g may be coupled with memory array 315-g regardless of whether selection component 325-c is in the channel A configuration or the channel AB configuration. Accordingly, CA interface 310-g may forward commands to memory array 315-g over internal channel 605-a regardless of whether selection component 325-c is in the channel A configuration or the channel AB configuration. Internal channel 605-a may correspond to internal channel 320-a in FIGS. 3A and 3B. If selection component 325-c is in the channel AB configuration, the command may not be forwarded to memory array 315-h (e.g., multiplexer 610-a may isolate CA interface 310-g from memory array 315-h). However, if selection component 325-c is in the channel A configuration, the command may be forwarded to memory array 315-h through internal channels 605-a and 635-a. The combination of internal channel 605-a and internal channel 635-a may correspond to internal channel 320-c described with reference to FIG. 3B.

In other examples, commands may be received by CA interface 310-h over CA channel 186-h. CA interface 310-h may forward the command to multiplexer 610-a through internal channel 605-b. If selection component 325-c is in the channel AB configuration, the command may be forwarded to memory array 315-h through internal channels 605-b and 635-a. The combination of internal channel 605-b and internal channel 635-a may correspond to internal channel 320-b described with reference to FIG. 3A. If selection component 325-c is in the channel A configuration, the command may not be forwarded to memory array 315-h (e.g., multiplexer 610-a may isolate CA interface 310-h from memory array 315-h).
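A behavioral sketch of multiplexer 610-a and latch component 615-a follows (a Python model; the signal values and function names are illustrative, not the actual circuit):

    def mux_610a(select, cmd_from_310g, cmd_from_310h):
        """select == 0: channel A configuration, CA interface 310-g drives 315-h;
        select == 1: channel AB configuration, CA interface 310-h drives 315-h."""
        return cmd_from_310g if select == 0 else cmd_from_310h

    def latch_615a(config_level, reset_n_rising, previous_value):
        """Latch component 615-a samples the CONFIG input 625-a on a rising edge
        of RESET_n 630-a and otherwise holds its previous value."""
        return config_level if reset_n_rising else previous_value

    select = latch_615a(config_level=0, reset_n_rising=True, previous_value=1)
    assert mux_610a(select, "cmd from 310-g", "cmd from 310-h") == "cmd from 310-g"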
FIG. 6B illustrates an example of a routing scheme 600-b showing a selection component 325-d configured to route commands between CA interfaces 310 and memory arrays 315 according to examples disclosed herein. In some examples, selection component 325-d may be an example of the selection component 325-b described with reference to FIGS. 4A, 4B, and 4C. Routing scheme 600-b can represent how selection component 325-d may route commands between CA interfaces 310 and memory arrays 315. The paths shown in routing scheme 600-b may represent at least a portion of the internal channels 320 described with reference to FIGS. 4A-4C.

Selection component 325-d can be configured in various configurations. For example, selection component 325-d may be in a channel A configuration, in which signals received from internal channel 605-c are routed to memory arrays 315-i, 315-j, 315-k, and 315-l. Alternatively, selection component 325-d may be in a channel AC configuration, in which signals received from internal channel 605-c are routed to memory arrays 315-i and 315-j and signals received from internal channel 605-e are routed to memory arrays 315-k and 315-l. Alternatively, selection component 325-d may be in a channel ABCD configuration, in which signals received from internal channel 605-c are routed to memory array 315-i; signals received from internal channel 605-d are routed to memory array 315-j; signals received from internal channel 605-e are routed to memory array 315-k; and signals received from internal channel 605-f are routed to memory array 315-l. In general, selection component 325-d in the channel A configuration may correspond to memory device 110 in channel configuration 400-c; selection component 325-d in the channel AC configuration may correspond to memory device 110 in channel configuration 400-b; and selection component 325-d in the channel ABCD configuration may correspond to memory device 110 in channel configuration 400-a.

To implement such routing, selection component 325-d may include a set of multiplexers 610. The set of multiplexers 610 may include multiplexers 610-b, 610-c, 610-d, and 610-e. Multiplexer 610-b may be coupled with the first CA interface 310-i, the second CA interface 310-j, and memory array 315-j. Multiplexer 610-c may be coupled with the third CA interface 310-k, the fourth CA interface 310-l, and multiplexer 610-e. Multiplexer 610-d may be coupled with the first CA interface 310-i, the third CA interface 310-k, and memory array 315-k. Multiplexer 610-e may be coupled with the first CA interface 310-i, multiplexer 610-c, and memory array 315-l.

Selection component 325-d may also include a set of latch components 615. The set of latch components 615 may include latch components 615-b and 615-c. Multiplexers 610-b and 610-c may be coupled with latch component 615-b through select signal path 620-b.
Latch component 615-b may output a first select signal via select signal path 620-b that causes multiplexer 610-b to selectively couple memory array 315-j with either the first CA interface 310-i or the second CA interface 310-j based on the value of the first select signal. Additionally, the first select signal may cause multiplexer 610-c to selectively couple multiplexer 610-e with the third CA interface 310-k or the fourth CA interface 310-l based on the value of the first select signal. Meanwhile, multiplexers 610-d and 610-e may be coupled with latch component 615-c via select signal path 620-c. Latch component 615-c may output a second select signal via select signal path 620-c that causes multiplexer 610-d to selectively couple memory array 315-k with the first CA interface 310-i or the third CA interface 310-k based on the value of the second select signal. Additionally, the second select signal may cause multiplexer 610-e to selectively couple memory array 315-l with the first CA interface 310-i or multiplexer 610-c based on the value of the second select signal.

The configuration multiplexer 610-b is placed in may depend on the value of the select signal received from latch component 615-b. For example, if the select signal is at a low value (e.g., a logic 0), multiplexer 610-b may be configured to couple the first CA interface 310-i with memory array 315-j (e.g., via internal channels 605-c and 635-b). In such cases, selection component 325-d may be in the channel A or channel AC configuration. If the select signal is at a high value (e.g., a logic 1), multiplexer 610-b may be configured to couple the second CA interface 310-j with memory array 315-j (e.g., via internal channels 605-d and 635-b). In such cases, selection component 325-d may be in the channel ABCD configuration. Latch component 615-b may determine the value of the select signal by sampling the level of the CONFIG0 input 625-b with a rising edge of the RESET_n input 630-b.

The configuration multiplexer 610-c is placed in may depend on the value of the select signal received from latch component 615-b. For example, if the select signal is at a low value (e.g., a logic 0), multiplexer 610-c may be configured to couple the third CA interface 310-k with multiplexer 610-e (e.g., via internal channels 605-e and 635-c). In such cases, selection component 325-d may be in the channel A configuration or the channel AC configuration. If the select signal is at a high value (e.g., a logic 1), multiplexer 610-c may be configured to couple the fourth CA interface 310-l with multiplexer 610-e (e.g., via internal channels 605-f and 635-c). In such cases, selection component 325-d may be in the channel ABCD configuration. In some cases, multiplexer 610-c may couple the third CA interface 310-k with multiplexer 610-e while multiplexer 610-b couples the first CA interface 310-i with memory array 315-j, and may couple the fourth CA interface 310-l with multiplexer 610-e while multiplexer 610-b couples the second CA interface 310-j with memory array 315-j.

The configuration multiplexer 610-d is placed in may depend on the value of the select signal received from latch component 615-c. For example, if the select signal is at a low value (e.g., a logic 0), multiplexer 610-d may be configured to couple the first CA interface 310-i with memory array 315-k (e.g., via internal channels 605-c and 640-a).
In such cases, selection component 325-d may be in the channel A configuration. If the select signal is at a high value (e.g., a logic 1), multiplexer 610-d may be configured to couple the third CA interface 310-k with memory array 315-k (e.g., via internal channels 605-e and 640-a). In such cases, selection component 325-d may be in the channel AC configuration. Latch component 615-c may determine the value of the select signal by sampling the level of the CONFIG1 input 625-c with a rising edge of the RESET_n input 630-b.

The configuration multiplexer 610-e is placed in may depend on the value of the select signal received from latch component 615-c. For example, if the select signal is at a low value (e.g., a logic 0), multiplexer 610-e may be configured to couple the first CA interface 310-i with memory array 315-l (e.g., via internal channels 605-c and 640-b). In such cases, selection component 325-d may be in the channel A configuration. If the select signal is at a high value (e.g., a logic 1), multiplexer 610-e may be configured to couple multiplexer 610-c with memory array 315-l (e.g., via internal channel 635-c). In such cases, selection component 325-d may be in the channel AC or channel ABCD configuration. Whether selection component 325-d is in the channel AC configuration or the channel ABCD configuration may depend on the select signal from latch component 615-b. In some cases, multiplexer 610-e may couple the first CA interface 310-i with memory array 315-l while multiplexer 610-d couples the first CA interface 310-i with memory array 315-k, and may couple multiplexer 610-c with memory array 315-l while multiplexer 610-d couples the third CA interface 310-k with memory array 315-k.

In some examples, the first CA interface 310-i may receive commands over CA channel 186-i. The first CA interface 310-i may forward the command to memory array 315-i through internal channel 605-c. The first CA interface 310-i may be coupled with memory array 315-i regardless of whether selection component 325-d is in the channel A configuration, the channel AC configuration, or the channel ABCD configuration. Thus, the first CA interface 310-i may forward commands over internal channel 605-c regardless of the configuration selection component 325-d is in. Internal channel 605-c may correspond to internal channel 320-d in FIGS. 4A, 4B, and 4C. If selection component 325-d is in the channel ABCD configuration, the command may not be forwarded to memory array 315-j, 315-k, or 315-l, as multiplexers 610-b, 610-d, and 610-e may isolate the first CA interface 310-i from memory arrays 315-j, 315-k, and 315-l, respectively. However, if selection component 325-d is in the channel A or channel AC configuration, the command may be forwarded to memory array 315-j via internal channels 605-c and 635-b. The combination of internal channels 605-c and 635-b may correspond to internal channel 320-h in FIGS. 4B and 4C. If selection component 325-d is in the channel AC configuration, the command may not be forwarded to memory array 315-k or 315-l, as multiplexers 610-d and 610-e may isolate the first CA interface 310-i from memory arrays 315-k and 315-l, respectively. However, if selection component 325-d is in the channel A configuration, the command may be forwarded to memory array 315-k via internal channels 605-c and 640-a and to memory array 315-l via internal channels 605-c and 640-b.
The combination of internal channels 605-c and 640-a may correspond to internal channel 320-j in FIG. 4C, and the combination of internal channels 605-c and 640-b may correspond to internal channel 320-k in FIG. 4C.

In other examples, the command may be received by the second CA interface 310-j over CA channel 186-j. The second CA interface 310-j may forward the command to multiplexer 610-b through internal channel 605-d. If selection component 325-d is in the channel ABCD configuration, the command may be forwarded to memory array 315-j through internal channels 605-d and 635-b. The combination of internal channel 605-d and internal channel 635-b may correspond to internal channel 320-e described with reference to FIG. 4A. If selection component 325-d is in the channel A or channel AC configuration, the command may not be forwarded to memory array 315-j (e.g., multiplexer 610-b may isolate the second CA interface 310-j from memory array 315-j).

In other examples, the command may be received by the third CA interface 310-k over CA channel 186-k. The third CA interface 310-k may forward the command to multiplexer 610-d through internal channel 605-e. The third CA interface 310-k may be coupled with multiplexer 610-d regardless of whether selection component 325-d is in the channel A configuration, the channel AC configuration, or the channel ABCD configuration. Thus, the third CA interface 310-k may forward commands over internal channel 605-e regardless of the configuration selection component 325-d is in. If selection component 325-d is in the channel A configuration, the command may be forwarded to multiplexer 610-e, but not to memory array 315-k or 315-l, as multiplexers 610-d and 610-e may isolate the third CA interface 310-k from memory arrays 315-k and 315-l, respectively. If selection component 325-d is in the channel AC or channel ABCD configuration, the command may be forwarded to memory array 315-k through internal channels 605-e and 640-a. The combination of internal channels 605-e and 640-a may correspond to internal channel 320-f in FIGS. 4A and 4B. If selection component 325-d is in the channel AC configuration, the command may also be forwarded to memory array 315-l through internal channels 605-e, 635-c, and 640-b. The combination of internal channels 605-e, 635-c, and 640-b may correspond to internal channel 320-i in FIG. 4B. If selection component 325-d is in the channel ABCD configuration, the command may not be forwarded to multiplexer 610-e and, relatedly, may not be forwarded to memory array 315-l, as multiplexer 610-c may isolate the third CA interface 310-k from multiplexer 610-e.

In other examples, the command may be received by the fourth CA interface 310-l over CA channel 186-l. The fourth CA interface 310-l may forward the command to multiplexer 610-c through internal channel 605-f. If selection component 325-d is in the channel ABCD configuration, the command may be forwarded to memory array 315-l through internal channels 605-f, 635-c, and 640-b. The combination of internal channels 605-f, 635-c, and 640-b may correspond to internal channel 320-g in FIG. 4A. If selection component 325-d is in the channel A or channel AC configuration, the command may not be forwarded to memory array 315-l (e.g., multiplexer 610-c may isolate the fourth CA interface 310-l from multiplexer 610-e).
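One way to summarize routing scheme 600-b is a table computed from the two latched select signals; the sketch below is an interpretation of the description above, with CONFIG0 from latch component 615-b and CONFIG1 from latch component 615-c (0 = low, 1 = high):

    def route_commands(config0, config1):
        """Which CA interface drives each memory array for latched select values.
        (0, 0) -> channel A; (0, 1) -> channel AC; (1, 1) -> channel ABCD."""
        routes = {"315-i": "310-i"}                        # 310-i always drives 315-i
        routes["315-j"] = "310-j" if config0 else "310-i"  # multiplexer 610-b
        routes["315-k"] = "310-k" if config1 else "310-i"  # multiplexer 610-d
        if config1:                                        # multiplexer 610-e selects
            routes["315-l"] = "310-l" if config0 else "310-k"  # mux 610-c's output
        else:
            routes["315-l"] = "310-i"
        return routes

    assert route_commands(0, 0)["315-l"] == "310-i"  # channel A: one CA drives all
    assert route_commands(0, 1)["315-l"] == "310-k"  # channel AC pairing
    assert route_commands(1, 1)["315-l"] == "310-l"  # channel ABCD: one CA each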
FIG. 7 shows a block diagram 700 of a memory device 705 supporting a reconfigurable channel interface for a memory device according to examples disclosed herein. Memory device 705 may be an example of aspects of the memory devices described with reference to FIGS. 1-6. Memory device 705 may include a CA interface component 710, a selection component 715, a memory array component 720, and a CA interface activation component 725. Each of these modules may communicate with the others, directly or indirectly (e.g., via one or more buses).

CA interface component 710 can receive, at a first command/address (CA) interface coupled with a first memory array, a command indicating a configuration of a set of CA interfaces that includes the first CA interface. In some examples, CA interface component 710 can receive a read command at a CA interface coupled with a first control channel, a first memory array, and a second memory array, where the first memory array is coupled with a first data channel and the second memory array is coupled with a second data channel. In some examples, CA interface component 710 can identify that the read command is for the second memory array. In some examples, CA interface component 710 can receive, at the first CA interface, a read command for the second memory array based on coupling the first CA interface with the second memory array. In some examples, CA interface component 710 can forward the read command from the first CA interface to the first memory array and the second memory array, where retrieving data is based on forwarding the read command to the second memory array, and where retrieving additional data is based on forwarding the read command to the first memory array. In some examples, CA interface component 710 can receive, at the first CA interface, a second command to associate the second CA interface with the second memory array. In some examples, CA interface component 710 can receive a write command at the CA interface over the first control channel. In some cases, the command is received after the memory device that includes the first CA interface has performed a boot process. In some cases, the command is received over one or more dedicated pins of the first CA interface as part of a boot process of the memory device that includes the first CA interface.

Selection component 715 can isolate a second CA interface in the set from a second memory array based on receiving the command. In some examples, selection component 715 can couple the first CA interface with the second memory array based on isolating the second CA interface from the second memory array. In some examples, selection component 715 can isolate the first CA interface from the second memory array based on receiving the second command. In some examples, selection component 715 can couple the second CA interface with the second memory array based on isolating the first CA interface from the second memory array.

Memory array component 720 can retrieve a data set from the second memory array based on receiving the read command. In some examples, memory array component 720 can transmit the data set over the second data channel based on retrieving the data set from the second memory array.
In some examples, memory array component 720 can retrieve data from the second memory array based on receiving the read command at the first CA interface. In some examples, memory array component 720 can retrieve additional data from the first memory array based on receiving the read command at the first CA interface. In some examples, memory array component 720 can receive a second data set over the second data channel based on receiving the write command at the CA interface. In some examples, memory array component 720 can write the second data set to the second memory array based on receiving the write command and the second data set. In some examples, memory array component 720 can retrieve the second data set from the first memory array based on receiving the read command. In some examples, memory array component 720 can transmit the second data set over the first data channel based on retrieving the second data set from the first memory array. In some examples, memory array component 720 can retrieve the second data set from a third memory array coupled with a third data channel based on receiving the read command at the CA interface. In some examples, memory array component 720 can transmit the second data set over the third data channel based on retrieving the second data set from the third memory array.

CA interface activation component 725 can deactivate the second CA interface based on receiving the command at the first CA interface, where coupling the first CA interface with the second memory array is based on deactivating the second CA interface. In some examples, CA interface activation component 725 can deactivate a clock associated with the second CA interface based on receiving the command at the first CA interface. In some examples, CA interface activation component 725 can activate the second CA interface based on receiving the second command at the first CA interface.

FIG. 8 shows a block diagram 800 of a host device 805 supporting a reconfigurable channel interface for a memory device according to examples disclosed herein. Host device 805 may be an example of aspects of the host devices described with reference to FIGS. 1 and 3-5. Host device 805 may include a message size determination component 810, a configuration determination component 815, a command transmission component 820, a CA interface number component 825, and a data transmission component 830.
Each of these modules may communicate with each other directly or indirectly (e.g., via one or more buses).

The information size determination component 810 can determine, by the host device, a size of information associated with an access command executed by the memory device.

The configuration determination component 815 can determine a configuration of at least one CA interface in a set of command/address (CA) interfaces of the memory device based on determining the size of information associated with the access command. In some cases, the configuration indicates that the CA interface is coupled to the first memory array and the second memory array and that the second CA interface is deactivated.

The command transmission component 820 can transmit a command indicating the configuration to the CA interfaces in the set.

The CA interface number component 825 can identify a number of reconfigurable CA interfaces of the memory device based on determining the size of information associated with the access command, wherein determining the configuration is based on identifying the number of reconfigurable CA interfaces.

The data transmission component 830 can transmit data having the size over the data channel based on transmitting the configuration to the memory device.

FIG. 9 shows a flowchart of one or more methods 900 of supporting a reconfigurable channel interface for a memory device according to examples disclosed herein. The operations of method 900 may be implemented by a memory device or components thereof as described herein. For example, the operations of method 900 may be performed by a memory device as described with reference to FIG. 7. In some examples, a memory device can execute a set of instructions to control functional elements of the memory device to perform the described functions. Additionally or alternatively, the memory device may employ dedicated hardware to perform aspects of the described functions.

At 905, the memory device may receive, at a first CA interface coupled to a first memory array, a command indicating a configuration of a set of CA interfaces including the first CA interface. The operations of 905 may be performed according to the methods described herein. In some examples, aspects of the operations of 905 may be performed by the CA interface component described with reference to FIG. 7.

At 910, the memory device may isolate a second CA interface in the set from a second memory array based on receiving the command. The operations of 910 may be performed according to the methods described herein. In some examples, aspects of the operations of 910 may be performed by the selection component described with reference to FIG. 7.

At 915, the memory device can couple the first CA interface with the second memory array based on isolating the second CA interface from the second memory array. The operations of 915 may be performed according to the methods described herein. In some examples, aspects of the operations of 915 may be performed by the selection component described with reference to FIG. 7.

In some instances, an apparatus as described herein may perform one or more methods, such as method 900.
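For illustration only, the isolate-then-couple sequence of steps 905 through 915 can be modeled in software. The following Python sketch is a minimal behavioral model under that reading, not the patented implementation; every identifier in it (ReconfigurableMemoryDevice, ca_map, the configuration name, and so on) is hypothetical.

```python
# Minimal behavioral sketch of method 900: a command at the first CA interface
# reconfigures which interface drives the second memory array.

class ReconfigurableMemoryDevice:
    def __init__(self):
        # ca_map[array_id] -> the CA interface currently coupled to that array
        self.ca_map = {"array_1": "ca_1", "array_2": "ca_2"}
        self.active_interfaces = {"ca_1", "ca_2"}

    def receive_config_command(self, interface, config):
        """Step 905: a command received at the first CA interface indicates a
        configuration of the set of CA interfaces."""
        if interface == "ca_1" and config == "one_ca_two_arrays":
            self.isolate("ca_2", "array_2")   # step 910
            self.couple("ca_1", "array_2")    # step 915
            self.deactivate("ca_2")           # optionally also gate its clock

    def isolate(self, interface, array):
        if self.ca_map.get(array) == interface:
            self.ca_map[array] = None

    def couple(self, interface, array):
        self.ca_map[array] = interface

    def deactivate(self, interface):
        self.active_interfaces.discard(interface)


device = ReconfigurableMemoryDevice()
device.receive_config_command("ca_1", "one_ca_two_arrays")
assert device.ca_map == {"array_1": "ca_1", "array_2": "ca_1"}
```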
The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for: receiving, at a first CA interface coupled to a first memory array, a command indicating a configuration of a set of CA interfaces including the first CA interface; isolating a second CA interface in the set from a second memory array based on receiving the command; and coupling the first CA interface with the second memory array based on isolating the second CA interface from the second memory array.

Some examples of the method 900 and apparatus described herein may further include operations, features, means, or instructions for deactivating the second CA interface based on receiving the command at the first CA interface, wherein coupling the first CA interface with the second memory array may be based on deactivating the second CA interface.

In some examples of the method 900 and apparatus described herein, deactivating the second CA interface may include operations, features, means, or instructions for deactivating a clock associated with the second CA interface.

Some examples of the method 900 and apparatus described herein may further include operations, features, means, or instructions for receiving, at the first CA interface, a read command for the second memory array based on coupling the first CA interface with the second memory array, and retrieving data from the second memory array based on receiving the read command at the first CA interface.

Some examples of the method 900 and apparatus described herein may further include operations, features, means, or instructions for retrieving additional data from the first memory array based on receiving the read command at the first CA interface.

Some examples of the method 900 and apparatus described herein may further include operations, features, means, or instructions for forwarding the read command from the first CA interface to the first memory array and the second memory array, wherein retrieving the data may be based on forwarding the read command to the second memory array, and wherein retrieving the additional data may be based on forwarding the read command to the first memory array.

Some examples of the method 900 and apparatus described herein may further include operations, features, means, or instructions for receiving, at the first CA interface, a second command for associating the second CA interface with the second memory array;
isolating the first CA interface from the second memory array based on receiving the second command; and coupling the second CA interface with the second memory array based on isolating the first CA interface from the second memory array.

Some examples of the method 900 and apparatus described herein may further include operations, features, means, or instructions for activating the second CA interface based on receiving the second command at the first CA interface.

In some examples of the method 900 and apparatus described herein, the command may be received after the memory device including the first CA interface has performed a boot process.

In some examples of the method 900 and apparatus described herein, the command may be received over one or more dedicated pins of the first CA interface as part of a boot process of the memory device including the first CA interface.

FIG. 10 shows a flowchart of one or more methods 1000 of supporting a reconfigurable channel interface for a memory device according to examples disclosed herein. The operations of method 1000 may be implemented by a memory device or components thereof as described herein. For example, the operations of method 1000 may be performed by a memory device as described with reference to FIG. 7. In some examples, a memory device can execute a set of instructions to control functional elements of the memory device to perform the described functions. Additionally or alternatively, the memory device may employ dedicated hardware to perform aspects of the described functions.

At 1005, the memory device may receive a read command at a CA interface coupled to a first control channel, a first memory array, and a second memory array, the first memory array being coupled to a first data channel and the second memory array being coupled to a second data channel. The operations of 1005 may be performed according to the methods described herein. In some examples, aspects of the operations of 1005 may be performed by the CA interface component described with reference to FIG. 7.

At 1010, the memory device may identify that the read command is for the second memory array. In some cases, the memory device may also identify that the read command is for the first memory array. The operations of 1010 may be performed according to the methods described herein. In some examples, aspects of the operations of 1010 may be performed by the CA interface component described with reference to FIG. 7.

At 1015, the memory device may retrieve a set of data from the second memory array based on receiving the read command. In some cases, the memory device may retrieve an additional set of data from the first memory array based on receiving the read command. The operations of 1015 may be performed according to the methods described herein. In some examples, aspects of the operations of 1015 may be performed by the memory array component described with reference to FIG. 7.

At 1020, the memory device may transmit the set of data over the second data channel based on retrieving the set of data from the second memory array. The operations of 1020 may be performed according to the methods described herein. In some examples, aspects of the operations of 1020 may be performed by the memory array component described with reference to FIG. 7.

In some instances, an apparatus as described herein may perform one or more methods, such as method 1000.
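The data-path behavior of method 1000 (identify the target array, retrieve the set of data, and return it on that array's own data channel) can likewise be sketched in a few lines. This is an illustrative model only; the dictionary encoding of commands and channels is an assumption, not part of the disclosure.

```python
# Minimal sketch of method 1000: a read arrives on the shared CA interface,
# the device identifies the target array(s), and each array's data returns on
# the data channel that array is coupled to.

def handle_read(command, arrays):
    """arrays maps an array id to the data channel it is coupled to."""
    responses = []
    for target in command["targets"]:          # step 1010: identify target array(s)
        data = read_from_array(target)         # step 1015: retrieve the set of data
        channel = arrays[target]
        responses.append((channel, data))      # step 1020: transmit on that channel
    return responses

def read_from_array(array_id):
    # Placeholder for the actual array access.
    return f"data_from_{array_id}"

arrays = {"array_1": "data_channel_1", "array_2": "data_channel_2"}
cmd = {"op": "read", "targets": ["array_2", "array_1"]}
print(handle_read(cmd, arrays))
# [('data_channel_2', 'data_from_array_2'), ('data_channel_1', 'data_from_array_1')]
```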
The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for: receiving a read command at a CA interface coupled to the first control channel, the first memory array, and the second memory array, the first memory array being coupled to the first data channel and the second memory array being coupled to the second data channel; identifying that the read command is for the second memory array; retrieving a set of data from the second memory array based on receiving the read command; and transmitting the set of data over the second data channel based on retrieving the set of data from the second memory array.

Some examples of the method 1000 and apparatus described herein may further include operations, features, means, or instructions for: receiving a write command at the CA interface over the first control channel; receiving a second set of data over the second data channel based on receiving the write command at the CA interface; and writing the second set of data to the second memory array based on receiving the write command and the second set of data.

Some examples of the method 1000 and apparatus described herein may further include operations, features, means, or instructions for: retrieving a second set of data from the first memory array based on receiving the read command; and transmitting the second set of data over the first data channel based on retrieving the second set of data from the first memory array.

Some examples of the method 1000 and apparatus described herein may further include operations, features, means, or instructions for: retrieving a second set of data from a third memory array coupled to a third data channel based on receiving the read command at the CA interface; and transmitting the second set of data over the third data channel based on retrieving the second set of data from the third memory array.

FIG. 11 shows a flowchart of one or more methods 1100 of supporting a reconfigurable channel interface for a memory device according to examples disclosed herein. The operations of method 1100 may be implemented by a host device or components thereof as described herein. For example, the operations of method 1100 may be performed by a host device as described with reference to FIG. 8. In some examples, a host device can execute a set of instructions to control functional elements of the host device to perform the described functions. Additionally or alternatively, the host device may use dedicated hardware to perform aspects of the described functions.

At 1105, the host device may determine a size of information associated with an access command executed by the memory device. The operations of 1105 may be performed according to the methods described herein. In some examples, aspects of the operations of 1105 may be performed by the information size determination component described with reference to FIG. 8.

At 1110, the host device may determine a configuration of at least one CA interface in a set of CA interfaces of the memory device based on determining the size of information associated with the access command. The operations of 1110 may be performed according to the methods described herein. In some examples, aspects of the operations of 1110 may be performed by the configuration determination component described with reference to FIG. 8.

At 1115, the host device may transmit a command indicating the configuration to the CA interfaces in the set.
The operations of 1115 may be performed according to the methods described herein. In some examples, aspects of the operations of 1115 may be performed by the command transmission component described with reference to FIG. 8.

In some instances, an apparatus as described herein may perform one or more methods, such as method 1100. The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for: determining, by a host device, a size of information associated with an access command executed by a memory device; determining a configuration of at least one CA interface in a set of command/address (CA) interfaces of the memory device based on determining the size of information associated with the access command; and transmitting a command indicating the configuration to the CA interfaces in the set.

In some examples of the method 1100 and the apparatus described herein, the configuration indicates that a CA interface can be coupled with the first memory array and the second memory array and that the second CA interface can be deactivated.

Some examples of the method 1100 and apparatus described herein may further include operations, features, means, or instructions for identifying a number of reconfigurable CA interfaces of the memory device based on determining the size of information associated with the access command, wherein determining the configuration may be based on identifying the number of reconfigurable CA interfaces.

Some examples of the method 1100 and apparatus described herein may further include operations, features, means, or instructions for transmitting data having the size over the data channel based on transmitting the configuration to the memory device.

It should be noted that the methods described above describe possible implementations, that operations and steps may be rearranged or otherwise modified, and that other implementations are possible. Additionally, portions from two or more of the methods described may be combined.
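The host-side flow of method 1100 reduces to: size the transfer, choose a configuration, and transmit a command indicating that configuration. A minimal sketch follows; the 64-byte threshold and all names are hypothetical, since the document does not specify how size maps to a configuration.

```python
# Minimal host-side sketch of method 1100: the host sizes the upcoming
# transfer, picks a CA-interface configuration, and sends a command
# indicating that configuration to the memory device.

def determine_configuration(transfer_bytes, reconfigurable_interfaces):
    """Step 1110: choose a configuration from the transfer size.

    The 64-byte threshold is purely illustrative; the document does not
    specify how size maps to configuration."""
    if transfer_bytes > 64 and reconfigurable_interfaces >= 2:
        # One CA interface drives two arrays; the second interface is deactivated.
        return {"mode": "combined", "deactivate": ["ca_2"]}
    return {"mode": "independent", "deactivate": []}

def host_issue_config(transfer_bytes, send_command):
    size = transfer_bytes                                        # step 1105
    config = determine_configuration(size, reconfigurable_interfaces=2)
    send_command("ca_1", {"op": "configure", "config": config})  # step 1115
    return config

print(host_issue_config(256, lambda iface, cmd: None))
# {'mode': 'combined', 'deactivate': ['ca_2']}
```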
An apparatus is described. The apparatus may include a first memory array coupled to a first data channel, a second memory array coupled to a second data channel, a first CA interface coupled to a first control channel and associated with the first memory array, a second CA interface coupled to a second control channel and associated with the second memory array, and a selection component coupled to the first CA interface and the second CA interface and configured to selectively couple the second memory array with the first CA interface at a first time and to selectively couple the second memory array with the second CA interface at a second time.

In some examples, a command for the second memory array can be received over the first control channel based on the second memory array being coupled with the first CA interface.

Some examples of the apparatus can include a third memory array coupled to a third data channel, a fourth memory array coupled to a fourth data channel, a third CA interface coupled to a third control channel and associated with the third memory array, and a fourth CA interface coupled to a fourth control channel and associated with the fourth memory array, wherein the selection component can be coupled to the third CA interface and the fourth CA interface.

In some examples, the first CA interface can be coupled with the first memory array, the second memory array, the third memory array, and the fourth memory array using the selection component. In some examples, the selection component can be further configured to selectively couple the fourth memory array with the third CA interface or the fourth CA interface.

In some examples, the selection component can be further configured to selectively couple the third memory array with the first CA interface or the third CA interface and to selectively couple the fourth memory array with the first CA interface or the fourth CA interface. In some examples, the first memory array, the second memory array, the third memory array, and the fourth memory array, or combinations thereof, include DRAM memory cells.

In some examples, the selection component can include a multiplexer coupled with the first CA interface and the second CA interface. In some examples, the selection component can further include a latch component configured to transmit a selection signal to the multiplexer.

In some examples, the selection component can further include a second multiplexer coupled with a third CA interface and a fourth CA interface, where the third CA interface can be coupled with the third control channel and associated with the third memory array, the fourth CA interface can be coupled with the fourth control channel and associated with the fourth memory array, and the latch component can be further configured to transmit the selection signal to the second multiplexer.

In some examples, the selection component can further include a third multiplexer coupled with the first CA interface and the third CA interface, a fourth multiplexer coupled with the first CA interface and the fourth CA interface, and a second latch component configured to transmit a second selection signal to the third multiplexer and the fourth multiplexer.
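One way to picture the multiplexer-and-latch selection component described above is as a latched select signal steering each multiplexer input to an array. The sketch below is a loose software analogy of that structure, not a hardware description; the select-signal encoding and the power-up default are assumptions.

```python
# Structural analogy for the selection component: a latch holds a select
# signal that steers a multiplexer, so the second memory array can be driven
# by either the first or the second CA interface.

class Mux2:
    def __init__(self, input_a, input_b):
        self.inputs = (input_a, input_b)

    def output(self, select):
        # select is the latched signal: 0 -> first input, 1 -> second input
        return self.inputs[select]

class SelectionComponent:
    def __init__(self):
        self.latch = 1                          # assumed default: per-array interfaces
        self.mux = Mux2("ca_1", "ca_2")         # feeds the second memory array

    def set_select(self, value):
        self.latch = value                      # latch transmits select to the mux

    def interface_for_array_2(self):
        return self.mux.output(self.latch)

sel = SelectionComponent()
assert sel.interface_for_array_2() == "ca_2"    # second time: dedicated interface
sel.set_select(0)
assert sel.interface_for_array_2() == "ca_1"    # first time: shared interface
```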
An apparatus is described. The apparatus may include: a first memory array coupled to a first data channel and a second memory array coupled to a second data channel; and a CA interface coupled to the first control channel, the first memory array, and the second memory array, and configured to receive a write command and identify that the write command is directed to the second memory array, wherein the second memory array is configured to receive a set of data over the second data channel based on the CA interface receiving the write command and to write the set of data to the second memory array based on receiving the set of data.

In some examples, the CA interface may be configured to receive a read command and identify that the read command may be directed to the second memory array, and the second memory array may be configured to retrieve a second set of data based on the CA interface receiving the read command, and to transmit the second set of data over the second data channel coupled to the second memory array based on the retrieval.

Some examples of the apparatus may include a second CA interface coupled to a second control channel and configured to be isolated from the second memory array when the CA interface receives a write command for the second memory array.

Some examples of the apparatus may include a third memory array coupled to a third data channel and configured to receive a third set of data over the third data channel based on the CA interface coupled to the first control channel receiving the write command, and to write the third set of data to the third memory array based on receiving the third set of data; and a fourth memory array coupled to a fourth data channel and configured to receive a fourth set of data over the fourth data channel based on the CA interface coupled to the first control channel receiving the write command, and to write the fourth set of data to the fourth memory array based on receiving the fourth set of data.

Some examples of the apparatus may include a third CA interface coupled to a third control channel and configured to be isolated from the third memory array when the CA interface coupled to the first control channel receives the write command; and a fourth CA interface coupled to a fourth control channel and configured to be isolated from the fourth memory array when the CA interface coupled to the first control channel receives the write command.

In some examples, the first memory array can be configured to receive the second set of data over the first data channel based on the CA interface coupled to the first control channel receiving the write command.

The information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and codes that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some figures may show a signal as a single signal; however, one of ordinary skill in the art will understand that the signal may represent a bus of signals, where the bus may have various bit widths.

As used herein, the term "virtual ground" refers to a circuit node that is held at a voltage of approximately zero volts (0 V) and is not directly coupled to ground. Accordingly, the voltage of the virtual ground may temporarily fluctuate and return to approximately 0 V.
A virtual ground may be implemented using various electronic circuit elements, such as a voltage divider composed of an operational amplifier and resistors. Other implementations are also possible. "Virtual ground" or "virtually grounded" means connected to approximately 0 V.

The terms "electronic communication," "conductive contact," "connected," and "coupled" may refer to a relationship between components that enables the flow of electrons between the components. Components are considered to be in electronic communication with each other (or in conductive contact with each other, or connected to each other, or coupled to each other) if there is any conductive path between the components that can at any time support the flow of signals between the components. At any given time, a conductive path between components that are in electronic communication with each other (or in conductive contact or connection or coupling) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components, or it may be an indirect conductive path that includes intermediate components such as switches, transistors, or other components. In some cases, signal flow between connected components may be interrupted for a period of time, e.g., using one or more intermediate components such as switches or transistors.

The term "coupling" refers to the condition of moving from an open-circuit relationship between components, in which a signal cannot currently travel between the components through a conductive path, to a closed-circuit relationship, in which a signal is able to travel between the components through a conductive path. When a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components via conductive paths that previously did not permit signal flow.

The term "isolation" refers to a relationship between components in which signals cannot currently flow between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch positioned between them are isolated from each other when the switch is open. When the controller isolates two components, the controller implements a change that prevents signals from flowing between the components using a conductive path that previously permitted signal flow.

Devices including the memory arrays described herein may be formed on semiconductor substrates such as silicon, germanium, silicon-germanium alloys, gallium arsenide, gallium nitride, and the like. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or an epitaxial layer of semiconductor material on another substrate. The conductivity of the substrate, or of sub-regions of the substrate, can be controlled by doping with various chemical species including, but not limited to, phosphorus, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion implantation, or by any other doping method.

A switching component, or transistor, as discussed herein may represent a field effect transistor (FET), and comprises a three-terminal device including a source, a drain, and a gate.
The terminals may be connected to other electronic components through conductive material, such as metal. The source and drain may be conductive and may comprise heavily doped, e.g., degenerate, semiconductor regions. The source and drain may be separated by a lightly doped semiconductor region or channel. If the channel is n-type (i.e., the majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., the majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. Channel conductivity can be controlled by applying a voltage to the gate. For example, applying a positive or negative voltage to an n-type FET or a p-type FET, respectively, can cause the channel to become conductive. A transistor may be "on" or "activated" when a voltage greater than or equal to the threshold voltage of the transistor is applied to the gate of the transistor. A transistor may be "off" or "deactivated" when a voltage less than the threshold voltage of the transistor is applied to the gate of the transistor.

The description set forth herein in connection with the accompanying figures describes example configurations and does not represent all examples that may be implemented or that are within the scope of the claims. The term "exemplary" as used herein means "serving as an example, instance, or illustration," rather than "preferred" or "advantageous over other examples." The detailed description contains specific details to provide an understanding of the described technology. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order not to obscure the concepts of the described examples.

In the figures, similar components or features may have the same reference label. Additionally, various components of the same type may be distinguished by following the reference label with a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description applies to any one of the similar components having the same first reference label, irrespective of the second reference label.

The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine.
A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Other examples and implementations are within the scope of the disclosure and the appended claims. For example, due to the nature of software, the functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing the functions may also be physically located at various positions, including being distributed such that portions of the functions are implemented at different physical locations. Also, as used herein, including in the claims, an "or" as used in a list of items (e.g., a list of items prefaced by a phrase such as "at least one of" or "one or more of") indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Additionally, as used herein, the phrase "based on" shall not be read as referring to a closed set of conditions. For example, an exemplary step described as "based on condition A" may be based on both condition A and condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase "based on" shall be interpreted in the same manner as the phrase "based at least in part on."

Computer-readable media include both non-transitory computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media may comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disc (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Combinations of the above are also included within the scope of computer-readable media.

The description herein is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to the present disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the present disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
In described examples, an apparatus (600) includes: a first power transistor (B-FET) having a first current conduction path coupled between an input (VIN) for receiving a supply voltage and a node (VMID), and a first gate terminal coupled to a first gate control signal (BGATE); a second power transistor (HS-FET) having a second current conduction path coupled between the node (VMID) and an output terminal (VOUT) for supplying a load current (IL) to a load, and a second gate terminal coupled to a second gate control signal (HGATE); and a current sense transistor (SENSE FET) having a third gate terminal coupled to the first gate control signal (BGATE) and outputting a sense current (Isense). The apparatus further includes: a differential amplifier (607) having an output signal; a feedback transistor (FB-FET) having a gate terminal coupled to the output signal of the differential amplifier; and a resistor (RMON) coupled between a monitor node (VMON) and ground. |
1. An apparatus comprising:
a first power transistor having a first current conduction path between a first current conducting terminal and a second current conducting terminal, the first current conduction path of the first power transistor being coupled between an input terminal for receiving a supply voltage and a node, the first power transistor having a first gate terminal coupled to a first gate control signal for controlling the first power transistor;
a second power transistor having a second current conduction path between a third current conducting terminal and a fourth current conducting terminal, the second current conduction path of the second power transistor being coupled between the node and an output terminal for supplying a load current to a load, the second power transistor having a second gate terminal coupled to a second gate control signal;
a current sensing transistor having one current conducting terminal coupled to the node and the first power transistor, having a third gate terminal coupled to the first gate control signal, and outputting a sense current at another current conducting terminal;
a differential amplifier having a first input coupled to one of the first current conducting terminal and the second current conducting terminal of the first power transistor, having a second input coupled to the other of the first current conducting terminal and the second current conducting terminal, and having an output signal responsive to a voltage difference between the first input and the second input;
a feedback transistor having another current conduction path coupled in series between the current sensing transistor and a monitoring node, and having a feedback transistor gate terminal coupled to the output of the differential amplifier; and
a resistor coupled between the monitoring node and ground, the sense current flowing through the resistor, the sense current being proportional to the load current flowing through the second power transistor.

2. The apparatus of claim 1, wherein the current sensing transistor and the first power transistor are formed on a semiconductor substrate, and a device area of the current sensing transistor is smaller than a device area of the first power transistor.

3. The apparatus of claim 1, wherein said sense current flowing through said current sensing transistor is proportional to said load current.

4. The apparatus of claim 1, wherein said first power transistor, said second power transistor, and said current sensing transistor are field effect transistor (FET) devices formed on a single integrated circuit.

5. The apparatus of claim 4, wherein said FET devices are selected from the group consisting of vertical FET devices and non-vertical FET devices.

6. The apparatus of claim 4, wherein said node is formed in a semiconductor substrate of said single integrated circuit.

7. The apparatus of claim 1, further comprising a fast trip comparator coupled between said node and a voltage divider coupled to said input, for outputting a fast trip signal when the voltage at said node drops in response to a rapid increase in said load current.

8. The apparatus of claim 1, further comprising a current limiting circuit coupled to said second gate terminal of said second power transistor, for limiting the voltage of said second gate control signal when a sensed current exceeds a current limit.

9. The apparatus of claim 1, wherein the first current conducting terminal of the first power transistor is a first source terminal, the second current conducting terminal of the first
power transistor is a first drain terminal, the third current conducting terminal of the second power transistor is a second drain terminal, the fourth current conducting terminal of the second power transistor is a second source terminal, and the current sensing transistor has a third drain terminal as its current conducting terminal, the third drain terminal being coupled at the node to the first drain terminal of the first power transistor and the second drain terminal of the second power transistor.

10. The apparatus of claim 1, wherein said differential amplifier is an operational amplifier.

11. The apparatus of claim 10, wherein said operational amplifier is coupled to said feedback transistor in a closed loop.

12. A circuit system comprising:
a first field effect transistor having a first source terminal and a first drain terminal, the first source terminal being coupled to an input terminal for receiving a power source and the first drain terminal being coupled to a node, and having a first gate terminal for receiving a first gate control signal;
a second field effect transistor having a second drain terminal coupled to the node and a second source terminal coupled to an output terminal for supplying a load current to a load, and having a second gate terminal for receiving a second gate control signal;
a current sensing transistor having a third drain terminal coupled to the node and a third source terminal coupled to output a sense current, the current sensing transistor having a third gate control terminal coupled to the first gate control signal;
a first current limiting amplifier having a first input coupled to the input terminal and a second input coupled to the node, and outputting the second gate control signal; and
an operational amplifier coupled to a feedback transistor, the operational amplifier having a voltage reference at a first input and a current limit output terminal at a second input, and having an output coupled to a gate terminal of the feedback transistor, the feedback transistor having a current conduction path coupled between the sense current output of the current sensing transistor and the current limit output terminal.

13. The circuit system of claim 12, further comprising a first resistor and a second resistor, the first resistor being coupled between the input terminal and the first input of the current limiting amplifier, and the second resistor being coupled between the first resistor and the third source terminal of the current sensing transistor.

14. The circuit system of claim 13, wherein said second resistor further comprises a third resistor and a fourth resistor in a resistor ladder configuration.

15. The circuit system of claim 14, further comprising a fast trip comparator coupled to compare a voltage between said third resistor and said fourth resistor with the voltage at said node, such that a fast trip output signal is output in response to a drop in the voltage at the node, the drop indicating that the load current is rapidly increasing.

16. The circuit system of claim 12, further comprising a current limiting resistor coupled between said current limit output terminal and ground.

17. The circuit system of claim 12, wherein said first field effect transistor, said second field effect transistor, and said current sensing transistor are on an integrated circuit.

18. An apparatus comprising:
a voltage input terminal for receiving a power supply voltage;
a voltage output terminal for coupling to a load;
a first power transistor having a first current conduction path coupled between the voltage input terminal and a common node
and having a first gate terminal coupled to a first gate control signal;
a second power transistor having a second current conduction path coupled between the common node and the voltage output terminal and having a second gate terminal coupled to a second gate control signal;
a first current sense transistor having a third current conduction path coupled to the common node and having a third gate terminal coupled to the first gate control signal, for outputting a first sense current proportional to a load current flowing from the voltage input terminal to the voltage output terminal;
a second current sense transistor having a fourth current conduction path coupled to the common node and having a fourth gate terminal coupled to the second gate control signal, for outputting a second sense current proportional to a load current flowing from the voltage output terminal to the voltage input terminal;
a differential amplifier having a first input terminal and a second input terminal and having an output signal corresponding to a difference between voltages at the first input terminal and the second input terminal; and
a feedback transistor coupled to a monitoring resistor at a monitoring node, having a current conduction path coupled to one of the first sense current and the second sense current, and having a gate control terminal coupled to the output of the differential amplifier.

19. The apparatus of claim 18, further comprising a first selection circuit for coupling, responsive to a signal indicative of a direction of the load current, said first input terminal of said differential amplifier to a selected one of: a resistor coupled to the input voltage terminal, and the second current sense transistor.

20. The apparatus of claim 18, further comprising a second selection circuit for coupling, in response to a signal indicative of a direction of the load current, said feedback transistor to one of the first sense current from said first current sense transistor and the second sense current from said second current sense transistor. |
Current sensing and control for transistor power switches

The present invention relates generally to power switches and corresponding control circuits, and more particularly to control circuits for circuits that include transistor power switches supplying current to a load.

BACKGROUND

An electrical fuse ("electronic fuse") circuit controls the connection between an input voltage source and a load coupled at the output terminals. The electrical fuse can include a series power transistor that connects the load to the input power source. For example, a board can get its power from a bus. When the board is inserted into the bus socket, the contacts in the bus socket connect the board to power. Electrical fuses often provide: overcurrent control; short circuit protection; inrush current limiting; dv/dt or start-up voltage ramp control; and reverse current protection. Electrical fuses can reduce the current available to the load, or even completely shut off the power connection to the load, in the presence of an overcurrent.

In an example application, a power transistor has a drain terminal coupled to a voltage source and a source terminal coupled to a load at the output terminal. When power is supplied to the load at the output terminal, the gate of the power transistor needs to be at a sufficient voltage to turn on the power transistor and couple the load to the power supply. A sensing circuit is used to monitor the current to the load. If the current flowing through the series power transistor exceeds a current limit, the gate voltage of the power transistor can be lowered to limit the load current, or the gate voltage can be changed to turn off the power transistor. The disconnection needs to occur before any physical damage to the power transistor can occur. The load current may exceed the current limit if a short to ground occurs at the output terminal or a short circuit occurs in the load circuit.

SUMMARY

In the depicted example, an apparatus includes: a first power transistor having a first current conduction path between a first current conducting terminal and a second current conducting terminal, the first current conduction path being coupled between an input for receiving a supply voltage and a node, the first power transistor having a first gate terminal coupled to a first gate control signal for controlling the first power transistor; a second power transistor having a second current conduction path between a third current conducting terminal and a fourth current conducting terminal, the second current conduction path being coupled between the node and an output terminal for supplying a load current to a load, the second power transistor having a second gate terminal coupled to a second gate control signal; and a current sensing transistor having a current conducting terminal coupled to the node and the first power transistor, having a third gate terminal coupled to the first gate control signal, and outputting a sense current at another current conducting terminal.
The apparatus further includes a differential amplifier having a first input coupled to one of the first current conducting terminal and the second current conducting terminal of the first power transistor, having a second input coupled to the other of the first current conducting terminal and the second current conducting terminal, and having an output signal responsive to a voltage difference between the first input and the second input; a feedback transistor having another current conduction path coupled in series between the current sensing transistor and a monitoring node, and having a feedback transistor gate terminal coupled to the output of the differential amplifier; and a resistor coupled between the monitoring node and ground, the sense current flowing through the resistor, the sense current being proportional to the load current flowing through the second power transistor.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a circuit diagram of a power transistor circuit.

FIG. 2 is a circuit diagram of a conventional power transistor circuit including a current monitor and a current limiting circuit.

FIG. 3 is another circuit diagram of an alternative conventional power transistor circuit with a current monitor.

FIG. 4 is a circuit diagram of a conventional high side current sensing circuit.

FIG. 5 is a circuit diagram of an embodiment for use in an electrical fuse circuit including a current monitor.

FIG. 6 is a circuit diagram of a current monitor embodiment for use in conjunction with the embodiment of FIG. 5.

FIG. 7 is a circuit diagram of an embodiment of a high side transistor with a current monitoring circuit.

FIG. 8 is a circuit diagram of an embodiment circuit with a fast trip comparator and current limit.

FIG. 9 is a circuit diagram showing the operation of a portion of a fast trip comparator for use with an embodiment.

FIG. 10 is a circuit diagram of an embodiment with a bidirectional current path in a power transistor circuit having a current monitor for load current flowing in two directions.

FIGS. 11A and 11B are circuit diagrams of circuitry used with the embodiment of FIG. 10.

FIG. 12 is a system block diagram of an embodiment electrical fuse system including a power transistor integrated circuit coupled to a controller integrated circuit.

DETAILED DESCRIPTION

In the figures, corresponding numerals and symbols generally refer to corresponding parts unless otherwise stated. The drawings are not necessarily drawn to scale.

In the present specification, the term "coupled" may include a connection established with intervening elements, and additional elements and various connections may exist between any elements that are "coupled."

FIG. 1 is a simplified diagram of a power supply circuit 100. The power supply circuit 100 includes a circuit 101 coupled between a power supply terminal VIN and an output terminal VOUT. Circuit 101 is a power transistor circuit that can form part of an electrical fuse circuit. A load drawing current (not shown in FIG. 1) would be coupled to the VOUT terminal. The high side transistor HS-FET acts as a switch between the power supply VIN and the load coupled to terminal VOUT. A control circuit (not shown) is coupled to the gate terminal of the high side transistor HS-FET and supplies the gate control voltage HGATE. In circuit 101, a blocking transistor B-FET is coupled between node VMID and the input voltage VIN.
The blocking transistor has a body diode (shown in dashed lines to indicate that the body diode is an intrinsic device) between the source coupled to the node VMID and the drain coupled to VIN, the body diode blocking current flow from the output terminal VOUT to the input terminal VIN, which can be considered a "reverse" current. The gate of the blocking transistor B-FET is coupled to the gate control voltage BGATE. BGATE is supplied from a control circuit (not shown).

In applications for supplying an output voltage from an input voltage, the electrical fuse circuit including circuit 101 is arranged to protect the input power source, the load device, and the expensive FET devices from damage due to overcurrent conditions. By sensing the current flowing through the HS-FET, the control circuitry in the electrical fuse can turn off the transistor HS-FET using the gate control signal HGATE. The current is limited or the circuit is turned off to protect the HS-FET and the load.

In FIG. 1, circuit 101 includes a current sensing device SENSE FET. The current sensing device is coupled to the same gate voltage BGATE as the blocking device B-FET and coupled to the same voltage VMID at the drain terminal. Because the SENSE FET is built on the same substrate as the B-FET and is constructed using the same semiconductor process as the blocking device B-FET, the current flowing through the SENSE FET should be proportional to the load current IL flowing through the blocking transistor B-FET. However, in practice, in the conventional configuration shown in FIG. 1, the sense current Isense lacks sufficient accuracy, especially when the gate-source voltage (Vgs) is small. The lack of accuracy arises because the threshold voltage of the SENSE FET, whose device size is made much smaller than that of the HS-FET and the blocking device B-FET, does not match the threshold voltage of the blocking transistor B-FET under all conditions.

An important aspect of circuit 101 is determined by current power FET technology. Recently, the development of low resistance MOSFET devices fabricated using vertical FET processes has produced enhanced circuit performance. These devices are rapidly replacing existing device types in power applications (such as bipolar transistors, lateral FETs such as DMOS FETs, and conventional trench FETs). An example advanced FET device is the NexFET™ technology device offered by Texas Instruments Incorporated. "NexFET" is a trademark owned by Texas Instruments for its power MOSFETs. NexFET™ devices have very low on-resistance Rdson, have high device performance, are robust, occupy relatively small silicon area, and can carry very high voltages and currents, such as voltages up to 100 volts. Embodiments may be implemented using NexFET™ devices, using other power FET technologies, using vertical FETs, and using other FET arrangements.

In FIG. 1, the power transistor circuit 101 can be implemented on a single semiconductor substrate containing all of the FET devices. However, in order to form the devices of FIG. 1 in an efficient manner using vertical FETs, the common substrate node VMID is coupled to one terminal of each FET transistor. In FIG. 1, the drains of the three devices B-FET, HS-FET, and SENSE FET in the electrical fuse circuit 101 are each coupled to the substrate at node VMID. Because the drain is coupled to the substrate at the bottom of the vertical FET structure, this is referred to as a "drain-down" configuration.
This common drain configuration limits the current sensing circuit arrangements that can be used. Additional improvements are therefore needed to improve the accuracy of the sense current Isense over a wide range of conditions. Embodiments are applicable to arrangements formed with vertical FET devices.

An overview of conventional FET current sensing methods is now presented. FIG. 2 depicts a conventional power supply circuit 200 with current sensing and current limiting. The similarly labeled components of FIG. 2 perform functions similar to those of the components of power supply circuit 100 (FIG. 1). For example, the high side device labeled HS-FET in FIG. 1 operates in the same manner as the high side device HS-FET of FIG. 2.

In FIG. 2, a current sensing path numbered 201 (labeled as a sensing path) is shown coupled in parallel to a power supply current path numbered 203 (labeled as a power path). In the sense path 201, the input voltage VIN is coupled to the source terminal of the sense transistor SENSE FET, which can be scaled (using the device W/L area) to be smaller than the power FET. Various scaling factors can be used. The drain of the SENSE FET is coupled to supply a current IMON (monitoring current) to the feedback transistor 209. An operational amplifier (op-amp) 207 is coupled as a comparator. The output of operational amplifier 207 changes in response to the voltage difference at the positive and negative terminals. The source of the sense transistor SENSE FET is coupled to the positive input terminal of op-amp 207 (shown by the "+" symbol in FIG. 2). The negative terminal of op-amp 207 (shown by the "-" symbol in FIG. 2) is coupled to the common drain terminal VMID between the blocking transistor B-FET and the high side transistor HS-FET. The high side transistor HS-FET carries the load current IL from the input voltage source coupled at node VIN to the output terminal VOUT and to the load coupled to VOUT (not shown for clarity).

Op-amp 207 is coupled into a feedback configuration using feedback transistor 209. There is a virtual ground condition at the input of operational amplifier 207. In operation, op-amp 207 will adjust the voltage at the gate of feedback transistor 209 to maintain the voltages at the positive and negative terminals (labeled "+" and "-" in FIG. 2) equal. The current IMON will then be proportional to the load current IL. The ratio will be determined by the scaling between the sense transistor SENSE FET and a power transistor such as the HS-FET. In an example, the scaling is such that the sense current is 1/1000 of the load current IL, but other scaling factors can also be used, and the magnitude of the sense current relative to the load current will change correspondingly with the scaling factor.

In operation, current limit block 211 controls the high side transistor HS-FET. When the high side transistor HS-FET delivers current IL to the load, the drain-source voltage of the blocking transistor B-FET in the power path 203 will be equal to the drain-source voltage of the sense transistor SENSE FET. If the drain voltages are not equal, the operational amplifier 207 will change the voltage at the gate of the feedback transistor 209 until the drain voltages are equal. By matching the SENSE FET and B-FET devices, the currents flowing through the devices can be made proportional according to the size ratio of the devices.
This is true because the devices are matched, the source terminals of the two devices are at the same potential (VIN, the input supply voltage), and the gate terminals are tied to the same gate control voltage BGATE. When the two devices carry the same (proportional) current, the drain voltages will also be equal.

In FIG. 2, the current IMON provides an output voltage at terminal VMON that can be used to control the power transistor circuit and provide a current limit. The output terminal VMON can be used to control the current limit by providing a user-determined value for the resistor RMON. By setting the size of the resistor RMON, a monitoring voltage VMON proportional to the current IMON can be generated. The monitoring voltage VMON can be observed by the current limit control block 211. A gate voltage signal HGATE coupled to the high side transistor HS-FET is output by the current limit block 211. When the voltage VMON exceeds a threshold or reference voltage, the current limit block 211 can limit or decrease the gate voltage HGATE and lower the load current IL, or even prevent the load current IL from flowing through the high side transistor HS-FET to the load. Additional optional outputs can be provided to give an indication, for use by the user or a controller in the system, that a current limiting condition is occurring. The voltage VMON can be observed to monitor the current IMON flowing in the system, which is proportional to the load current IL.

The connections in circuit 200 require that the drain terminal of the sense transistor SENSE FET and the drain terminals of the power transistors B-FET and HS-FET be physically separated. However, in the production of vertical FET devices (such as NexFET™ devices) for power supply applications, the transistors on the power integrated circuit have a current-carrying terminal (source or drain) coupled to a common substrate node, such as the node VMID shown in FIG. 1. Therefore, the conventional circuit 200 cannot be used to sense the current in these advanced power devices.

FIG. 3 is a simplified diagram of another conventional power supply circuit 300. The similarly labeled components in FIG. 3 perform functions similar to those of the components of power supply circuit 200 (in FIG. 2). For example, the transistor HS-FET of FIG. 3 performs the same function as the transistor HS-FET of FIG. 2. The power supply circuit 300 differs from the power supply circuit 200 (see FIG. 2) in that the sense transistor SENSE FET, the blocking transistor B-FET, and the high side transistor HS-FET have a common drain connection at the node VMID. Because of this common drain connection, these transistors can be implemented in drain-down vertical FET devices.

Circuit 300 includes a sensing path 301 and a power path 303. In sense path 301, op-amp 307 is in a virtual ground comparator configuration. The source voltage of the SENSE FET is at the positive input of operational amplifier 307, and the source voltage of the blocking transistor B-FET, coupled to the input supply voltage VIN, is at the negative input terminal. The gate voltages of both the SENSE FET and the B-FET are coupled to the control voltage BGATE.
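In both FIG. 2 and FIG. 3, the monitor path converts a scaled copy of the load current into a voltage across a user-selected resistor. The relationships IMON = IL × (scale factor) and VMON = IMON × RMON can be checked numerically; the sketch below assumes the example 1/1000 scale factor mentioned above, and the helper names and the Vref value are hypothetical.

```python
# Numeric sketch of the current-monitor relationships described above.

SCALE = 1.0 / 1000.0  # example SENSE FET : power FET current ratio from the text

def monitor_voltage(load_current_a, rmon_ohms):
    """VMON = IMON * RMON, with IMON = IL * SCALE (IMON proportional to IL)."""
    imon = load_current_a * SCALE
    return imon * rmon_ohms

def rmon_for_limit(limit_current_a, vref_volts):
    """Choose RMON so that VMON reaches Vref exactly at the desired limit."""
    return vref_volts / (limit_current_a * SCALE)

# Example: with Vref = 1.0 V and a 10 A limit, RMON = 100 Ohms; a 5 A load
# then produces VMON = 0.5 V, below the limit threshold.
rmon = rmon_for_limit(10.0, 1.0)
print(rmon, monitor_voltage(5.0, rmon))   # 100.0 0.5
```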
Therefore, the SENSE FET and B-FET match, and the current ISENSE flowing through the SENSE FET in the sense path will be proportional to the load current IL flowing through the blocking transistor B-FET. In operation, current sensing occurs when the output of op-amp 307 controls the gate of feedback FET 309, which regulates current ISENSE to track load current IL. The mirror transistor 310 outputs the sense current as the monitor current IMON, and an output voltage proportional to the current IMON is available at the output terminal VMON. The voltage VMON can be controlled by selecting the value of the resistor RMON. The user can set the limit voltage and use a current limiting circuit (not shown in Figure 3) to limit the current. The gate signal of the high side transistor HS-FET can be controlled by the current limiting circuit, and thus a current limiting function is provided. The conventional circuit 300 of Figure 3 requires a charge pump (not shown for clarity) to provide the voltage VCP. Because operational amplifier 307 has an input coupled to input voltage VIN, it is necessary to supply operational amplifier 307 with a voltage higher than VIN. A charge pump is needed to supply this higher voltage, which is also used to provide the current ISENSE flowing in the sensing path and the monitoring current IMON. It is not desirable to use a charge pump to provide voltage VCP and currents ISENSE and IMON. Charge pumps require considerable power and silicon area, are relatively inefficient, and are relatively expensive to manufacture. FIG. 4 is a circuit diagram of a power supply circuit 400 that includes current sensing and current limiting. The similarly labeled components in Figure 4 perform similar functions to those of power supply circuit 300 (Figure 3). For example, the high side transistor HS-FET of Figure 4 operates in the same manner as the high side transistor HS-FET of Figure 3. In FIG. 4, power path 401 includes a high side transistor HS-FET for coupling an input voltage at input terminal VIN to output terminal VOUT, at which a load (not shown) receives load current IL. In Figure 4, the power supply path includes a blocking transistor B-FET controlled by a gate control voltage BGATE. When the voltage at the output terminal exceeds the voltage at the input terminal VIN, the intrinsic body diode of the B-FET transistor (not shown for simplicity of illustration) prevents current from flowing from the output terminal VOUT to the input terminal VIN. In FIG. 4, the sensing path 403 includes sensing circuitry. In this conventional circuit, current sensing is accomplished by a transistor SENSE FET that is coupled to match the high side transistor HS-FET. The high side transistor HS-FET is controlled by a current limiting amplifier A2, labeled 413, which provides a gate control signal HGATE to both the HS-FET and the sense transistor SENSE-FET. The current sensing circuitry includes an op-amp 407 coupled in a virtual ground configuration at its input terminals. The drain terminal of the sense transistor SENSE-FET is coupled to the positive input terminal, and the drain terminal of the high side transistor HS-FET is coupled to the negative input terminal. Op-amp 407 is in a feedback configuration in which transistor M3 acts as a feedback transistor. In operation, sensing circuitry 403 senses load current IL by matching the drain-source voltages of the high side transistor HS-FET and the sense transistor SENSE-FET.
The operational amplifier 407 is used to control the current through the feedback transistor M3. The current ISENSE will be proportional to the load current IL. In FIG. 4, circuit 400 uses a blocking transistor B-FET, a high side transistor HS-FET, and a current sense transistor SENSE-FET having a common drain connection at node VMID. Because the drain terminals are coupled, these three devices can be implemented in a vertical FET device, such as a NexFET™ device, where the drains are coupled together in a drain-down configuration at the substrate. However, in the configuration of Figure 4, the accuracy of the current ISENSE output by the SENSE FET transistor is limited. The sense transistor SENSE FET has a gate terminal coupled, together with the gate of the HS-FET, to the signal HGATE. In the case of current limiting, the voltage across the user-specified resistor RMON is compared to the reference voltage Vref. If the current through resistor RMON is above the current limit, voltage VMON will exceed reference voltage Vref and current limiting amplifier 413 will limit the current through the HS-FET by lowering gate voltage HGATE. When HGATE is lowered, the gate voltage at the sense transistor SENSE FET is lowered and the gate-to-source voltage (Vgs) of the sense transistor will be lowered. At low gate-source voltages, the threshold matching of the sense transistor SENSE FET and the high side transistor HS-FET is poor, so the accuracy of the sense current is poor. The conventional circuit in FIG. 4 thus lacks accuracy, especially in the case of current limiting, where the accuracy of the sense current matters most. FIG. 5 is a circuit diagram of an embodiment current sensing circuit that can be used in high side power applications. The similarly labeled components perform similar functions to those of power supply circuit 400 (FIG. 4). For example, the blocking transistor B-FET of Figure 5 operates in a similar manner to the B-FET device of Figure 4. Figure 5 shows a sensing path 503 together with a portion of the corresponding power path 501. The complete power path is not shown in Figure 5 but is further described below. The features of the embodiments are applicable to arrangements formed with NexFET™ devices and other FET devices; the embodiments are not limited to any particular type of FET device. In FIG. 5, the circuit includes a blocking transistor B-FET having a current conduction path between a first current conduction terminal and a second current conduction terminal, the current conduction path being coupled in series between an input terminal VIN for receiving a power supply voltage and a common node VMID. In FIG. 5, the first current conduction terminal is the source terminal of the transistor B-FET and the second current conduction terminal is the drain terminal of the transistor B-FET. The blocking transistor has a gate control terminal coupled to the signal BGATE. The sense transistor SENSE-FET has a current conduction path between a first current conduction terminal coupled to the common node VMID and a second current conduction terminal coupled to provide the sense current output ISENSE. The gate of the sense transistor SENSE-FET is coupled to the gate control signal BGATE. In Figure 5, current sensing is performed across the blocking transistor B-FET. The sense transistor SENSE-FET is matched to the blocking transistor B-FET.
The operational amplifier 507 is configured as a unity gain amplifier having a gain of "-1". The drain-source voltage of the blocking transistor B-FET, labeled "vd1" in Figure 5, is reflected onto the drain-source voltage, labeled "vd2", of the SENSE-FET. The operational amplifier 507 will adjust the gate voltage of the feedback transistor FB-FET until the equation vd2 = vd1*(R2/R1) is satisfied. The unity gain described for operational amplifier 507 assumes that resistors R1 and R2 have the same value; however, in an alternative embodiment, the ratio of resistor R2 to resistor R1 can be changed to provide additional adjustment of the gain of operational amplifier 507, as shown in the equation. By using a ratio of resistors R2 to R1 that is less than one, additional scaling can be achieved, allowing for a smaller sensing current and a corresponding reduction in power consumption. In operation, the current flowing through the sense transistor SENSE-FET is proportional to the load current IL (scaled by the device size ratio, such as by a scaling factor of 1/1000). Because the operational amplifier 507 and the feedback transistor FB-FET are used, the sense current is more accurate. The operational amplifier adjusts the gate voltage of the feedback transistor FB-FET in response to any voltage difference between the drain-source voltage vd1 of the blocking transistor B-FET, which carries the load current IL, and the drain-source voltage vd2 of the sensing transistor SENSE-FET, which carries the sensing current through its current conduction path. The voltage at terminal VMON provides a voltage, due to the sense current, that is proportional to load current IL. The value of resistor RMON can be adjusted to change the voltage VMON for a given monitor current IMON, and voltage VMON can be used to set the limit current for use by a current limiting circuit (not shown). The embodiment of Figure 5 provides a common drain node VMID for the blocking transistor B-FET and the sensing transistor SENSE-FET. This common connection can be further extended to include the drain of the high side FET (not shown in Figure 5, but further described below). Since the drain terminals are connected at a common node, these three FETs can be implemented in vertical FET devices such as NexFET™ devices. Because the blocking transistor B-FET is used to sense the load current, the accuracy of the sense current ISENSE in the embodiment of FIG. 5 is higher. The blocking transistor B-FET has a gate voltage BGATE that is independent of the gate voltage of the high side transistor (not shown). When a current limiting condition occurs, the gate voltage BGATE does not change, so that the blocking transistor B-FET and the sense transistor SENSE-FET maintain a high gate voltage even when the high side gate voltage is controlled to limit the load current. Because the gate-to-source voltages of both the B-FET and the sense transistor SENSE-FET remain high during the current limiting event, the sense transistor and the blocking transistor remain well matched, both held in the same operating region by the voltage BGATE at their gate terminals, and the sense current remains accurate. Although the embodiment of FIG. 5 can be used with a common drain node in a vertical FET device, embodiments can also be used with non-vertical FET devices such as lateral FET devices, and provide accurate current sensing due in part to the use of the operational amplifier and feedback transistor.
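The regulation target of op-amp 507 can be illustrated with a short numeric sketch. The on-resistance, resistor values, and area ratio below are hypothetical and are not taken from the disclosure:

    # Sketch of the FIG. 5 feedback target: the loop drives vd2 = vd1*(R2/R1).
    R1, R2 = 1.0, 1.0         # equal resistors give the unity-gain case
    AREA_RATIO = 1000         # assumed B-FET to SENSE-FET device size ratio
    R_ON_BFET = 0.02          # hypothetical B-FET on-resistance, ohms

    def sense_current(il):
        vd1 = il * R_ON_BFET                 # drop across the B-FET at load IL
        vd2 = vd1 * (R2 / R1)                # voltage forced across the SENSE-FET
        r_on_sense = R_ON_BFET * AREA_RATIO  # smaller device, larger on-resistance
        return vd2 / r_on_sense              # equals il / AREA_RATIO when R1 == R2

    print(sense_current(1.0))  # 0.001 A, i.e., IL/1000

Choosing R2 < R1 scales vd2, and therefore the sense current, down further, which is the power-saving adjustment described above.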
FIG. 6 is a circuit diagram of an embodiment power supply circuit 600 showing the use of the current sensing arrangement of FIG. 5 for providing a current limiting function within a power supply circuit. The similarly labeled components in Figure 6 perform similar functions to the corresponding components of circuit 503 (Figure 5). For example, the blocking transistor B-FET of Figure 6 operates in a similar manner to the blocking transistor B-FET of Figure 5. In FIG. 6, power path 601 includes a high side transistor HS-FET and a blocking transistor B-FET that are coupled to supply current and voltage from an input voltage source coupled to terminal VIN to output terminal VOUT. The load current IL will flow into the load coupled to the output terminal VOUT (not shown). Power path 601 is coupled to a sensing path 603 that includes components arranged in a similar manner to the embodiment of FIG. 5 and further includes a current limiting amplifier 613. When resistors R1 and R2 have the same value, operational amplifier 607 is coupled into a unity gain configuration with a gain of -1, as described above with respect to FIG. 5. This gain can be modified by changing the ratio of R2/R1 to provide additional adjustment. In Figure 6, resistor RMON is shown implemented using an adjustable value resistor. By adjusting the value of the resistor RMON, the voltage appearing at the terminal VMON can be adjusted. The current limiting function can be realized by setting the monitoring voltage VMON for the selected limiting current to a voltage greater than the reference voltage Vref. In an alternative embodiment, the reference voltage Vref can also be adjusted to adjust the limit. In operation, when the high side transistor HS-FET delivers current IL to a load (not shown) coupled to output terminal VOUT, current ISENSE will be proportional to load current IL. The ratio is determined by the device area (W/L) ratio between the blocking transistor B-FET and the sensing transistor SENSE-FET. In the example, the scaling factor is 1000, such that current ISENSE is 1/1000 of the load current IL. In additional embodiments, other scaling factors can be used. The ratio of resistors R1 to R2 provides additional scaling. A ratio of 5 to 1 may be used, or other ratios other than 1 to 1 may be used. When the voltage at the voltage monitoring terminal VMON exceeds the reference voltage Vref, the current limiting amplifier 613 will limit the current flowing into the load at the output terminal VOUT. Control is accomplished by modifying the gate voltage control signal HGATE. Because the gate control signal HGATE of the high side transistor is controlled while the gate control signal BGATE remains the same during a current limiting event, the accuracy of the sense current ISENSE is unaffected when the HGATE voltage is changed, even when the gate-source voltage of the high side transistor approaches the threshold voltage Vt. In an example embodiment, the blocking transistor B-FET, the high side transistor HS-FET, and the sense transistor SENSE-FET are formed on a vertical FET semiconductor device having a "drain down" configuration such that the node VMID is coupled to the semiconductor substrate. The operational amplifier 607 and the current limiting amplifier 613 can be implemented on a separate conventional CMOS semiconductor device. Resistors R1 and R2 may be formed on the CMOS device or alternatively may be provided using external resistors.
The adjustable resistor RMON can be provided by the designer for a particular application and can have fixed, adjustable, or programmable values. The reference voltage Vref can also be a fixed or adjustable value; alternatively, the value can be selected from pre-programmed voltage levels. FIG. 7 is a circuit diagram of another embodiment circuit 700 that is arranged for applications without current blocking. Figure 7 omits the blocking transistor. In such an application, current can be allowed to flow from the output terminal VOUT back to the input terminal VIN under certain conditions. The embodiment of Figure 7 includes a high side transistor labeled H-FET having a current conduction path coupled between input terminal VIN and output terminal VOUT. In Figure 7, the H-FET provides an embodiment that is compatible with "source down" vertical FET devices such as NexFET™ devices. Other power FET devices can also be used. In FIG. 7, the high side transistor H-FET and the sense transistor SENSE-FET each have a first current conduction terminal, and the respective source terminals are coupled together in a common source circuit such that the FETs can be implemented in a "source down" vertical FET device with the source terminals at the substrate. In Figure 7, power path 701 includes only the high side transistor H-FET coupled between input terminal VIN for the supply voltage and output terminal VOUT for coupling the load to the circuit. The load current IL flows through the transistor H-FET to the output terminal VOUT. The control signal HGATE controls the gate voltage of the transistor H-FET. Sensing path 703 includes an op-amp 707 in a unity gain configuration, resistors R1 and R2, and a feedback transistor FB-FET having a gate terminal coupled to the output of operational amplifier 707. A closed loop is formed in which op-amp 707 has the drain-source voltage of the H-FET transistor, labeled "vd1", across its positive and negative terminals. The amplifier reflects this voltage to the node receiving the drain-source voltage "vd2" of the sense transistor SENSE-FET. The op-amp 707 will adjust the gate voltage of the feedback transistor FB-FET such that the equation vd2 = vd1*(R2/R1) holds. When the SENSE-FET has the same drain-source voltage as the H-FET, the sense current ISENSE will be proportional to the load current IL. As with the embodiments described above, the ratio is determined by the device area ratio of the H-FET device to the sense transistor SENSE-FET. In the example, the ratio is 1/1000, such that the sense current ISENSE is scaled to 1/1000 of the load current IL. In operation, the value of the monitoring resistor RMON external to the integrated circuit sets the voltage VMON. A current limiting circuit (not shown in Figure 7) that controls the gate voltage HGATE can then be used with VMON and the reference voltage to control the load current. An advantage of the circuit arrangement of the embodiment of Figure 7 is that the sense transistor and the high side transistor H-FET can be implemented using vertical FET technology with a common source node at the substrate, such as a "source down" device. However, because the gate voltage of the sense transistor SENSE-FET is at the same node as the gate voltage of the high side transistor H-FET, the accuracy of the sense current at low gate voltage conditions is reduced compared to other embodiments.
When the current limit is reached and the voltage HGATE is lowered to limit the load current IL, the two devices SENSE-FET and H-FET will no longer closely match, and the sense current will not accurately track the load current IL. Embodiments provide a current monitoring output that can be used to provide a current limiting function for a FET that delivers current to a load. In the event of a sudden increase in load current, the above circuits may not be fast enough to turn off the power transistor current conduction path to prevent damage. This can occur when the output suddenly shorts to ground or the load device is shorted. FIG. 8 is a circuit diagram of an alternative embodiment 800 with a fast trip comparator and having a fast trip output signal that can be used to quickly turn off the power path of the circuit. The fast trip output signal can also be used to limit the load current to a safe level. The fast trip comparator circuit is triggered when the load current exceeds a multiple of the current limit. It is generally desirable to have a short circuit threshold that scales with the current limit (a multiple of the current limit is used to trigger the fast trip comparator). For example, the short circuit current threshold can be set to twice the current limit. In an example embodiment, if the current limit is increased, the short circuit current threshold will increase proportionally as the current limit increases. In FIG. 8, the power supply path 801 includes a blocking transistor B-FET and a high side transistor HS-FET coupled in series in a current conduction path between a terminal VIN for an input voltage and an output terminal VOUT for an output voltage. A load (not shown) can receive the load current IL flowing through the transistors B-FET and HS-FET. In the sensing path 803, the sense transistor SENSE FET is coupled to match the blocking transistor B-FET, has a drain terminal at the common drain node VMID, and has a gate terminal coupled to the gate control signal BGATE of the blocking transistor B-FET. Current limiting amplifier 811 is coupled to node (B) and is also coupled to node (A) at the common drain node VMID, the drain of the blocking transistor B-FET; node (B) is also coupled through resistor R1, having a value of 3R, to the source terminal of the blocking transistor B-FET. Therefore, the two inputs of the differential amplifier 811 are coupled to receive the drain-source voltage of the blocking transistor B-FET. The sense transistor SENSE-FET is coupled to output the current Ilimit. As described above, because the source terminal and the gate terminal of the sense transistor are coupled with the source terminal and the gate terminal of the blocking transistor B-FET, the sense current Ilimit will be proportional to the load current IL. The output of the current limiting amplifier 811 controls the gate terminal of the high side transistor HS-FET. Instead of providing a monitoring output VMON, the embodiment of Figure 8 is arranged to limit the load current IL to a particular limiting current Ilimit set by operational amplifier 815, reference voltage Vref, transistor 817, and limiting resistor Rlim. This circuit acts as a voltage-to-current converter and sets the limit current Ilimit equal to the current level Vref/Rlim.
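The converter relationship just stated is simple enough to verify numerically. The component values below are hypothetical, chosen only to match the 1-amp example used later in this description:

    # Limit-setting converter of FIG. 8: op-amp 815, transistor 817 and Rlim
    # force Ilimit = Vref / Rlim (component values are hypothetical).
    VREF = 1.0                   # reference voltage, volts
    RLIM = 1.0                   # limiting resistor, ohms
    I_LIMIT = VREF / RLIM        # 1.0 A current limit
    print(I_LIMIT, 2 * I_LIMIT)  # the limit and the 2x fast-trip threshold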
When the load current rises to the limit current Ilimit, the control loop formed by the current limiting amplifier 811 through the high side transistor HS-FET becomes active, and the control signal HGATE is used to lower the voltage at the gate of the high side FET, whereby the load current IL is controlled and prevented from rising further. In operation, when the current limit is met, current limiting amplifier 811 will control current IL to match current Ilimit by changing the gate signal HGATE of the HS-FET. Additionally, the embodiment of Figure 8 provides a fast trip function. The fast trip comparator amplifier 813 compares the voltage at the common node (A), at the source terminal of the transistor, with the voltage at node (C). Node (C) carries a voltage generated using a resistor divider. In Fig. 8, the value of the resistor R1 is 3R, and the resistor R2 is implemented using the series resistors R2A (=R) and R2B (=2R). As shown in Figure 8, resistors R1 and R2 can be equal. Using the two resistors R2A and R2B in a ladder arrangement to form resistor R2 produces the voltage at node (C) for use by the fast trip comparator amplifier 813. Figure 9 shows a simplified circuit schematic for further describing the operation of the fast trip comparator circuit for use in an embodiment. In FIG. 9, like reference numerals are used for components similar to those in FIG. 8. For example, in FIG. 9, the comparator 913 corresponds to the comparator 813 in FIG. 8. In FIG. 9, a resistor ladder having a value of 4R (resistor R1 (=3R) in series with resistor R2A (=R)), the blocking transistor B-FET, and the sensing transistor SENSE-FET form a Wheatstone bridge. Comparator 913 is triggered when the voltage at node (C) exceeds the voltage at node (A). The load current IL will typically be such that the voltage at node (A) exceeds the voltage at node (C). In the case of a sudden and rapid increase in load current IL, the voltage at node (A) will drop rapidly (compared to the voltage at node (C)). Comparator 913 will respond with an output signal, the fast trip signal, at O/P. In Figure 9, the example voltage drop across the resistor ladder is shown as 30 millivolts, the corresponding voltage drop across the blocking transistor B-FET is 20 millivolts, and the sense transistor drops 10 millivolts. The load current IL flows through the blocking transistor B-FET but does not flow through the sensing transistor SENSE-FET. When the load current IL suddenly increases, the fast comparator 913 triggers as the drain-source voltage across the blocking transistor B-FET suddenly increases, causing the voltage at node (A) to drop while the voltage at node (C) is not affected by the increased load current IL. This particular fast trip circuit example achieves a short circuit current threshold (i.e., the current at which the fast trip output signal becomes active) that is twice the current limit Ilimit. For example, if the current limit is 1 amp, the fast trip signal FAST TRIP will be triggered when the current IL suddenly exceeds 2 amps. This occurs when a sudden short circuit occurs at the circuit output VOUT in FIG. 8, where the load current IL may rise faster than the response time of the current limiting circuit including the amplifier 811. Different current limit thresholds can be selected by changing the arrangement and value of the resistors.
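The bridge behavior can be sketched numerically using the 30 mV, 20 mV, and 10 mV figures given for FIG. 9. The supply voltage, and the assumption that the B-FET drop is 10 mV at the limit current and scales linearly with IL, are illustrative only and are not taken from the disclosure:

    # Fast-trip bridge sketch (FIG. 9). The ladder drop and SENSE-FET drop
    # are fixed by the limit circuit; only the B-FET drop tracks the load.
    VIN = 5.0                  # hypothetical supply voltage, volts
    I_LIMIT = 1.0              # current limit, amps (example from the text)
    V_BFET_AT_LIMIT = 0.010    # assumed B-FET drop at IL == I_LIMIT, volts

    def fast_trip(il):
        node_c = VIN - 0.030                       # fixed 30 mV ladder drop
        v_bfet = V_BFET_AT_LIMIT * (il / I_LIMIT)  # B-FET drop scales with IL
        node_a = VIN - v_bfet - 0.010              # then the 10 mV SENSE-FET drop
        return node_c > node_a                     # comparator 913 trips

    print(fast_trip(1.0), fast_trip(2.5))  # False, True: trips past 2x the limit

With these numbers the comparator output changes state only once IL pushes the B-FET drop past 20 mV, i.e., past twice the 1 A limit, matching the behavior described above.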
FIG. 10 is a circuit diagram of an additional alternative embodiment circuit 1000. The reference numerals in FIG. 10 are similar to the reference numerals of the similar components in FIG. 8. For example, amplifier 1007 in FIG. 10 is similar to amplifier 807 in FIG. 8. In FIG. 10, the load current may flow from the terminal VIN for receiving the power supply voltage to the output terminal VOUT, or alternatively, the load current may flow in the opposite direction. In some applications, the VIN and VOUT terminals can couple two devices that can each receive or supply current. For example, a USB-C connector interface can be located between two battery powered devices, and current can flow in either direction. In the embodiment of FIG. 10, the circuitry is arranged to share amplifier 1007. Amplifier 1007 can be a differential amplifier and can be implemented as an operational amplifier. By sharing this part of the circuit, silicon area and cost are reduced. However, as described below, additional transistors are used to couple the voltages required by amplifier 1007 depending on the direction of current flow. In an alternative embodiment, an additional amplifier can be used, but at the expense of increased silicon area. In FIG. 10, the blocking transistor B-FET and the high side transistor HS-FET are coupled in a current conduction path between the input terminal VIN and the output terminal VOUT. When power path 1001 is active, load current IL will flow through both the blocking transistor B-FET and the high side transistor HS-FET. System 1000 has two current sense transistors, a sense transistor SENSE-FET B and a sense transistor SENSE-FET H. Each sense transistor has a common node with a power transistor at node VMID. The embodiment of FIG. 10 is compatible with a "drain down" configuration such that the blocking transistor B-FET, the blocking sense transistor SENSE-FET B, the high side transistor HS-FET, and the high side current sense transistor SENSE-FET H have drain terminals coupled to the node VMID. Thus, the devices in power path 1001 can be implemented using a semiconductor device having a vertical FET arrangement, such as a NexFET™ device. However, other power FET transistors, whether vertical FETs or other transistors, can also be used with the current sensing circuitry of the embodiments. Discrete FET devices can be used. The embodiment of Figure 10 operates in a similar manner to the embodiment of Figure 8 when the input voltage at terminal VIN is greater than the output voltage at terminal VOUT. A sense circuit in sense path 1003 couples one terminal of amplifier 1007, using the resistor network R1, R2 and transistor M5, to the source terminal of the blocking transistor B-FET, which is coupled to terminal VIN. The signal RV is at a "low" potential in this example when the circuit operates in a forward manner with current IL flowing from VIN to VOUT. The opposite terminal of amplifier 1007 is coupled through transistor M6, which is also controlled by signal RV, to the common drain node VMID. In the example embodiment of FIG. 10, transistors M5, M6, M7, and M8 are P-channel transistors and are active when there is a "low" potential on the gate terminal. These transistors form a selection circuit that selects between the node between R1 and R2 and the output of the high side sense FET SENSE FET-H, according to the current direction as indicated by the control signals RV, RV_, for input to the positive terminal of amplifier 1007.
The selection circuit selects between VMID and the voltage of the output voltage terminal VOUT for input to the negative input terminal of the operational amplifier 1007. The operational amplifier 1007 will control the current flowing in the feedback transistor FB-FET such that the current flowing through the monitoring resistor RMON is proportional to the load current IL. In an example embodiment, amplifier 1007 is an operational amplifier coupled in a closed loop configuration. In the embodiment of FIG. 10, system 1000 can also sense current when load current IL reverses direction and flows from output terminal VOUT to input terminal VIN. In this configuration, current IL flows through the high side transistor HS-FET and the blocking transistor B-FET to terminal VIN. This occurs when the voltage at terminal VOUT is greater than the voltage at terminal VIN. A sense transistor SENSE-FET H is coupled such that the gate of the sense transistor is coupled to the gate of the high side transistor (signal HGATE is coupled to the two gate terminals), and the drains of the two devices are coupled to the common drain node VMID. Therefore, the sense transistor SENSE FET-H is matched to the high side transistor HS-FET. The sensed current flowing through the high side sense transistor SENSE FET-H will be proportional to the load current flowing through the high side transistor HS-FET. Another selection circuit is formed by transistors M1, M2 and M3, M4 and selects between the SENSE FET-B output and the SENSE FET-H output in accordance with control signals R_ and R. When the current is reversed, as indicated by signal R, transistors M3, M4 couple the high side sense current to the feedback transistor FB-FET, and the sensed current can be observed as the voltage at monitor terminal VMON. Operational amplifier 1007 will be coupled through transistor M8 to the source of the high side transistor (coupled to terminal VOUT), while the drain terminal of the high side transistor is coupled through transistor M7 to the opposite terminal of amplifier 1007. Transistor M7 and transistor M8 each have the direction signal RV coupled to the gate terminal. Transistor M3 and transistor M4 have the direction signal R coupled to the gate terminal. The signals RV and RV_ are direction signals, indicating when the current IL flows in the opposite direction, level shifted to the voltage of the VMID domain. Signals R and R_ are direction signals that indicate when VOUT is greater than VIN and the load current IL is flowing in the opposite direction. Signals RV and RV_ are coupled to a first selection circuit that selects the signals to the positive input terminal and the negative input terminal of operational amplifier 1007. Signals R and R_ are coupled to a second selection circuit that selects the sense current input to the feedback transistor FB-FET. In operation, the sensed current flowing through the feedback transistor FB-FET is proportional to the load current as described above; the ratio is determined by the device area ratio between the sense transistors SENSE FET-B and SENSE FET-H and the power transistors B-FET and HS-FET.
In an example, the sense transistor is 1/1000 of the device size of the power transistor, and the sense current is therefore 1/1000 of the magnitude of the load current IL. By detecting the direction of the load current and by enabling the appropriate sense current path and sensing device, the embodiment of Figure 10 can provide a sense current under two conditions: VIN > VOUT with the load current flowing from VIN to VOUT; and VOUT > VIN with the load current flowing from VOUT to VIN in the opposite direction. In Figure 10, signals R and R_ and the corresponding level shifted signals RV and RV_ are required for operation of circuit 1000. Figure 11A is a circuit diagram of an arrangement for providing the direction signals R and R_. Fig. 11B is a circuit diagram of a level shift circuit that generates signals RV and RV_. In FIG. 11A, voltage comparator 1101 compares the voltage at input terminal VIN with the voltage at output terminal VOUT and determines when VOUT is at a higher voltage than VIN. When the output voltage VOUT is the larger voltage, the signal R becomes active, indicating that the current is reversed. The inverse signal R_ is then simply output through the inverter 1103. Figure 11B is a circuit for a level shifter that shifts the signal R to the VMID voltage using a buffer powered by the voltage VMID. The buffer 1107 outputs RV, a level shifted version of the signal R. The inverter 1109, which also receives the voltage VMID, outputs the inverted signal RV_. Current bias 1111 provides current to the level shifting circuit. Other arrangements of the level shifting circuitry can be used with the embodiments.
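The direction detection and path selection can be summarized behaviorally. The following sketch uses the signal names from FIGS. 10-11; the Boolean modeling of the comparator, inverter, and selection circuits is an illustrative assumption, not the circuit itself:

    # Behavioral model of direction detection (FIGS. 11A/11B) and the
    # second selection circuit of FIG. 10 (illustrative only).
    def direction_signals(vin, vout):
        r = vout > vin            # comparator 1101: reverse-direction flag R
        r_bar = not r             # inverter 1103 output R_
        rv, rv_bar = r, not r     # level shifted copies RV, RV_ (FIG. 11B)
        return r, r_bar, rv, rv_bar

    def selected_sense_path(vin, vout):
        r, _, _, _ = direction_signals(vin, vout)
        # Transistors M3, M4 route the high side sense current when R is active.
        return "SENSE-FET H (reverse)" if r else "SENSE-FET B (forward)"

    print(selected_sense_path(5.0, 4.8))  # forward: sense through SENSE-FET B
    print(selected_sense_path(4.8, 5.0))  # reverse: sense through SENSE-FET H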
FIG. 12 is a block diagram of an embodiment electrical fuse system. In FIG. 12, a first integrated circuit 1203 includes the power transistors and the sense transistors in a single device. In the embodiment of FIG. 12, the power path of an electrical fuse system, including a blocking transistor, a high side transistor, and sensing transistors, can be implemented on a single semiconductor substrate using power transistor technology. In an example, a vertical FET device can be used. In an example, a NexFET™ device from Texas Instruments can be used. However, the current sensing embodiments and current limiting embodiments described above can also be used with other power FET technologies. Discrete FET devices on a board can be used to form embodiments. In FIG. 12, control IC 1201 may include the sense path operational amplifier, the resistor divider circuitry, and the feedback FET device as described above. Because the sensed current can be scaled far below the load current, and because the circuitry in the control IC 1201 containing the operational amplifier can be powered by a commonly used low current IC supply voltage, the control integrated circuit 1201 can be implemented using conventional high voltage, low current CMOS semiconductor devices. Using low current devices results in lower system cost and reduced power consumption. In operation, control IC 1201 can supply the BGATE signal and the HGATE signal to cause power IC 1203 to supply current to a load (not shown) coupled to the output terminal. The power IC 1203 can couple the current conduction paths of the power transistors in series between the output terminal and the input voltage at the input terminal VIN. The load current can be sensed by a sense transistor on the power IC, and the sensed current is output through the ISENSE signal. The operational amplifier and feedback transistor in the control IC can be used to provide an output voltage VMON using an external resistor RMON. In an example embodiment, a current limiting circuit may also be provided within the control IC 1201. When a load current exceeding the limit occurs, the gate voltage HGATE can be lowered to control the load current. This can be performed as described above when the voltage VMON exceeds the reference voltage. In an embodiment, one or more of the power transistors and the sense transistors may share a common drain node or a common source node. Advanced FET semiconductor devices, such as vertical FETs, can be used so that the power transistors and sense transistors are implemented on a single substrate, with the shared drain node or shared source node formed at the substrate. In an alternative embodiment, lateral FET devices for the power transistors can also be used with the current sensing circuitry and current limiting circuitry described above. An alternative arrangement that can form additional embodiments includes increasing the level of integration to form a single integrated circuit that includes the current sensing circuitry and the power circuitry. However, because the semiconductor process for power FETs is optimized for high voltage, high current capability transistors and is more expensive than conventional CMOS processes, it can be more cost effective to produce an embodiment arranged as two integrated circuits as shown in FIG. 12. In an example, an apparatus includes: a first power transistor having a first current conduction path between a first current conduction terminal and a second current conduction terminal, the first current conduction path of the first power transistor coupled between an input for receiving a supply voltage and a node, the first power transistor having a first gate terminal coupled to a first gate control signal for controlling the first power transistor; a second power transistor having a second current conduction path between a third current conduction terminal and a fourth current conduction terminal, the second current conduction path of the second power transistor being coupled between the node and an output terminal for supplying a load current to a load, the second power transistor having a second gate terminal coupled to a second gate control signal; a current sense transistor having one current conduction terminal coupled to the node and the first power transistor, having a third gate terminal coupled to the first gate control signal, and outputting a sense current at another current conduction terminal; a differential amplifier having a first input coupled to one of the first current conduction terminal and the second current conduction terminal of the first power transistor, having a second input coupled to the other of the first current conduction terminal and the second current conduction terminal, and having an output signal responsive to a voltage difference between the first input and the second input; a feedback transistor having another current conduction path coupled in series between the current sense transistor and a monitoring node, and having a feedback transistor gate terminal coupled to an output of the differential amplifier; and a resistor coupled between the monitoring node and ground, the sense current flowing through the resistor, the sense current being proportional to the load current flowing through the second power transistor. In a further example, in the above apparatus, the current
sense transistor and the first power transistor are formed on a semiconductor substrate, and the current sense transistor has a device area smaller than the device area of the first power transistor. In another example, in the above apparatus, the sense current flowing through the current sense transistor is proportional to the load current. In an additional example, in the above apparatus, the first power transistor, the second power transistor, and the current sense transistor are field effect transistor (FET) devices formed on a single integrated circuit. In a further example, the FET devices forming the power transistors are devices selected from the group consisting of vertical FET devices and non-vertical FET devices. In yet another example, in the above apparatus, the node is formed in a semiconductor substrate of the single integrated circuit. In an alternative arrangement, the apparatus further includes a fast trip comparator coupled between the node and a voltage divider coupled to the input, for outputting a fast trip signal in response to a drop in the voltage at the node when the load current rapidly increases. In still another example, the apparatus further includes a current limiting circuit coupled to the second gate terminal of the second power transistor for limiting the voltage of the second gate control signal when the sensed current exceeds a current limit. In still another example, in the above device, the first current conduction terminal of the first power transistor is a first source terminal, the second current conduction terminal of the first power transistor is a first drain terminal, the third current conduction terminal of the second power transistor is a second drain terminal, and the fourth current conduction terminal of the second power transistor is a second source terminal; and the current sense transistor has a third drain terminal as its current conduction terminal, the third drain terminal being coupled at the node to the first drain terminal of the first power transistor and the second drain terminal of the second power transistor. In yet another example, in the above examples, the differential amplifier is an operational amplifier.
In still a further example, the operational amplifier is coupled to the feedback transistor in a closed loop. In another example, a circuit system includes: a first field effect transistor having a first source terminal and a first drain terminal, the first source terminal being coupled to an input terminal for receiving a power supply and the first drain terminal being coupled to a node, and having a first gate terminal for receiving a first gate control signal; a second field effect transistor having a second drain terminal and a second source terminal, the second drain terminal being coupled to the node and the second source terminal being coupled to an output terminal for supplying a load current to a load, and having a second gate terminal for receiving a second gate control signal; a current sense transistor having a third drain terminal coupled to the node and a third source terminal coupled to output a sense current, the current sense transistor having a third gate control terminal coupled to the first gate control signal; a first current limiting amplifier having a first input coupled to the input terminal and a second input coupled to the node, and outputting the second gate control signal; and an operational amplifier coupled to a feedback transistor, the operational amplifier having a voltage reference at its first input, having a current limit output terminal at its second input, and having an output coupled to a gate terminal of the feedback transistor, the feedback transistor having a current conduction path coupled between the sense current output of the current sense transistor and the current limit output terminal. In yet another example, the circuitry further includes a first resistor and a second resistor coupled between the input terminal and the first input of the current limiting amplifier, the second resistor being coupled between the first resistor and the third source terminal of the current sense transistor. In still another example, in the above circuit system, the second resistor further includes a third resistor and a fourth resistor in a resistor ladder configuration. In yet another example, the circuitry further includes a fast trip comparator coupled to compare a voltage between the third resistor and the fourth resistor with the voltage at the node, to output a fast trip output signal in response to a decrease in the voltage at the node, the decrease indicating that the load current is rapidly increasing. In still another additional example, the circuitry further includes a current limiting resistor coupled between the current limit output and ground. In an additional example, in the above examples, the first field effect transistor, the second field effect transistor, and the current sense transistor are on one integrated circuit. In yet another example, an apparatus includes: a voltage input terminal for receiving a supply voltage; a voltage output terminal for coupling to a load; a first power transistor having a first current conduction path coupled between the voltage input terminal and a common node and having a first gate terminal coupled to a first gate control signal; and a second power transistor having a second current conduction path coupled between the common node and the voltage output terminal and having a second gate terminal coupled to a second gate control signal.
The apparatus further includes a first current sense transistor having a third current conduction path coupled to the common node and having a third gate terminal coupled to the first gate control signal, for outputting a first sense current proportional to a load current flowing from the voltage input terminal to the voltage output terminal; a second current sense transistor having a fourth current conduction path coupled to the common node and having a fourth gate terminal coupled to the second gate control signal, for outputting a second sense current proportional to the load current flowing from the output terminal to the input terminal; a differential amplifier having a first input terminal and a second input terminal and having an output signal corresponding to a difference between the voltages at the first input terminal and the second input terminal; and a feedback transistor coupled to a monitoring resistor at a monitoring node, having a current conduction path coupled to one of the first sense current and the second sense current, and having a gate control terminal coupled to the output terminal of the differential amplifier. In still another example, the apparatus further includes a first selection circuit for coupling, in response to a signal indicative of a direction of the load current, the first input terminal of the differential amplifier to a selected one of: a resistor coupled to the input voltage terminal, and the second current sense transistor. In yet another example, the apparatus further includes a second selection circuit for coupling, in response to a signal indicative of the load current direction, the feedback transistor to one of the first sense current from the first current sense transistor and the second sense current from the second current sense transistor. Modifications may be made in the described embodiments, and other embodiments are possible within the scope of the claims. |
The invention discloses an apparatus, method, and system for an 8-bit floating point matrix dot product instruction. Systems, methods, and apparatus related to 8-bit floating point matrix dot product instructions are described. A processor embodiment includes fetch circuitry to fetch an instruction having fields to specify an opcode and a location of a destination matrix having single-precision elements, a location of a first source matrix, and a location of a second source matrix, the source matrices having elements that each comprise a quadruple of 8-bit floating point values, the opcode to indicate that execution circuitry is to cause: for each element of the first source matrix and a corresponding element of the second source matrix, a conversion of the 8-bit floating point values to single-precision values, multiplications of different pairs of the converted single-precision values to generate a plurality of results, and an accumulation of the results with previous contents of a corresponding element of the destination matrix; decode circuitry to decode the fetched instruction; and execution circuitry to respond to the decoded instruction as specified by the opcode. |
1. A device comprising:
a fetch circuit for fetching a single instruction having fields for specifying an opcode and a location of an M by N destination matrix having single-precision elements, a location of an M by K first source matrix, and a location of a K by N second source matrix, the source matrices having elements each comprising a quadruple of 8-bit floating point values, the opcode for instructing the execution circuit to cause: for each element of the first source matrix and a corresponding element of the second source matrix, converting the 8-bit floating point values to single-precision values, multiplying the converted single-precision values of the first values from the quadruples together to generate a first result, multiplying the converted single-precision values of the second values from the quadruples together to generate a second result, multiplying the converted single-precision values of the third values from the quadruples together to generate a third result, multiplying the converted single-precision values of the fourth values from the quadruples together to generate a fourth result, and accumulating the first result, the second result, the third result, and the fourth result with the previous contents of the corresponding element of the destination matrix;
decoding circuitry for decoding the fetched instruction; and
execution circuitry to respond to the decoded instruction as specified by the opcode.
2. The apparatus of claim 1, wherein the 8-bit floating point format is specified by the opcode of the single instruction.
3. The apparatus of claim 1, wherein M, N, and K are specified by the single instruction.
4. The apparatus of claim 1, wherein the execution circuit is configured to cause a matrix operation accelerator to perform at least the multiply and accumulate.
5. The apparatus of claim 4, wherein M, N, and K are specified by a configuration of the matrix operation accelerator programmed by execution of a matrix accelerator configuration instruction prior to executing the single instruction.
6. The apparatus of any one of claims 1-5, wherein the execution circuit is further configured to cause saturation of the execution result when necessary.
7. The apparatus of any one of claims 1-5, wherein the single instruction is further for specifying a write mask comprising M x N bits, each bit for controlling whether to mask a corresponding element of the destination matrix.
8.
The apparatus of any one of claims 1-5, wherein the execution circuit is further configured to generate an error when an error condition occurs, the error condition being selectable from: the number of rows of the destination matrix is less than the number of rows of the first source matrix; and the number of columns of the destination matrix is less than the number of columns of the second source matrix.
9. A method comprising:
fetching, by a fetch circuit of a processor, a single instruction having fields for specifying an opcode and a location of an M by N destination matrix having single-precision elements, a location of an M by K first source matrix, and a location of a K by N second source matrix, the source matrices having elements each comprising a quadruple of 8-bit floating point values, the opcode for instructing the execution circuit to cause: for each element of the first source matrix and the corresponding element of the second source matrix, converting the 8-bit floating point values to single-precision values, multiplying the converted single-precision values of the first values from the quadruples together to generate a first result, multiplying the converted single-precision values of the second values from the quadruples together to generate a second result, multiplying the converted single-precision values of the third values from the quadruples together to generate a third result, multiplying the converted single-precision values of the fourth values from the quadruples together to generate a fourth result, and accumulating the first result, the second result, the third result, and the fourth result with the previous contents of the corresponding element of the destination matrix;
decoding, by decoding circuitry of the processor, the fetched instruction into a decoded single instruction; and
executing, by the execution circuitry of the processor, the decoded single instruction according to the opcode.
10. The method of claim 9, wherein the 8-bit floating point format is specified by the opcode of the single instruction.
11. The method of claim 9, wherein M, N, and K are specified by the single instruction.
12. The method of claim 9, wherein the execution circuit causes a matrix operation accelerator to perform at least the multiply and accumulate.
13. The method of claim 12, further comprising executing, by the execution circuitry of the processor, a matrix accelerator configuration instruction prior to executing the single instruction, the matrix accelerator configuration instruction programming the configuration of the matrix operation accelerator specifying M, N, and K.
14. The method of any of claims 9-13, wherein the executing comprises saturating the execution results.
15. The method of any of claims 9-13, wherein the single instruction further specifies a write mask comprising M x N bits, each bit controlling whether or not to mask a corresponding element of the destination matrix.
16.
The method of any of claims 9-13, wherein the executing generates an error when an error condition occurs, the error condition being selectable from: the number of rows of the destination matrix is less than the number of rows of the first source matrix; and the number of columns of the destination matrix is less than the number of columns of the second source matrix.
17. A non-transitory machine-readable medium storing program code that, when executed by a machine, causes the machine to perform a method comprising the steps of:
fetching, by a fetch circuit of a processor, a single instruction having fields for specifying an opcode and a location of an M by N destination matrix having single-precision elements, a location of an M by K first source matrix, and a location of a K by N second source matrix, the source matrices having elements each comprising a quadruple of 8-bit floating point values, the opcode for instructing the execution circuit to cause: for each element of the first source matrix and the corresponding element of the second source matrix, converting the 8-bit floating point values to single-precision values, multiplying the converted single-precision values of the first values from the quadruples together to generate a first result, multiplying the converted single-precision values of the second values from the quadruples together to generate a second result, multiplying the converted single-precision values of the third values from the quadruples together to generate a third result, multiplying the converted single-precision values of the fourth values from the quadruples together to generate a fourth result, and accumulating the first result, the second result, the third result, and the fourth result with the previous contents of the corresponding element of the destination matrix;
decoding, by decoding circuitry of the processor, the fetched instruction into a decoded single instruction; and
executing, by the execution circuitry of the processor, the decoded single instruction according to the opcode.
18. The non-transitory machine-readable medium of claim 17, wherein the 8-bit floating point format is specified by the opcode of the single instruction.
19. The non-transitory machine-readable medium of claim 17, wherein M, N, and K are specified by the single instruction.
20. The non-transitory machine-readable medium of claim 17, wherein the executing includes the execution circuit causing a matrix operation accelerator to perform at least the multiply and accumulate.
21. The non-transitory machine-readable medium of claim 20, wherein the method further comprises executing, by the execution circuitry of the processor, a matrix accelerator configuration instruction prior to executing the single instruction, the matrix accelerator configuration instruction programming the configuration of the matrix operation accelerator specifying M, N, and K.
22. The non-transitory machine-readable medium of any of claims 17-21, wherein the executing comprises saturating a result of the execution.
23. The non-transitory machine-readable medium of any of claims 17-21, wherein the single instruction further specifies a write mask comprising M x N bits, each bit controlling whether to mask a corresponding element of the destination matrix.
24.
The non-transitory machine-readable medium of any of claims 17-21, wherein the execution generates an error when an error condition occurs, the error condition being selectable from: the number of rows of the destination matrix is less than the number of rows of the first source matrix; and the number of columns of the destination matrix is less than the number of columns of the second source matrix. |
Apparatus, Method and System for 8-Bit Floating Point Matrix Dot Product Instructions. TECHNICAL FIELD. The present disclosure relates generally to computer processor architecture, and more particularly to systems and methods for executing 8-bit floating point matrix dot product instructions. BACKGROUND. Matrices are becoming increasingly important in many computing tasks such as machine learning and other batch data processing. Deep learning is a class of machine learning algorithms. Deep learning architectures such as deep neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, and drug design. Two tools for deep learning, inference and training, are trending toward low-precision arithmetic. Maximizing the throughput of deep learning algorithms and computations can assist in meeting the demands of deep learning processors, such as those that perform deep learning in data centers. Matrix-matrix multiplication (also known as GEMM or general matrix multiplication) is a common compute-heavy operation on modern processors. Special hardware for matrix multiplication (e.g., GEMM) is a good option for improving the peak computation (and energy efficiency) of certain applications such as deep learning. Some of these applications, including deep learning, can operate on input data elements with relatively few bits without loss of accuracy, as long as the output elements have enough bits (i.e., more than the inputs). BRIEF DESCRIPTION OF THE DRAWINGS. The present disclosure is illustrated by way of example and not by way of limitation in the accompanying drawings, in which like reference numerals refer to like elements, wherein: FIG. 1A illustrates an embodiment of configured tiles; FIG. 1B illustrates an embodiment of configured tiles; FIG. 2 illustrates several examples of matrix storage; FIG. 3 illustrates an embodiment of a system for operating an accelerator using matrices (tiles); FIGS. 4 and 5 illustrate different embodiments of how memory can be shared using a matrix operations accelerator; FIG. 6 illustrates an embodiment of a matrix multiply-accumulate operation ("TMMA") using tiles;
FIG. 7 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction;
FIG. 8 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction;
FIG. 9 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction;
FIG. 10 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction;
FIG. 11 illustrates power-of-two sized SIMD implementations in which the accumulators use input sizes that are larger than the inputs to the multipliers, according to an embodiment;
FIG. 12 illustrates an embodiment of a system utilizing matrix operations circuitry;
FIG. 13 illustrates an embodiment of a processor core pipeline supporting matrix operations using slices;
FIG. 14 illustrates an embodiment of a processor core pipeline supporting matrix operations using slices;
FIG. 15 illustrates an example of a matrix expressed in row-major format and column-major format;
FIG. 16 illustrates an example of the usage of matrices (slices);
FIG. 17 illustrates an embodiment of a method of usage of matrices (slices);
FIG. 18 illustrates support for a configuration of the usage of slices, according to an embodiment;
FIG. 19 illustrates an embodiment of a description of the matrices (slices) to be supported;
FIGS. 20(A)-20(D) illustrate examples of register(s);
FIG. 21A is a block diagram illustrating use of a TDPBF8PS instruction to accelerate matrix multiplication, according to some embodiments;
FIG. 21B is a block diagram illustrating an example execution circuit for executing a TDPBF8PS instruction, according to some embodiments;
FIG. 22A is pseudocode illustrating execution of a TDPBF8PS instruction, according to some embodiments;
FIG. 22B is pseudocode illustrating helper functions for use by the pseudocode of FIG. 22A, according to some embodiments;
FIG. 23 illustrates an embodiment of a processor executing a flow to process a TDPBF8PS instruction;
FIG. 24 is a block diagram illustrating the format of a TDPBF8PS instruction, according to some embodiments;
FIGS. 25A-25B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof, according to an embodiment;
FIG. 25A is a block diagram illustrating a generic vector friendly instruction format and Class A instruction templates thereof, according to an embodiment;
FIG. 25B is a block diagram illustrating a generic vector friendly instruction format and Class B instruction templates thereof, according to an embodiment;
FIG. 26A is a block diagram illustrating an exemplary specific vector friendly instruction format, according to an embodiment;
FIG. 26B is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the full opcode field, according to one embodiment;
FIG. 26C is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the register index field, according to one embodiment;
FIG. 26D is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the extended operation field, according to one embodiment;
FIG. 27 is a block diagram of a register architecture according to one embodiment;
FIG. 28A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register-renaming, out-of-order issue/execution pipeline, according to an embodiment;
FIG. 28B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register-renaming, out-of-order issue/execution architecture core to be included in a processor, according to an embodiment;
FIGS. 29A-29B illustrate block diagrams of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip;
FIG. 29A is a block diagram of a single processor core, along with its connection to the on-die interconnect network and its local subset of the level 2 (L2) cache, according to an embodiment;
FIG. 29B is an expanded view of part of the processor core in FIG. 29A, according to an embodiment;
FIG. 30 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics, according to an embodiment;
FIGS. 31-34 are block diagrams of exemplary computer architectures;
FIG. 31 shows a block diagram of a system according to one embodiment of the present disclosure;
FIG. 32 is a block diagram of a first more specific exemplary system according to an embodiment of the present disclosure;
FIG. 33 is a block diagram of a second more specific exemplary system according to an embodiment of the present disclosure;
FIG. 34 is a block diagram of a system on a chip (SoC) according to an embodiment of the present disclosure; and
FIG. 35 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, according to an embodiment.

Detailed Description

In the following description, numerous specific details are set forth. It should be understood, however, that embodiments may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description.

References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc. indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such a feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described.

In many mainstream processors, handling matrices is a difficult and/or instruction-intensive task. For example, multiple rows of a matrix may be placed into multiple packed data (e.g., SIMD or vector) registers, and then the rows of the matrix may be operated on individually. For example, adding two 8x2 matrices may require a load or gather into four packed data registers, depending upon data sizes. Then a first addition of the packed data registers corresponding to a first row from each matrix is performed, and a second addition of the packed data registers corresponding to a second row from each matrix is performed. The resulting packed data registers are then scattered back to memory.
While this scenario may be acceptable for small matrices, it is often not acceptable for larger matrices.

Discussion

Described herein are mechanisms to support matrix operations in computer hardware such as central processing units (CPUs), graphics processing units (GPUs), and accelerators. The matrix operations utilize 2-dimensional (2-D) data structures representing one or more packed regions of memory, such as registers. Throughout this description, these 2-D data structures are referred to as slices. Note that a matrix may be smaller than a slice (using less than all of the slice) or may utilize a plurality of slices (the matrix is larger than the size of any one slice). Throughout this description, matrix (slice) language is used to indicate operations performed using slices that affect a matrix; whether the matrix is larger than any one slice is not typically relevant.

Each slice may be acted upon by different operations, such as those detailed herein, including but not limited to: matrix (slice) multiplication, slice addition, slice subtraction, slice diagonal, slice zeroing, slice transpose, slice dot product, slice broadcast, slice row broadcast, slice column broadcast, slice multiplication, slice multiplication and accumulation, slice move, and so on. Additionally, support for operators such as the use of scaling and/or biasing may be used with these operations in the future, or to support non-numeric applications, e.g., OpenCL "local memory," data compression/decompression, and so on. Also described herein are instructions for executing a matrix (slice) 8-bit floating point slice dot product (TDPBF8PS) instruction.

Portions of storage, such as memory (non-volatile and volatile), registers, caches, etc., are arranged into slices of different horizontal and vertical dimensions. For example, a slice may have a horizontal dimension of 4 (e.g., four rows of a matrix) and a vertical dimension of 8 (e.g., 8 columns of the matrix). Typically, the horizontal dimension is related to element size (e.g., 2-, 4-, 8-, 16-, 32-, 64-, 128-bit, etc.). Multiple data types may be supported (single-precision floating point, double-precision floating point, integer, etc.).

Exemplary Uses of Configured Slices

In some embodiments, slice parameters can be configured. For example, a given slice may be configured to provide slice options. Exemplary slice options include but are not limited to: a number of rows of the slice, a number of columns of the slice, whether the slice is valid, and whether the slice consists of a pair of equal-sized slices.

Figure 1A illustrates an embodiment of configured slices. As shown, 4 kB of application memory 102 has stored thereon four 1 kB slices: slice t0 104, slice t1 106, slice t2 108, and slice t3 110. In this example, the 4 slices do not consist of pairs, and each has elements arranged in rows and columns. Slice t0 104 and slice t1 106 have K rows and N columns of 4-byte elements (e.g., single-precision data), where K=8 and N=32. Slice t2 108 and slice t3 110 have K rows and N/2 columns of 8-byte elements (e.g., double-precision data). As the double-precision operands are twice the width of single-precision, this configuration is consistent with a palette, used to provide slice options, supplying at least 4 names with total storage of at least 4 kB. In operation, the slices can be loaded from and stored to memory using load and store operations. Depending upon the instruction encoding scheme used, the amount of available application memory, as well as the size, number, and configuration of available slices, varies.
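For concreteness, the arithmetic behind the Figure 1A configuration can be checked with a small C sketch; the helper name slice_bytes is illustrative only, not part of any specification:

    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical helper: bytes of storage a slice configuration occupies,
     * i.e., rows * columns * element size, as in the Figure 1A discussion. */
    static size_t slice_bytes(size_t rows, size_t cols, size_t elem_size) {
        return rows * cols * elem_size;
    }

    int main(void) {
        /* Slices t0/t1: K=8 rows, N=32 columns of 4-byte (single-precision) elements. */
        assert(slice_bytes(8, 32, 4) == 1024);    /* 1 kB per slice */
        /* Slices t2/t3: K=8 rows, N/2=16 columns of 8-byte (double-precision) elements. */
        assert(slice_bytes(8, 16, 8) == 1024);    /* also 1 kB per slice */
        /* Four 1 kB slices exactly fill the 4 kB of application memory 102. */
        return 0;
    }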
Figure 1B illustrates an embodiment of configured slices. As shown, 4 kB of application memory 122 has stored thereon 2 pairs of 1 kB slices; the first pair is slice t4L 124 and slice t4R 126, and the second pair is slice t5L 128 and slice t5R 130. As shown, the slice pairs are divided into a left slice and a right slice. In other embodiments, slice pairs are divided into an even slice and an odd slice. In this example, the 4 slices each have elements arranged in rows and columns. Slice t4L 124 and slice t4R 126 have K rows and N columns of 4-byte elements (e.g., single-precision floating point data), where K=8 and N=32. Slice t5L 128 and slice t5R 130 have K rows and N/2 columns of 8-byte elements (e.g., double-precision floating point data). As the double-precision operands are twice the width of single-precision, this configuration is consistent with a palette, used to provide slice options, supplying at least 2 names with total storage of at least 4 kB. The four slices of Figure 1A use 4 names, with each name naming a 1 kB slice, whereas the 2 pairs of slices in Figure 1B can use 2 names to specify the paired slices. In some embodiments, slice instructions accept a name of a paired slice as an operand. In operation, the slices can be loaded from and stored to memory using load and store operations. Depending upon the instruction encoding scheme used, the amount of available application memory, as well as the size, number, and configuration of available slices, varies.

In some embodiments, slice parameters are definable. For example, a "palette" is used to provide slice options. Exemplary options include, but are not limited to: the number of slice names, the number of bytes in a row of storage, the number of rows and columns in a slice, etc. For example, the maximum "height" (number of rows) of a slice may be defined as:

Slice Max Rows = Architected Storage / (Number of Palette Names * Bytes per Row).

As such, an application can be written such that a fixed usage of names will be able to take advantage of different storage sizes across implementations. For instance, with 16 kB of architected storage, 8 palette names, and 64-byte rows, the maximum height would be 16384 / (8 * 64) = 32 rows.

Configuration of slices is done using a matrix (slice) configuration ("TILECONFIG") instruction, where a particular slice usage is defined in a selected palette. This declaration includes the number of slice names to be used, the requested number of rows and columns per name (slice), and, in some embodiments, the requested datatype of each slice. In some embodiments, consistency checks are performed during the execution of the TILECONFIG instruction to determine that it matches the restrictions of the palette entry.

Exemplary Slice Storage Types

Figure 2 illustrates several examples of matrix storage. In (A), a slice is stored in memory. As shown, each "row" consists of four packed data elements. To get to the next "row," a stride value is used. Note that rows may be consecutively stored in memory. Strided memory accesses allow for access of one row and subsequently the next row when the slice storage does not map the underlying memory array row width.

Loading slices from memory and storing slices to memory are typically strided accesses from the application memory to packed rows of data.
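The strided access pattern of Figure 2(A) can be modeled in a few lines of C. This is a hedged software sketch (the function and parameter names are hypothetical), not a description of the actual load hardware:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical model of a strided slice load: row r of the slice begins
     * at base + r*stride in application memory, so rows need not be
     * contiguous even though each row itself is a packed run of elements. */
    static void slice_load(float *slice, const uint8_t *base, size_t stride,
                           size_t rows, size_t cols) {
        for (size_t r = 0; r < rows; ++r) {
            /* copy one packed row of data per iteration */
            memcpy(&slice[r * cols], base + r * stride, cols * sizeof(float));
        }
    }

A slice store is the mirror image, copying each packed row of the slice back out to base + r*stride.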
Exemplary TILELOAD and TILESTORE instructions, or other instruction references to application memory as a TILE (slice) operand in load-op instructions, are, in some embodiments, restartable to handle (up to) 2* rows of page faults, unmasked floating-point exceptions, and/or interrupts per instruction.

In (B), a matrix is stored in a slice comprised of a plurality of registers, such as packed data registers (single instruction, multiple data (SIMD) or vector registers). In this example, the slice is overlaid on three physical registers. Typically, consecutive registers are used; however, this need not be the case.

In (C), a matrix is stored in a slice in non-register storage accessible to fused multiply-accumulate (FMA) circuits used in slice operations. This storage may be inside an FMA, or adjacent to it. Additionally, in some embodiments discussed below, the storage may be for a data element and not an entire row or slice.

The supported parameters for the TMMA architecture are reported via CPUID. In some embodiments, the list of information includes a maximum height and a maximum SIMD dimension. Configuring the TMMA architecture requires specifying the dimensions for each slice, the element size for each slice, and a palette identifier. This configuration is done by executing the TILECONFIG instruction.

Successful execution of a TILECONFIG instruction enables subsequent TILE operators. A TILERELEASEALL instruction clears the slice configuration and disables the TILE operations (until the next TILECONFIG instruction executes). In some embodiments, XSAVE, XSTORE, etc. are used in context switching using slices. In some embodiments, 2 XCR0 bits are used in XSAVE: one for TILECONFIG metadata and one bit corresponding to the actual slice payload data.

TILECONFIG not only configures the slice usage, but also sets a state variable indicating that the program is in a region of code with slices configured. An implementation may enumerate restrictions on other instructions that can be used with a slice region, such as no usage of an existing register set, etc.

Exiting a slice region is typically done with the TILERELEASEALL instruction. It takes no parameters and swiftly invalidates all slices (indicating that the data no longer needs any saving or restoring) and clears the internal state corresponding to being in a slice region.

In some embodiments, slice operations will zero any rows and any columns beyond the dimensions specified by the slice configuration. For example, slice operations will zero the data beyond the configured number of columns (factoring in the size of the elements) as each row is written. For example, with 64-byte rows and a slice configured with 10 rows and 12 columns, an operation writing FP32 elements would write each of the first 10 rows with 12*4 bytes of output/result data and zero the remaining 4*4 bytes in each of those rows. Slice operations also fully zero any rows after the first 10 configured rows. When using a 1 kB slice with 64-byte rows, there would be 16 rows, so, in this example, the last 6 rows would also be zeroed.
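The zeroing rule just described can be sketched as follows for the 10x12 FP32 example. The model assumes a 1 kB slice with 64-byte physical rows (16 FP32 lanes); all names and dimensions are illustrative:

    #include <string.h>

    #define PHYS_ROWS 16           /* 1 kB slice / 64-byte rows  */
    #define PHYS_COLS 16           /* 64 bytes / 4-byte FP32     */

    /* Hedged model of one row write under a (cfg_rows x cfg_cols)
     * configuration: configured columns receive result data, the rest of
     * the row is zeroed, and rows beyond the configured count are fully
     * zeroed. */
    static void write_row(float slice[PHYS_ROWS][PHYS_COLS], int row,
                          const float *result, int cfg_rows, int cfg_cols) {
        if (row >= cfg_rows) {                       /* rows 10..15 here     */
            memset(slice[row], 0, PHYS_COLS * sizeof(float));
            return;
        }
        memcpy(slice[row], result, cfg_cols * sizeof(float));   /* 12*4 bytes */
        memset(slice[row] + cfg_cols, 0,
               (PHYS_COLS - cfg_cols) * sizeof(float));         /* zero 4*4 B */
    }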
In some embodiments, when loading data, a context restore instruction (e.g., XRSTOR) enforces that the data beyond the configured rows for a slice will be maintained as zero. If there is no valid configuration, all rows are zeroed. An XRSTOR of slice data can load garbage in the columns beyond those configured. It should not be possible for XRSTOR to clear beyond the configured number of columns, because there is no element width associated with the slice configuration.

A context save (e.g., XSAVE) exposes the entire TILE storage area when writing it to memory. If XRSTOR loaded garbage data into the rightmost part of a slice, that data will be saved by XSAVE. XSAVE will write zeros for rows beyond the number specified for each slice.

In some embodiments, slice instructions are restartable. The operations that access memory allow restart after page faults. The computational instructions that deal with floating-point operations also allow for unmasked floating-point exceptions, with the masking of the exceptions controlled by a control and/or status register.

To support restarting instructions after these events, the instructions store information in the start registers detailed below.

Matrix (Slice) Operation Systems

Exemplary Hardware Support

Figure 3 illustrates an embodiment of a system for operating an accelerator with a matrix (slice). In this illustration, a host processor/processing system 301 communicates commands 311 (e.g., matrix manipulation operations such as arithmetic or matrix manipulation operations, or load and store operations) to a matrix operations accelerator 307. However, this is shown this way for discussion purposes only; as detailed later, the accelerator 307 may be a part of a processing core. Typically, commands 311 that are slice manipulation operator instructions refer to slices in a register-register ("reg-reg") or register-memory ("reg-mem") format. Other commands, such as TILESTORE, TILELOAD, TILECONFIG, etc., do not perform data operations on a slice. Commands may be decoded instructions (e.g., micro-ops) or macro-instructions for the accelerator 307 to handle.

In this example, a coherent memory interface 303 is coupled to the host processor/processing system 301 and the matrix operations accelerator 307 such that they can share memory. Figures 4 and 5 show different embodiments of how memory is shared using a matrix operations accelerator. As shown in Figure 4, the host processor 401 and the matrix operations accelerator circuitry 405 share the same memory 403. Figure 5 illustrates an embodiment in which the host processor 501 and the matrix operations accelerator 505 do not share memory but can access each other's memory. For example, the processor 501 can access the slice memory 507 and utilize its host memory 503 as normal. Similarly, the matrix operations accelerator 505 can access the host memory 503, but more typically uses its own memory 507. Note that these memories may be of different types.

In some embodiments, slices are supported using an overlay over physical registers. For example, a slice may utilize 16 1,024-bit registers, 32 512-bit registers, etc., depending on the implementation. In some embodiments, the matrix operations utilize 2-dimensional (2-D) data structures representing one or more packed regions of memory, such as registers. Throughout this description, these 2-D data structures are referred to as slices or slice registers.

In some embodiments, the matrix operations accelerator 307 includes a plurality of FMAs 309 coupled to data buffers 305 (in some implementations, one or more of these buffers 305 are stored in the FMAs of the grid as shown). The data buffers 305 buffer slices loaded from memory and/or slices to be stored to memory (e.g., using slice load or slice store instructions). The data buffers may be, for example, a plurality of registers.
Typically, these FMAs are arranged as a grid of chained FMAs 309 that are able to read and write slices. In this example, the matrix operations accelerator 307 is to perform a matrix multiply operation using slices T0, T1, and T2. At least one of the slices is housed in the FMA grid 309. In some embodiments, all slices in an operation are stored in the FMA grid 309. In other embodiments, only a subset is stored in the FMA grid 309. As shown, T1 is housed, while T0 and T2 are not. Note that A, B, and C refer to the matrices of these slices, which may or may not take up the entire space of a slice.

Figure 6 illustrates an embodiment of a matrix multiply-accumulate operation using slices ("TMMA").

The number of rows in the matrix (slice A 601) matches the number of serial (chained) FMAs comprising the computation's latency. An implementation is free to recirculate on a grid of smaller height, but the computation remains the same.

The source/destination vector comes from a slice of N rows (slice C 605), and the grid of FMAs 611 performs N vector-matrix operations, resulting in a complete instruction performing a matrix multiplication of slices. Slice B 603 is the other vector source and supplies the "broadcast" terms to the FMAs in each stage.

In operation, in some embodiments, the elements of matrix B (stored in slice B 603) are spread across the rectangular grid of FMAs. Matrix A (stored in slice A 601) has the elements of a row transposed to match up with the columnar dimension of the rectangular grid of FMAs. At each FMA in the grid, an element of A and an element of B are multiplied and added to the incoming summand (from above in the figure), and the outgoing sum is passed to the next row of FMAs (or to the final output).

The latency of a single step is proportional to K (the row height of matrix B), and dependent TMMAs typically have enough source-destination rows (either in a single slice or across slices) to hide that latency. An implementation may also split the SIMD (packed data element) dimension M (the row height of matrix A) across time steps, but this simply changes the constant that K is multiplied by. When a program specifies a smaller K than the maximum enumerated by TMACC, an implementation is free to implement this with "masking" or "early outs."

The latency of an entire TMMA is proportional to N*K. The repeat rate is proportional to N. The number of MACs per TMMA instruction is N*K*M.
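As a point of reference, the overall TMMA dataflow amounts to the following C model, a hedged sketch rather than the hardware implementation; it makes the N*K*M MAC count explicit:

    /* Minimal reference model of the slice multiply-accumulate (TMMA)
     * dataflow: C[M][N] += A[M][K] * B[K][N]. Each destination element sees
     * K multiply-adds, so one instruction performs N*K*M MACs, matching
     * the count given above. */
    static void tmma_ref(int M, int N, int K,
                         float *C, const float *A, const float *B) {
        for (int m = 0; m < M; ++m)
            for (int n = 0; n < N; ++n) {
                float acc = C[m * N + n];       /* previous destination value */
                for (int k = 0; k < K; ++k)
                    acc += A[m * K + k] * B[k * N + n];
                C[m * N + n] = acc;
            }
    }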
Figure 7 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply-accumulate is operating on signed sources, where the accumulator is 2x the input data size.

The first signed source (source 1 701) and the second signed source (source 2 703) each have four packed data elements. Each of these packed data elements stores signed data such as floating-point data. The third signed source (source 3 709) has two packed data elements, each of which stores signed data. The sizes of the first signed source 701 and the second signed source 703 are half that of the third signed source (initial value or previous result) 709. For example, the first signed source 701 and the second signed source 703 could have 32-bit packed data elements (e.g., single-precision floating point), while the third signed source 709 could have 64-bit packed data elements (e.g., double-precision floating point).

In this illustration, only the two most significant packed data element positions of the first signed source 701 and the second signed source 703, and the most significant packed data element position of the third signed source 709, are shown. Of course, the other packed data element positions would also be processed.

As illustrated, packed data elements are processed in pairs. For example, the data of the most significant packed data element positions of the first signed source 701 and the second signed source 703 are multiplied using a multiplier circuit 705, and the data from the next most significant packed data element positions of the first signed source 701 and the second signed source 703 are multiplied using a multiplier circuit 707. In some embodiments, these multiplier circuits 705 and 707 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source 709. The results of each of the multiplications are added using an addition circuit 711.

The result of the addition of the results of the multiplications is added to the data from the most significant packed data element position of the signed source 3 709 (using a different adder 713 or the same adder 711).

Finally, the result of the second addition is either stored into the signed destination 715 in a packed data element position that corresponds to the packed data element position used from the signed third source 709, or passed on to the next iteration if there is one. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if it is not set, the storage does not happen.
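The per-position dataflow of Figure 7 can be summarized with a hedged C sketch; the function name is illustrative, and real hardware chains these steps through circuits rather than calling a function:

    /* Sketch of the Figure 7 dataflow for one destination position: two
     * single-precision pairs are multiplied, the products are summed, and
     * the sum is accumulated into a double-precision element (accumulator
     * 2x the input size). */
    static double fma2_step(const float src1[2], const float src2[2],
                            double acc) {
        double p0 = (double)src1[0] * (double)src2[0];  /* multiplier 705 */
        double p1 = (double)src1[1] * (double)src2[1];  /* multiplier 707 */
        return acc + (p0 + p1);     /* addition 711, then accumulate (713) */
    }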
Figure 8 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply-accumulate is operating on signed sources, where the accumulator is 2x the input data size.

The first signed source (source 1 801) and the second signed source (source 2 803) each have four packed data elements. Each of these packed data elements stores signed data such as integer data. The third signed source (source 3 809) has two packed data elements, each of which stores signed data. The sizes of the first signed source 801 and the second signed source 803 are half that of the third signed source 809. For example, the first signed source 801 and the second signed source 803 could have 32-bit packed data elements (e.g., single-precision floating point), while the third signed source 809 could have 64-bit packed data elements (e.g., double-precision floating point).

In this illustration, only the two most significant packed data element positions of the first signed source 801 and the second signed source 803, and the most significant packed data element position of the third signed source 809, are shown. Of course, the other packed data element positions would also be processed.

As illustrated, packed data elements are processed in pairs. For example, the data of the most significant packed data element positions of the first signed source 801 and the second signed source 803 are multiplied using a multiplier circuit 805, and the data from the next most significant packed data element positions of the first signed source 801 and the second signed source 803 are multiplied using a multiplier circuit 807. In some embodiments, the multiplier circuits 805 and 807 perform the multiplications with infinite precision without saturation, and use adder/saturation circuitry 813 to saturate the results of the accumulation to plus or minus infinity in case of an overflow and to zero in case of any underflow. In other embodiments, the multiplier circuits 805 and 807 perform the saturation themselves. In some embodiments, these multiplier circuits 805 and 807 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source (initial value or previous iteration result) 809. The result of each of the multiplications is added to the signed third source 809 using the adder/saturation circuitry 813.

The addition/saturation (accumulator) circuitry 813 preserves the sign of an operand when the addition results in a value that is too big. In particular, the saturation evaluation occurs on the infinite-precision result between the multi-way add and the write to the destination or the next iteration. When the accumulator 813 is floating point and the input terms are integer, the sum of products and the floating-point accumulator input value are turned into infinite-precision values (fixed-point numbers of hundreds of bits), the addition of the multiplication results to the third input is performed, and a single rounding to the actual accumulator type is performed.

Unsigned saturation means the output values are limited to the maximum unsigned number for that element width (all 1s). Signed saturation means a value is limited to be in the range between a minimum negative number and a maximum positive number for that element width (for bytes, for example, the range is from -128 (= -2^7) to 127 (= 2^7 - 1)).

The result of the addition and saturation check is stored into the signed result 815 in a packed data element position that corresponds to the packed data element position used from the signed third source 809, or passed on to the next iteration if there is one. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if it is not set, the storage does not happen.
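The signed-saturation behavior described for circuitry 813 can be sketched as follows for a 32-bit integer accumulator. This is a simplified model: the circuitry described above evaluates saturation on an infinite-precision intermediate, which a 64-bit temporary only approximates:

    #include <stdint.h>

    /* Hedged sketch of a signed saturating accumulate: the widened sum is
     * clamped to the representable range of the element width (here int32:
     * -2^31 .. 2^31 - 1). */
    static int32_t add_sat_i32(int32_t acc, int64_t sum_of_products) {
        int64_t r = (int64_t)acc + sum_of_products;
        if (r > INT32_MAX) return INT32_MAX;   /* clamp to largest positive  */
        if (r < INT32_MIN) return INT32_MIN;   /* clamp to smallest negative */
        return (int32_t)r;
    }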
Figure 9 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply-accumulate is operating on a signed source and an unsigned source, where the accumulator is 4x the input data size.

The first signed source (source 1 901) and the second unsigned source (source 2 903) each have four packed data elements. Each of these packed data elements has data such as floating-point or integer data. The third signed source (initial value or result 915) has a packed data element that stores signed data. The sizes of the first source 901 and the second source 903 are a quarter of that of the third signed source 915. For example, the first source 901 and the second source 903 could have 16-bit packed data elements (e.g., words), while the third signed source 915 could have 64-bit packed data elements (e.g., double-precision floating point or 64-bit integer).

In this illustration, only the four most significant packed data element positions of the first source 901 and the second source 903, and the most significant packed data element position of the third signed source 915, are shown. Of course, any other packed data element positions, if present, would also be processed.

As illustrated, packed data elements are processed in quadruplets. For example, the data of the most significant packed data element positions of the first source 901 and the second source 903 are multiplied using a multiplier circuit 905, the data from the next most significant packed data element positions of the first source 901 and the second source 903 are multiplied using a multiplier circuit 907, the data from the third most significant packed data element positions of the first source 901 and the second source 903 are multiplied using a multiplier circuit 909, and the data from the least significant packed data element positions of the first source 901 and the second source 903 are multiplied using a multiplier circuit 911. In some embodiments, the signed packed data elements of the first source 901 are sign extended and the unsigned packed data elements of the second source 903 are zero extended prior to the multiplications.

In some embodiments, these multiplier circuits 905-911 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source 915. The results of each of the multiplications are added using an addition circuit 913.

The result of the addition of the results of the multiplications is added to the data from the most significant packed data element position of the signed source 3 915 (using a different adder 917 or the same adder 913).

Finally, the result 919 of the second addition is either stored into the signed destination in a packed data element position that corresponds to the packed data element position used from the signed third source 915, or passed to the next iteration. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if it is not set, the storage does not happen.

Figure 10 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply-accumulate is operating on a signed source and an unsigned source, where the accumulator is 4x the input data size.

The first signed source 1001 and the second unsigned source 1003 each have four packed data elements. Each of these packed data elements stores data such as floating-point or integer data. The third signed source 1015 (initial or previous result) has a packed data element that stores signed data.
The sizes of the first source and the second source are a quarter of that of the third signed source 1015 (initial or previous result). For example, the first and second sources could have 16-bit packed data elements (e.g., words), while the third signed source 1015 (initial or previous result) could have 64-bit packed data elements (e.g., double-precision floating point or 64-bit integer).

In this illustration, the four most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003, and the most significant packed data element position of the third signed source 1015, are shown. Of course, any other packed data element positions, if present, would also be processed.

As illustrated, packed data elements are processed in quadruplets. For example, the data of the most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 are multiplied using a multiplier circuit 1005, the data from the next most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 are multiplied using a multiplier circuit 1007, the data from the third most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 are multiplied using a multiplier circuit 1009, and the data from the least significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 are multiplied using a multiplier circuit 1011. In some embodiments, the signed packed data elements of the first signed source 1001 are sign extended and the unsigned packed data elements of the second unsigned source 1003 are zero extended prior to the multiplications.

In some embodiments, these multiplier circuits 1005-1011 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the third signed source 1015 (initial or previous result). The result of the addition of the results of the multiplications is added to the data from the most significant packed data element position of the third signed source 1015 (initial or previous result) using adder/saturation circuitry 1013.

The addition/saturation (accumulator) circuitry 1013 preserves the sign of an operand when the addition results in a value that is too big or too small for signed saturation. In particular, the saturation evaluation occurs on the infinite-precision result between the multi-way add and the write to the destination. When the accumulator 1013 is floating point and the input terms are integer, the sum of products and the floating-point accumulator input value are turned into infinite-precision values (fixed-point numbers of hundreds of bits), the addition of the multiplication results to the third input is performed, and a single rounding to the actual accumulator type is performed.

The result 1019 of the addition and saturation check is either stored into the signed destination in a packed data element position that corresponds to the packed data element position used from the third signed source 1015 (initial or previous result), or passed to the next iteration. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if it is not set, the storage does not happen.
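Similarly, the quadruplet dataflow of Figures 9 and 10 reduces to four widened products summed into a 4x-size accumulator. This hedged sketch uses 16-bit inputs and a 64-bit integer accumulator, and omits the saturation step:

    #include <stdint.h>

    /* Sketch of the Figure 9/10 quadruplet dataflow for one destination
     * position: four signed x unsigned 16-bit products (inputs sign- and
     * zero-extended, respectively) are summed and then accumulated into a
     * 64-bit element. */
    static int64_t fma4_step(const int16_t src1[4], const uint16_t src2[4],
                             int64_t acc) {
        int64_t sum = 0;
        for (int i = 0; i < 4; ++i)               /* multipliers 1005..1011 */
            sum += (int64_t)src1[i] * (int64_t)src2[i];
        return acc + sum;        /* adder/saturation 1013, saturation omitted */
    }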
Figure 11 illustrates power-of-two sized SIMD implementations in which the accumulators use input sizes that are larger than the inputs to the multipliers, according to an embodiment. Note that the sources and the accumulator values (to the multipliers) may be signed or unsigned values. For an accumulator having 2x the input size (in other words, the accumulator input value is twice the size of the packed data elements of the sources), table 1101 illustrates different configurations. For byte-sized sources, the accumulator uses word or half-precision floating-point (HPFP) values that are 16 bits in size. For word-sized sources, the accumulator uses 32-bit integer or single-precision floating-point (SPFP) values that are 32 bits in size. For SPFP or 32-bit integer sized sources, the accumulator uses 64-bit integer or double-precision floating-point (DPFP) values that are 64 bits in size.

For an accumulator having 4x the input size (in other words, the accumulator input value is four times the size of the packed data elements of the sources), table 1103 illustrates different configurations. For byte-sized sources, the accumulator uses 32-bit integer or single-precision floating-point (SPFP) values that are 32 bits in size. In some embodiments, for word-sized sources, the accumulator uses 64-bit integer or double-precision floating-point (DPFP) values that are 64 bits in size.

For an accumulator having 8x the input size (in other words, the accumulator input value is eight times the size of the packed data elements of the sources), table 1105 illustrates a configuration. For byte-sized sources, the accumulator uses 64-bit integers.

As hinted at earlier, the matrix operations circuitry may be included in a core, or may be an external accelerator. Figure 12 illustrates an embodiment of a system utilizing matrix operations circuitry. In this illustration, multiple entities are coupled with a ring interconnect 1245.

A plurality of cores, core 0 1201, core 1 1203, core 2 1205, and core N 1207, provide non-slice-based instruction support. In some embodiments, matrix operations circuitry 1251 is provided in core 1203, while in other embodiments matrix operations circuitry 1211 and 1213 are accessible on the ring interconnect 1245.

Additionally, one or more memory controllers 1223-1225 are provided to communicate with memories 1233 and 1231 on behalf of the cores and/or matrix operations circuitry.

Figure 13 illustrates an embodiment of a processor core pipeline supporting matrix operations using slices. Branch prediction and decode circuitry 1303 performs branch prediction of instructions from instructions stored in instruction storage 1301, decoding of those instructions, and/or both. For example, instructions detailed herein may be stored in the instruction storage. In some implementations, separate circuitry is used for branch prediction, and in some embodiments at least some instructions are decoded into one or more micro-operations, micro-code entry points, micro-instructions, other instructions, or other control signals using micro-code 1305. The branch prediction and decode circuitry 1303 may be implemented using various different mechanisms.
Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), etc.

The branch prediction and decode circuitry 1303 is coupled to allocate/rename 1307 circuitry, which, in some embodiments, is coupled to scheduler circuitry 1309. In some embodiments, these circuits provide register renaming, register allocation, and/or scheduling functionality by performing one or more of: 1) renaming logical operand values to physical operand values (e.g., a register alias table in some embodiments), 2) allocating status bits and flags to the decoded instruction, and 3) scheduling the decoded instruction for execution on execution circuitry out of an instruction pool (e.g., using a reservation station in some embodiments).

The scheduler circuitry 1309 represents any number of different schedulers, including reservation stations, a central instruction window, etc. The scheduler circuitry 1309 is coupled to, or includes, physical register file(s) 1315. Each of the physical register file(s) 1315 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), slices, etc. In one embodiment, the physical register file(s) 1315 comprise vector register circuitry, writemask register circuitry, and scalar register circuitry. These register circuits provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file(s) 1315 are overlapped by retirement circuitry 1317 to illustrate the various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement circuitry 1317 and the physical register file(s) 1315 are coupled to the execution circuitry 1311.

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor may also include separate instruction and data cache units and a shared L2 cache unit, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

The execution circuitry 1311 is a set of one or more execution units, including scalar circuitry 1321, vector/SIMD circuitry 1323, and matrix operations circuitry 1327, as well as memory access circuitry 1325 to access cache 1313. The execution circuits perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit, or multiple execution units that all perform all functions.
The scalar circuitry 1321 performs scalar operations, the vector/SIMD circuitry 1323 performs vector/SIMD operations, and the matrix operations circuitry 1327 performs the matrix (slice) operations detailed herein.

By way of example, an exemplary register-renaming, out-of-order issue/execution core architecture may implement a pipeline as follows: 1) an instruction fetch circuit performs the fetch and length decode stages; 2) the branch prediction and decode circuitry 1303 performs the decode stage; 3) the allocate/rename 1307 circuitry performs the allocation stage and the renaming stage; 4) the scheduler circuitry 1309 performs the schedule stage; 5) the physical register file(s) (coupled to, or included in, the scheduler circuitry 1309 and the allocate/rename 1307 circuitry) and the memory unit perform the register read/memory read stage, and the execution circuitry 1311 performs the execute stage; 6) the memory unit and the physical register file(s) unit(s) perform the write back/memory write stage; 7) various units may be involved in the exception handling stage; and 8) a retirement unit and the physical register file(s) unit(s) perform the commit stage.

The core may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions), the MIPS instruction set of MIPS Technologies of Sunnyvale, CA, and the ARM instruction set of ARM Holdings (with optional additional extensions such as NEON)), including the instruction(s) described herein. In one embodiment, the core 1390 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding and simultaneous multithreading thereafter, such as in hyperthreading technology).

Figure 14 illustrates an embodiment of a processor core pipeline supporting matrix operations using slices. Branch prediction and decode circuitry 1403 performs branch prediction of instructions from instructions stored in instruction storage 1401, decoding of those instructions, and/or both. For example, instructions detailed herein may be stored in the instruction storage. In some implementations, separate circuitry is used for branch prediction, and in some embodiments at least some instructions are decoded into one or more micro-operations, micro-code entry points, micro-instructions, other instructions, or other control signals using micro-code 1405. The branch prediction and decode circuitry 1403 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), etc.

The branch prediction and decode circuitry 1403 is coupled to allocate/rename 1407 circuitry, which, in some embodiments, is coupled to scheduler circuitry 1409.
In some embodiments, these circuits provide register renaming, register allocation, and/or scheduling functionality by performing one or more of: 1) renaming logical operand values to physical operand values (e.g., a register alias table in some embodiments), 2) allocating status bits and flags to the decoded instruction, and 3) scheduling the decoded instruction for execution on execution circuitry out of an instruction pool (e.g., using a reservation station in some embodiments).

The scheduler circuitry 1409 represents any number of different schedulers, including reservation stations, a central instruction window, etc. The scheduler circuitry 1409 is coupled to, or includes, physical register file(s) 1415. Each of the physical register file(s) 1415 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), slices, etc. In one embodiment, the physical register file(s) 1415 comprise vector register circuitry, writemask register circuitry, and scalar register circuitry. These register circuits provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file(s) 1415 are overlapped by retirement circuitry 1417 to illustrate the various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement circuitry 1417 and the physical register file(s) 1415 are coupled to the execution circuitry 1411.

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor may also include separate instruction and data cache units and a shared L2 cache unit, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

The execution circuitry 1411 includes a set of one or more execution circuits 1427 and a set of one or more memory access circuits 1425 to access cache 1413.
The execution circuits 1427 perform the matrix (slice) operations detailed herein.

By way of example, an exemplary register-renaming, out-of-order issue/execution core architecture may implement a pipeline as follows: 1) an instruction fetch circuit performs the fetch and length decode stages; 2) the branch prediction and decode circuitry 1403 performs the decode stage; 3) the allocate/rename 1407 circuitry performs the allocation stage and the renaming stage; 4) the scheduler circuitry 1409 performs the schedule stage; 5) the physical register file(s) (coupled to, or included in, the scheduler circuitry 1409 and the allocate/rename 1407 circuitry) and the memory unit perform the register read/memory read stage, and the execution circuitry 1411 performs the execute stage; 6) the memory unit and the physical register file(s) unit(s) perform the write back/memory write stage; 7) various units may be involved in the exception handling stage; and 8) a retirement unit and the physical register file(s) unit(s) perform the commit stage.

The core may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions), the MIPS instruction set of MIPS Technologies of Sunnyvale, CA, and the ARM instruction set of ARM Holdings (with optional additional extensions such as NEON)), including the instruction(s) described herein. In one embodiment, the core 1490 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding and simultaneous multithreading thereafter, such as in hyperthreading technology).

Layout

Throughout this description, data is expressed using a row-major data layout. Column-major users should translate the terms according to their orientation. Figure 15 illustrates an example of a matrix expressed in row-major format and column-major format. As shown, matrix A is a 2x3 matrix. When this matrix is stored in row-major format, the data elements of a row are consecutive. When this matrix is stored in column-major format, the data elements of a column are consecutive. It is a well-known property of matrices that A^T * B^T = (BA)^T, where superscript T means transpose.
Reading column-major data as row-major data results in a matrix that looks like the transpose matrix.

In some embodiments, row-major semantics are utilized in the hardware, and column-major data is to swap the operand order with the result being transposes of the matrix, but for subsequent column-major reads from memory it is the correct, non-transposed matrix.

For example, if there are two column-major matrices to multiply:

    a b       g i k     ag+bh ai+bj ak+bl
    c d   *   h j l  =  cg+dh ci+dj ck+dl
    e f                 eg+fh ei+fj ek+fl
    (3x2)     (2x3)     (3x3)

the input matrices would be stored in linear memory (column-major) as:

    a c e b d f

and

    g h i j k l.

Reading those matrices as row-major with dimensions 2x3 and 3x2, they would appear as:

    a c e        g h
    b d f   and  i j
                 k l

Swapping the order and matrix multiplying:

    g h              ag+bh cg+dh eg+fh
    i j  *  a c e =  ai+bj ci+dj ei+fj
    k l     b d f    ak+bl ck+dl ek+fl

The transpose matrix is out, and it can then be stored in row-major order:

    ag+bh cg+dh eg+fh ai+bj ci+dj ei+fj ak+bl ck+dl ek+fl

and used in subsequent column-major computations, where it is the correct, untransposed matrix:

    ag+bh ai+bj ak+bl
    cg+dh ci+dj ck+dl
    eg+fh ei+fj ek+fl

Exemplary Usage

Figure 16 illustrates an example of the usage of matrices (e.g., slices). In this example, matrix C 1601 includes two slices, matrix A 1603 includes one slice, and matrix B 1605 includes two slices. This figure shows an example of the inner loop of an algorithm that computes a matrix multiplication. In this example, two result slices, tmm0 and tmm1, from matrix C 1601 are used to accumulate the intermediate results. One slice from matrix A 1603 (tmm2) is reused twice as it is multiplied by two slices from matrix B 1605. The pointers are used to load a new A matrix (slice) and two new B matrices (e.g., slices) from the directions indicated by the arrows (a C sketch of this inner loop appears below). An outer loop, not shown, adjusts the pointers for the C slices.

The exemplary code as shown includes usage of a slice configuration instruction, and is executed to configure slice usage, load the slices, run a loop to process the slices, store the slices to memory, and release the slice usage.

Figure 17 illustrates an embodiment of the usage of matrices (e.g., slices). At 1701, slice usage is configured. For example, a TILECONFIG instruction is executed to configure slice usage, including setting the number of rows and columns per slice. Typically, at 1703, at least one matrix (slice) is loaded from memory. At 1705, at least one matrix (slice) operation is performed using the matrices (e.g., slices). At 1707, at least one matrix (slice) is stored out to memory, and, at 1709, a context switch can occur.
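A hedged C model of the Figure 16 inner loop follows; slices are modeled as plain arrays with illustrative dimensions, whereas the real slices would be loaded, operated on, and stored with the slice instructions described above:

    /* Illustrative slice dimensions (not from the source document). */
    enum { M = 16, N = 16, K = 16 };

    /* One dot-product-and-accumulate step on slice-sized operands. */
    static void dp_accumulate(float acc[M][N],
                              const float a[M][K], const float b[K][N]) {
        for (int m = 0; m < M; ++m)
            for (int n = 0; n < N; ++n)
                for (int k = 0; k < K; ++k)
                    acc[m][n] += a[m][k] * b[k][n];
    }

    /* Inner loop: two result slices (tmm0, tmm1) accumulate C, and one
     * slice of A (tmm2) is reused against two slices of B. */
    static void inner_loop(float tmm0[M][N], float tmm1[M][N],
                           const float tmm2[M][K],
                           const float b0[K][N], const float b1[K][N]) {
        dp_accumulate(tmm0, tmm2, b0);   /* tmm2 reused ... */
        dp_accumulate(tmm1, tmm2, b1);   /* ... a second time */
    }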
Exemplary Configuration

Slice Configuration Hardware Support

As discussed above, slice usage typically needs to be configured prior to use. For example, full usage of all rows and columns may not be needed. Not only does not configuring these rows and columns save power in some embodiments, but the configuration can also be used to determine whether an operation will generate an error. For example, a matrix multiplication of the form (N x M) * (L x N) will typically not work if M and L are not the same.

Prior to using matrices utilizing slices, in some embodiments, slice support is to be configured. For example, how many rows and columns each slice has, the slices that are to be used, etc., are configured. A TILECONFIG instruction is an improvement to a computer itself, as it provides support to configure the computer to use a matrix accelerator (either as a part of a processor core or as an external device). In particular, an execution of the TILECONFIG instruction causes a configuration to be retrieved from memory and applied to the matrix (slice) settings within a matrix accelerator.

Slice Usage Configuration

Figure 18 illustrates support for a configuration of the usage of slices, according to an embodiment. A memory 1801 contains a slice description 1803 of the matrices (e.g., slices) to be supported.

Instruction execution resources 1811 of a processor/core 1805 store aspects of the slice description 1803 into slice configurations 1817. The slice configurations 1817 include a palette table 1813 to detail what slices for a palette are configured (the number of rows and columns in each slice) and a marking that matrix support is in use. In particular, the instruction execution resources 1811 are configured to use slices as specified by the slice configurations 1817. The instruction execution resources 1811 may also include a machine-specific register or configuration register to indicate slice usage. Additional values, such as in-use and start values, are also set. The slice configurations 1817 utilize register(s) 1819 to store slice usage and configuration information.

Figure 19 illustrates an embodiment of a description of the matrices (e.g., slices) to be supported. This is the description that would be stored upon the execution of an STTILECFG instruction. In this example, each field is a byte. In byte[0], a palette ID 1901 is stored. The palette ID is used to index a palette table 1813, which stores, per palette ID, the number of bytes in a slice and the number of bytes per row of the slices that are associated with this ID, as defined by the configuration.

Byte 1 stores a value to be stored in a "startRow" register 1903, and byte 2 stores a value to be stored in a register, startP 1905. To support restarting instructions after interrupt events such as those detailed above, the instructions store information in these registers. The startRow value indicates the row that should be used for a restart. The startP value indicates the position within the row for store operations when pairs are used and, in some embodiments, indicates the lower half of the row (in the lower slice of a pair) or the higher half of the row (in the higher slice of the pair). Generally, this position in the row (the column) is not needed.

With the exception of TILECONFIG and STTILECFG, successfully executing matrix (slice) instructions will set both startRow and startP to zero.

Any time an interrupted matrix (slice) instruction is not restarted, it is the responsibility of software to zero the startRow and startP values. For example, an unmasked floating-point exception handler might decide to finish the operation in software and change the program counter value to another instruction, usually the next instruction. In this case, the software exception handler must zero the startRow and startP values in the exception presented to it by the operating system before resuming the program. The operating system will subsequently reload those values using a restore instruction.

Byte 3 stores an indication of pairs (1 bit per slice) of slices 1907.

Bytes 16-17 store the number of rows 1913 and columns 1915 for slice 0, bytes 18-19 store the number of rows and columns for slice 1, etc. In other words, each 2-byte group specifies the number of rows and columns for a slice. If a group of 2 bytes is not used to specify slice parameters, they should have the value zero. Specifying slice parameters for more slices than the implementation limit or the palette limit results in a fault. Unconfigured slices are set to an initial state with 0 rows and 0 columns.

Finally, the configuration in memory typically ends with an ending delineation, such as all zeros for several consecutive bytes.
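Under the byte layout just described, the description could be summarized by a C struct such as the following. The field names, the 8-name count, and the treatment of bytes 4-15 as reserved are illustrative assumptions, not a normative definition:

    #include <stdint.h>

    /* Hedged sketch of the 64-byte slice description of Figure 19. */
    struct slice_description {
        uint8_t palette_id;        /* byte 0: indexes the palette table 1813 */
        uint8_t start_row;         /* byte 1: restart row (startRow)          */
        uint8_t start_p;           /* byte 2: restart position in row (startP)*/
        uint8_t pair_indicators;   /* byte 3: 1 bit per slice pair            */
        uint8_t reserved[12];      /* bytes 4-15: assumed reserved            */
        struct {
            uint8_t rows;          /* e.g., byte 16 for slice 0               */
            uint8_t cols;          /* e.g., byte 17 for slice 0               */
        } slice[8];                /* bytes 16-31, assuming 8 slice names     */
        uint8_t trailing_zeros[32];/* bytes 32-63: the ending delineation     */
    };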
Any 2-byte group not used to specify slice parameters shall have the value zero. Specifying slice parameters for more slices than the implementation limit or the palette limit results in an error. Unconfigured slices are set to an initial state with 0 rows and 0 columns.

Finally, the configuration in memory typically ends with an ending delineation, such as all zeros for several consecutive bytes.

Exemplary slice and slice configuration storage

FIGS. 20(A)-20(D) illustrate examples of register(s) 1819. FIG. 20(A) illustrates a plurality of registers 1819. As shown, each slice (TMM0 2001 ... TMMN 2003) has a separate register, with each register storing the row and column dimensions of that particular slice. StartP 2011 and StartRow 2013 are stored in separate registers. One or more status registers 2015 are set (e.g., TILES_CONFIGURED=1) to indicate that the slices are configured for use.

FIG. 20(B) illustrates a plurality of registers 1819. As shown, each slice has separate registers for its rows and for its columns. For example, TMM0 row configuration 2021, TMM0 column configuration 2023, StartP 2011, and StartRow 2013 are stored in separate registers. One or more status registers 2015 are set (e.g., TILES_CONFIGURED=1) to indicate that the slices are configured for use.

FIG. 20(C) illustrates a single register 1819. As shown, this register stores the slice configurations (rows and columns per slice) 2031, StartP 2011, and StartRow 2013 in a single register as packed data. One or more status registers 2015 are set (e.g., TILES_CONFIGURED=1) to indicate that the slices are configured for use.

FIG. 20(D) illustrates a plurality of registers 1819. As shown, a single register stores the slice configurations (rows and columns per slice) 2031. StartP and StartRow are stored in separate registers 2011 and 2013. One or more status registers 2015 are set (e.g., TILES_CONFIGURED=1) to indicate that the slices are configured for use.

Other combinations are contemplated, such as combining the start registers into a single register in which they are shown separately, and so on.

TDPBF8PS

As mentioned above, dedicated hardware for general matrix multiplication (known as GEMM) is a good option for improving the peak compute performance (and energy efficiency) of certain applications, such as deep learning. Some of these applications, including deep learning, can operate on input data elements with relatively few bits without losing accuracy, so long as the output elements have enough bits (i.e., more than the inputs).

Accordingly, the disclosed method and system perform an 8-bit floating-point matrix dot-product operation, TILEDPBF8PS (TDPBF8PS), which takes source matrices (e.g., slices) having 8-bit floating-point elements, performs dot-product multiplications, and accumulates the resulting products with a 32-bit single-precision destination.

In some embodiments, the 8-bit floating-point format is an 8-bit brain floating-point format (BF8), which is the Institute of Electrical and Electronics Engineers (IEEE) (e.g., IEEE 754 standard) half-precision binary floating-point format (IEEE FP16) with the lower half (8 LSBs) truncated. The BF8 format may include a sign field (one bit wide), an exponent field (five bits wide), and a mantissa (significand precision) field (two bits wide). In some embodiments, the mantissa (significand precision) field is assumed to have an implicit leading bit with the value one, unless the exponent field stores all zeros.
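Given the BF8 layout described above (1 sign bit, 5 exponent bits with bias 15, 2 mantissa bits, i.e., IEEE FP16 with its 8 LSBs truncated), a minimal C sketch of the conversion to single precision follows. The function name is illustrative; this is a software model, not the hardware conversion circuit.

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Convert a BF8 value (sign[7], exponent[6:2] with bias 15, mantissa[1:0])
 * to single precision. */
static float bf8_to_fp32(uint8_t v)
{
    int sign = (v >> 7) & 0x1;
    int exp  = (v >> 2) & 0x1F;   /* 5-bit exponent, bias 15 */
    int man  = v & 0x3;           /* 2-bit mantissa          */
    float s = sign ? -1.0f : 1.0f;

    if (exp == 0)                 /* all-zeros exponent: no implicit one */
        return s * ldexpf((float)man / 4.0f, 1 - 15);
    if (exp == 0x1F)              /* all-ones exponent: infinity or NaN  */
        return man == 0 ? s * INFINITY : NAN;
    return s * ldexpf(1.0f + (float)man / 4.0f, exp - 15);
}

int main(void)
{
    printf("%g\n", bf8_to_fp32(0x3C)); /* exp=15, man=0 -> 1.0          */
    printf("%g\n", bf8_to_fp32(0xBD)); /* sign=1, exp=15, man=1 -> -1.25 */
    return 0;
}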
Further, the 32-bit floating-point format may include binary32 (per the IEEE 754 standard), which is sometimes referred to herein as "single precision" or "fp32", e.g., having a sign field (one bit wide), an exponent field (eight bits wide), and a mantissa (significand precision) field (twenty-four bits implicitly stored, i.e., twenty-three bits explicitly stored).

In certain embodiments, the TDPBF8PS instruction is disclosed for execution by a processor that includes fetch circuitry to fetch an instruction having fields to specify an opcode and the locations of an M by N destination matrix (slice) having single-precision elements, an M by K first source matrix (slice), and a K by N second source matrix (slice), the elements of the specified first and second source matrices each comprising a quadruple of 8-bit floating-point values: a first 8-bit floating-point value (e.g., quadruple index zero of the (0,1,2,3) quadruple in each element of the sources in FIG. 21A), a second 8-bit floating-point value (e.g., quadruple index one of the (0,1,2,3) quadruple in each element of the sources in FIG. 21A), a third 8-bit floating-point value (e.g., quadruple index two of the (0,1,2,3) quadruple in each element of the sources in FIG. 21A), and a fourth 8-bit floating-point value (e.g., quadruple index three of the (0,1,2,3) quadruple in each element of the sources in FIG. 21A); wherein the opcode is to indicate that execution circuitry is to: for each element (e.g., each of the M x N elements) of the specified destination matrix (e.g., slice), convert the K quadruples of 8-bit floating-point values from row M of the specified first source matrix (e.g., slice) and the K corresponding quadruples of 8-bit floating-point values from column N of the specified second source matrix (e.g., slice) into single-precision values, multiply the K pairs of converted first values from the two specified source matrices (e.g., slices) to generate K first products, multiply the K pairs of converted second values from the two specified source matrices (e.g., slices) to generate K second products, multiply the K pairs of converted third values from the two specified source matrices (e.g., slices) to generate K third products, multiply the K pairs of converted fourth values from the two specified source matrices (e.g., slices) to generate K fourth products, accumulate the first products with the second products to generate a first accumulated sum, separately accumulate the third products with the fourth products to generate a second accumulated sum, and add the first accumulated sum and the second accumulated sum as a final accumulated sum to be added to the previous contents of element (M,N).

In certain embodiments, the TDPBF8PS instruction is disclosed for execution by a processor that includes fetch circuitry to fetch the instruction, the instruction having a field to specify an opcode that indicates the execution circuitry is to cause: for each element of the first source matrix and the corresponding element of the second source matrix, a conversion of the 8-bit floating-point values into single-precision values, a multiplication together of the converted single-precision values from the first values of the quadruples to generate a first result, a multiplication together of the converted single-precision values from the second values of the quadruples to generate a second result, a multiplication together of the converted single-precision values from the third values of the quadruples to generate a third result, a multiplication together of the converted single-precision values from the fourth values of the quadruples to generate a fourth result, and an accumulation of the first, second, third, and fourth results with the previous contents of the corresponding element of the destination matrix.

In some embodiments, the processor will also include other supporting hardware, such as decode circuitry to decode the fetched instruction, and execution circuitry to respond to the decoded instruction as specified by the opcode, for example, execution circuitry that causes a matrix operations accelerator (e.g., matrix operations accelerator 307 in FIG. 3) to perform one or more (e.g., all) of the actions of the TDPBF8PS instruction.

FIG. 21A is a block diagram illustrating accelerated matrix multiplication using the TDPBF8PS instruction, according to some embodiments. As shown, instruction 2101 includes fields to specify an opcode 2102 (e.g., TDPBF8PS) and the location 2104 of an M by N destination matrix (e.g., slice) having single-precision elements, the location 2106 of an M by K first source matrix (e.g., slice), and the location 2108 of a K by N second source matrix (e.g., slice), the specified source matrices having elements that each comprise a quadruple of 8-bit (e.g., BF8) floating-point values. The format of the TDPBF8PS instruction in accordance with some embodiments is further illustrated and described with reference to at least FIGS. 24, 25A-25B, and 26A-26D.

Here, the size of the specified first source matrix (e.g., slice) 2112A is M=4 by K=3. The size of the specified second source matrix (e.g., slice) 2112B is K=3 by N=5.
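Before walking through the FIG. 21A example element by element, the per-element computation just described can be sketched in C. This is a software model under stated assumptions: the packing of a quadruple into a 32-bit container with index 0 in the low byte is an assumption (the text does not specify the byte order), and the names are illustrative.

#include <stdint.h>
#include <math.h>

static float bf8_to_fp32(uint8_t v) /* as in the earlier sketch */
{
    int sign = (v >> 7) & 1, exp = (v >> 2) & 0x1F, man = v & 3;
    float s = sign ? -1.0f : 1.0f;
    if (exp == 0)    return s * ldexpf((float)man / 4.0f, -14);
    if (exp == 0x1F) return man ? NAN : s * INFINITY;
    return s * ldexpf(1.0f + (float)man / 4.0f, exp - 15);
}

/* C[m][n] += sum over k and over the 4 quadruple positions of
 * A[m][k].i * B[k][n].i, with the pairwise partial sums described above. */
void tdpbf8ps_element(float *c_mn, const uint32_t *a_row,  /* K elements */
                      const uint32_t *b_col, int K)        /* K elements */
{
    float sum01 = 0.0f, sum23 = 0.0f;
    for (int k = 0; k < K; ++k) {
        float a[4], b[4];
        for (int i = 0; i < 4; ++i) {  /* convert both quadruples */
            a[i] = bf8_to_fp32((uint8_t)(a_row[k] >> (8 * i)));
            b[i] = bf8_to_fp32((uint8_t)(b_col[k] >> (8 * i)));
        }
        sum01 += a[0] * b[0] + a[1] * b[1];  /* first + second products */
        sum23 += a[2] * b[2] + a[3] * b[3];  /* third + fourth products */
    }
    *c_mn += sum01 + sum23;  /* final accumulated sum added to C(m,n) */
}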
For illustrative purposes, K, M, and N are shown as having different values, but in other embodiments they may be equal.

In one embodiment of operation, the processor 2100 responds to the opcode 2102 (TDPBF8PS) by: for each element (M,N) of the specified destination matrix (e.g., slice) 2122, using conversion circuit 2116A to convert the values of the K quadruples from row M of the specified first source matrix (e.g., slice) 2112A to single precision, and using conversion circuit 2116B to convert the values of the K quadruples from column N of the specified second source matrix (e.g., slice) 2112B to single precision, for example to binary32 single-precision floating point as specified by IEEE 754. The processor 2100 is then to use multiplication circuit 2118 to multiply the K pairs of converted first quadruple values together, the K pairs of converted second quadruple values together, the K pairs of converted third quadruple values together, and the K pairs of converted fourth quadruple values together, and to use accumulation circuit 2120 to accumulate the products (4*K products) with the previous contents of element (M,N).

Execution of the TDPBF8PS instruction is illustrated here setting the destination element at matrix (e.g., slice) location C1,0, e.g., (row, column) index (1,0). In FIG. 21A, ".0" refers to the first value of a quadruple, ".1" refers to the second value of a quadruple, ".2" refers to the third value of a quadruple, and ".3" refers to the fourth value of a quadruple, e.g., such that A1,0.0 is the first value of the quadruple of values stored in element A1,0, and B2,4.3 is the fourth value of the quadruple of values stored in element B2,4. In some embodiments, the processor 2100 is to use conversion circuits 2116A and 2116B to convert the 8-bit floating-point values of the K (=3) quadruples from row M (=1) of the specified first source matrix (e.g., slice) 2112A and the 8-bit floating-point values of the K (=3) quadruples from column N (=0) of the specified second source matrix (e.g., slice) 2112B to single precision. In some embodiments, the processor 2100 (e.g., a matrix operations circuit, e.g., as part of a matrix operations accelerator) is then to use multiplication circuit 2118 to multiply the K pairs of converted first quadruple values from the two specified source matrices (e.g., slices) to generate K first products, multiply the K pairs of converted second quadruple values from the specified source matrices (e.g., slices) to generate K second products, multiply the K pairs of converted third quadruple values from the specified source matrices (e.g., slices) to generate K third products, and multiply the K pairs of converted fourth quadruple values from the specified source matrices (e.g., slices) to generate K fourth products; these products are then accumulated separately using accumulation circuit 2120, including a first sum of the K first products, a second sum of the K second products, a third sum of the K third products, and a fourth sum of the K fourth products, and the first sum, the second sum, the third sum, and the fourth sum are accumulated with the previous contents of the element, for example, shown in this example as the FP32 value from element C(1,0).

In some embodiments, the processor 2100 is to use the accumulation circuit 2120 to accumulate the K first products and the K second products, and separately accumulate the K third products and the K fourth products, and then add the two accumulated sums and the previous contents of element (M,N).

As shown, three arrows originate from each of the specified first and second source matrices (e.g., slices) to indicate that the conversions and multiplications occur in parallel. In some embodiments, the processor responds to the decoded instruction by generating results and storing them into every element of the specified destination matrix (e.g., slice) in parallel. In some embodiments, new values are generated and stored into the destination one row at a time or one column at a time.

The disclosed embodiments improve upon alternative approaches by allowing software to execute the TDPBF8PS instruction with reduced source element sizes, which allows use of less memory space and less memory bandwidth, and improves the peak compute performance (and energy efficiency) of certain applications. Some applications, such as deep learning, can operate on input data elements with relatively few bits without losing accuracy, as long as the output elements have enough bits (e.g., more than the inputs).

FIG. 21B is a block diagram illustrating an example execution circuit 2114 (e.g., a matrix operations circuit) for executing the TDPBF8PS instruction, according to some embodiments. The example execution circuit 2114 includes a datapath of a first data width (e.g., 8 bits wide, e.g., according to the BF8 format) and a datapath of a second, wider data width (e.g., 32 bits wide, e.g., according to a full-precision format), e.g., where row 2132 is 8 bits wide (e.g., BF8) and row 2134 is 32 bits wide (e.g., float32). For example, circuits that convert BF8 to full precision (BF8 to F32), full-precision multiply (F32 MUL) circuits, and full-precision add (F32 ADD) circuits are used. In some embodiments, the full-precision add circuit (e.g., adder 2136) further includes an alignment shifter, an adder, a normalization subtractor, an incrementer, and/or exponent logic. Systems and methods for executing the TDPBF8PS instruction are further illustrated and described with reference to at least FIGS.
22A-22B, 23, and 28A-28B.

In some embodiments, the TDPBF8PS instruction is part of a slice (e.g., AMX) architectural extension to an ISA that includes two-dimensional (2D) registers (e.g., where each slice register is identified as a single "slice register" (e.g., a single pointer to a single slice register), e.g., as opposed to a vector (e.g., ZMM, YMM, or XMM) register), and the ISA may include separate instructions for loading/storing 2D blocks (e.g., strided sets of contiguous locations) from memory, instructions for performing matrix-matrix multiplication on three registers (e.g., matrix C_updated = matrix A x matrix B + matrix C_previous), and/or instructions for performing element-wise arithmetic operations on two (or three) source slices. In one embodiment, one or more source matrices are first loaded (e.g., via a host processor) into a cache (e.g., a first-level (L1) data cache), and then loaded from the cache (e.g., via execution of a slice load instruction) into slice registers (e.g., of a matrix operations accelerator), e.g., via coherent memory interface 303 in FIG. 3.

Exemplary execution

FIG. 22A is pseudocode illustrating an example execution of the TDPBF8PS instruction in accordance with some embodiments. As shown, instruction 2201 includes an opcode 2202 (e.g., TDPBF8PS) and the location 2204 of an M by N destination matrix (e.g., slice) having single-precision elements, the location 2206 of an M by K first source matrix (e.g., slice), and the location 2208 of a K by N second source matrix (e.g., slice), the specified source matrices having elements comprising quadruples of 8-bit floating-point values. The opcode 2202 (TDPBF8PS) instructs the processor to do the following, as shown in pseudocode 2200: for each element (M,N) of the specified destination matrix (e.g., slice), convert the 8-bit values of the K quadruples of row M of the specified first source matrix (e.g., slice) and the 8-bit values of the K quadruples of column N of the specified second source matrix (e.g., slice) to single precision, multiply the K pairs of converted first quadruple values from the two specified source matrices (e.g., slices) to generate first products, multiply the K pairs of converted second quadruple values from the two specified source matrices (e.g., slices) to generate second products, multiply the K pairs of converted third quadruple values from the two specified source matrices (e.g., slices) to generate third products, multiply the K pairs of converted fourth quadruple values from the two specified source matrices (e.g., slices) to generate fourth products, accumulate the first products with the second products to generate a first accumulated sum, separately accumulate the third products with the fourth products to generate a second accumulated sum, and add the first accumulated sum and the second accumulated sum as a final accumulated sum to be added to the previous contents of element (M,N). In other embodiments, the multiplications are performed before the conversions.

In one embodiment, a machine-specific register (MSR) (e.g., as one of the registers 1315 in FIG. 13) (e.g., the MXCSR register that stores control and/or status information for the SSE registers) is read (e.g., as part of the execution of the instruction), for example, to determine exception information. DAZ may refer to a "denormals are zero" control (e.g., in the MSR).
In some embodiments, 8-bit precision (e.g., BF8) values can be handled as having denormal/subnormal values.

In one embodiment, an architectural machine-specific register (MSR), the MXCSR register (e.g., the MXCSR register that stores control and/or status information for the SSE registers), is not read (e.g., as part of execution of the instruction) (e.g., not checked and/or not updated). In some embodiments, exception information for an instruction is implicit in the instruction, e.g., implying DAZ=1 for BF8 operations (e.g., without consulting MXCSR) (e.g., for TDPBF8PS instructions), and/or implying DAZ=0 for non-BF8 operations (e.g., without needing to consult MXCSR).

In operation, M, K, and N may be specified in one or more of several ways: as operands to the TDPBF8PS instruction (e.g., shown here as slice registers "t"), as suffixes or prefixes to the specified opcode (an asterisk is used herein as shorthand to refer to those optional suffixes and prefixes), as part of an immediate provided with the instruction (e.g., K, M, and N each specified as different (e.g., 8-bit) portions of an (e.g., 32-bit) immediate value), as part of control registers programmed by software (e.g., XTILECONFIG is a register loaded by a matrix accelerator configuration instruction (such as TILECFG) or an XRSTORE* instruction, and stored by a matrix save instruction (such as XSAVE*)), or even as an architectural default.

The instruction 2201 further specifies a destination matrix (e.g., slice) location 2204, a first source matrix (e.g., slice) location 2206, and a second source matrix (e.g., slice) location 2208. Each specified matrix (e.g., slice) location may point to any of a memory location, a set of vector registers, and a set of slice registers.

FIG. 22B is pseudocode 2220 of exemplary helper functions for use with the TDPBF8PS instruction, according to some embodiments. As shown, pseudocode 2220 defines the convert_bf8_to_fp32() function, the write_row_and_zero() function, the zero_upper_rows() function, and the zero_tileconfig_start() function, all of which may be used by the TDPBF8PS pseudocode of FIG. 22A.

Execution of the TDPBF8PS instruction is further illustrated and described with reference to FIGS. 21, 22A-22B, 23, 28A-28B, and 29A-29B. Example formats of the TDPBF8PS instruction are further illustrated and described with reference to FIGS. 24-26D.

Exemplary method(s) of execution

FIG. 23 is a block flow diagram illustrating a processor responding to the TDPBF8PS instruction. As shown in flowchart 2300, at 2301, the processor is to fetch, using fetch circuitry, an instruction having fields to specify an opcode and the locations of an M by N destination matrix having single-precision elements, an M by K first source matrix, and a K by N second source matrix, the specified source matrices having elements comprising quadruples of 8-bit floating-point values.

In some embodiments, matrices (e.g., slices) are stored using the processor's physical register file (or, for example, using one or more two-dimensional (2D) (e.g., AMX) slice registers, e.g., slice registers formed from data buffer 305 in FIG.
3), e.g., in embodiments where these slice registers are separate from any scalar and/or vector (e.g., one-dimensional array) registers). For example, when a matrix (e.g., slice) is a collection of vector registers, the quadrupling of 8-bit floating-point format values in the source allows efficient use of vector registers of the same type, whether 128-bit xmm registers, 256-bit ymm registers, or 512-bit zmm registers, because the destination elements are four times as wide as the source elements. Such efficient use can also be achieved when matrices are stored in (e.g., AMX) slice registers. In other embodiments, a single source vector with 8-bit floating-point elements is converted to 32-bit elements that are stored in a destination vector, a quarter of the width of the source vector.

In some embodiments, the specified opcode is to instruct the execution circuitry to: for each element (M,N) of the specified destination matrix, convert the elements of the K quadruples from row M of the specified first source matrix and the elements of the K quadruples from column N of the specified second source matrix to single precision, multiply the K pairs of converted single-precision elements from the first quadruple positions of the two specified source matrices (e.g., slices) to generate K first products, multiply the K pairs of converted single-precision elements from the second quadruple positions of the two specified source matrices (e.g., slices) to generate K second products, multiply the K pairs of converted elements from the third quadruple positions of the two specified source matrices (e.g., slices) to generate K third products, multiply the K pairs of converted elements from the fourth quadruple positions of the two specified source matrices (e.g., slices) to generate K fourth products, then accumulate the first products with the second products to generate a first accumulated sum, separately accumulate the third products with the fourth products to generate a second accumulated sum, and add the first accumulated sum and the second accumulated sum to generate a final accumulated sum to be added to the previous contents of element (M,N).

In some embodiments, the specified opcode is to instruct the execution circuitry to cause the following operations: for each element of the first source matrix and the corresponding element of the second source matrix, convert the 8-bit floating-point values to single-precision values, multiply the converted single-precision values from the first values of the quadruples together to generate a first result, multiply the converted single-precision values from the second values of the quadruples together to generate a second result, multiply the converted single-precision values from the third values of the quadruples together to generate a third result, multiply the converted single-precision values from the fourth values of the quadruples together to generate a fourth result, and accumulate the first, second, third, and fourth results with the previous contents of the corresponding element of the destination matrix.

At 2303, the processor is to decode, using decode circuitry, the fetched instruction. For example, the fetched TDPBF8PS instruction is decoded by decode circuitry such as that detailed herein.
In the context of the illustrated system, the decode circuitry may be the decode circuitry illustrated and described at least with reference to FIGS. 13, 14, and 28A-28B.

At 2305, execution of the decoded instruction is scheduled (as needed), which is optional (as indicated by its dashed border) insofar as it may occur at a different time, or not at all. At 2307, the processor is to respond, using execution circuitry, to the decoded instruction as specified by the opcode.

In some embodiments, at 2309, the instruction is committed or retired, which is optional (as indicated by its dashed border) insofar as it may occur at a different time, or not at all.

Example execution circuits are further illustrated and described with reference to FIGS. 3-14. In some embodiments, the execution circuitry causes execution by a matrix operations accelerator (e.g., offloads to a matrix operations accelerator), such as the accelerator illustrated and described as accelerator 307 (FIG. 3). In some embodiments, the execution circuitry is a matrix operations circuit, such as matrix operations circuits 405 (FIG. 4), 505 (FIG. 5), or 1213 (FIG. 12) and 1327 (FIG. 13).

Exemplary instruction format(s)

FIG. 24 is a block diagram illustrating a format of the TDPBF8PS instruction, in accordance with some embodiments. As shown, TDPBF8PS instruction 2400 includes a field for specifying an opcode 2402 (TDPBF8PS*), which indicates that the processor is to: for each element (M,N) of the specified destination matrix, convert the elements of the K quadruples of row M of the specified first source matrix and the elements of the K quadruples of column N of the specified second source matrix to single precision, multiply the K pairs of converted first quadruple values from the first quadruple positions of the two specified source matrices (e.g., slices) to generate K first products, multiply the K pairs of converted second quadruple values from the second quadruple positions of the two specified source matrices (e.g., slices) to generate K second products, multiply the K pairs of converted third quadruple values from the third quadruple positions of the two specified source matrices (e.g., slices) to generate K third products, multiply the K pairs of converted values from the fourth quadruple positions of the two specified source matrices (e.g., slices) to generate K fourth products, accumulate the first products with the second products to generate a first accumulated sum, separately accumulate the third products with the fourth products to generate a second accumulated sum, and add the first accumulated sum and the second accumulated sum as a final accumulated sum to be added to the previous contents of element (M,N).

The instruction 2400 further includes a destination matrix (e.g., slice) location 2404, a first source matrix (e.g., slice) location 2406, and a second source matrix (e.g., slice) location 2408. Each of the specified source and destination matrix locations may be in any of a memory location, a set of vector registers, and a set of (e.g., AMX) slice registers.

The TDPBF8PS instruction 2400 further includes several optional parameters for controlling processor behavior, including source element format 2410, k (mask control) and/or z (zero control) 2412, M 2414, and N 2416. In some embodiments, M and N are each any of 4, 8, 16, and 32 (e.g., any number for M or N, which can be 32, 64, or greater).
In some embodiments, M and N are each an integer greater than or equal to 4.

Opcode 2402 is shown to include an asterisk to convey that additional prefixes and/or suffixes may be added to specify instruction behavior. One or more of the instruction modifiers 2410, 2412, 2414, and 2416 may be specified using a prefix or suffix of the opcode 2402, e.g., a prefix and/or suffix indicating that a matrix operations accelerator (e.g., including an FMA grid) is to be used to execute the instruction.

In some embodiments, one or more of the optional instruction modifiers 2410, 2412, 2414, and 2416 are encoded in an immediate field (not shown) optionally included in the instruction 2400. In some embodiments, one or more of the optional instruction modifiers 2410, 2412, 2414, and 2416 are specified via a configuration/status register (e.g., XTILECONFIG).

In some embodiments, the instruction modifiers include a mask {k} (e.g., a write mask) and/or a zeroing control {z} 2412, e.g., where the mask {k} is used to control which destination elements are to be updated, and/or the zeroing control {z} is used to control whether zeroing (or merging) is applied to masked destination elements.

When any one or more of the optional modifiers 2410, 2412, 2414, or 2416 are not specified by an instruction, they may use default values or implicit parameters, e.g., parameters inherited from other parts of the slice architecture.

Detailed exemplary systems, processors, and emulation

Detailed herein are examples of hardware, software, etc. for executing the instructions described above. For example, the description below details aspects of instruction execution, including various pipeline stages such as fetch, decode, schedule, execute, retire, and the like.

Instruction sets

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or sub-formats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD (addition) instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been introduced and/or released (see, e.g.,
Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2014; and see Intel® Advanced Vector Extensions Programming Reference, October 2014).

Exemplary instruction formats

Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic vector friendly instruction format

A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

FIGS. 25A-25B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to an embodiment. FIG. 25A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to an embodiment, while FIG. 25B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to an embodiment. Specifically, class A and class B instruction templates are defined for the generic vector friendly instruction format 2500, both of which include no memory access 2505 instruction templates and memory access 2520 instruction templates. The term "generic" in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.

Embodiments will be described in which the vector friendly instruction format supports the following: a 64-byte vector operand length (or size) with 32-bit (4-byte) or 64-bit (8-byte) data element widths (or sizes) (and thus, a 64-byte vector consists of either 16 doubleword-size elements or, alternatively, 8 quadword-size elements); a 64-byte vector operand length (or size) with 16-bit (2-byte) or 8-bit (1-byte) data element widths (or sizes); a 32-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); and a 16-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes). However, alternative embodiments may support larger, smaller, and/or different vector operand sizes (e.g., 256-byte vector operands) with larger, smaller, or different data element widths (e.g., 128-bit (16-byte) data element widths).

The class A instruction templates in FIG. 25A include: 1) within the no memory access 2505 instruction templates, there are shown a no memory access, full round control type operation 2510 instruction template and a no memory access, data transform type operation 2515 instruction template; and 2) within the memory access 2520 instruction templates, there are shown a memory access, temporal 2525 instruction template and a memory access, non-temporal 2530 instruction template.
The class B instruction templates in FIG. 25B include: 1) within the no memory access 2505 instruction templates, there are shown a no memory access, write mask control, partial round control type operation 2512 instruction template and a no memory access, write mask control, VSIZE type operation 2517 instruction template; and 2) within the memory access 2520 instruction templates, there is shown a memory access, write mask control 2527 instruction template.

The generic vector friendly instruction format 2500 includes the following fields listed below in the order illustrated in FIGS. 25A-25B.

Format field 2540 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 2542 - its content distinguishes different base operations.

Register index field 2544 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment up to three source registers and one destination register are supported, alternative embodiments may support more or fewer source and destination registers (e.g., may support up to two sources, where one of these sources also acts as the destination; may support up to three sources, where one of these sources also acts as the destination; may support up to two sources and one destination).

Modifier field 2546 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 2505 instruction templates and memory access 2520 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

Extended operation field 2550 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment, this field is divided into a class field 2568, an alpha field 2552, and a beta field 2554.
The extended operation field 2550 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.

Scale field 2560 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).

Displacement field 2562A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).

Displacement factor field 2562B (note that the juxtaposition of displacement field 2562A directly over displacement factor field 2562B indicates that one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N), where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operands' total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 2574 (described later herein) and the data manipulation field 2554C. The displacement field 2562A and the displacement factor field 2562B are optional in the sense that they are not used for the no memory access 2505 instruction templates and/or different embodiments may implement only one or neither of the two.

Data element width field 2564 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions). This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

Write mask field 2570 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and the extended operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging-writemasking and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the extended operation); in another embodiment, the old value of each element of the destination is preserved where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the extended operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 2570 allows for partial vector operations, including loads, stores, arithmetic, logical, and the like.
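A small software model of the merging versus zeroing write mask semantics described above (illustrative only; in hardware the mask is applied per data element position as part of the operation itself):

#include <stdint.h>
#include <stddef.h>

/* Apply a result to dst under a write mask: unmasked elements are
 * updated; masked elements are either zeroed or left unchanged (merged). */
void apply_writemask(float *dst, const float *result, uint64_t mask,
                     size_t n, int zeroing)
{
    for (size_t i = 0; i < n; ++i) {
        if ((mask >> i) & 1)
            dst[i] = result[i];      /* element is updated              */
        else if (zeroing)
            dst[i] = 0.0f;           /* zeroing: masked element -> 0    */
        /* merging: masked element keeps its old value (no write)       */
    }
}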
While embodiments are described in which the write mask field's 2570 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 2570 content indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the write mask field's 2570 content to directly specify the masking to be performed.

Immediate field 2572 - its content allows for the specification of an immediate value. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support immediates and it is not present in instructions that do not use an immediate.

Class field 2568 - its content distinguishes between different classes of instructions. With reference to FIGS. 25A-25B, the content of this field selects between class A and class B instructions. In FIGS. 25A-25B, rounded corner squares are used to indicate that a specific value is present in a field (e.g., class A 2568A and class B 2568B for the class field 2568, respectively, in FIGS. 25A-25B).

Class A instruction templates

In the case of the non-memory access 2505 instruction templates of class A, the alpha field 2552 is interpreted as an RS field 2552A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 2552A.1 and data transform 2552A.2 are respectively specified for the no memory access, round type operation 2510 and the no memory access, data transform type operation 2515 instruction templates), while the beta field 2554 distinguishes which of the operations of the specified type is to be performed. In the no memory access 2505 instruction templates, the scale field 2560, the displacement field 2562A, and the displacement factor field 2562B are not present.

Instruction templates without memory access - full round control type operations

In the no memory access, full round control type operation 2510 instruction template, the beta field 2554 is interpreted as a round control field 2554A, whose content(s) provide static rounding. While in the described embodiments the round control field 2554A includes a suppress all floating-point exceptions (SAE) field 2556 and a round operation control field 2558, alternative embodiments may encode both of these concepts into the same field or have only one or the other of these concepts/fields (e.g., may have only the round operation control field 2558).

SAE field 2556 - its content distinguishes whether or not to disable exception event reporting; when the SAE field's 2556 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler.

Round operation control field 2558 - its content distinguishes which one of a group of rounding operations to perform (e.g., round-up, round-down, round-towards-zero, and round-to-nearest). Thus, the round operation control field 2558 allows for the changing of the rounding mode on a per-instruction basis.
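The four rounding operations named above correspond to the four standard C99 rounding modes, which a C environment exposes through its own floating-point control state; a short demonstration, assuming an implementation that supports all four modes:

#include <fenv.h>
#include <math.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

int main(void)
{
    const double x = 2.5;
    const int modes[] = { FE_UPWARD, FE_DOWNWARD, FE_TOWARDZERO, FE_TONEAREST };
    const char *names[] = { "round-up", "round-down",
                            "round-towards-zero", "round-to-nearest" };
    for (int i = 0; i < 4; ++i) {
        fesetround(modes[i]);               /* set the rounding mode   */
        printf("%s: rint(%.1f) = %.1f\n", names[i], x, rint(x));
    }
    return 0;
}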
In one embodiment where a processor includes a control register for specifying rounding modes, the round operation control field's 2550 content overrides that register value.

Instruction templates without memory access - data transform type operations

In the no memory access, data transform type operation 2515 instruction template, the beta field 2554 is interpreted as a data transform field 2554B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

In the case of a memory access 2520 instruction template of class A, the alpha field 2552 is interpreted as an eviction hint field 2552B, whose content distinguishes which one of the eviction hints is to be used (in FIG. 25A, temporal 2552B.1 and non-temporal 2552B.2 are respectively specified for the memory access, temporal 2525 instruction template and the memory access, non-temporal 2530 instruction template), while the beta field 2554 is interpreted as a data manipulation field 2554C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation, broadcast, up conversion of a source, and down conversion of a destination). The memory access 2520 instruction templates include the scale field 2560, and optionally the displacement field 2562A or the displacement factor field 2562B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data-element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.

Memory access instruction templates - temporal

Temporal data is data likely to be reused soon enough to benefit from caching. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory access instruction templates - non-temporal

Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the first-level cache, and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Class B instruction templates

In the case of the instruction templates of class B, the alpha field 2552 is interpreted as a write mask control (Z) field 2552C, whose content distinguishes whether the write masking controlled by the write mask field 2570 should be merging or zeroing.

In the case of the non-memory access 2505 instruction templates of class B, part of the beta field 2554 is interpreted as an RL field 2557A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 2557A.1 and vector length (VSIZE) 2557A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 2512 instruction template and the no memory access, write mask control, VSIZE type operation 2517 instruction template), while the rest of the beta field 2554 distinguishes which of the operations of the specified type is to be performed.
In the no memory access 2505 instruction templates, the scale field 2560, the displacement field 2562A, and the displacement factor field 2562B are not present.

In the no memory access, write mask control, partial round control type operation 2512 instruction template, the rest of the beta field 2554 is interpreted as a round operation field 2559A, and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler).

Round operation control field 2559A - just as with the round operation control field 2558, its content distinguishes which one of a group of rounding operations to perform (e.g., round-up, round-down, round-towards-zero, and round-to-nearest). Thus, the round operation control field 2559A allows for the changing of the rounding mode on a per-instruction basis. In one embodiment where a processor includes a control register for specifying rounding modes, the round operation control field's 2550 content overrides that register value.

In the no memory access, write mask control, VSIZE type operation 2517 instruction template, the rest of the beta field 2554 is interpreted as a vector length field 2559B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128 bytes, 256 bytes, or 512 bytes).

In the case of a memory access 2520 instruction template of class B, part of the beta field 2554 is interpreted as a broadcast field 2557B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 2554 is interpreted as the vector length field 2559B. The memory access 2520 instruction templates include the scale field 2560, and optionally the displacement field 2562A or the displacement factor field 2562B.

With regard to the generic vector friendly instruction format 2500, a full opcode field 2574 is shown including the format field 2540, the base operation field 2542, and the data element width field 2564. While one embodiment is shown where the full opcode field 2574 includes all of these fields, in embodiments that do not support all of them, the full opcode field 2574 includes less than all of these fields. The full opcode field 2574 provides the operation code (opcode).

The extended operation field 2550, the data element width field 2564, and the write mask field 2570 allow these features to be specified on a per-instruction basis in the generic vector friendly instruction format.

The combination of the write mask field and the data element width field creates typed instructions in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high-performance general-purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both general-purpose computing and graphics and/or scientific (throughput) computing may support both (of course, a core that has some mix of templates and instructions from both classes, but not all templates and instructions from both classes, is within the purview of this disclosure).
Likewise, a single processor may include multiple cores, all of which support the same class, or in which different cores support different classes. For instance, in a processor with separate graphics and general-purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general-purpose cores may be high-performance general-purpose cores with out-of-order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general-purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments. Programs written in a high-level language would be put (e.g., just-in-time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor currently executing the code.

Exemplary specific vector friendly instruction format

FIG. 26A is a block diagram illustrating an exemplary specific vector friendly instruction format according to an embodiment. FIG. 26A shows a specific vector friendly instruction format 2600 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 2600 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from FIGS. 25A-25B into which the fields from FIG. 26A map are illustrated.

It should be understood that, although embodiments are described with reference to the specific vector friendly instruction format 2600 in the context of the generic vector friendly instruction format 2500 for illustrative purposes, the disclosure is not limited to the specific vector friendly instruction format 2600 except where otherwise stated. For example, the generic vector friendly instruction format 2500 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 2600 is shown as having fields of specific sizes.
As a specific example, while the data element width field 2564 is illustrated as a one-bit field in the specific vector friendly instruction format 2600, the disclosure is not so limited (that is, the generic vector friendly instruction format 2500 contemplates other sizes of the data element width field 2564).

The specific vector friendly instruction format 2600 includes the following fields listed below in the order illustrated in FIG. 26A.

EVEX prefix 2602 (bytes 0-3) - is encoded in a four-byte form.

Format field 2540 (EVEX byte 0, bits [7:0]) - the first byte (EVEX byte 0) is the format field 2540, and it contains 0x62 (in one embodiment, the unique value used for distinguishing the vector friendly instruction format).

The second through fourth bytes (EVEX bytes 1-3) include a number of bit fields providing specific capability.

REX field 2605 (EVEX byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX byte 1, bit [7]-R), an EVEX.X bit field (EVEX byte 1, bit [6]-X), and an EVEX.B bit field (EVEX byte 1, bit [5]-B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B and ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 2510 - this is the first part of the REX' field 2510 and is the EVEX.R' bit field (EVEX byte 1, bit [4]-R') that is used to encode either the upper 16 or the lower 16 of the extended 32 register set. In one embodiment, this bit, along with others as indicated below, is stored in bit-inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but which does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 2615 (EVEX byte 1, bits [3:0]-mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

Data element width field 2564 (EVEX byte 2, bit [7]-W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the data type (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 2620 (EVEX byte 2, bits [6:3]-vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form, and is valid for instructions with two or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form, for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved, and it should contain 1111b. Thus, the EVEX.vvvv field 2620 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form.
Depending on the instruction, an additional different EVEX bit field is used to extend the specifier size to 32 registers.

EVEX.U 2568 class field (EVEX byte 2, bit [2]-U) - if EVEX.U=0, it indicates class A or EVEX.U0; if EVEX.U=1, it indicates class B or EVEX.U1.

Prefix encoding field 2625 (EVEX byte 2, bits [1:0]-pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field, and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy format and the EVEX format of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency, but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2-bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 2552 (EVEX byte 3, bit [7]-EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with alpha) - as previously described, this field is context specific.

Beta field 2554 (EVEX byte 3, bits [6:4]-SSS; also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.

REX' field 2510 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX byte 3, bit [3]-V') that may be used to encode either the upper 16 or the lower 16 of the extended 32 register set. This bit is stored in bit-inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Write mask field 2570 (EVEX byte 3, bits [2:0]-kkk) - its content specifies the index of a register in the write mask registers, as previously described. In one embodiment, the specific value EVEX.kkk=000 has special behavior implying that no write mask is used for the particular instruction (this may be implemented in a variety of ways, including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).

Real opcode field 2630 (byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M field 2640 (byte 5) includes MOD field 2642, Reg field 2644, and R/M field 2646. As previously described, the MOD field's 2642 content distinguishes between memory access and non-memory access operations. The role of the Reg field 2644 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not being used to encode any instruction operand. The role of the R/M field 2646 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) byte (byte 6) - as previously described, the content of SIB 2650 is used for memory address generation.
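A sketch of the effective-address arithmetic described above (2^scale * index + base + displacement), here also folding in the disp8*N compressed displacement that the displacement factor field described below provides; the function name is illustrative:

#include <stdint.h>

/* Effective address: base + (index << scale) + disp8 * N, where N is the
 * size of the memory operand access (see the disp8*N discussion below). */
static uint64_t effective_address(uint64_t base, uint64_t index,
                                  unsigned scale /* 0..3 */,
                                  int8_t disp8, uint64_t n /* operand size */)
{
    return base + (index << scale) + (uint64_t)((int64_t)disp8 * (int64_t)n);
}

/* Example: base=0x1000, index=4, scale=3, disp8=2, N=64
 * -> 0x1000 + 32 + 128 = 0x10A0. */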
SIB.xxx 2654 and SIB.bbb 2656 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 2562A (Bytes 7-10) - when the MOD field 2642 contains 10, bytes 7-10 are the displacement field 2562A, and it works the same as the legacy 32-bit displacement (disp32), working at byte granularity.

Displacement factor field 2562B (Byte 7) - when the MOD field 2642 contains 01, byte 7 is the displacement factor field 2562B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and +127 byte offsets; in terms of 64-byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 2562B is a reinterpretation of disp8; when the displacement factor field 2562B is used, the actual displacement is determined by multiplying the content of the displacement factor field by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte is used for the displacement, but with a much greater range). Such a compressed displacement assumes that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset need not be encoded. In other words, the displacement factor field 2562B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 2562B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths, but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). (A sketch of the disp8*N arithmetic appears below, after the field descriptions.) The immediate field 2572 operates as previously described.

Full Opcode Field

Figure 26B is a block diagram illustrating the fields of the dedicated vector friendly instruction format 2600 that make up the full opcode field 2574, according to one embodiment. Specifically, the full opcode field 2574 includes the format field 2540, the base operation field 2542, and the data element width (W) field 2564. The base operation field 2542 includes the prefix encoding field 2625, the opcode map field 2615, and the real opcode field 2630.

Register Index Field

Figure 26C is a block diagram illustrating the fields of the dedicated vector friendly instruction format 2600 that make up the register index field 2544, according to one embodiment. Specifically, the register index field 2544 includes the REX 2605 field, the REX' 2610 field, the MODR/M.reg field 2644, the MODR/M.r/m field 2646, the VVVV field 2620, the xxx field 2654, and the bbb field 2656.

Extended Operation Field

Figure 26D is a block diagram illustrating the fields of the dedicated vector friendly instruction format 2600 that make up the extended operation field 2550, according to one embodiment. When the class (U) field 2568 contains 0, it indicates EVEX.U0 (class A 2568A); when it contains 1, it indicates EVEX.U1 (class B 2568B).
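The disp8*N compression described above can be sketched as follows; this is an illustrative model of the encode/decode arithmetic only (function names assumed), under the stated assumption that the effective displacement is an exact multiple of the access granularity N.

```python
def compress_disp8n(byte_offset: int, n: int) -> int:
    """Encode a byte offset as a disp8*N compressed displacement.

    N is the size of the memory operand access; the effective displacement
    must be a multiple of N, so the redundant low-order bits need not be
    encoded and a single signed byte covers -128*N .. +127*N.
    """
    if byte_offset % n != 0:
        raise ValueError("offset is not a multiple of the access granularity")
    disp8 = byte_offset // n
    if not -128 <= disp8 <= 127:
        raise ValueError("offset out of disp8*N range; disp32 would be needed")
    return disp8

def expand_disp8n(disp8: int, n: int) -> int:
    """Hardware-side interpretation: scale the stored byte by N."""
    return disp8 * n

# A 64-byte access (N = 64): offset 4096 fits in one byte as disp8 = 64.
assert expand_disp8n(compress_disp8n(4096, 64), 64) == 4096
```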
When U = 0 and the MOD field 2642 contains 11 (indicating a no-memory-access operation), the alpha field 2552 (EVEX Byte 3, bit [7] - EH) is interpreted as the rs field 2552A. When the rs field 2552A contains a 1 (round 2552A.1), the beta field 2554 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as the round control field 2554A. The round control field 2554A includes a one-bit SAE field 2556 and a two-bit round operation field 2558. When the rs field 2552A contains a 0 (data transform 2552A.2), the beta field 2554 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as a three-bit data transform field 2554B. When U = 0 and the MOD field 2642 contains 00, 01, or 10 (indicating a memory access operation), the alpha field 2552 (EVEX Byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 2552B, and the beta field 2554 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as a three-bit data manipulation field 2554C.

When U = 1, the alpha field 2552 (EVEX Byte 3, bit [7] - EH) is interpreted as the writemask control (Z) field 2552C. When U = 1 and the MOD field 2642 contains 11 (indicating a no-memory-access operation), part of the beta field 2554 (EVEX Byte 3, bit [4] - S0) is interpreted as the RL field 2557A; when it contains a 1 (round 2557A.1), the rest of the beta field 2554 (EVEX Byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 2559A, while when the RL field 2557A contains a 0 (VSIZE 2557A.2), the rest of the beta field 2554 (EVEX Byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 2559B (EVEX Byte 3, bits [6-5] - L1-0). When U = 1 and the MOD field 2642 contains 00, 01, or 10 (indicating a memory access operation), the beta field 2554 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as the vector length field 2559B (EVEX Byte 3, bits [6-5] - L1-0) and the broadcast field 2557B (EVEX Byte 3, bit [4] - B).

Exemplary Register Architecture

Figure 27 is a block diagram of a register architecture 2700 according to one embodiment. In the embodiment illustrated, there are 32 vector registers 2710 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower-order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower-order 128 bits of the lower 16 zmm registers (the lower-order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The dedicated vector friendly instruction format 2600 operates on these overlaid register files, as illustrated in the table below. In other words, the vector length field 2559B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length, and instruction templates without the vector length field 2559B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the dedicated vector friendly instruction format 2600 operate on packed or scalar single/double-precision floating-point data and packed or scalar integer data. Scalar operations are operations performed on the lowest-order data element position in a zmm/ymm/xmm register; the higher-order data element positions are either left the same as they were prior to the instruction or zeroed, depending on the embodiment.

Writemask registers 2715 - in the embodiment illustrated, there are 8 writemask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the writemask registers 2715 are 16 bits in size.
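A behavioral sketch of per-element write masking with the writemask registers described above follows; merging versus zeroing behavior is shown, with names and the list representation chosen for illustration only.

```python
def apply_writemask(dest, result, k, zeroing=False):
    """Per-element write masking: mask bit i selects whether element i
    receives the new result; unselected elements are either merged
    (kept as-is) or zeroed, depending on the masking mode."""
    return [r if (k >> i) & 1 else (0 if zeroing else d)
            for i, (d, r) in enumerate(zip(dest, result))]

dest   = [10, 20, 30, 40]
result = [1, 2, 3, 4]
print(apply_writemask(dest, result, k=0b0101))                 # [1, 20, 3, 40]
print(apply_writemask(dest, result, k=0b0101, zeroing=True))   # [1, 0, 3, 0]
```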
As mentioned earlier, in one embodiment the vector mask register k0 cannot be used as a writemask; when the encoding that would normally indicate k0 is used for a writemask, it selects a hardwired writemask of 0xFFFF, effectively disabling write masking for that instruction.

General purpose registers 2725 - in the embodiment illustrated, there are sixteen 64-bit general purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 2745, on which is aliased the MMX packed integer flat register file 2750 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating-point data, while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments may use wider or narrower registers. Additionally, alternative embodiments may use more, fewer, or different register files and registers.

Exemplary Core Architectures, Processors, and Computer Architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high-performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or science (throughput). Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as the CPU; 3) the coprocessor on the same die as the CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architectures

In-Order and Out-of-Order Core Diagrams

Figure 28A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to various embodiments. Figure 28B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor, according to various embodiments. The solid-line boxes in Figures 28A-28B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed-line boxes illustrates the register renaming, out-of-order issue/execution pipeline and core.
Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Figure 28A, a processor pipeline 2800 includes a fetch stage 2802, a length decode stage 2804, a decode stage 2806, an allocation stage 2808, a renaming stage 2810, a scheduling (also known as dispatch or issue) stage 2812, a register read/memory read stage 2814, an execute stage 2816, a write back/memory write stage 2818, an exception handling stage 2822, and a commit stage 2824.

Figure 28B shows a processor core 2890 including a front end unit 2830 coupled to an execution engine unit 2850, and both are coupled to a memory unit 2870. The core 2890 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 2890 may be a special purpose core, such as, for example, a network or communication core, a compression engine, a coprocessor core, a general purpose computing graphics processing unit (GPGPU) core, a graphics core, or the like.

The front end unit 2830 includes a branch prediction unit 2832 coupled to an instruction cache unit 2834, which is coupled to an instruction translation lookaside buffer (TLB) 2836, which is coupled to an instruction fetch unit 2838, which is coupled to a decode unit 2840. The decode unit 2840 (or decoder) may decode instructions and generate as output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 2840 may be implemented using a variety of different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), and the like. In one embodiment, the core 2890 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 2840 or otherwise within the front end unit 2830). The decode unit 2840 is coupled to a rename/allocator unit 2852 in the execution engine unit 2850.

The execution engine unit 2850 includes the rename/allocator unit 2852 coupled to a retirement unit 2854 and a set 2856 of one or more scheduler units. The scheduler unit(s) 2856 represents any number of different schedulers, including reservation stations, central instruction windows, and the like. The scheduler unit(s) 2856 is coupled to the physical register file unit(s) 2858. Each of the physical register file unit(s) 2858 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), and so forth. In one embodiment, the physical register file unit(s) 2858 comprises a vector register unit, a writemask register unit, and a scalar register unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers.
The physical register file unit(s) 2858 is overlapped by the retirement unit 2854 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 2854 and the physical register file unit(s) 2858 are coupled to the execution cluster(s) 2860. The execution cluster(s) 2860 includes a set 2862 of one or more execution units and a set 2864 of one or more memory access units. The execution units 2862 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 2856, physical register file unit(s) 2858, and execution cluster(s) 2860 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline, each having its own scheduler unit, physical register file unit(s), and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 2864). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set 2864 of memory access units is coupled to the memory unit 2870, which includes a data TLB unit 2872 coupled to a data cache unit 2874 coupled to a second level (L2) cache unit 2876. In one exemplary embodiment, the memory access units 2864 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 2872 in the memory unit 2870. The instruction cache unit 2834 is further coupled to the second level (L2) cache unit 2876 in the memory unit 2870.
The L2 cache unit 2876 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 2800 as follows: 1) the instruction fetch unit 2838 performs the fetch stage 2802 and the length decode stage 2804; 2) the decode unit 2840 performs the decode stage 2806; 3) the rename/allocator unit 2852 performs the allocation stage 2808 and the renaming stage 2810; 4) the scheduler unit(s) 2856 performs the scheduling stage 2812; 5) the physical register file unit(s) 2858 and the memory unit 2870 perform the register read/memory read stage 2814, and the execution cluster 2860 performs the execute stage 2816; 6) the memory unit 2870 and the physical register file unit(s) 2858 perform the write back/memory write stage 2818; 7) various units may be involved in the exception handling stage 2822; and 8) the retirement unit 2854 and the physical register file unit(s) 2858 perform the commit stage 2824.

The core 2890 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 2890 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding and simultaneous multithreading thereafter, such as in hyper-threading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 2834/2874 and a shared L2 cache unit 2876, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a first level (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the caches may be external to the core and/or the processor.

Specific Exemplary In-Order Core Architecture

Figures 29A-29B illustrate a block diagram of a more specific exemplary in-order core architecture, whose core would be one of several logic blocks in a chip (including other cores of the same type and/or different types). Depending on the application, the logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic.

Figure 29A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 2902 and with its local subset 2904 of the second level (L2) cache, according to an embodiment.
In one embodiment, an instruction decoder 2900 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 2906 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design) a scalar unit 2908 and a vector unit 2910 use separate register sets (respectively, scalar registers 2912 and vector registers 2914) and data transferred between them is written to memory and then read back in from the first level (L1) cache 2906, alternative embodiments may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset 2904 of the L2 cache is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset 2904 of the L2 cache. Data read by a processor core is stored in its L2 cache subset 2904 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 2904 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bidirectional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring data path is 1012 bits wide per direction. (A toy model of this per-core subset behavior is sketched following the Figure 30 overview below.)

Figure 29B is an expanded view of part of the processor core in Figure 29A, according to an embodiment. Figure 29B includes the L1 data cache 2906A, part of the L1 cache 2906, as well as more detail regarding the vector unit 2910 and the vector registers 2914. Specifically, the vector unit 2910 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 2928), which executes one or more of integer, single-precision floating-point, and double-precision floating-point instructions. The VPU supports mixing of the register inputs with mixing unit 2920, numeric conversion with numeric conversion units 2922A-B, and replication of the memory input with replication unit 2924. Writemask registers 2926 allow masking of the resulting vector writes.

Figure 30 is a block diagram of a processor 3000 that may have more than one core, may have an integrated memory controller, and may have an integrated graphics device, according to an embodiment. The solid-line boxes in Figure 30 illustrate a processor 3000 with a single core 3002A, a system agent 3010, and a set 3016 of one or more bus controller units, while the optional addition of the dashed-line boxes illustrates an alternative processor 3000 with multiple cores 3002A-N, a set 3014 of one or more integrated memory controller units in the system agent unit 3010, and special purpose logic 3008.
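Returning to the per-core L2 subsets described above, the following toy model (an assumption-laden illustration, not the actual hardware protocol) captures the stated behavior: a read fills the reading core's own local subset, and a write flushes the line from the other subsets to keep shared data coherent. All class and method names are hypothetical.

```python
class LocalL2Subsets:
    """Toy model of the sharded global L2 described above, one local
    subset per core; illustration only."""

    def __init__(self, num_cores: int):
        self.subsets = [dict() for _ in range(num_cores)]

    def read(self, core: int, addr: int, memory: dict):
        # A read fills only the reading core's local subset.
        self.subsets[core][addr] = memory[addr]
        return self.subsets[core][addr]

    def write(self, core: int, addr: int, value, memory: dict):
        memory[addr] = value
        self.subsets[core][addr] = value
        # Flush the line from all other local subsets, if present.
        for i, subset in enumerate(self.subsets):
            if i != core:
                subset.pop(addr, None)

# Two cores sharing one line: a write by core 1 flushes core 0's copy.
mem = {0x100: 7}
l2 = LocalL2Subsets(num_cores=2)
l2.read(0, 0x100, mem)
l2.write(1, 0x100, 9, mem)
assert 0x100 not in l2.subsets[0] and l2.subsets[1][0x100] == 9
```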
Thus, different implementations of the processor 3000 may include: 1) a CPU where the special purpose logic 3008 is integrated graphics and/or scientific (throughput) logic (which may include one or more cores) and the cores 3002A-N are one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, or a combination of the two); 2) a coprocessor where the cores 3002A-N are a large number of special purpose cores intended primarily for graphics and/or science (throughput); and 3) a coprocessor where the cores 3002A-N are a large number of general purpose in-order cores.

Thus, the processor 3000 may be a general purpose processor, a coprocessor, or a special purpose processor, such as, for example, a network or communication processor, a compression engine, a graphics processor, a GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 3000 may be part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set 3006 of one or more shared cache units, and external memory (not shown) coupled to the set 3014 of integrated memory controller units. The set 3006 of shared cache units may include one or more mid-level caches, such as second level (L2), third level (L3), fourth level (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring-based interconnect unit 3012 interconnects the special purpose logic 3008 (integrated graphics logic is an example of, and is also referred to herein as, special purpose logic), the set 3006 of shared cache units, and the system agent unit 3010/integrated memory controller unit(s) 3014, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 3006 and the cores 3002A-N.

In some embodiments, one or more of the cores 3002A-N are capable of multithreading. The system agent 3010 includes those components coordinating and operating the cores 3002A-N. The system agent unit 3010 may include, for example, a power control unit (PCU) and a display unit. The PCU may be, or may include, the logic and components needed for regulating the power states of the cores 3002A-N and the special purpose logic 3008. The display unit is for driving one or more externally connected displays.

The cores 3002A-N may be homogeneous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 3002A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

Figures 31-34 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cellular phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to Figure 31, shown is a block diagram of a system 3100 in accordance with one embodiment of the present disclosure. The system 3100 may include one or more processors 3110, 3115, which are coupled to a controller hub 3120.
In one embodiment, the controller hub 3120 includes a graphics memory controller hub (GMCH) 3190 and an input/output hub (IOH) 3150 (which may be on separate chips); the GMCH 3190 includes memory and graphics controllers, to which are coupled a memory 3140 and a coprocessor 3145; the IOH 3150 couples input/output (I/O) devices 3160 to the GMCH 3190. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 3140 and the coprocessor 3145 are coupled directly to the processor 3110, and the controller hub 3120 is in a single chip with the IOH 3150. The memory 3140 may include matrix acceleration code 3140A, for example, storing code that when executed causes a processor to perform any method of this disclosure.

The optional nature of the additional processors 3115 is denoted in Figure 31 with dashed lines. Each processor 3110, 3115 may include one or more of the processing cores described herein and may be some version of the processor 3000.

The memory 3140 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 3120 communicates with the processor(s) 3110, 3115 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 3195.

In one embodiment, the coprocessor 3145 is a special purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 3120 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 3110, 3115 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 3110 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 3110 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 3145. Accordingly, the processor 3110 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 3145. The coprocessor(s) 3145 accepts and executes the received coprocessor instructions.

Referring now to Figure 32, shown is a block diagram of a first more specific exemplary system 3200 in accordance with an embodiment of the present disclosure. As shown in Figure 32, the multiprocessor system 3200 is a point-to-point interconnect system and includes a first processor 3270 and a second processor 3280 coupled via a point-to-point interconnect 3250. Each of the processors 3270 and 3280 may be some version of the processor 3000. In one embodiment, the processors 3270 and 3280 are respectively the processors 3110 and 3115, while the coprocessor 3238 is the coprocessor 3145. In another embodiment, the processors 3270 and 3280 are respectively the processor 3110 and the coprocessor 3145.

The processors 3270 and 3280 are shown including integrated memory controller (IMC) units 3272 and 3282, respectively. The processor 3270 also includes, as part of its bus controller unit, point-to-point (P-P) interfaces 3276 and 3278; similarly, the second processor 3280 includes P-P interfaces 3286 and 3288.
The processors 3270, 3280 may exchange information via a point-to-point (P-P) interface 3250 using P-P interface circuits 3278, 3288. As shown in Figure 32, the IMCs 3272 and 3282 couple the processors to respective memories, namely a memory 3232 and a memory 3234, which may be portions of main memory locally attached to the respective processors.

The processors 3270, 3280 may each exchange information with a chipset 3290 via individual P-P interfaces 3252, 3254 using point-to-point interface circuits 3276, 3294, 3286, 3298. The chipset 3290 may optionally exchange information with the coprocessor 3238 via a high-performance interface 3292. In one embodiment, the coprocessor 3238 is a special purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like.

A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

The chipset 3290 may be coupled to a first bus 3216 via an interface 3296. In one embodiment, the first bus 3216 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.

As shown in Figure 32, various I/O devices 3214 may be coupled to the first bus 3216, along with a bus bridge 3218 that couples the first bus 3216 to a second bus 3220. In one embodiment, one or more additional processors 3215 are coupled to the first bus 3216. In one embodiment, the second bus 3220 may be a low pin count (LPC) bus. In one embodiment, various devices may be coupled to the second bus 3220 including, for example, a keyboard and/or mouse 3222, communication devices 3227, and a storage unit 3228 such as a disk drive or other mass storage device that may include instructions/code and data 3230. Further, an audio I/O 3224 may be coupled to the second bus 3220. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 32, a system may implement a multi-drop bus or other such architecture.

Referring now to Figure 33, shown is a block diagram of a second more specific exemplary system 3300 in accordance with an embodiment of the present disclosure. Like elements in Figures 32 and 33 bear like reference numerals, and certain aspects of Figure 32 have been omitted from Figure 33 in order to avoid obscuring other aspects of Figure 33.

Figure 33 illustrates that the processors 3270, 3280 may include integrated memory and I/O control logic ("CL") 3272 and 3282, respectively. Thus, the CL 3272, 3282 include integrated memory controller units and include I/O control logic. Figure 33 illustrates that not only are the memories 3232, 3234 coupled to the CL 3272, 3282, but also that I/O devices 3314 are coupled to the control logic 3272, 3282. Legacy I/O devices 3315 are coupled to the chipset 3290.

Referring now to Figure 34, shown is a block diagram of an SoC 3400 in accordance with an embodiment of the present disclosure. Similar elements in Figure 30 bear like reference numerals. Also, dashed-line boxes are optional features on more advanced SoCs.
In Figure 34, the interconnect unit(s) 3402 is coupled to: an application processor 3410 that includes a set of one or more cores 3002A-N, which include cache units 3004A-N, and shared cache unit(s) 3006; a system agent unit 3010; bus controller unit(s) 3016; integrated memory controller unit(s) 3014; a set 3420 of one or more coprocessors, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 3430; a direct memory access (DMA) unit 3432; and a display unit 3440 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 3420 includes a special purpose processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as the code 3230 illustrated in Figure 32, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions, stored on a machine-readable medium, which represent various logic in the processor that, when read by a machine, causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs); phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines the structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (Including Binary Translation, Code Morphing, Etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

Figure 35 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, according to an embodiment. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 35 shows that a program in a high-level language 3502 may be compiled using an x86 compiler 3504 to generate x86 binary code 3506 that may be natively executed by a processor 3516 with at least one x86 instruction set core. The processor 3516 with at least one x86 instruction set core represents any processor that can perform substantially the same functions as a processor with at least one x86 instruction set core by compatibly executing or otherwise processing: 1) a substantial portion of the instruction set of the x86 instruction set core, or 2) object code versions of applications or other software targeted to run on a processor with at least one x86 instruction set core, in order to achieve substantially the same result as a processor with at least one x86 instruction set core.
The x86 compiler 3504 represents a compiler operable to generate x86 binary code 3506 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor 3516 with at least one x86 instruction set core. Similarly, Figure 35 shows that the program in the high-level language 3502 may be compiled using an alternative instruction set compiler 3508 to generate alternative instruction set binary code 3510 that may be natively executed by a processor 3514 without at least one x86 instruction set core (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 3512 is used to convert the x86 binary code 3506 into code that may be natively executed by the processor 3514 without an x86 instruction set core. This converted code is not likely to be the same as the alternative instruction set binary code 3510, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 3512 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 3506.

Further Examples

At least some embodiments of the disclosed technologies can be described in view of the following examples:

Example 1: An apparatus comprising:

fetch circuitry to fetch a single instruction having fields to specify an opcode and locations of an M by N destination matrix having single-precision elements, an M by K first source matrix, and a K by N second source matrix, the specified source matrices having elements that each comprise a quadruple of 8-bit floating-point values, the opcode to indicate that execution circuitry is to cause, for each element of the specified first source matrix and corresponding element of the specified second source matrix: a conversion of the 8-bit floating-point values to single-precision values; a multiplication together of the converted single-precision values of the first values from the quadruples to generate a first result; a multiplication together of the converted single-precision values of the second values from the quadruples to generate a second result; a multiplication together of the converted single-precision values of the third values from the quadruples to generate a third result; a multiplication together of the converted single-precision values of the fourth values from the quadruples to generate a fourth result; and an accumulation of the first result, the second result, the third result, and the fourth result with previous contents of a corresponding element of the specified destination matrix;

decode circuitry to decode the fetched instruction; and

execution circuitry to respond to the decoded instruction as specified by the opcode.

Example 2: The apparatus of Example 1, wherein the 8-bit floating-point format is specified by the opcode of the single instruction.

Example 3: The apparatus of Example 1, wherein M, N, and K are specified by the single instruction.

Example 4: The apparatus of Example 1, wherein the execution circuitry is to cause a matrix operation accelerator to perform at least the multiplications and the accumulation.

Example 5: The apparatus of Example 4, wherein
M, N, and K are specified by a configuration of the matrix operation accelerator, to be programmed by execution of a matrix accelerator configuration instruction before executing the single instruction.

Example 6: The apparatus of Example 1, wherein the execution circuitry is further to saturate execution results as necessary.

Example 7: The apparatus of Example 1, wherein the single instruction is further to specify a writemask comprising M x N bits, each bit to control whether a corresponding element of the destination matrix is to be masked.

Example 8: The apparatus of Example 1, wherein the execution circuitry is further to generate a fault when a fault condition occurs, the fault condition selectable from:

the number of rows of the destination matrix being less than the number of rows of the first source matrix; and

the number of columns of the destination matrix being less than the number of columns of the second source matrix.

Example 9: A method comprising:

fetching, by fetch circuitry of a processor, a single instruction having fields to specify an opcode and locations of an M by N destination matrix having single-precision elements, an M by K first source matrix, and a K by N second source matrix, the specified source matrices having elements that each comprise a quadruple of 8-bit floating-point values, the opcode to indicate that execution circuitry is to cause, for each element of the specified first source matrix and corresponding element of the specified second source matrix: a conversion of the 8-bit floating-point values to single-precision values; a multiplication together of the converted single-precision values of the first values from the quadruples to generate a first result; a multiplication together of the converted single-precision values of the second values from the quadruples to generate a second result; a multiplication together of the converted single-precision values of the third values from the quadruples to generate a third result; a multiplication together of the converted single-precision values of the fourth values from the quadruples to generate a fourth result; and an accumulation of the first result, the second result, the third result, and the fourth result with previous contents of a corresponding element of the specified destination matrix;

decoding, by decode circuitry of the processor, the fetched instruction into a decoded single instruction; and

executing, by the execution circuitry of the processor, the decoded single instruction according to the opcode.

Example 10: The method of Example 9, wherein the 8-bit floating-point format is specified by the opcode of the single instruction.

Example 11: The method of Example 9, wherein M, N, and K are specified by the single instruction.

Example 12: The method of Example 9, wherein the execution circuitry causes a matrix operation accelerator to perform at least the multiplications and the accumulation.

Example 13: The method of Example 12, further comprising executing, by the execution circuitry of the processor before executing the single instruction, a matrix accelerator configuration instruction to program a configuration of the matrix operation accelerator specifying M, N, and K.

Example 14: The method of Example 9, wherein the executing includes saturating execution results.

Example 15: The method of Example 9, wherein the single instruction further specifies a writemask comprising M x N bits, each bit
controlling whether a corresponding element of the destination matrix is to be masked.

Example 16: The method of Example 9, wherein the executing generates a fault when a fault condition occurs, the fault condition selectable from:

the number of rows of the destination matrix being less than the number of rows of the first source matrix; and

the number of columns of the destination matrix being less than the number of columns of the second source matrix.

Example 17: A non-transitory machine-readable medium storing program code that, when executed by a machine, causes the machine to perform a method comprising:

fetching, by fetch circuitry of a processor, a single instruction having fields to specify an opcode and locations of an M by N destination matrix having single-precision elements, an M by K first source matrix, and a K by N second source matrix, the specified source matrices having elements that each comprise a quadruple of 8-bit floating-point values, the opcode to indicate that execution circuitry is to cause, for each element of the specified first source matrix and corresponding element of the specified second source matrix: a conversion of the 8-bit floating-point values to single-precision values; a multiplication together of the converted single-precision values of the first values from the quadruples to generate a first result; a multiplication together of the converted single-precision values of the second values from the quadruples to generate a second result; a multiplication together of the converted single-precision values of the third values from the quadruples to generate a third result; a multiplication together of the converted single-precision values of the fourth values from the quadruples to generate a fourth result; and an accumulation of the first result, the second result, the third result, and the fourth result with previous contents of a corresponding element of the specified destination matrix;

decoding, by decode circuitry of the processor, the fetched instruction into a decoded single instruction; and

executing, by the execution circuitry of the processor, the decoded single instruction according to the opcode.

Example 18: The non-transitory machine-readable medium of Example 17, wherein the 8-bit floating-point format is specified by the opcode of the single instruction.

Example 19: The non-transitory machine-readable medium of Example 17, wherein M, N, and K are specified by the single instruction.

Example 20: The non-transitory machine-readable medium of Example 17, wherein the executing includes the execution circuitry causing a matrix operation accelerator to perform at least the multiplications and the accumulation.

Example 21: The non-transitory machine-readable medium of Example 20, wherein the method further comprises executing, by the execution circuitry of the processor before executing the single instruction, a matrix accelerator configuration instruction to program a configuration of the matrix operation accelerator specifying M, N, and K.

Example 22: The non-transitory machine-readable medium of Example 17, wherein the executing includes saturating execution results.

Example 23: The non-transitory machine-readable medium of Example 17, wherein the single instruction further specifies a writemask comprising M x N bits, each bit controlling whether a corresponding element of the destination matrix is to be masked.

Example 24: The non-transitory machine-readable medium of Example 17, wherein
the executing generates a fault when a fault condition occurs, the fault condition selectable from:

the number of rows of the destination matrix being less than the number of rows of the first source matrix; and

the number of columns of the destination matrix being less than the number of columns of the second source matrix. |
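The following is a behavioral sketch of the Example 1 semantics in Python. It assumes an E4M3 interpretation (1 sign bit, 4 exponent bits, 3 mantissa bits, bias 7) for the 8-bit floating-point values; per Example 2, the actual 8-bit format is selected by the opcode, so the decoder here, like all names, is an illustrative assumption rather than the claimed implementation.

```python
def fp8_e4m3_to_fp32(byte: int) -> float:
    """Decode one 8-bit floating-point value, assumed here to be E4M3
    (1 sign, 4 exponent, 3 mantissa bits, bias 7), to single precision.
    The instruction's actual 8-bit format is selected by its opcode."""
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0xF
    man = byte & 0x7
    if exp == 0:                                 # subnormal range
        return sign * (man / 8.0) * 2.0 ** -6
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - 7)

def tile_fp8_quad_matmul(C, A, B):
    """Behavioral model of Example 1: each element of the source matrices
    is a quadruple (4 bytes) of 8-bit floats; the four per-lane products
    of each element pair are accumulated into the single-precision
    destination element, on top of its previous contents."""
    M, K, N = len(A), len(A[0]), len(B[0])
    for i in range(M):
        for j in range(N):
            acc = C[i][j]
            for k in range(K):
                for q in range(4):   # the quadruple of 8-bit values
                    acc += (fp8_e4m3_to_fp32(A[i][k][q]) *
                            fp8_e4m3_to_fp32(B[k][j][q]))
            C[i][j] = acc
    return C

# 1x1x1 tiles of quadruples of 1.0 (0x38 in E4M3): four products of 1.0.
C = [[0.0]]
A = [[(0x38, 0x38, 0x38, 0x38)]]
B = [[(0x38, 0x38, 0x38, 0x38)]]
print(tile_fp8_quad_matmul(C, A, B))   # [[4.0]]
```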
Aspects include computing devices, systems, and methods for monitoring communications between components and a memory hierarchy of a computing device. The computing device may determine at least one identifying factor for identifying execution of processor-executable code. A communication between the components and the memory hierarchy of the computing device may be monitored for at least one communication factor of the same type as the at least one identifying factor. A determination may be made whether a value of the at least one identifying factor matches a value of the at least one communication factor. The computing device may determine that the processor-executable code is executed in response to determining that the value of the at least one identifying factor matches the value of the at least one communication factor. |
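A minimal sketch of the matching logic summarized in this abstract: execution is inferred when each identifying factor is matched by an observed communication factor of the same type. All names, factor types, and values below are hypothetical illustrations, not the claimed implementation.

```python
def code_is_executing(identifying_factors: dict, observed: dict) -> bool:
    """Infer whether monitored processor-executable code is running:
    every identifying factor (e.g., entry point address, callee function)
    must be matched by a same-type factor observed in the communications
    between the components and the memory hierarchy."""
    for factor_type, expected in identifying_factors.items():
        if observed.get(factor_type) != expected:
            return False
    return True

identifying = {"entry_point": 0x80000040, "callee": "decrypt_block"}
observed    = {"entry_point": 0x80000040, "callee": "decrypt_block",
               "return_value": 0}
print(code_is_executing(identifying, observed))  # True
```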
CLAIMS

What is claimed is:

1. A method for monitoring communications between components and a memory hierarchy of a computing device, comprising:

determining an identifying factor for identifying execution of a processor-executable code;

monitoring a communication factor in a communication between the components and the memory hierarchy of the computing device of a same type as the identifying factor;

determining whether a value of the identifying factor matches a value of the communication factor; and

determining that the processor-executable code is executed in response to determining that the value of the identifying factor matches the value of the communication factor.

2. The method of claim 1, wherein determining whether a value of the identifying factor matches a value of the communication factor comprises:

determining whether a value of a first identifying factor matches a value of a first communication factor;

determining whether a second identifying factor is needed to identify execution of the processor-executable code; and

determining whether a value of the second identifying factor matches a value of a second communication factor in response to determining that the second identifying factor is needed to identify execution of the processor-executable code.

3. The method of claim 2, further comprising:

determining whether another identifying factor is needed to identify execution of the processor-executable code in response to determining that the value of the second identifying factor matches the value of the second communication factor.

4. The method of claim 2, wherein:

a type of the first identifying factor and the first communication factor is different from a type of the second identifying factor and the second communication factor; and

determining whether a second identifying factor is needed to identify execution of the processor-executable code comprises determining whether the second identifying factor is needed to identify execution of the processor-executable code in response to determining that the value of the first identifying factor matches the value of the first communication factor, the value of the first communication factor not uniquely identifying the processor-executable code, or an overhead for monitoring the first communication factor exceeding a threshold.

5. The method of claim 1, further comprising determining that the processor-executable code is not executed in response to determining that the value of the identifying factor does not match the value of the communication factor.

6. The method of claim 1, wherein monitoring for a communication factor in a communication between the components and the memory hierarchy of the computing device of a same type as the identifying factor comprises:

determining whether a memory access request to a first target memory of the memory hierarchy results in a miss; and

monitoring a supplemental memory access request to a second target memory of a lower level of the memory hierarchy in response to determining that the memory access request results in a miss.

7. The method of claim 1, wherein the communication is associated with a target memory of the memory hierarchy, the method further comprising:

determining whether the communication can be monitored; and

marking the communication un-cacheable in response to determining that the communication cannot be monitored.

8.
The method of claim 1, wherein a type of the identifying factor and the communication factor comprises one of an entry point address of a target memory, an exit point address of a target memory, a callee function, a caller function, a parameter, a unique instruction, a unique pattern, a cache footprint, a local variable, and a return value.

9. A computing device, comprising a stream monitor configured with stream monitor-executable instructions to perform operations comprising:

determining an identifying factor for identifying execution of a processor-executable code;

monitoring a communication factor in a communication between components of the computing device and a memory hierarchy of the computing device of a same type as the identifying factor;

determining whether a value of the identifying factor matches a value of the communication factor; and

determining that the processor-executable code is executed in response to determining that the value of the identifying factor matches the value of the communication factor.

10. The computing device of claim 9, wherein the stream monitor is configured with stream monitor-executable instructions to perform operations such that determining whether a value of the identifying factor matches a value of the communication factor comprises:

determining whether a value of a first identifying factor matches a value of a first communication factor;

determining whether a second identifying factor is needed to identify execution of the processor-executable code; and

determining whether a value of the second identifying factor matches a value of a second communication factor in response to determining that the second identifying factor is needed to identify execution of the processor-executable code.

11. The computing device of claim 10, wherein the stream monitor is configured with stream monitor-executable instructions to perform operations further comprising:

determining whether another identifying factor is needed to identify execution of the processor-executable code in response to determining that the value of the second identifying factor matches the value of the second communication factor.

12. The computing device of claim 10, wherein:

a type of the first identifying factor and the first communication factor is different from a type of the second identifying factor and the second communication factor; and

the stream monitor is configured with stream monitor-executable instructions to perform operations such that determining whether a second identifying factor is needed to identify execution of the processor-executable code comprises determining whether the second identifying factor is needed to identify execution of the processor-executable code in response to determining that the value of the first identifying factor matches the value of the first communication factor, the value of the first communication factor not uniquely identifying the processor-executable code, or an overhead for monitoring the first communication factor exceeding a threshold.

13. The computing device of claim 9, wherein the stream monitor is configured with stream monitor-executable instructions to perform operations further comprising determining that the processor-executable code is not executed in response to determining that the value of the identifying factor does not match the value of the communication factor.

14.
The computing device of claim 9, wherein the stream monitor is configured with stream monitor-executable instructions to perform operations such that monitoring for a communication factor in a communication between the components and the memory hierarchy of the computing device of a same type as the identifying factor comprises: determining whether a memory access request to a first target memory of the memory hierarchy results in a miss; andmonitoring a supplemental memory access request to a second target memory of a lower level of the memory hierarchy in response to determining that the memory access request results in a miss.15. The computing device of claim 9, wherein the stream monitor is configured with stream monitor-executable instructions to perform operations such that thecommunication is associated with a target memory of the memory hierarchy, and wherein the processor is configured with processor-executable instructions to perform operations further comprising:determining whether the communication can be monitored; andmarking the communication un-cacheable in response to determining that the communication cannot be monitored.16. The computing device of claim 9, wherein the stream monitor is configured with stream monitor-executable instructions to perform operations such that a type of the identifying factor and the communication factor comprises one of an entry point address of a target memory, an exit point address of a target memory, a callee function, a caller function, a parameters, a unique instruction, a unique pattern, a cache footprint, a local variable, and a return value.17. A computing device, comprising:means for determining an identifying factor for identifying execution of a processor-executable code;means for monitoring a communication factor in a communication between one or more components of the computing device and a memory hierarchy of the computing device of a same type as the identifying factor;means for determining whether a value of the identifying factor matches a value of the communication factor; andmeans for determining that the processor-executable code is executed in response to determining that the value of the identifying factor matches the value of the communication factor.18. The computing device of claim 17, wherein means for determining whether a value of the identifying factor matches a value of the communication factor comprises:means for determining whether a value of a first identifying factor matches a value of a first communication factor;means for determining whether a second identifying factor is needed to identify execution of the processor-executable code; andmeans for determining whether a value of the second identifying factor matches a value of a second communication factor in response to determining that the second identifying factor is needed to identify execution of the processor-executable code.19. The computing device of claim 18, further comprising:means for determining whether another identifying factor is need to identify execution of the processor-executable code in response to determining that the value of the second identifying factor matches the value of the second communication factor.20. 
The computing device of claim 18, wherein: a type of the first identifying factor and the first communication factor is different from a type of the second identifying factor and the second communication factor; andmeans for determining whether a second identifying factor is need to identify execution of the processor-executable code comprises means for determining whether the second identifying factor is need to identify execution of the processor-executable code in response to in response to determining that the value of the first identifying factor matches the value of the first communication factor, the value of the first communication factor not uniquely identifying the processor-executable code, or an overhead for monitoring the first communication factor exceeds a threshold.21. The computing device of claim 17, wherein means for monitoring for acommunication factor in a communication between the components and the memory hierarchy of the computing device of a same type as the identifying factor comprises: means for determining whether a memory access request to a first target memory of the memory hierarchy results in a miss; andmeans for monitoring a supplemental memory access request to a second target memory of a lower level of the memory hierarchy in response to determining that the memory access request results in a miss.22. The computing device of claim 17, wherein the communication is associated with a target memory of the memory hierarchy, the computing device further comprising: means for determining whether the communication can be monitored; and means for marking the communication un-cacheable in response to determining that the communication cannot be monitored.23. The computing device of claim 17, wherein a type of the identifying factor and the communication factor comprises one of an entry point address of a target memory, an exit point address of a target memory, a callee function, a caller function, a parameters, a unique instruction, a unique pattern, a cache footprint, a local variable, and a return value.24. A non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a computing device to perform operations comprising:determining an identifying factor for identifying execution of a processor- executable code;monitoring a communication factor in a communication between components and a memory hierarchy of the computing device of a same type as the identifying factor;determining whether a value of the identifying factor matches a value of the communication factor; anddetermining that the processor-executable code is executed in response to determining that the value of the identifying factor matches the value of thecommunication factor.25. 
The non- transitory processor-readable storage medium of claim 24, wherein the stored processor-executable instructions are configured to cause a processor of a computing device to perform operations such that determining whether a value of the identifying factor matches a value of the communication factor comprises:determining whether a value of a first identifying factor matches a value of a first communication factor;determining whether a second identifying factor is needed to identify execution of the processor-executable code; anddetermining whether a value of the second identifying factor matches a value of a second communication factor in response to determining that the second identifying factor is needed to identify execution of the processor-executable code.26. The non-transitory processor-readable storage medium of claim 25, wherein the stored processor-executable instructions are configured to cause a processor of a computing device to perform operations further comprising:determining whether another identifying factor is need to identify execution of the processor-executable code in response to determining that the value of the second identifying factor matches the value of the second communication factor.27. The non-transitory processor-readable storage medium of claim 25, wherein: a type of the first identifying factor and the first communication factor is different from a type of the second identifying factor and the second communication factor; andthe stored processor-executable instructions are configured to cause a processor of a computing device to perform operations such that determining whether a second identifying factor is need to identify execution of the processor-executable code comprises determining whether the second identifying factor is need to identify execution of the processor-executable code in response to in response to determining that the value of the first identifying factor matches the value of the firstcommunication factor, the value of the first communication factor not uniquely identifying the processor-executable code, or an overhead for monitoring the first communication factor exceeds a threshold.28. The non- transitory processor-readable storage medium of claim 24, wherein the stored processor-executable instructions are configured to cause a processor of a computing device to perform operations such that monitoring for a communication factor in a communication between the components and the memory hierarchy of the computing device of a same type as the identifying factor comprises:determining whether a memory access request to a first target memory of the memory hierarchy results in a miss; andmonitoring a supplemental memory access request to a second target memory of a lower level of the memory hierarchy in response to determining that the memory access request results in a miss.29. The non-transitory processor-readable storage medium of claim 24, wherein: the stored processor-executable instructions are configured to cause a processor of a computing device to perform operations such that the communication is associated with a target memory of the memory hierarchy; andthe stored processor-executable instructions are configured to cause a processor of a computing device to perform operations further comprising:determining whether the communication can be monitored; and marking the communication un-cacheable in response to determining that the communication cannot be monitored.30. 
The non-transitory processor-readable storage medium of claim 24, wherein the stored processor-executable instructions are configured to cause a processor of a computing device to perform operations such that a type of the identifying factor and the communication factor comprises one of an entry point address of a target memory, an exit point address of a target memory, a callee function, a caller function, a parameters, a unique instruction, a unique pattern, a cache footprint, a local variable, and a return value. |
TITLE
Approximation of Execution Events Using Memory Hierarchy Monitoring
BACKGROUND
[0001] Monitoring execution events at the hardware layer and in real time allows for monitoring of application programming interface (API) calls. Monitoring API calls is useful for malware detection, malfunction detection, protecting software with hardware, and tying monitoring to hardware. The API calls may be monitored for unusual instances and patterns that may indicate that a computing device is not operating as intended. One way to monitor execution events is by monitoring central processing unit (CPU) instruction streams. The instructions executed by the CPU may occur in instances and patterns that are identified as problematic for the computing device. However, monitoring all CPU instructions to find an execution of a specific address is both complicated and inefficient. Moreover, not all computing device systems support CPU monitoring. Monitoring CPU instructions at the high frequency at which CPUs execute instructions requires additional high-speed hardware added to the CPU that is capable of monitoring execution in the CPU at the same frequency.
SUMMARY
[0002] The methods and apparatuses of various aspects provide circuits and methods for monitoring communications between components and a memory hierarchy of a computing device that may include determining an identifying factor for identifying execution of a processor-executable code, monitoring a communication factor in a communication between the components and the memory hierarchy of the computing device of a same type as the identifying factor, determining whether a value of the identifying factor matches a value of the communication factor, and determining that the processor-executable code is executed in response to determining that the value of the identifying factor matches the value of the communication factor. In an aspect, determining whether a value of the identifying factor matches a value of the communication factor may include determining whether a value of a first identifying factor matches a value of a first communication factor, determining whether a second identifying factor is needed to identify execution of the processor-executable code, and determining whether a value of the second identifying factor matches a value of a second communication factor in response to determining that the second identifying factor is needed to identify execution of the processor-executable code. In an aspect, a type of the identifying factor and the communication factor may include one of an entry point address of a target memory, an exit point address of a target memory, a callee function, a caller function, a parameter, a unique instruction, a unique pattern, a cache footprint, a local variable, and a return value.
[0003] An aspect method may further include determining whether another identifying factor is needed to identify execution of the processor-executable code in response to determining that the value of the second identifying factor matches the value of the second communication factor. In an aspect, a type of the first identifying factor and the first communication factor is different from a type of the second identifying factor and the second communication factor.
In an aspect, determining whether a second identifying factor is needed to identify execution of the processor-executable code may include determining whether the second identifying factor is needed to identify execution of the processor-executable code in response to determining that the value of the first identifying factor matches the value of the first communication factor, that the value of the first communication factor does not uniquely identify the processor-executable code, or that an overhead for monitoring the first communication factor exceeds a threshold.
[0004] An aspect method may further include determining that the processor-executable code is not executed in response to determining that the value of the identifying factor does not match the value of the communication factor.
[0005] In an aspect, monitoring for a communication factor in a communication between the components and the memory hierarchy of the computing device of a same type as the identifying factor may include determining whether a memory access request to a first target memory of the memory hierarchy results in a miss, and monitoring a supplemental memory access request to a second target memory of a lower level of the memory hierarchy in response to determining that the memory access request results in a miss.
[0006] In an aspect, the communication may be associated with a target memory of the memory hierarchy, and the method may further include determining whether the communication can be monitored and marking the communication un-cacheable in response to determining that the communication cannot be monitored.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate example aspects of the invention, and together with the general description given above and the detailed description given below, serve to explain the features of the invention.
[0008] FIG. 1 is a component block diagram illustrating a computing device suitable for implementing an aspect.
[0009] FIG. 2 is a component block diagram illustrating an example multi-core processor suitable for implementing an aspect.
[0010] FIG. 3 is a component block diagram illustrating an example system on chip (SoC) suitable for implementing an aspect.
[0011] FIG. 4 is an illustration of memory contents stored in various configurations relative to respective memory regions in a memory in accordance with an aspect.
[0012] FIG. 5 is an illustration of an interaction of memories in a memory hierarchy monitored by a stream monitor in accordance with an aspect.
[0013] FIG. 6 is a process flow diagram illustrating an aspect method for implementing an approximation of execution events using memory hierarchy monitoring.
[0014] FIG. 7 is a process flow diagram illustrating an aspect method for identifying memory contents of a monitored memory access request.
[0015] FIG. 8 is a process flow diagram illustrating an aspect method for monitoring a memory access request resulting in a hit or a miss.
[0016] FIG. 9 is a process flow diagram illustrating an aspect method for monitoring a memory access request targeting a memory that is not monitored.
[0017] FIG. 10 is a component block diagram illustrating an example mobile computing device suitable for use with the various aspects.
[0018] FIG. 11 is a component block diagram illustrating an example mobile computing device suitable for use with the various aspects.
[0019] FIG. 12 is a component block diagram illustrating an example server suitable for use with the various aspects.
DETAILED DESCRIPTION
[0020] The various aspects will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the invention or the claims.
[0021] The terms "computing device" and "mobile computing device" are used interchangeably herein to refer to any one or all of cellular telephones, smartphones, personal or mobile multi-media players, personal data assistants (PDAs), laptop computers, tablet computers, smartbooks, ultrabooks, palm-top computers, wireless electronic mail receivers, multimedia Internet enabled cellular telephones, wireless gaming controllers, and similar personal electronic devices that include a memory and a multi-core programmable processor. While the various aspects are particularly useful for mobile computing devices, such as smartphones, which have limited memory and battery resources, the aspects are generally useful in any electronic device that implements a plurality of memory devices and has a limited power budget, in which reducing the power consumption of the processors can extend the battery-operating time of the mobile computing device.
[0022] The term "system-on-chip" (SoC) is used herein to refer to a set of interconnected electronic circuits typically, but not exclusively, including a hardware core, a memory, and a communication interface. A hardware core may include a variety of different types of processors, such as a general purpose processor, a central processing unit (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), an accelerated processing unit (APU), an auxiliary processor, a single-core processor, and a multi-core processor. A hardware core may further embody other hardware and hardware combinations, such as a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), other programmable logic devices, discrete gate logic, transistor logic, performance monitoring hardware, watchdog hardware, and time references. Integrated circuits may be configured such that the components of the integrated circuit reside on a single piece of semiconductor material, such as silicon.
[0023] Aspects include methods and computing devices implementing such methods for execution event monitoring by monitoring instruction request lines to detect or recognize certain execution events. An aspect may use memory addresses as unique function identifiers in order to increase the probability of detecting execution events.
[0024] Code may be copied from a storage device or a processor to a main memory when an instruction execution function is called, and a loader may jump to the entry point of the function. The code may be copied to an instruction cache from the storage device or the processor either instead of the main memory or in addition to the main memory. The code may also be copied from the main memory to the instruction cache. No matter the manner in which the code is copied to the instruction cache, an association is created between execution events, such as calling the instruction execution function, and cache entries.
This association may be recognized at a bus level by observing instruction request lines, such as a miss instruction stream from the cache and non-cacheable accesses to the main memory. Thus, monitoring instruction request lines can provide information for monitoring of API calls triggered by specific execution events.
[0025] In an aspect, a stream monitor executing in hardware, software, or a combination of hardware and software may determine a memory address to monitor for an identified function. Based on the memory address, the stream monitor may monitor a memory region of the instruction cache and/or main memory. The memory region may be any portion of the instruction cache and/or main memory, for example a block of memory or a page of memory. The stream monitor may monitor all access requests to the memory region to identify access requests containing the identified address as an entry point.
[0026] The memory address may point to a line in the instruction cache and/or main memory containing multiple functions. Monitoring access requests for the memory address may result in false identifications of an execution event if the function accessed at the memory address is a function other than the identified function. The memory address may be used in conjunction with other identifiers for the identified function to increase the probability of successfully detecting execution events. Examples of such other identifiers may include entry point, exit point, callee functions, caller functions, parameters (e.g., non-integers and buffers), unique instructions and patterns (e.g., loops), cache footprint, local variables, and return values.
[0027] With multiple cache levels, it may be difficult to monitor streams from each of the cache levels. Instructions stored at one of the difficult-to-monitor cache levels may not be monitored until the instructions are evicted from the cache. Thus, exit events may be lost for the access requests to these difficult-to-monitor cache levels. The stream monitor may mark an access request as non-cacheable to force a cache miss and to direct the access request, and subsequent access requests for the same memory address, to the main memory so that the access requests may be monitored.
[0028] Being able to monitor access requests to the cache and/or main memory for a specified memory address reduces the amount of monitoring that would otherwise have to be done to monitor CPU instructions, because not all of the memory access requests must be monitored. Further, the frequency with which access requests to the specified memory address are made is likely lower than the processing frequency of the CPU. The memory addresses may be used in conjunction with other identifiers to identify access requests for certain functions where monitoring only the memory address may lead to false positives. Difficult-to-monitor access requests to certain levels of the cache may be altered to force the access request to the main memory in order to make the access request more visible to the stream monitor.
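For illustration only, the following minimal Python sketch captures the core idea of the preceding paragraphs: watching bus-level memory access requests for a known entry-point address instead of tracing every CPU instruction. The class and field names (BusRequest, StreamMonitor) and the watched address are assumptions made for this sketch and are not taken from the disclosure.
```python
from dataclasses import dataclass

@dataclass
class BusRequest:
    address: int          # address targeted by the memory access request
    is_instruction: bool  # True for instruction-fetch traffic on the bus

class StreamMonitor:
    def __init__(self, watched_entry_points):
        # Entry-point addresses associated with code of interest.
        self.watched = set(watched_entry_points)

    def observe(self, request: BusRequest) -> bool:
        """Return True when a watched function appears to be executing."""
        return request.is_instruction and request.address in self.watched

# Hypothetical usage: the monitor flags an instruction fetch at a watched address.
monitor = StreamMonitor(watched_entry_points={0x4000_1000})
print(monitor.observe(BusRequest(address=0x4000_1000, is_instruction=True)))  # True
```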
[0029] FIG. 1 illustrates a system including a computing device 10 in communication with a remote computing device 50 suitable for use with the various aspects. The computing device 10 may include an SoC 12 with a processor 14, a memory 16, a communication interface 18, and a storage memory interface 20. The computing device may further include a communication component 22 such as a wired or wireless modem, a storage memory 24, an antenna 26 for establishing a wireless connection 32 to a wireless network 30, and/or the network interface 28 for connecting to a wired connection 44 to the Internet 40. The processor 14 may include any of a variety of hardware cores, as well as a number of processor cores. The SoC 12 may include one or more processors 14. The computing device 10 may include more than one SoC 12, thereby increasing the number of processors 14 and processor cores. The computing device 10 may also include processors 14 that are not associated with an SoC 12. Individual processors 14 may be multi-core processors as described below with reference to FIG. 2. The processors 14 may each be configured for specific purposes that may be the same as or different from other processors 14 of the computing device 10. One or more of the processors 14 and processor cores of the same or different configurations may be grouped together.
[0030] The memory 16 of the SoC 12 may be a volatile or non-volatile memory configured for storing data and processor-executable code for access by the processor 14. The computing device 10 and/or SoC 12 may include one or more memories 16 configured for various purposes. In an aspect, one or more memories 16 may include volatile memories such as random access memory (RAM) or main memory, or cache memory. These memories 16 may be configured to temporarily hold a limited amount of data and/or processor-executable code instructions that are requested from non-volatile memory, loaded to the memories 16 from non-volatile memory in anticipation of future access based on a variety of factors, and/or intermediary processing data and/or processor-executable code instructions produced by the processor 14 and temporarily stored for future quick access without being stored in non-volatile memory.
[0031] In an aspect, the memory 16 may be configured to store processor-executable code, at least temporarily, that is loaded to the memory 16 from another memory device, such as another memory 16 or storage memory 24, for access by one or more of the processors 14. In an aspect, the processor-executable code loaded to the memory 16 may be loaded in response to execution of a function by the processor 14. Loading the processor-executable code to the memory 16 in response to execution of a function may result from a memory access request to the memory 16 that is unsuccessful, or a miss, because the requested processor-executable code is not located in the memory 16. In response to a miss, a memory access request to another memory device may be made to load the requested processor-executable code from the other memory device to the memory 16. In an aspect, loading the processor-executable code to the memory 16 in response to execution of a function may result from a memory access request to another memory device, and the processor-executable code may be loaded to the memory 16 for later access.
[0032] The communication interface 18, communication component 22, antenna 26, and/or network interface 28 may work in unison to enable the computing device 10 to communicate over a wireless network 30 via a wireless connection 32, and/or a wired network 44 with the remote computing device 50.
The wireless network 30 may be implemented using a variety of wireless communication technologies, including, for example, radio frequency spectrum used for wireless communications, to provide the computing device 10 with a connection to the Internet 40 by which it may exchange data with the remote computing device 50.
[0033] The storage memory interface 20 and the storage memory 24 may work in unison to allow the computing device 10 to store data and processor-executable code on a non-volatile storage medium. The storage memory 24 may be configured much like an aspect of the memory 16 in which the storage memory 24 may store the processor-executable code for access by one or more of the processors 14. The storage memory 24, being non-volatile, may retain the information even after the power of the computing device 10 has been shut off. When the power is turned back on and the computing device 10 reboots, the information stored on the storage memory 24 may be available to the computing device 10. The storage memory interface 20 may control access to the storage memory 24 and allow the processor 14 to read data from and write data to the storage memory 24.
[0034] Some or all of the components of the computing device 10 may be differently arranged and/or combined while still serving the necessary functions. Moreover, the computing device 10 may not be limited to one of each of the components, and multiple instances of each component may be included in various configurations of the computing device 10.
[0035] FIG. 2 illustrates a multi-core processor 14 suitable for implementing an aspect. The multi-core processor 14 may have a plurality of homogeneous or heterogeneous processor cores 200, 201, 202, 203. The processor cores 200, 201, 202, 203 may be homogeneous in that the processor cores 200, 201, 202, 203 of a single processor 14 may be configured for the same purpose and have the same or similar performance characteristics. For example, the processor 14 may be a general purpose processor, and the processor cores 200, 201, 202, 203 may be homogeneous general purpose processor cores. Alternatively, the processor 14 may be a graphics processing unit or a digital signal processor, and the processor cores 200, 201, 202, 203 may be homogeneous graphics processor cores or digital signal processor cores, respectively. For ease of reference, the terms "processor" and "processor core" may be used interchangeably herein.
[0036] The processor cores 200, 201, 202, 203 may be heterogeneous in that the processor cores 200, 201, 202, 203 of a single processor 14 may be configured for different purposes and/or have different performance characteristics. Examples of such heterogeneous processor cores may include what are known as "big.LITTLE" architectures in which slower, low-power processor cores may be coupled with more powerful and power-hungry processor cores.
[0037] In the example illustrated in FIG. 2, the multi-core processor 14 includes four processor cores 200, 201, 202, 203 (i.e., processor core 0, processor core 1, processor core 2, and processor core 3). For ease of explanation, the examples herein may refer to the four processor cores 200, 201, 202, 203 illustrated in FIG. 2. However, the four processor cores 200, 201, 202, 203 illustrated in FIG. 2 and described herein are merely provided as an example and in no way are meant to limit the various aspects to a four-core processor system.
The computing device 10, the SoC 12, or the multi-core processor 14 may individually or in combination include fewer or more than the four processor cores 200, 201, 202, 203 illustrated and described herein.
[0038] FIG. 3 illustrates an example SoC 12 including a cache memory controller 300, a cache memory 302, a main memory controller 304, a main memory 306, a stream monitor 310, and other components such as the components of the SoC 12 described above. The SoC 12 may also include or be communicatively connected to a storage memory controller 308 and the storage memory 24. Each of the cache memory 302, the main memory 306, and the storage memory 24 may be configured to store memory contents, such as data and/or processor-executable code. The memory contents may be stored at specific locations identified by physical addresses of the cache memory 302, the main memory 306, and the storage memory 24. In an aspect, memory access requests to the memories 24, 302, and 306 may be made using a virtual address that may be translated to the physical address of the respective memory 24, 302, and 306 in order to retrieve the requested memory contents of the memory access request. The storage locations of any of the data and/or processor-executable code may change with time. The physical addresses associated with the data and/or processor-executable code may be updated in a data structure mapping the locations of the data and/or processor-executable code for access by the processor 14.
[0039] The cache memory 302 may be configured to temporarily store data and/or processor-executable code for quicker access than is achievable accessing the main memory 306 or the storage memory 24. The cache memory 302 may be dedicated for use by a single processor 14 or shared between multiple processors 14, and/or subsystems (not shown) of the SoC 12. In an aspect, the cache memory 302 may be part of the processor 14, and may be dedicated for use by a single processor core or shared between multiple processor cores of the processor 14. The cache memory controller 300 may manage access to the cache memory 302 by various processors 14 and subsystems (not shown) of the SoC 12. The cache memory controller 300 may also manage memory access requests for access from the cache memory controller 300 to the main memory 306 and the storage memory 24 for retrieving memory contents that may be requested from the cache memory 302 by the processor 14, but not found in the cache memory 302, resulting in a cache miss.
[0040] The main memory 306 may be configured to temporarily store data and/or processor-executable code for quicker access than when accessing the storage memory 24. The main memory 306 may be available for access by the processors 14 of one or more SoCs 12, and/or subsystems (not shown) of the SoC 12. The main memory controller 304 may manage access to the main memory 306 by various processors 14 and subsystems (not shown) of the SoC 12 and computing device. The main memory controller 304 may also manage memory access requests for access by the main memory controller 304 to the storage memory 24 for retrieving memory contents that may be requested from the main memory 306 by the processor 14 or the cache memory controller 300, but not found in the main memory 306, resulting in a main memory miss.
[0041] The storage memory 24 may be configured to provide persistent storage of data and/or processor-executable code for retention when the computing device is not powered.
The storage memory 24 may have the capacity to store more data and/or processor-executable code than the cache memory 302 and the main memory 306, and to store data and/or processor-executable code including those not being used or predicted for use in the near future by the processors 14 or subsystems (not shown) of the SoC 12. The storage memory 24 may be available for access by the processors 14 of one or more SoCs 12, and/or subsystems (not shown) of the SoC 12. The storage memory controller 308 may manage access to the storage memory 24 by various processors 14 and subsystems (not shown) of the SoC 12 and computing device. The storage memory controller 308 may also manage memory access requests for access from the cache memory controller 300 and the main memory controller 304 to the storage memory 24 for retrieving memory contents that may be requested from the cache memory 302 or the main memory 306 by the processor 14, but not found in the cache memory 302 or the main memory 306, resulting in a cache memory miss or a main memory miss.
[0042] The stream monitor 310 may be configured to monitor communications between the processor 14, subsystems of the SoC 12 (not shown), the cache memory controller 300, the main memory controller 304, and the storage memory controller 308. The stream monitor 310 may monitor these communications by monitoring the communication activity on one or more communication buses 312 connecting the processor 14 and/or the subsystems of the SoC 12 (not shown) to each of the controllers 300, 304, and 308.
[0043] Monitoring the communications between the components of the SoC 12 may include monitoring instruction request lines used to approximate execution events. The instruction request lines may be used to identify the requested processor-executable code of a memory access request to the memories 24, 302, and 306. Monitoring all instruction request lines may be overly taxing or inefficient in some implementations because not all the requested processor-executable code may be of interest for approximating or detecting execution events. Thus, in an aspect, monitoring instruction request lines may be implemented selectively by determining processor-executable code of interest and an address in one or more of the memories 24, 302, and 306 associated with the processor-executable code.
[0044] The stream monitor 310 may monitor communications to the memories 24, 302, and 306 for accesses of memory regions containing the processor-executable code. The sizes and/or types of the memory regions may vary for different aspects, including a line, a block, a page, or any other memory unit size and/or type. In an aspect, the stream monitor 310 may monitor communications for memory access requests containing entry point addresses to the memories 24, 302, and 306. Identifying a memory access request including the entry point address may allow for identification of the processor-executable code requested for execution and identification of an execution event related to the processor-executable code. It should be understood that the entry point address is simply one example of many factors that may be used to identify the processor-executable code requested for execution.
References to the entry point address in the descriptions of the various aspects are for example purposes only and are not meant to be limiting as to the factors that may be used to identify processor-executable code requested for execution.
[0045] In an aspect, monitoring the communications between the components of the SoC 12 may include monitoring instruction request lines, and using a combination of factors, to approximate or recognize certain execution events. In various aspects, the entry point address to the memories 24, 302, and 306 may not suffice to identify the processor-executable code requested for execution. For example, the memories 24, 302, and 306 may be divided into storage units, such as the various memory regions described above. The size of a memory region may vary for the different memories 24, 302, and 306. In an aspect where a memory region contains a single processor-executable code, the entry point address indicating a certain memory region may be sufficient to use for identifying the processor-executable code. In an aspect in which a memory region contains at least part of multiple processor-executable codes, the entry point address indicating a certain memory region may not be able to uniquely identify a single processor-executable code.
[0046] As demonstrated above, a factor for identifying the processor-executable code requested for execution may not always uniquely identify the processor-executable code. This may cause ambiguity in identifying the processor-executable code requested for execution. In an aspect, the stream monitor 310 may employ at least two of the following factors to identify the processor-executable code of a memory access request:
• Entry point address;
• Exit point address;
• Callee functions;
• Caller functions;
• Parameters (e.g., non-integers, buffers);
• Unique instructions and patterns (e.g., loops);
• Cache footprint (e.g., lines in the cache memory 302);
• Local variables; and
• Return value: whenever a return value is written, there is a chance that a new function call may happen.
[0047] The overhead cost of measuring the factor(s) for identifying the processor-executable code requested for execution may degrade the performance of the computing device for various tasks and resources. Such tasks may include general or specific processing, including identifying the processor-executable code requested for execution. The degradation of resources may include reduced power availability. Substituting a factor(s) with lower overhead cost for the factor(s) with greater overhead cost may help reduce the performance degradation.
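A minimal sketch of the multi-factor matching just described follows, for illustration only. The factor records are plain dictionaries, and the names, values, and the two-factor threshold are assumptions of this sketch rather than details from the disclosure.
```python
def identifies_code(observed: dict, expected: dict, required_matches: int = 2) -> bool:
    """Compare observed communication factors against the expected factors
    for a specific processor-executable code; require at least two matches
    before declaring that the code is the target of the request."""
    matches = sum(1 for key, value in expected.items() if observed.get(key) == value)
    return matches >= required_matches

# Hypothetical factor values: entry and exit points match, the caller does not.
expected = {"entry_point": 0x4000_1000, "exit_point": 0x4000_10F8, "caller": "dispatch"}
observed = {"entry_point": 0x4000_1000, "exit_point": 0x4000_10F8, "caller": "other"}
print(identifies_code(observed, expected))  # True: two factors match
```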
[0048] In an aspect, monitoring all, or even a portion, of the communications between the components of the SoC 12 may be difficult. The number and speed of the communications may be beyond the capacity of the stream monitor 310. This may be especially true for monitoring communications to multiple memories 24, 302, and 306 when any of them have a multilevel memory hierarchy. The stream monitor 310 may lose track of processor-executable code that is moved around within a multilevel memory hierarchy. In an aspect, the stream monitor 310 may mark a memory access request as non-storable for a given memory 302 or 306 in order to force a memory miss. The stream monitor 310 may monitor the access request to the other memory 24 or 306 resulting from the memory miss it forced. The stream monitor 310 may use the information obtained from monitoring the memory miss to follow future memory access requests for a processor-executable code, because this information may inform the stream monitor about where processor-executable code is located in the memories 24, 302, and 306.
[0049] In an aspect, the stream monitor 310 may identify the processor-executable code of a memory access request, regardless of whether there is a memory miss during the memory access request. The identified processor-executable code may be used to identify an execution event, which may prompt an API call. In an aspect, the execution event may be identified as unwanted or malicious, and the API call may be used to prevent further execution of the execution event. With the execution event blocked, at least temporarily, the source of the execution event may be identified and handled to prevent future execution of that execution event.
[0050] In an aspect, the above-described process may be applied to monitoring memory access requests for data, rather than for processor-executable code. Data producing components may be mapped to memory regions where the components read and write data. The stream monitor 310 may detect reads from the mapped memory region to verify the component or module that is reading the location, and also detect writes to the mapped memory region in case an attacker attempts to corrupt the data.
[0051] In an aspect, processor-executable code may reference other processor-executable code and/or data stored in the memories 24, 302, and 306 using virtual addresses. For example, this is common when the processor-executable code is executed via a virtual machine run by the processor 14. However, communications between some of the components of the SoC 12 via the communication buses 312 may identify locations in the memories 24, 302, and 306 using physical addresses. The stream monitor 310 may monitor memory access requests at various points, some using virtual addresses and some using physical addresses. The stream monitor 310, like other components of the SoC 12, may be configured to understand and use physical addresses to communicate among the components of the SoC 12.
[0052] In an aspect, the stream monitor 310 may also be configured to understand and use virtual addresses in its communications. An aspect of the stream monitor 310 handling virtual addresses may include use of a software component, which may be part of the operating system (OS) kernel, to perform translations from virtual addresses to physical addresses as needed by the stream monitor 310. In an aspect, a translation lookaside buffer (TLB) may be monitored during a memory access request to determine the physical address range, translated by the TLB, for monitoring. In response to the processor-executable code executing, the memory region for monitoring defined by the physical address range may be stored on a content-addressable memory (CAM) array, and the addresses may be compared during a refill. In an aspect, code may be injected into each virtual address space to access the region for monitoring defined by the physical address range.
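For illustration only, the following sketch shows the virtual-to-physical concern above in its simplest form: the monitor observes physical addresses on the bus, while code is identified by virtual addresses, so a translation step is applied before a watched range is configured. The one-entry page table, the page size, and the function name are assumptions of this sketch, not details of any OS or TLB implementation.
```python
PAGE = 0x1000  # assume 4 KiB pages for this sketch

# Hypothetical mapping from a virtual page to a physical page.
page_table = {0x0000_7000: 0x4000_1000}

def to_physical(virtual_addr: int) -> int:
    """Translate a virtual address by replacing its page number and
    keeping the in-page offset, as a page-table/TLB lookup would."""
    phys_page = page_table[virtual_addr & ~(PAGE - 1)]
    return phys_page | (virtual_addr & (PAGE - 1))

# The monitor would then watch the translated physical address on the bus.
print(hex(to_physical(0x0000_7010)))  # 0x40001010
```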
[0053] The stream monitor 310 may be implemented as software executed by the processor 14, as dedicated hardware, such as on a programmable processor device, or as a combination of software and hardware modules. Some or all of the components of the SoC 12 may be differently arranged and/or combined while still serving the necessary functions. Moreover, the SoC 12 may not be limited to one of each of the components, and multiple instances of each component may be included in various configurations of the SoC 12. Aspect configurations of the SoC 12 may include components, such as the main memory controller 304, the main memory 306, and the stream monitor 310, separate from, but connected to, the SoC 12 via the communication buses 312.
[0054] FIG. 4 is an illustration of memory contents stored in various configurations relative to respective memory regions 402-412 in a memory 400 in accordance with an aspect. The memory 400 may be any of the above-described memories, for example, the cache memory, the main memory, or the storage memory. The memory 400 may be divided into the memory regions 402-412. As discussed above, the memory regions 402-412 may be of any memory unit size and/or type, such as a line, a block, or a page. The memory regions 402-412 may be of the memory unit size and/or type that may be used for memory access requests in a respective computing device.
[0055] Memory contents stored in the memory 400 may include data and/or processor-executable code. For ease of explanation, and without limiting the scope of the description, the following examples are expressed in terms of processor-executable code. The memory regions 402-412 may contain one or more processor-executable codes (PECs) 414-424. For example, the memory region 402 may store a single processor-executable code (PEC 0) 414 within the boundaries of the memory region 402. In another example, the memory region 406 may store one or more processor-executable codes (PEC 1) 416, (PEC 2) 418 that may extend beyond the boundaries of memory region 406 into memory region 408. In another example, the memory region 410 may store multiple processor-executable codes (PEC 3) 420, (PEC 4) 422, and (PEC 5) 424 within the boundaries of the memory region 410.
[0056] In the case of memory region 402 storing a single processor-executable code (PEC 0) 414, the stream monitor may employ the aspect of selectively monitoring instruction request lines by determining processor-executable code of interest and an address in the memory 400 associated with that processor-executable code. The stream monitor may monitor communications to the memory 400 for accesses of memory region 402 containing the processor-executable code (PEC 0) 414. In this aspect, the stream monitor may monitor communications for a memory access request containing an entry point address to the memory 400 at memory region 402. The entry point address of the memory access request related to the memory region 402 may uniquely identify the processor-executable code (PEC 0) 414, as the processor-executable code (PEC 0) 414 is the only processor-executable code to reside in the memory region 402. Therefore, the stream monitor may identify when the processor-executable code (PEC 0) 414 is called for execution by the processor by monitoring a memory access request for the memory region 402.
[0057] The above-described aspect applied for monitoring memory region 402 may not be as accurate in identifying the processor-executable code that is being retrieved for execution by the processor when a memory access request involves memory regions 406, 410. Since each of memory regions 406, 410 may store multiple processor-executable codes 416-424, identifying the memory region related to the entry point address of the memory access request may lead to false positives.
[0058] One such false positive may include the identification of multiple processor-executable codes 416-424 of a respective memory region 406, 410 when less than all of the processor-executable codes 416-424 of the respective memory region 406, 410 are retrieved for execution. In this example, while multiple processor-executable codes 416-424 may be retrieved in response to the memory access request, not all of them may be executed. Another false positive may result from accidentally identifying processor-executable codes 416-424 known to be stored in one of the memory regions 406, 410, when the processor-executable code 416-424 being retrieved for execution is not known to be in the same memory region 406, 410. These examples of false positives are similar, except that in the first example a target processor-executable code 416-424 may be identified along with other processor-executable codes 416-424, and in the second example only other processor-executable codes 416-424 may be identified. Therefore, relying on the entry point address of the memory access request alone may produce overly inclusive or incomplete information.
[0059] Identifying the processor-executable code that is being retrieved from memory regions 406, 410 may employ the aspect of using a combination of factors, as illustrated in the examples provided above. Since the entry point address alone may produce overly inclusive or incomplete information, use of other factors may enable the stream monitor to identify a specific processor-executable code 416-424 from the group of other processor-executable codes 416-424 stored in the same memory region 406, 410. While unnecessary, this aspect may also be used to identify the single processor-executable code (PEC 0) 414 stored in memory region 402.
[0060] In an example, the entry point address and the exit point address of the memory access may be used to identify processor-executable code (PEC 2) 418. Since processor-executable code (PEC 2) 418 is partially stored in memory region 406 and in memory region 408, the entry point address and exit point address may each be associated with a respective memory region 406, 408. Among any of the processor-executable codes 416, 418 stored in memory regions 406, 408, the combination of an entry point address associated with memory region 406 and an exit point address associated with memory region 408 is unique to processor-executable code (PEC 2) 418.
[0061] The other factors may be applied to identify any of the processor-executable codes 416-424. For example, any of the factors may be predetermined to be associated with one or more processor-executable codes 416-424. The stream monitor may be configured to identify any combination of the factors. In response to a memory access request, the stream monitor may identify the factors and compare the factors to the processor-executable codes 416-424 with which they are related. For any two or more factors identified by the stream monitor, the processor-executable code 416-424 associated with each of the identified factors may be the processor-executable code 416-424 targeted by the memory access request. The stream monitor may be configured such that the factors it identifies are selected for uniquely identifying one of the processor-executable codes 416-424.
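The region-pair idea in the example above can be sketched in a few lines: a code that enters in one memory region and exits in another is named uniquely by the (entry region, exit region) pair. This is an illustration only; the region size, the table contents, and the PEC labels are assumptions chosen to mirror the (PEC 1)/(PEC 2) example, not values from the disclosure.
```python
REGION_SIZE = 0x1000  # assume page-sized memory regions for this sketch

def region_of(address: int) -> int:
    """Map an address to the index of the memory region containing it."""
    return address // REGION_SIZE

# Hypothetical map from (entry region, exit region) to a code identifier:
# PEC 1 lies entirely in region 6; PEC 2 spans regions 6 and 8.
region_pairs = {(6, 6): "PEC1", (6, 8): "PEC2"}

def identify(entry_addr: int, exit_addr: int):
    return region_pairs.get((region_of(entry_addr), region_of(exit_addr)))

print(identify(0x6010, 0x8020))  # "PEC2": entry in region 6, exit in region 8
```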
[0062] FIG. 5 is an illustration of an interaction of memories in a memory hierarchy 500 monitored by the stream monitor in accordance with an aspect. The memory hierarchy 500 may include multiple levels of memory, such as multiple levels of cache memory (cache memory 0) 302a, (cache memory 1) 302b, the main memory 306, and the storage device 24. Each memory access request monitored by the stream monitor may result in a hit or a miss for the memory 24, 302a, 302b, and 306 targeted by the memory access request. A hit may result from a successful memory access request, such that the memory location of the memory access request is populated and the memory contents are returned 502, 506, 510, 514. A miss may result from an unsuccessful memory access request, such that the memory location of the memory access request is not populated. For a miss, rather than returning the memory contents requested by the memory access request, a supplemental memory access request 504, 508, 512 may be made to a lower level of the memory hierarchy 500. The supplemental memory access request may be made by the memory 302a, 302b, and 306 (or its respective controller) at which the memory access request missed.
[0063] The stream monitor may monitor each memory access request, supplemental memory access request 504, 508, 512, and memory contents return 502, 506, 510, 514. A memory access request may target any of the memories 24, 302a, 302b, 306 in the memory hierarchy 500. In an example, a memory access request may target cache memory 0 302a. In response to a hit, the requested memory contents may be returned 502. In response to a miss, a supplemental memory access request 504, for the same memory contents, may be made to the next lower level in the memory hierarchy 500, cache memory 1 302b. The stream monitor may monitor the output of the cache memory 0 302a for the return 502 or the supplemental memory access request 504. In response to the return 502, the stream monitor may identify the information it may use to estimate an execution event. In response to the supplemental memory access request 504 to the cache memory 1 302b, the stream monitor may monitor the output of the cache memory 1 302b. The supplemental access requests 504, 508, 512 may occur for each level of memory in the memory hierarchy 500, as long as there is a next lower level, until one results in a hit. The stream monitor may monitor the output of the memories 24, 302b, 306 receiving a supplemental memory access request 504, 508, 512. A supplemental memory access request may be directed to any lower level of memory in the memory hierarchy 500, and does not have to be directed only to the next lower level.
[0064] In an aspect, once memory content is stored to one of the cache memories 302a, 302b, the stream monitor may lose track of the memory content until the memory content is evicted. The stream monitor may not be configured to monitor all of the memory levels of the memory hierarchy 500. Memory contents returns 502, 506 may be missed by the stream monitor. A memory access request, which may include supplemental memory access requests 504, may be sent to a cache memory 302a, 302b that the stream monitor does not monitor. The stream monitor may mark the memory access request as non-cacheable. This may force a miss at the targeted cache memory 302a, 302b so that the stream monitor may monitor the supplemental memory access requests 504, 508, 512, and the potential memory contents returns 506, 510, 514, from a memory 24, 302b, 306 that the stream monitor may be configured to monitor. Marking the memory access request as non-cacheable may be repeated for each instance of the memory access request, or may be persistent, for example, by saving the marking to a controller of the targeted cache memory 302a, 302b. Marking the memory access request as non-cacheable may be implemented at any level of memory of the memory hierarchy 500. However, doing so at lower levels of the memory hierarchy 500, such as the main memory 306, or a lowest level of cache memory (cache memory 1 302b in the examples herein), may cause performance degradations. To avoid such performance degradations, the stream monitor may avoid marking memory access requests to the lower memory levels as un-cacheable.
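For illustration only, the following sketch models the hierarchy walk of FIG. 5: a miss at one level triggers a supplemental request to the next lower level, and a request to a level the monitor cannot observe is treated as non-cacheable to force a miss at that level. The classes, level names, contents, and the monitorable flag are assumptions of this sketch.
```python
class MemoryLevel:
    def __init__(self, name, contents, monitorable):
        self.name, self.contents, self.monitorable = name, contents, monitorable

    def lookup(self, address, non_cacheable):
        # A forced (non-cacheable) or genuine miss returns None, which
        # triggers a supplemental request to the next lower level.
        if non_cacheable or address not in self.contents:
            return None
        return self.contents[address]

def access(hierarchy, address, monitor_log):
    for level in hierarchy:
        forced = not level.monitorable  # mark non-cacheable at unmonitored levels
        result = level.lookup(address, non_cacheable=forced)
        if level.monitorable:
            monitor_log.append((level.name, address, "hit" if result else "miss"))
        if result is not None:
            return result
    raise LookupError("address not resident at any level")

# Hypothetical two-level hierarchy: the unmonitored cache is forced to miss,
# so the request becomes visible at the monitored main memory.
hierarchy = [
    MemoryLevel("cache0", {0x10: "code"}, monitorable=False),
    MemoryLevel("main", {0x10: "code"}, monitorable=True),
]
log = []
access(hierarchy, 0x10, log)
print(log)  # [('main', 16, 'hit')]
```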
[0065] The memories 24, 302a, 302b, 306 referred to in these examples are not meant to be limiting in number or configuration. The memory hierarchy 500 may have a variety of configurations including more or fewer cache, main, and storage memories of varying types, sizes, and speeds. The memory hierarchy 500 may also be configured to have multiple memories 24, 302a, 302b, 306 share the same memory level.
[0066] FIG. 6 illustrates an aspect method 600 for implementing an approximation of execution events using memory hierarchy monitoring. The method 600 may be executed in a computing device using software, general purpose or dedicated hardware, such as the processor and/or the stream monitor, or a combination of software and hardware. In block 602, the computing device may receive information identifying processes to look for by monitoring the memory hierarchy and the factors that the processor can use to identify when those processes are executing. This received information may identify the processes that are the subject of such monitoring as processor-executable code that may be executed by the computing device. In an aspect, the computing device may determine whether an execution event occurs by recognizing when the identified processor-executable code is the target of a memory access request. The information indicating the processor-executable code whose execution is to be recognized via monitoring the memory hierarchy may be preprogrammed on the computing device or provided to the computing device by a software program running on the computing device. The processor-executable code that is the subject of such monitoring may be related to functions of the computing device that may correlate to execution events on the computing device that are not authorized by a user, or by software selected for execution by the user or a system of the computing device.
[0067] In block 604, the computing device may determine the factor(s) to be used for identifying the processor-executable code that may be executed in response to the memory access request. As described above, one or more factors may be used to identify the processor-executable code that is the target of a memory access request. Such factors may include, for example, an entry point address, an exit point address, callee functions, caller functions, parameters (e.g., non-integers, buffers), unique instructions and patterns (e.g., loops), cache footprint (e.g., lines in the cache memory), local variables, and return values. In various aspects, any one factor, such as the entry point address, or combination of factors may be used to uniquely identify the processor-executable code that is the target of a memory access request.
As with the identification of the processor-executable code in block 602, the determination of the factor(s) to be used for identifying or recognizing the processor-executable code may be preprogrammed on the computing device or provided to the computing device by a software program running on the computing device.[0068] In block 606, the computing device may monitor communications between components connected to the communication buses. Examples of such communications include memory access requests, supplemental memory access requests between memories used when there is a miss at a memory, and return values in response to the various types of memory access requests. The computing device may monitor the communications for the information relating to the factor(s) that it may use to identify whether a certain processor-executable code is accessed from memory for execution by the computing device. In block 608, the computing device may retrieve the information relating to the factor(s) from the monitored communications for identifying whether the certain processor-executable code is accessed from memory for execution by the computing device. In an aspect, the computing device may be configured to retrieve only the information relating to the factor(s) determined for identifying the certain processor-executable code. In another aspect, the computing device may be configured to retrieve all of the information of a communication on the communication buses, and to parse out the information relating to the factor(s) determined for identifying the certain processor-executable code.[0069] In determination block 610, the computing device may determine whether the information relating to the factor(s) retrieved from the monitored communication matches the factor(s) determined for identifying the certain processor-executable code. The computing device may compare values of the factor(s) of the target of a memory access request with the information relating to the factor(s) of the monitored communication.[0070] In response to determining that the retrieved information relating to the factor(s) of the monitored communication does not match the factor(s) determined to be indicative of the certain processor-executable code (i.e. determination block 610 = "No"), the computing device may determine that the certain processor-executable code is not being executed by the computing device in block 612. In other words, the target memory contents of the monitored memory access request are not the processor-executable code of interest.[0071] In response to determining that the retrieved information relating to the factor(s) of the monitored communication matches the factor(s) determined to be indicative of the certain processor-executable code (i.e. determination block 610 = "Yes"), the computing device may determine that the certain processor-executable code is being executed by the computing device in block 614. In other words, the target memory contents of the monitored memory access request are the processor-executable code of interest. In block 616, the computing device may approximate the occurrence of an execution event based on the determination that the certain processor-executable code is being executed and the certain processor-executable code's relation to the execution event.[0072] FIG. 7 illustrates an aspect method 700 for identifying memory contents of a monitored memory access request.
The method 700 may be executed in a computing device using software, general purpose or dedicated hardware, such as the processor and/or the stream monitor, or a combination of software and hardware. The method 700 includes an embodiment of operations that may be implemented in determination block 610 of method 600 described above.[0073] In determination block 702, the computing device may determine whether a first retrieved information relating to a first factor of the monitored communication matches a first factor determined for identifying the certain processor-executable code. The first factor may be any factor that may be used for identifying the certain processor-executable code as the target memory contents of the monitored memory access request. For example, the first factor may be the entry point address of the memory access request, as the entry point address may be used by itself to uniquely identify the certain processor-executable code.[0074] In response to determining that the first retrieved information relating to the first factor of the monitored communication does not match the first factor determined for identifying the certain processor-executable code (i.e. determination block 702 = "No"), the computing device may determine that the certain processor-executable code is not executed by the computing device in block 612.[0075] In response to determining that the first retrieved information relating to the first factor of the monitored communication does match the first factor determined for identifying the certain processor-executable code (i.e. determination block 702 = "Yes"), the computing device may determine whether a next factor is needed to identify the certain processor-executable code in determination block 704. As described above, identifying a processor-executable code as the target of the monitored memory access request may require a combination of factors when a single factor may result in ambiguity or false positives for other processor-executable codes. In other words, the factor may not uniquely identify the certain processor-executable code. The next factor may be any of the factors that have not already been used to identify the certain processor-executable code. In an aspect, the determination of whether a next factor is needed may be based on the overhead of measuring the factors. For example, in response to a factor being too costly to monitor, a next factor that is less costly to monitor while providing suitable recognition of the certain code may be monitored instead. Such a substitute factor may be monitored alone or in conjunction with another factor(s) to identify the certain processor-executable code. A determination that the overhead of a factor is too costly to monitor may be based on whether the overhead for monitoring the factor exceeds a threshold.[0076] In response to determining that the next factor is not needed to identify the certain processor-executable code (i.e. determination block 704 = "No"), the computing device may determine that the certain processor-executable code is executed by the computing device in block 614.[0077] In response to determining that the next factor is needed to identify the certain processor-executable code (i.e. determination block 704 = "Yes"), the computing device may determine whether the next retrieved information relating to the next factor of the monitored communication matches the next factor determined for identifying the certain processor-executable code in determination block 706.
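The sequential checks of determination blocks 702, 704, and 706, together with the overhead-based substitution of factors, may be sketched as follows. The factor ordering, overhead values, threshold, and helper names are illustrative assumptions, not values from the disclosure.

```python
# A minimal sketch of the sequential factor-matching flow (blocks 702/704/706).
# Factor ordering, overheads, and the threshold are hypothetical.

def factor_uniquely_identifies(factor):
    # Assume only the entry point address is unique on its own.
    return factor == "entry_point"

def identify_code(retrieved, expected, factor_order, overhead, threshold):
    """Return True if the monitored communication matches the certain
    processor-executable code, checking factors in order and skipping
    any factor whose monitoring overhead exceeds the threshold."""
    for factor in factor_order:
        if overhead.get(factor, 0) > threshold:
            continue  # too costly to monitor; fall back to the next factor
        if factor not in retrieved or retrieved[factor] != expected[factor]:
            return False  # block 702/706 = "No": not the code of interest
        if factor_uniquely_identifies(factor):
            return True   # block 704 = "No": no further factor needed
    # Simplification: if every factor matched (or was skipped), report a match.
    return True

expected  = {"entry_point": 0x8000, "callees": ("memcpy",)}
retrieved = {"entry_point": 0x8000, "callees": ("memcpy",)}
print(identify_code(retrieved, expected,
                    ["entry_point", "callees"],
                    overhead={"entry_point": 1, "callees": 3},
                    threshold=10))
```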
In response to determining that the next retrieved information relating to the next factor of the monitored communication does not match the next factor determined for identifying the certain processor-executable code (i.e. determination block 706 = "No"), the computing device may determine that the certain processor-executable code is not executed by the computing device in block 612. In response to determining that the next retrieved information relating to the next factor of the monitored communication does match the next factor determined for identifying the certain processor-executable code (i.e. determination block 706 = "Yes"), the computing device may determine whether a next factor is needed to identify the certain processor-executable code in determination block 704 as described above.[0078] FIG. 8 illustrates an aspect method 800 for monitoring a memory access request resulting in a hit or a miss. The method 800 may be executed in a computing device using software, general purpose or dedicated hardware, such as the processor and/or the stream monitor, or a combination of software and hardware. The method 800 includes an embodiment of operations that may be implemented in block 606 of method 600 described above.[0079] In determination block 802, the computing device may determine whether a monitored memory access request results in a hit. In other words, the computing device may determine whether the target memory content of the monitored memory access is located at the location of the memory specified by the monitored memory access request. The monitored memory access request may alternatively result in a miss, such that the target memory content of the monitored memory access is not located at the location of the memory specified by the monitored memory access request. In response to determining that the monitored memory access request results in a hit (i.e. determination block 802 = "Yes"), in block 608 the computing device may retrieve the information relating to the factor(s) from the monitored communications for identifying whether the certain processor-executable code is accessed from memory for execution by the computing device.[0080] In response to determining that the monitored memory access request results in a miss (i.e. determination block 802 = "No"), the computing device may monitor a supplemental memory access request for the target memory contents in another memory in block 804. A miss for the monitored memory access request may prompt the computing device to generate a supplemental memory access request to another memory that may be at a lower level in the memory hierarchy of the computing device. The computing device may monitor the supplemental memory access request in much the same way that it may monitor the memory access request.[0081] In determination block 806, the computing device may determine whether the supplemental memory access request results in a hit. In response to determining that the supplemental memory access request results in a hit (i.e. determination block 806 = "Yes"), in block 608 the computing device may retrieve the information relating to the factor(s) from the monitored communications for identifying whether the certain processor-executable code is accessed from memory for execution by the computing device. In response to determining that the supplemental memory access request results in a miss (i.e.
determination block 806 = "No"), the computing device may monitor a supplemental memory access request for the target memory contents in another memory in block 804. A miss for the supplemental memory access request may prompt the computing device to generate another supplemental memory access request to another memory that may be at a lower level in the memory hierarchy of the computing device. Supplemental memory access requests may continue to be generated by the computing device as long as there is a lower level in the memory hierarchy of the computing device to target with the supplemental memory access request.[0082] FIG. 9 illustrates an aspect method 900 for monitoring a memory access request targeting a memory that is not monitored. The method 900 may be executed in a computing device using software, general purpose or dedicated hardware, such as the processor and/or the stream monitor, or a combination of software and hardware. In determination block 902, the computing device may determine whether it is able to monitor a target memory of a memory access request. As described above, in computing devices with multi-leveled memory hierarchies, the computing device may not always be configured to monitor the inputs and outputs of each level of the memory hierarchy. As such, some of the information relating to the factor(s) for identifying a processor-executable code of a memory access request may not be retrieved by the computing device. Without the information, the computing device may not be able to accurately identify the processor-executable code of the memory access request.[0083] In response to determining that the computing device can monitor the target memory of the memory access request (i.e. determination block 902 = "Yes"), the computing device may monitor communications between components connected to the communication buses in block 606 as described above.[0084] In response to determining that the computing device cannot monitor the target memory of the memory access request (i.e. determination block 902 = "No"), the computing device may mark a memory access request targeting the target memory that cannot be monitored as un-cacheable in block 904. Marking the memory access request un-cacheable may force a miss at the target memory, and the computing device may monitor a supplemental memory access request for the target memory contents in another memory in block 804 as described above.[0085] The various aspects (including, but not limited to, aspects discussed above with reference to FIGs. 1-9) may be implemented in a wide variety of computing systems, which may include an example mobile computing device suitable for use with the various aspects, as illustrated in FIG. 10. The mobile computing device 1000 may include a processor 1002 coupled to a touchscreen controller 1004 and an internal memory 1006. The processor 1002 may be one or more multicore integrated circuits designated for general or specific processing tasks. The internal memory 1006 may be volatile or non-volatile memory, and may also be secure and/or encrypted memory, or unsecure and/or unencrypted memory, or any combination thereof. Examples of memory types that can be leveraged include but are not limited to DDR, LPDDR, GDDR, WIDEIO, RAM, SRAM, DRAM, P-RAM, R-RAM, M-RAM, STT-RAM, and embedded DRAM. The touchscreen controller 1004 and the processor 1002 may also be coupled to a touchscreen panel 1012, such as a resistive-sensing touchscreen, capacitive-sensing touchscreen, infrared-sensing touchscreen, etc.
Additionally, the display of the computing device 1000 need not have touch screen capability.[0086] The mobile computing device 1000 may have one or more radio signal transceivers 1008 (e.g., Peanut, Bluetooth, Zigbee, Wi-Fi, RF radio) and antennae 1010, for sending and receiving communications, coupled to each other and/or to the processor 1002. The transceivers 1008 and antennae 1010 may be used with the above-mentioned circuitry to implement the various wireless transmission protocol stacks and interfaces. The mobile computing device 1000 may include a cellular network wireless modem chip 1016 that enables communication via a cellular network and is coupled to the processor.[0087] The mobile computing device 1000 may include a peripheral device connection interface 1018 coupled to the processor 1002. The peripheral device connection interface 1018 may be singularly configured to accept one type of connection, or may be configured to accept various types of physical and communication connections, common or proprietary, such as USB, FireWire, Thunderbolt, or PCIe. The peripheral device connection interface 1018 may also be coupled to a similarly configured peripheral device connection port (not shown).[0088] The mobile computing device 1000 may also include speakers 1014 for providing audio outputs. The mobile computing device 1000 may also include a housing 1020, constructed of a plastic, metal, or a combination of materials, for containing all or some of the components discussed herein. The mobile computing device 1000 may include a power source 1022 coupled to the processor 1002, such as a disposable or rechargeable battery. The rechargeable battery may also be coupled to the peripheral device connection port to receive a charging current from a source external to the mobile computing device 1000. The mobile computing device 1000 may also include a physical button 1024 for receiving user inputs. The mobile computing device 1000 may also include a power button 1026 for turning the mobile computing device 1000 on and off.[0089] The various aspects (including, but not limited to, aspects discussed above with reference to FIGs. 1-9) may be implemented in a wide variety of computing systems, which may include a variety of mobile computing devices, such as a laptop computer 1100 illustrated in FIG. 11. Many laptop computers include a touchpad touch surface 1117 that serves as the computer's pointing device, and thus may receive drag, scroll, and flick gestures similar to those implemented on computing devices equipped with a touch screen display and described above. A laptop computer 1100 will typically include a processor 1111 coupled to volatile memory 1112 and a large capacity nonvolatile memory, such as a disk drive 1113 or Flash memory. Additionally, the computer 1100 may have one or more antennas 1108 for sending and receiving electromagnetic radiation that may be connected to a wireless data link and/or cellular telephone transceiver 1116 coupled to the processor 1111. The computer 1100 may also include a floppy disc drive 1114 and a compact disc (CD) drive 1115 coupled to the processor 1111. In a notebook configuration, the computer housing includes the touchpad 1117, the keyboard 1118, and the display 1119, all coupled to the processor 1111.
Other configurations of the computing device may include a computer mouse or trackball coupled to the processor (e.g., via a USB input) as are well known, which may also be used in conjunction with the various aspects.[0090] The various aspects (including, but not limited to, aspects discussed above with reference to FIGs. 1-9) may be implemented in a wide variety of computing systems, which may include any of a variety of commercially available servers for compressing data in server cache memory. An example server 1200 is illustrated in FIG. 12. Such a server 1200 typically includes one or more multi-core processor assemblies 1201 coupled to volatile memory 1202 and a large capacity nonvolatile memory, such as a disk drive 1204. As illustrated in FIG. 12, multi-core processor assemblies 1201 may be added to the server 1200 by inserting them into the racks of the assembly. The server 1200 may also include a floppy disc drive, compact disc (CD) or DVD disc drive 1206 coupled to the processor 1201. The server 1200 may also include network access ports 1203 coupled to the multi-core processor assemblies 1201 for establishing network interface connections with a network 1205, such as a local area network coupled to other broadcast system computers and servers, the Internet, the public switched telephone network, and/or a cellular data network (e.g., CDMA, TDMA, GSM, PCS, 3G, 4G, LTE, or any other type of cellular data network).[0091] Computer program code or "program code" for execution on a programmable processor for carrying out operations of the various aspects may be written in a high level programming language such as C, C++, C#, Smalltalk, Java, JavaScript, Visual Basic, a Structured Query Language (e.g., Transact-SQL), Perl, or in various other programming languages. Program code or programs stored on a computer readable storage medium as used in this application may refer to machine language code (such as object code) whose format is understandable by a processor.[0092] Many computing devices' operating system kernels are organized into a user space (where non-privileged code runs) and a kernel space (where privileged code runs). This separation is of particular importance in Android and other general public license (GPL) environments in which code that is part of the kernel space must be GPL licensed, while code running in the user space may not be GPL licensed. It should be understood that the various software components/modules discussed here may be implemented in either the kernel space or the user space, unless expressly stated otherwise.[0093] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of the various aspects must be performed in the order presented. As will be appreciated by one of skill in the art, the order of operations in the foregoing aspects may be performed in any order. Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the operations; these words are simply used to guide the reader through the description of the methods.
Further, any reference to claim elements in the singular, for example, using the articles "a," "an" or "the" is not to be construed as limiting the element to the singular.[0094] The various illustrative logical blocks, modules, circuits, and algorithm operations described in connection with the various aspects may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.[0095] The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function.[0096] In one or more aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable medium or a non-transitory processor-readable medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module that may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media.
Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.[0097] The preceding description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. |
Methods and associated structures of forming a microelectronic device are described. Those methods may include forming a first block on a nanodot material, forming a first spacer on the first block, removing the first block to form a free standing spacer, removing exposed portions of the nanodot material and then the free standing spacer to form nanowires, forming a second block at an angle to a length of the nanowires, forming a second spacer on the second block, forming a second free standing spacer on the nanowires by removing the second block, and removing exposed portions of the nanowires and then the second free standing spacer to form an ordered array of nanodots. |
IN THE CLAIMS What is claimed is: 1. A method comprising: forming a nanodot material on a substrate; forming an oxide block on the nanodot material; forming a nitride film on the substrate; patterning the nitride film to form a nitride spacer; removing the oxide block to form a free standing nitride spacer; removing exposed portions of the nanodot material and then the free standing nitride spacer to form nanowires; forming an orthogonal oxide block at a substantially orthogonal angle to a length of the nanowires; forming an orthogonal nitride spacer on the orthogonal oxide block; removing the orthogonal oxide block to form an orthogonal free standing nitride spacer; and removing an exposed portion of the nanowires and then the orthogonal free standing nitride spacer to form an ordered array of nanodots on the substrate. 2. The method of claim 1 wherein the nanodot material comprises at least one of silicon, silicon germanium, germanium, silicon nitride, metal, and a material comprising a band gap different from the substrate. 3. The method of claim 1 wherein patterning the nitride film to form a nitride spacer comprises: removing the nitride film from the nanodot material adjacent to the oxide block by utilizing a dry etching process; and removing the oxide block by utilizing a wet etch. 4. The method of claim 1 further comprising wherein the nanowires comprise the nanomaterial. 5. The method of claim 1 wherein the substrate comprises an oxidized silicon substrate. 6. The method of claim 1 wherein forming the nitride spacer further comprises wherein the nitride spacer covers an outer portion of the oxide block but does not cover a top portion of the oxide block. 7. The method of claim 1 further comprising wherein a pitch between adjacent nanodots comprises less than about 10 nm. 8. The method of claim 1 further comprising wherein a thickness of the nitride spacer and a thickness of the orthogonal nitride spacer determine at least one side length of the nanodot. 9. A method comprising: forming a first block on a nanodot material; forming a first spacer on the first block; removing the first block to form a first free standing spacer; removing exposed portions of the nanodot material and then the first free standing spacer to form nanowires; forming a second block at an angle to a length of the nanowires; forming a second spacer on the second block; forming a second free standing spacer on the nanowires by removing the second block; and removing exposed portions of the nanowires and then the second free standing spacer to form an ordered array of nanodots. 10. The method of claim 9 further comprising wherein a pitch between adjacent nanodots comprises less than about 10 nm. 11. The method of claim 9 further comprising wherein the first and second block comprise a dielectric material. 12. The method of claim 11 further comprising wherein the first spacer and the second spacer comprise a material that is selective to the dielectric material. 13. The method of claim 9 further comprising wherein nanowires comprise the nanodot material. 14. The method of claim 9 wherein the nanodot material comprises at least one of silicon, silicon germanium, germanium, silicon nitride, metal, and any material comprising a band gap different from the substrate. 15. A structure comprising: an ordered array of nanodots disposed on a substrate, wherein at least one of a first side length and a second side length of an individual nanodot comprises less than about 50 nm. 16.
The structure of claim 15 wherein the nanodots comprise at least one of silicon, silicon germanium, germanium, silicon nitride, metal, and any material comprising a band gap different from the substrate. 17. The structure of claim 15 wherein the substrate comprises an oxidized silicon substrate. 18. The structure of claim 15 further comprising wherein a first side of the nanodot comprises a first length, and a second side of the nanodot comprises a second length. 19. The structure of claim 18 wherein the first length and the second length comprise different magnitudes. 20. The structure of claim 15 wherein a pitch between individual ones of the ordered array of nanodots comprises below about 20 nm.
METHODS OF FORMING NANODOTS USING SPACER PATTERNING TECHNIQUES AND STRUCTURES FORMED THEREBY BACKGROUND OF THE INVENTION Nanodots may be utilized in the fabrication of microelectronic devices, such as data storage devices, for example. Nanodots have been fabricated using classical lithographic techniques. BRIEF DESCRIPTION OF THE DRAWINGS While the specification concludes with claims particularly pointing out and distinctly claiming that which is regarded as the present invention, the advantages of this invention can be more readily ascertained from the following description of the invention when read in conjunction with the accompanying drawings in which: FIGS. 1a-1n represent structures according to an embodiment of the present invention. DETAILED DESCRIPTION OF THE PRESENT INVENTION In the following detailed description, reference is made to the accompanying drawings that show, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. It is to be understood that the various embodiments of the invention, although different, are not necessarily mutually exclusive. For example, a particular feature, structure, or characteristic described herein, in connection with one embodiment, may be implemented within other embodiments without departing from the spirit and scope of the invention. In addition, it is to be understood that the location or arrangement of individual elements within each disclosed embodiment may be modified without departing from the spirit and scope of the invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims, appropriately interpreted, along with the full range of equivalents to which the claims are entitled. In the drawings, like numerals refer to the same or similar functionality throughout the several views. Methods and associated structures of forming microelectronic structures are described. Those methods may include forming a first block on a nanodot material, forming a first spacer on the first block, removing the first block to form a free standing spacer, removing the nanodot material in a field area and then the free standing spacer to form nanowires, forming a second block at an angle to a length of the nanowires, forming a second spacer on the second block, forming a free standing spacer on the nanowires by removing the second block, and removing an exposed portion of the nanowires and then the free standing spacer to form an ordered array of nanodots. Methods of the present invention enable the fabrication of ordered rows of nanodots at pitches below those attainable by standard lithographic techniques. FIGS. 1a-1n illustrate embodiments of methods of forming nanodots, such as may be used in data storage devices, for example. FIG. 1a illustrates a cross-section of a substrate 100. The substrate 100 may comprise a wafer in some embodiments, and may comprise circuitry for memory functions, including but not limited to reading and writing functions, for example. The substrate 100 may comprise silicon in some cases, or any other suitable type of wafer material, depending upon the particular application. Optionally, the substrate 100 may comprise an oxidized silicon wafer.
In one embodiment, the substrate 100 may comprise a CMOS microelectronic wafer, which may comprise a diameter suitable for the particular application, such as a diameter of about 12 inches or greater, in some cases. A nanodot material 102 may be formed on the substrate 100. The nanodot material 102 may comprise any type of nanodot material, such as but not limited to silicon, silicon germanium, germanium, silicon nitride, metal, and/or any material with a band gap and/or offset different from the surrounding medium, such as different from the substrate, for example. In one embodiment, a thickness 101 of the nanodot material 102 may comprise about 2 nm to about 100 nm. A dielectric block 104 may be formed on the nanodot material 102 (FIG. 1b). The dielectric block 104 may be patterned from a dielectric material, such as a nitride layer and/or a silicon dioxide layer, for example, using available patterning techniques, comprising wet and/or dry etching techniques. In one embodiment, the dielectric block 104 may comprise a first block 104, and may comprise an oxide block 104. A dielectric material 106 may be formed on the dielectric block 104 (FIG. 1c). The dielectric material 106 may comprise a nitride and/or an oxide material in some embodiments. The dielectric material 106 may be formed by utilizing a chemical vapor deposition (CVD) process in some embodiments. The dielectric material 106 may comprise a material that is highly etch selective to the material comprising the dielectric block 104. For example, the dielectric block 104 may comprise one of a nitride and an oxide material in one embodiment, and the dielectric material 106 may comprise the other of the nitride and oxide material. In one embodiment, the dielectric material 106 may be patterned to form a spacer 108 that may comprise a first spacer 108 in some embodiments (FIG. 1d). The dielectric material of the spacer 108 may comprise a thickness 110, which may comprise about 20 nm and below in some embodiments. In one embodiment, the spacer 108 may be patterned by removing the dielectric material 106 from the nanodot material 102 adjacent the dielectric block 104, and from a top portion 107 of the dielectric block 104 by utilizing a dry etching process, for example. In one embodiment, the dry etch may comprise an anisotropic etch, such as a reactive ion etch (RIE). In one embodiment, the spacer 108 may cover an outer portion 109 of the oxide block 104. In one embodiment, the outer portion 109 of the oxide block 104 may comprise a width 111 which may be about 1 to about 20 nm. The dielectric block 104 may be removed from the nanodot material 102 by utilizing a wet etch, for example, to form a free standing spacer 112 (FIG. 1e). In one embodiment, the free standing spacer 112 may comprise a first free standing spacer 112. Exposed portions 113 of the nanomaterial 102 adjacent the free standing spacer 112 may be removed using any suitable etch (FIG. 1f). The free standing spacer 112 may then be removed from the underlying nanodot material 102, using any suitable etch process. The underlying nanomaterial 102 may comprise nanowires 114 that may be formed/exposed from removing the free standing spacer 112 and exposed portions 113 of the nanomaterial 102. The free standing spacer 112 acts as a mask for the formation of the nanowires 114. The width 111 of the free standing spacer 112 may determine a thickness 115 of the nanowires 114.
The nanowires 114 may comprise a thickness 115 of about 20 nm or less in some embodiments. A second dielectric block 116 may be formed at an angle 117 to a length 118 of the nanowires 114 (FIG. 1g). In one embodiment, the second dielectric block 116 may comprise similar materials as the first dielectric block 104, such as oxide and/or nitride materials. The second dielectric block 116 may comprise an orthogonal block 116 in some applications. The angle 117 in relation to the length 118 of the nanowires 114 may comprise a substantially orthogonal angle, in some embodiments, but the angle 117 may vary according to the particular application. In one embodiment, the second block 116 may be formed from a dielectric material, such as a nitride layer and/or a silicon dioxide layer, for example, that may be patterned using available patterning techniques, comprising wet and/or dry etching techniques. In one embodiment, a second dielectric material (that may be formed by utilizing a CVD process in some embodiments) may be formed on the second dielectric block 116, the nanowires 114 and on the substrate 100 and may be patterned to form a second spacer 120 (FIG. 1h). In one embodiment, the second spacer 120 may comprise a width 122 and a thickness 121, which may comprise about 20 nm and below in some embodiments. The second spacer 120 may be patterned by removing the second dielectric material from the nanowires 114 adjacent the second dielectric block 116, and from a top portion 123 of the second dielectric block 116 by utilizing a dry etching process, for example. In one embodiment, the dry etch may comprise an anisotropic etch, such as a reactive ion etch (RIE). The second spacer 120 may comprise a nitride and/or an oxide material in some embodiments. The second spacer 120 may comprise a material that is highly etch selective to the material comprising the second dielectric block 116. In one embodiment the second spacer 120 may comprise an orthogonal spacer 120. The second dielectric block 116 may be removed from the substrate 100 by utilizing a wet etch, for example, to form a second free standing spacer 124 (FIG. 1i). In one embodiment, the second free standing spacer 124 may comprise an orthogonal free standing spacer 124 that may comprise a width 127. Exposed portions 125 of the nanowires 114 adjacent the second free standing spacer 124 may be removed using any suitable etch (FIG. 1j). The second free standing spacer 124 may then be removed from the underlying nanowires 114 using any suitable etch process to form an ordered array of nanodots 126 disposed on the substrate 100 (FIG. 1k). In one embodiment, an individual nanodot 130 of the ordered array of nanodots 126 may comprise a first side 131 and a second side 132 (FIG. 1l). The individual nanodot 130 may comprise a first side length 133 and a second side length 135. At least one of the first and the second side lengths 133, 135 may comprise about 50 nm or below, in some embodiments. A pitch 128 between adjacent individual nanodots 130 (referring back to FIG. 1k) may comprise about 20 nm and below. In one embodiment, a thickness 137 of the individual nanodot 130 may comprise about 20 nm and below. The exact dimensions, geometries and pitches of the nanodots may vary depending upon the particular application, and can comprise any shape suitable for the application.
The width 111 of the first spacer 112 and the width 127 of the second spacer 124 may determine the lengths 133, 135 of the first side and the second side 131, 132 of an individual nanodot 130. In some embodiments, the first and second sides may be of the same length, and in other embodiments, they may be of different lengths to each other, depending upon the desired widths of the first and second spacers 112, 124 for the particular application. In another embodiment, the pitch could be made successively tighter between adjacent individual nanodots for a given array of nanodots by repeating the spacer lithography technique (FIGS. 1b-1e above), i.e., depositing a second spacer material on the spacers of FIG. 1e, etching this material to form a second set of spacers on the sidewalls of the original spacers (which are then removed), whose pitch is now halved with respect to the first spacer system. This can be repeated in both the x-direction (FIGS. 1c-1f) and the y-direction (FIGS. 1g-1i). In one embodiment, a spacer 136, which may comprise such materials as a dielectric material, for example, may be formed on individual nanodots 130 disposed on a substrate 100 (FIG. 1m), that may comprise a first pitch 128 between adjacent nanodots 130. The spacer 136 may comprise a thickness 137 that is less than a thickness 138 of the nanodot 130. The exposed portions of the nanodots 130 (those portions not covered by the spacer 136) may be removed using suitable anisotropic etching techniques, and then the spacer 136 may be removed from the underlying nanodots 130 (FIG. 1n). A second pitch 140 may be achieved between adjacent nanodots 130 that may be smaller than the first pitch 128. The process of utilizing spacer lithography may be repeated until a desired feature pitch is obtained. Thus, benefits of the present invention include taking advantage of a spacer lithography approach to first print nanowires at less than minimum lithographic pitch, and then using the same spacer lithographic approach to print structures orthogonal to the first spacers, and patterning to form nanodots. The use of spacer patterning enables the definition of nanodots at half the minimum pitch available with conventional lithographic techniques. Another advantage is the ability to place an array of nanodots in an ordered fashion at separations below those attainable by conventional lithographic techniques. Although the foregoing description has specified certain steps and materials that may be used in the method of the present invention, those skilled in the art will appreciate that many modifications and substitutions may be made. Accordingly, it is intended that all such modifications, alterations, substitutions and additions be considered to fall within the spirit and scope of the invention as defined by the appended claims. In addition, it is appreciated that certain aspects of microelectronic devices, such as memory related structures, are well known in the art. Therefore, it is appreciated that the Figures provided herein illustrate only portions of an exemplary microelectronic device that pertains to the practice of the present invention. Thus the present invention is not limited to the structures described herein.
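The pitch-halving effect of repeating the spacer technique can be estimated with simple arithmetic, as in the following sketch. The starting lithographic pitch is an illustrative number, not a value from the disclosure.

```python
# Back-of-the-envelope sketch of the pitch reduction described above:
# each repetition of the spacer technique roughly halves the feature
# pitch. The starting pitch is a hypothetical example.

def pitch_after(iterations, lithographic_pitch_nm):
    """Approximate pitch after repeating spacer patterning N times."""
    return lithographic_pitch_nm / (2 ** iterations)

litho_pitch = 80.0  # nm, an assumed minimum lithographic pitch
for n in range(4):
    print(f"{n} spacer iteration(s): ~{pitch_after(n, litho_pitch):.1f} nm pitch")
# A single iteration already defines features at half the minimum pitch
# available with the assumed lithography, consistent with the text above.
```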
The present disclosure relates generally to serial communication links and, more specifically, to events communicated on serial communication links and the timing of those events. The events may be communicated according to a prioritization process. |
CLAIMS What is claimed is: 1. A method of prioritizing events, comprising: delaying two or more events, each event delayed for at least a delay time corresponding to a uniform delay; preventing additional frames from being started on a serial communication link while any of the two or more events is being delayed; transmitting an event frame corresponding to one of the two or more events with a highest priority after its corresponding delay time; holding events with a priority lower than the event frame being transmitted until after a corresponding delay time and no higher priority events are pending; and repeating the transmitting the event frame corresponding to a highest priority event until event frames have been transmitted for all of the two or more events; wherein the event frame includes event identifier bits indicating a frame being transmitted is an event frame and event bits indicating which event of the two or more events is being transmitted in the event frame. 2. The method of claim 1, wherein the uniform delay is equal to a frame time. 3. The method of claim 1, wherein delaying the two or more events comprises shifting each event through a shift register for a number of shifts equal to a number of bits in the uniform delay. 4. The method of claim 1, wherein delaying the two or more events comprises counting a number of bits in the uniform delay. 5. The method of claim 1, further comprising inserting an error indication into an event frame corresponding to an event held past its delay time while an event frame corresponding to at least one higher priority event is transmitted. 6. The method of claim 1, further comprising: asserting one or more event indicators responsive to the two or more events; and responsive to the one or more asserted event indicators, holding at least one of the two or more events while the one or more event indicators are asserted. 7. The method of claim 6, wherein the holding at least one of the two or more events comprises one or more of stopping a shift register or pausing a counter. 8. The method of claim 6, further comprising asserting a no-qualified event indicator responsive to the one or more event indicators. 9. The method of claim 6, further comprising: de-asserting the one or more event indicators responsive to the two or more events; and responsive to the one or more de-asserted event indicators, delaying the at least one of the two or more events while the one or more event indicators are de-asserted. 10. The method of claim 1, further comprising asserting a qualified event indicator corresponding to a qualified event after its delay time and responsive to there being no asserted event indicators indicating higher priority events are pending. 11. The method of claim 10, further comprising encoding an event frame corresponding to the qualified event responsive to the qualified event indicator. 12.
A serial communication link transmitter, comprising: two or more delay circuits, each delay circuit configured for receiving an event occurrence corresponding to that delay circuit and delaying the event occurrence by a delay time corresponding to a frame time; and priority logic configured for: determining a priority order for two or more event occurrences; preventing additional frames from being started while any of the two or more delay circuits is delaying its corresponding event; transmitting an event frame corresponding to a highest priority event after its corresponding delay time; holding event occurrences with a priority lower than the highest priority event in their corresponding delay circuit until after its corresponding delay time and no higher priority events are pending; and repeating the transmitting the event frame corresponding to the highest priority event until event frames have been transmitted for all of the two or more event occurrences; wherein the event frame includes event identifier bits indicating a frame being transmitted is an event frame and event bits indicating which event of the two or more event occurrences is being transmitted in the event frame. 13. The serial communication link transmitter of claim 12, wherein each of the two or more delay circuits comprises a shift register for shifting the event occurrence by a number of shifts equal to a number of bits in the frame time. 14. The serial communication link transmitter of claim 12, wherein each of the two or more delay circuits comprises a counter for determining the delay time. 15. The serial communication link transmitter of claim 12, further comprising an interface configured for communication according to a protocol selected from the group consisting of Universal Asynchronous Receiver/Transmitter, Universal Synchronous Receiver/Transmitter, and Universal Synchronous/Asynchronous Receiver/Transmitter. 16. A serial communication link transmitter, comprising: priority logic comprising two or more priority modules corresponding to two or more event indicators, each priority module comprising: a shift register for shifting the event indicator by a number of shifts equal to a number of bits in a frame time; logic to indicate a first pending event if any bit in the shift register is asserted; logic for holding the shift register from shifting if there is another pending event in a higher priority module; and logic for indicating a first pending event is ready to transmit if the first pending event has reached the end of the shift register and there are no pending events in higher priority modules; transmission circuitry configured for sending an event frame, wherein the transmission circuitry is configured to include in the event frame, event bits indicating which event of the two or more event indicators is being transmitted and event identifier bits indicating the frame being transmitted is an event frame.
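The shift-register arrangement of claim 16 may be modeled behaviorally as in the following sketch. The register depth, the two-module configuration, and the event timing are illustrative assumptions; the input-latching behavior while a module is held is also a simplifying assumption of the model, not language from the claims.

```python
# Hypothetical bit-level model of a shift-register priority module:
# an event shifts through one slot per bit time; a module is held
# while a higher-priority module has a pending event.

class PriorityModule:
    def __init__(self, depth):
        self.bits = [0] * depth   # one slot per bit time of the frame
        self.queue = False        # latches an event that arrives while held

    def pending(self):
        return any(self.bits) or self.queue

    def ready(self):
        return self.bits[-1] == 1  # event has reached the end of the register

    def tick(self, event_in, hold):
        """Advance one bit time; 'hold' freezes shifting while a
        higher-priority module has a pending event."""
        if hold:
            self.queue = self.queue or bool(event_in)
            return False
        insert = 1 if (event_in or self.queue) else 0
        self.queue = False
        self.bits = [insert] + self.bits[:-1]
        return self.ready()

FRAME_BITS = 10  # assumed frame time, in bit times
high, low = PriorityModule(FRAME_BITS), PriorityModule(FRAME_BITS)
for t in range(25):
    if high.tick(event_in=(t == 0), hold=False):
        print(f"t={t}: high-priority event frame may start")
    if low.tick(event_in=(t == 0), hold=high.pending()):
        print(f"t={t}: low-priority event frame may start")
```

In this model, two simultaneous events serialize: the lower-priority event is held while the higher-priority module has a pending event, so its frame start is pushed out by one frame time, which is the behavior the claim describes.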
DEVICES AND METHODS FOR PRIORITIZING TRANSMISSION OF EVENTS ON SERIAL COMMUNICATION LINKS CROSS-REFERENCE TO RELATED APPLICATION This application claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Patent Application Serial No. 62/502,343, filed May 5, 2017, the disclosure of which is hereby incorporated herein in its entirety by this reference. TECHNICAL FIELD Embodiments of the present disclosure relate generally to serial communication links and, more specifically, to events communicated on serial communication links and the timing of those events. BACKGROUND In many embedded control systems, and other computing systems, movement of data between peripheral devices and a host, or between peripheral devices, may be a significant amount of data traffic on the various buses that may exist in such systems. Moreover, some of this data traffic may include information related to events that occur and timing of these events. In conventional inter-chip communication, one approach is to communicate such event information on dedicated lines signaling the events to manage the timing of the event communication. However, there is typically an extra cost for additional lines. The cost of adding lines might be high and even be prohibitive due to layout constraints. Another approach is to send the event information as soon as possible as the next communication packet on a serial communication link. However, this approach may lose important event details, for example, timing details about when an actual event occurred. Other deficiencies and limitations in these and other approaches may exist. There is a need for communication of events and event timing details on serial communication links to indicate relative timing of events between a master and one or more slaves. DISCLOSURE Some embodiments of the present disclosure relate, generally, to a method of prioritizing events. The method may include delaying two or more events, each event delayed for at least a delay time corresponding to a uniform delay; preventing additional frames from being started on a serial communication link while any of the two or more events is being delayed; transmitting an event frame corresponding to one of the two or more events with a highest priority after its corresponding delay time; holding events with a priority lower than the event frame being transmitted until after a corresponding delay time and no higher priority events are pending; and repeating the transmitting the event frame corresponding to a highest priority event until event frames have been transmitted for all of the two or more events. In one embodiment, the event frame includes event identifier bits indicating the frame being transmitted is an event frame and event bits indicating which event of the two or more events is being transmitted in the event frame. Some embodiments of the present disclosure relate, generally, to a serial communication link transmitter. The serial communication link transmitter may include two or more delay circuits and priority logic. Each delay circuit may be configured for receiving an event occurrence corresponding to that delay circuit and delaying the event occurrence by a delay time corresponding to a frame time.
The priority logic may be configured for: determining a priority order for two or more event occurrences; preventing additional frames from being started while any of the two or more delay circuits is delaying its corresponding event; transmitting an event frame corresponding to a highest priority event after its corresponding delay time; and holding event occurrences with a priority lower than the highest priority event in their corresponding delay circuit until after its corresponding delay time and no higher priority events are pending; and repeating the transmitting the event frame corresponding to a highest priority event until event frames have been transmitted for all of the two or more event occurrences. In one embodiment, the event frame includes event identifier bits indicating the frame being transmitted is an event frame and event bits indicating which event of two or more event occurrences is being transmitted in the event frame. Some embodiments relate, generally, to a serial communication link transmitter. The serial communication link transmitter may include priority logic and transmission circuitry. The priority logic may comprise two or more priority modules corresponding to two or more event indicators. Each priority module may include a shift register for shifting the event indicator by a number of shifts equal to a number of bits in a frame time; logic to indicate a pending event if any bit in the shift register is asserted; logic for holding the shift register from shifting if there is a pending event in a higher priority module; and logic for indicating the event is ready to transmit if the event has reached the end of the shift register and there are no pending events in higher priority modules. The transmission circuitry may be configured for sending an event frame, wherein the transmission circuitry is configured to include in the event frame, event bits indicating which event of the two or more event indicators is being transmitted and event identifier bits indicating the frame being transmitted is an event frame. BRIEF DESCRIPTION OF THE DRAWINGS Advantages of the embodiments of the disclosure will be apparent to those of ordinary skill in the art from the following detailed description and the accompanying drawings: FIG. 1A shows a block diagram of a transmitter and a receiver with a serial communication link, according to an embodiment of the disclosure. FIGS. 1B-1E show flowcharts showing processes for transmission of events over a serial communication link, according to embodiments of the disclosure. FIG. 2A shows a detailed timing diagram illustrating transmission of certain events over a serial communication link with a delay count included in the event transmission, according to embodiments of the disclosure. FIGS. 2B, 2C, and 2D show marked sections of FIG. 2A in an expanded view. FIG. 3A shows a detailed timing diagram illustrating transmission of certain events over a serial communication link with a predetermined delay for the event transmission, according to embodiments of the disclosure. FIGS. 3B and 3C show marked sections of FIG. 3A in an expanded view. FIG. 3D shows a detailed timing diagram illustrating transmission of certain events over a serial communication link with a predetermined delay for the event transmission, according to embodiments of the disclosure. FIGS. 3E and 3F show marked sections of FIG. 3D in an expanded view. FIGS.
4A-4C show frame level timing diagrams illustrating different priority event timings and some errors that may occur over a serial communication link, according to embodiments of the disclosure. FIG. 5 shows a logic diagram illustrating priority logic as an example for prioritizing events on a serial communication link, according to embodiments of the disclosure. FIGS. 6A-6C show frame level timing diagrams illustrating event timings for prioritized events over a serial communication link, according to embodiments of the disclosure. FIG. 7 shows a flowchart of a process for prioritizing events according to an embodiment of the disclosure. FIG. 8 shows a block diagram of a touch panel system including a system controller, a touch controller, and a display panel with serial communication links according to an embodiment of the disclosure. MODE(S) FOR CARRYING OUT THE INVENTION In the following detailed description, reference is made to the accompanying drawings, which form a part hereof, and in which are shown, by way of illustration, specific example embodiments in which the present disclosure may be practiced. These embodiments are described in sufficient detail to enable a person of ordinary skill in the art to practice the present disclosure. However, other embodiments may be utilized, and structural, material, and process changes may be made without departing from the scope of the disclosure. The illustrations presented herein are not meant to be actual views of any particular method, system, device, or structure, but are merely idealized representations that are employed to describe the embodiments of the present disclosure. The drawings presented herein are not necessarily drawn to scale. Similar structures or components in the various drawings may retain the same or similar numbering for the convenience of the reader; however, the similarity in numbering does not mean that the structures or components are necessarily identical in size, composition, configuration, or any other property. It will be readily understood that the components of the embodiments as generally described herein and illustrated in the drawings may be arranged and designed in a wide variety of different configurations. Thus, the following description of various embodiments is not intended to limit the scope of the present disclosure, but is merely representative of various embodiments. While the various aspects of the embodiments may be presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated. Furthermore, specific implementations shown and described are only examples and should not be construed as the only way to implement the present disclosure unless specified otherwise herein. Elements, circuits, and functions may be shown in block diagram form in order not to obscure the present disclosure in unnecessary detail. Conversely, specific implementations shown and described are exemplary only and should not be construed as the only way to implement the present disclosure unless specified otherwise herein. Additionally, block definitions and partitioning of logic between various blocks is exemplary of a specific implementation. It will be readily apparent to one of ordinary skill in the art that the present disclosure may be practiced by numerous other partitioning solutions.
For the most part, details concerning timing considerations and the like have been omitted where such details are not necessary to obtain a complete understanding of the present disclosure and are within the abilities of persons of ordinary skill in the relevant art.

Those of ordinary skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout this description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal for clarity of presentation and description. It will be understood by a person of ordinary skill in the art that the signal may represent a bus of signals, wherein the bus may have a variety of bit widths and the present disclosure may be implemented on any number of data signals including a single data signal.

The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general-purpose processor, a special-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor (may also be referred to herein as a host processor or simply a host) may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. A general-purpose computer including a processor is considered a special-purpose computer while the general-purpose computer is configured to execute computing instructions (e.g., software code) related to embodiments of the present disclosure.

Also, it is noted that the embodiments may be described in terms of a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe operational acts as a sequential process, many of these acts may be performed in another sequence, in parallel, or substantially concurrently. In addition, the order of the acts may be re-arranged. A process may correspond to a method, a thread, a function, a procedure, a subroutine, a subprogram, etc. Furthermore, the methods disclosed herein may be implemented in hardware, software, or both. If implemented in software, the functions may be stored or transmitted as one or more instructions or code on computer-readable media. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.

It should be understood that any reference to an element herein using a designation such as "first," "second," and so forth does not limit the quantity or order of those elements, unless such limitation is explicitly stated.
Rather, these designations may be used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. In addition, unless stated otherwise, a set of elements may comprise one or more elements.

In an effort to make details in figures clearer, certain marked sections of some figures may be shown in an expanded view in other figures. In some cases, section markings may obscure parts of a figure, but those parts will be clear in the expanded view. Everything shown in an expanded view should be considered part of the corresponding figure, even details that might be obscured in the corresponding figure by the section markings. Further, any discussion of a figure in this disclosure also applies to its expanded views, if any.

As used herein, the term "substantially" in reference to a given parameter, property, or condition means and includes to a degree that one of ordinary skill in the art would understand that the given parameter, property, or condition is met with a small degree of variance, such as, for example, within acceptable manufacturing tolerances. By way of example, depending on the particular parameter, property, or condition that is substantially met, the parameter, property, or condition may be at least 90% met, at least 95% met, or even at least 99% met.

As used herein, "serial communication link" means a communication link that transmits information as a serial group of bits. The protocol of the link includes a group of bits as an information payload, which may be of various sizes and may include other bits such as, for example, start bits, stop bits, parity bits, and address bits. The physical layer of the link may be a wired bus, such as, for example, RS-232, I2C, and SMBus. The physical layer of the link also may be wireless signals such as, for example, Infrared Data Association (IrDA) signals.

As used herein, the term "frame" defines a group of a predetermined number of bits transferred on a serial communication link. As one example, in serial communication links such as a Universal Asynchronous Receiver/Transmitter (UART), a Universal Synchronous Receiver/Transmitter (USRT), or a Universal Synchronous/Asynchronous Receiver/Transmitter (USART), a frame may be defined as 10 bits to include a start bit, an 8-bit data payload, a parity bit, and a stop bit. The frame for one of these serial communication protocols may also be of different lengths, such as, for example only, 8 bits to include a start bit, a 7-bit data payload, and a stop bit. As another example, an I2C serial communication protocol (or other protocols with multiple slave devices) may include longer frame sizes to allow inclusion of a slave address as well as a data payload.

Reference throughout this specification to "one embodiment," "an embodiment," or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present disclosure. Thus, the phrases "in one embodiment," "in an embodiment," and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
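To make the frame definition above concrete, the following is a minimal C sketch of packing a UART-style frame for transmission. The 8-bit payload, the single start and stop bits, the LSB-first shift order, and all names are assumptions made for this illustration; they are not the encoding required by any embodiment described herein.

```c
#include <stdint.h>
#include <stdio.h>

#define FRAME_BITS 10u

static uint16_t pack_frame(uint8_t payload)
{
    uint16_t frame = 0;                   /* bit 0 is transmitted first        */
    /* bit 0: start bit, driven low (0) -- already cleared                     */
    frame |= (uint16_t)payload << 1;      /* bits 1..8: data, LSB first        */
    frame |= (uint16_t)1u << 9;           /* bit 9: stop bit (line idles high) */
    return frame;
}

int main(void)
{
    uint16_t frame = pack_frame(0xA5u);
    for (unsigned i = 0; i < FRAME_BITS; i++)  /* one bit per link clock       */
        putchar(((frame >> i) & 1u) ? '1' : '0');
    putchar('\n');                        /* prints the 10 bit times in order  */
    return 0;
}
```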
Some embodiments described herein relate to techniques for providing a uniform latency between an occurrence of an event at a bus master and its reception at a slave. In some embodiments, the event is communicated on a serial bus and the indicator of the event arrives at a slave coupled to the serial bus. In some embodiments, the uniform latency may be a fixed delay already known between the master and the slave. In other embodiments, the uniform latency may be communicated between the master and slave with timing information included. Still other embodiments described herein provide prioritization of multiple events that may occur during any given frame.

In dedicated serial communication systems, there is sometimes a need for transmitting "side information" of certain occurrences (e.g., events) between regular data communication packages (the transmission of the side information is referred to, herein, as "event transmission"). The event transmission should not destroy the main data communication packages, but the event transmissions should still uniquely identify the time of the event. As an example, the communication link may be based on a UART, or its synchronous version USRT, and the communication may be a U(S)ART frame.

Further, if a system supports multiple such events, then the system, according to one embodiment of the disclosure, will prioritize if two or more events occur too frequently (e.g., close in time) to be transmitted in individual frames and still provide the correct timing information. The present disclosure describes systems, devices, and methods that prioritize these events so that the highest priority event is transmitted with correct timing even when a lower priority event occurs first but too close in time for its transmission to complete before the transmission of the higher priority event must start.

Even though the main purpose of a communication link may be to transfer a certain type of data, the transmitter may need to inform the receiver of certain events taking place at the transmitter. A non-limiting system example is a microcontroller (MCU) controlling multiple complex display drivers on a display, such as, for example, a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, etc. The display drivers might have complex circuitry for capacitive touch measurement requiring configuration and control by the MCU. Timing information (e.g., events) like horizontal synchronization (HSYNC) and vertical synchronization (VSYNC) might be necessary to time (e.g., synchronize) touch operations to the update rate of the display, for example, to compensate for the noise introduced by the display drivers.

Although embodiments of the disclosure may refer to "events," e.g., "event frame," "event insertion logic," "event recovery logic," the term "event" is not limited to an event-driven system and is intended to encompass side information, generally, including side information about the regular data that is transferred from a transmitter to a receiver.

FIG. 1A is a block diagram of a transmitter 120 and a receiver 140 with a serial communication link 130 according to an embodiment of the present disclosure. In one embodiment, the transmitter 120 and receiver 140 may be a master and a slave configured for synchronous communication over, for example, a serial peripheral interface. The transmitter 120 may include a processor 122, event insertion logic 124, and serial interface 126. The processor 122 may be configured to send regular data to the receiver 140 over the communication link 130.
The event insertion logic 124 may be configured to provide event information to the receiver 140 using the communication link 130. The event information may be related to events that are created at the transmitter 120 or, in another embodiment, may be event information provided to the transmitter 120 about events external to the transmitter 120. By way of non-limiting example, event information may include timing information, event type information, status information, etc. In various embodiments, the event insertion logic 124 may be configured to insert the event into a serial communication stream encoded at the serial interface 126 and transmitted on the communication link 130. The serial interface 126 and serial interface 146 may be configured to translate data into frames for transfer over the communication link 130, as well as recover data from transmitted frames. Some routine elements related to synchronous communication are not shown to simplify FIG. 1A, such as the clock (Ck) line.

On the receiver 140 side, the receiver 140 may include event recovery logic 142, a processor 144, and a serial interface 146. The event recovery logic 142 may be configured to recover event information according to the various embodiments described in this disclosure.

While the embodiments described with reference to FIG. 1A relate to synchronous communication, one of ordinary skill in the art would understand that the principles are applicable to asynchronous communication.

A general description of processes for transmission of event frames follows, with reference to FIGS. 1B to 1E.

FIG. 1B shows a flowchart of an event transmission with uniform delay based on a transferred delay value according to an embodiment of the disclosure. In operation 150, an event is received. The event may be generated at the transmitter or received by the transmitter from an external source. In operation 152, an event delay is determined, the event delay being the time between a predefined bit position of the present frame being transmitted when the event occurs and the occurrence of the event. In operation 154, the event frame corresponding to the event is generated and the event delay is inserted into an event delay field of the event frame. By way of non-limiting example, the delay value may be a clock count or a value from which a time or clock count is recoverable. In one embodiment, the delay value may indicate where in the present frame (i.e., the ongoing frame) the event occurred compared to a predefined point in the present frame (e.g., the start of the present frame, the end of the frame, etc.). If there was no ongoing transmission or the event frame was otherwise not delayed, then the delay value may be indicative of no delay or "0." In operation 156, the event frame having the delay value is sent over a serial communication link. In one embodiment, if there was an ongoing frame, then the event frame may be sent back-to-back with the ongoing frame.

FIG. 1C shows a flowchart of an event transmission with uniform delay based on a transferred delay value according to an embodiment of the disclosure. In operation 160, an event frame including an event delay is received over a serial communication link. In operation 162, the event frame is decoded to recover the event delay and an event indicator. In operation 164, the receiver waits a number of clock cycles corresponding to the event delay. In operation 166, a receiver-side event is asserted responsive to the event indicator after waiting the number of clock cycles.
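The transferred-delay flows of FIGS. 1B and 1C can be sketched in C as follows. The 2-bit event number, the 4-bit delay field, the event-frame marker bits, and the function names are assumptions chosen for illustration (loosely anticipating the EV and DL bits of FIG. 2A, discussed below); a real implementation would follow the field layout of the chosen protocol.

```c
#include <stdbool.h>
#include <stdint.h>

/* Transmitter side (FIG. 1B): build an event frame payload. 'bit_pos' is
 * how many bit times of the ongoing frame had elapsed when the event
 * occurred; 0 means the link was idle, i.e., no delay. */
static uint16_t make_event_frame(uint8_t ev_num, uint8_t bit_pos)
{
    uint16_t payload = 0;
    payload |= (uint16_t)(ev_num & 0x3u);        /* event number (EV bits)   */
    payload |= (uint16_t)(bit_pos & 0xFu) << 2;  /* delay field (DL bits)    */
    payload |= (uint16_t)0x7u << 6;              /* assumed marker bits that
                                                    identify an event frame  */
    return payload;   /* queued back-to-back, ahead of pending data frames   */
}

/* Receiver side (FIG. 1C): decode, then count the transferred number of
 * clocks from a predefined point (here, the last bit of the event frame)
 * before asserting the event, yielding a uniform total latency. */
static uint8_t rx_ev, rx_dly;
static bool    rx_counting;

static void on_event_frame(uint16_t payload)
{
    rx_ev       = (uint8_t)(payload & 0x3u);
    rx_dly      = (uint8_t)((payload >> 2) & 0xFu);
    rx_counting = true;
}

static void on_clock(void)               /* called once per link clock       */
{
    if (!rx_counting)
        return;
    if (rx_dly == 0u) {
        rx_counting = false;
        /* assert_ev_out(rx_ev);  hypothetical hook that raises EV_OUT       */
        (void)rx_ev;
    } else {
        rx_dly--;
    }
}
```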
FIG. 1D shows a flowchart of an event transmission with uniform delay according to an embodiment of the disclosure. In operation 170, an event is received. The event may be generated at a transmitter or received by the transmitter from an external source. In operation 172, the event is delayed according to a uniform delay. In one embodiment, the uniform delay may be the length of a frame. In operation 174, an event frame is generated that corresponds to the event. In one embodiment, the event frame may include an event indicator indicative of which event of a set of events the event frame corresponds to, and an event frame indicator indicating that the frame is, in fact, an event frame. In operation 176, the delayed event frame is transmitted on a serial communication link.

FIG. 1E shows a flowchart of an event transmission with uniform delay according to an embodiment of the disclosure. In operation 180, an event frame is received over a serial communication link. In operation 182, the event frame is decoded to recover an event indicator. In operation 184, a receiver-side event is immediately asserted responsive to the recovered event indicator.

One of ordinary skill in the art would understand that an event frame may comprise one or more fields. For example, a frame may have fields that comprise one or more bits, the one or more bits configurable to be indicative of the various information described in connection with the various embodiments of the present disclosure. One of ordinary skill in the art will recognize many permutations for the fields and the bits that comprise the fields.

FIG. 2A is a detailed timing diagram illustrating transmission of certain events over a serial communication link with a delay count included in the event transmission (e.g., a transferred count value), according to an embodiment of the disclosure. For the discussion of FIG. 2A, FIGS. 2B-2D show marked sections of FIG. 2A in an expanded view in an effort to make the details of FIG. 2A easier to view. In this embodiment, when an event occurs, the event insertion logic 124 generates an event frame and the event frame is sent back-to-back (i.e., immediately after) with a present frame for which transmission is ongoing. The event frame contains a delay field that indicates where (or when) in the present frame (i.e., the ongoing frame) the event occurred compared to a predefined point in the present frame (e.g., the start of the present frame). Thus, if there is no ongoing frame transmission, the event frame is sent immediately with the delay value 0. The event frame has the highest priority, so it will be sent before other pending data frames.

FIG. 2A shows transmission of events with uniform delay based on a transferred counter value for three different locations of the event relative to the present frame being transmitted. FIG. 2A shows, as an example, a Universal Synchronous Receiver and Transmitter (USRT) where data is generated on the positive edge of the clock, the frame on the TxD line consists of a start bit and 9 data bits with no stop bits, and the signal line being high represents an IDLE state. The clock and TxD signals are shown as the top two waveforms.

FIG. 2A shows three event transmission examples - signal group 220, signal group 240, and signal group 260 - where events occur at delay times 9, 4, and 0 relative to the present frame shown on the TxD signal. The reference for time (DLY=0) shown as signal group 260 corresponds to the clock cycle prior to the start bit.
By way of non-limiting example, the event may be a physical input pin or a software-generated event. In the case of a physical pin, the physical pins may be configured to generate an event on a rising edge, a falling edge, or a toggling signal value.

In an embodiment where the system supports multiple events, event insertion logic may be configured to encode an event number in the event frame (shown, for example, as EV0 and EV1 in FIG. 2A) together with a delay value (shown, for example, as DL0, DL1, DL2, and DL3 in FIG. 2A) in the event frame. In various embodiments, event numbers may be associated (at the transmitter and/or receiver side) with event sources, event sub-modules, types of events, predefined information associated with the foregoing, and more. With multiple events, several events may occur within (e.g., during) the same present frame. Depending on the application, this may be solved by event insertion logic 124 configured to: (1) in one embodiment, prioritize one event and discard the other event(s); (2) in another embodiment, prioritize one event and send the remaining event frame(s) back-to-back but with an ERROR bit (not shown in FIG. 2A) to indicate incorrect timing; or (3) in yet another embodiment, send the events as two events, but reserve one of the delay values for an error signature.

On the receiver side, the event recovery logic of the receiver may be configured to decode an event frame to find the event delay value (e.g., in clocks defined by rising and falling edges of Ck). The receiver then counts a number of clocks based on the event delay value from a predefined point in the received event frame and asserts the correct event line at the end of the delay. In the example shown in FIG. 2A, the event recovery logic counts from the last bit of the event frame. As shown, the events from the transmitter side are then recovered on the receiver side with a fixed latency of 21 clocks. In various embodiments, the fixed latency may be implemented with registers, and the size of the fixed latency may depend, at least in part, on the number of registers in the data path, on the points from which the counters start counting on the transmitter and receiver sides, etc.

For signal group 220, a delay 222 between the start of present frame 212 and the occurrence of event 224 at the transmitter (EV_IN) is 9 clocks. The event 224 gets transmitted out as an event frame 214 when the present frame 212 completes. At the end of the event frame 214, the receiver begins counting the number of clocks encoded in the event frame 214 as DL0-DL3 (9 clocks in this example) to create a delay 232. In one embodiment, the receiver may also use the event numbers EV0 and EV1 to determine the source of the event 224 for this event frame 214. When the count terminates, the receiver asserts a receive side event 234 (EV_OUT), which is a uniform latency of 21 clocks relative to when the event 224 originally occurred at the transmitter.

For signal group 240, a delay 242 between the start of the present frame 212 and the occurrence of event 244 at the transmitter (EV_IN) is 4 clocks. The event 244 gets transmitted out as an event frame 214 when the present frame 212 completes. At the end of the event frame 214, the receiver begins counting the number of clocks encoded in the event frame 214 on DL0-DL3 (4 in this case) to create a delay 252. The receiver may also use the event numbers EV0 and EV1 to determine the source of the event 244 for the event frame 214.
When the count terminates, the receiver asserts a receive side event 254 (EV_OUT), which is a uniform latency of 21 clocks relative to when the event 244 originally occurred at the transmitter.

For signal group 260, a delay 262 between the start of the present frame 212 and the occurrence of event 264 at the transmitter (EV_IN) is 0 clocks. The event 264 gets transmitted out as an event frame 214 when the present frame 212 completes. At the end of the event frame 214, the receiver begins counting the number of clocks encoded in the event frame 214 on DL0-DL3 (0 in this case) to create a delay 272. The receiver may also use event numbers EV0 and EV1 to determine the source of the event 264 for this event frame 214. When the count terminates, the receiver asserts a receive side event 274 (EV_OUT), which is a uniform latency of 21 clocks relative to when the event 264 originally occurred at the transmitter.

One of ordinary skill in the art would understand that the delay bits and event number bits may be positioned differently relative to each other than described with reference to FIG. 2A. Moreover, other embodiments may use a different number of bits or different encodings for defining the event delay than described with reference to FIG. 2A. Also, other embodiments may use a different number of bits (including none) or different encodings for defining the source of the event than described with reference to FIG. 2A. As shown with the bits in the event frame after the EV bits, the rest of the event frame (which may be positioned at various locations within the frame) includes a set of unique data bits that identify this frame as an event frame.

Thus, while FIG. 2A shows a specific serial communication link protocol according to an embodiment of the disclosure, other embodiments may include other protocols with various data sizes and various control bits, and a packet may include multiple physical frames, not only a single frame as shown in FIG. 2A.

FIG. 3A shows a detailed timing diagram illustrating transmission of certain events over a serial communication link with a predetermined delay for the event frame transmission. In this embodiment, when an event occurs, the event may be stored at the transmitter for a time corresponding to the length of a frame/packet. For the discussion of FIG. 3A, FIGS. 3B and 3C show marked sections of FIG. 3A in an expanded view in an effort to make the details of FIG. 3A easier to view. As non-limiting examples, the event storage may be accomplished by putting the event into a shift register of this size, or by storing the event in a register bit while a counter counts down to zero. When the potentially ongoing frame is transmitted, the event frame has the highest priority such that no new frame is started until the delay times out and the event frame is generated. This delay ensures a uniform latency from the event occurrence at the transmitter until the event frame is received at the receiver.

FIG. 3A shows, as an example, a Universal Synchronous Receiver and Transmitter (USRT) where data is generated on the positive edge of the clock, the frame on the TxD line includes a start bit and 9 data bits with no stop bits, and the signal line being high represents an IDLE state.

FIG. 3A shows examples where events occur at time 0 (signal group 320) and at time 4 (signal group 360). The reference for time (DLY=0) corresponds to the clock cycle prior to the start bit.
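Purely as an illustration of the register-and-counter storage alternative just described, the following C sketch holds an event in a flag while a counter counts down one frame time, and blocks new data frames in the meantime. The 10-clock frame length and all names are assumptions of this sketch.

```c
#include <stdbool.h>
#include <stdint.h>

#define FRAME_CLOCKS 10u      /* assumed frame/packet length in link clocks */

static bool    ev_pending;    /* the "register bit" holding the event       */
static uint8_t ev_countdown;  /* the counter counting down to zero          */

static void on_event(void)    /* event occurrence at the transmitter        */
{
    ev_pending   = true;
    ev_countdown = FRAME_CLOCKS;     /* one full frame of delay             */
}

static bool tx_may_start_data_frame(void)
{
    return !ev_pending;       /* no new frame starts while an event waits   */
}

static void on_clock(void)    /* called once per link clock                 */
{
    if (ev_pending && --ev_countdown == 0u) {
        ev_pending = false;
        /* send_event_frame();  hypothetical encoder; any ongoing frame has
         * completed by now, so the event frame leaves with uniform latency */
    }
}
```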
By way of non-limiting example, an event may be a physical input pin or a software-generated event. In the case of a physical input pin, the physical pins may be configured to generate the event on a rising edge, a falling edge, or a toggling signal value.

FIG. 3A also shows that for a synchronous communication protocol, the transmitter and receiver may operate at a different (higher) frequency than the communication link, and an event may need to be synchronized to the communication module. In an asynchronous communication protocol, a delay may be fixed from an event until an event frame is generated at a resolution of a system clock.

As with the embodiment discussed with reference to FIG. 2A, if a system using an embodiment of event transmission shown in FIG. 3A (or FIG. 3D) supports multiple events, an event number may be encoded in an event frame. With multiple events, the multiple events may occur within the same present frame. Depending on the application, this may be solved by insertion logic configured to: (1) in one embodiment, prioritize one event frame and discard the other event frames; or (2) in another embodiment, prioritize one event frame and send the remaining event frames back-to-back but with an ERROR bit (shown as 'ERR' in FIG. 3A) to indicate that the latency may not be uniform.

On the receiver side, the receiver asserts its event output immediately (or after a fixed delay) when an event frame is received.

In the example in FIG. 3A, as shown, an event from the transmitter side is then regenerated on the receiver side with a uniform latency of 23 clocks relative to when the event actually occurred. The size of the uniform latency depends on the number of registers in the data path, on the points from which the counters start counting on the transmitter and receiver sides, etc.

For signal group 320, the delay is shown as 0 clocks. Event 322 is delayed for an event delay frame 324 (e.g., 10 cycles) and is then transmitted out as an event frame 326 from the transmitter. The delay in the transmitter ensures that any ongoing frame being transmitted when the event 322 occurs is completed before (or at the same time as) the end of the delay. At the end of the event frame 326, the receiver asserts a receive side event 328 (EV_OUT), which is a uniform latency of 23 clocks relative to when the event 322 originally occurred at the transmitter.

FIG. 3D shows another detailed timing diagram illustrating transmission of certain events over a serial communication link with a predetermined delay for the event frame transmission. For the discussion of FIG. 3D, FIGS. 3E and 3F show marked sections of FIG. 3D in an expanded view in an effort to make the details of FIG. 3D easier to view. For signal group 360, the delay is shown as 4 clocks. Event 362 is delayed for an event delay frame 364 (e.g., 10 cycles) and is then transmitted out as an event frame 366 from the transmitter. The delay in the transmitter ensures that any ongoing frame being transmitted when the event 362 occurs is completed before (or at the same time as) the end of the delay. Note that in this example, an idle period 368 occurs on the serial communication link between the frame being transmitted when the event 362 occurred and the event frame 366. At the end of the event frame 366, the receiver asserts a receive side event 368 (EV_OUT), which is a uniform latency of 23 clocks relative to when the event 362 originally occurred at the transmitter.
As discussed above with reference to FIG. 2A, the event frame may include bits, shown in event frame 366 as EV0-EV2, to indicate the source of the event 362, and a set of unique data bits that identify this frame as an event frame. While FIGS. 3A and 3D illustrate a specific serial communication link protocol, other embodiments may include other protocols with various data sizes, numbers and types of control bits, and different encoding schemes, and the packet may include multiple physical frames, not only a single frame as shown in FIGS. 3A and 3D.

In its various embodiments, the present disclosure enables communication of timing for events on serial communication links with no need for lines in addition to those required by the communication system. The event is perceived by a slave with a uniform latency from its occurrence at the transmitter side, regardless of where it happens in the communication package.

A description of prioritization of event transmission follows with reference to FIGS. 4A-4C, FIG. 5, and FIGS. 6A-6C, according to embodiments of the disclosure. In one embodiment, prioritization logic may be part of event insertion logic, such as event insertion logic 124 (FIG. 1A).

FIGS. 4A-4C show frame level timing diagrams illustrating different priority event timings and some errors that may occur over a serial communication link - errors addressed by embodiments of the present disclosure. The solid line arrows represent higher priority events and the dashed line arrows represent lower priority events. Similarly, the boxes with solid lines illustrate the communication frames containing information about the higher priority event and the boxes with dashed lines illustrate the communication frames containing information about the lower priority event.

In FIG. 4A, a lower priority event 410 occurs first. However, because communication frame 414 has a certain duration, sending the lower priority event 410 immediately when it occurs results in loss of the higher priority event 412 (at least it cannot have correct timing), because the higher priority event 412 has to wait until the lower priority event frame 416 completes. Thus, although the lower priority event frame 416 may be sent with correct timing, the higher priority event frame 418 is sent with an error indication to indicate that there may be an inconsistency between the latency of when the higher priority event 412 occurred and when the higher priority event frame 418 is received.

In FIGS. 4B and 4C, the higher priority event and the lower priority event occur very close in time such that jitter in sampling of the events may produce a random order for when the event frames are transmitted. In FIG. 4B, a higher priority event 422 won, so a higher priority event frame 426 is sent out first and at a proper time. Thus, a lower priority event 420 waits for the next frame and is sent with an error indication to indicate that there may be an inconsistency between the latency of when the lower priority event 420 occurred and when a lower priority event frame 428 is received. In FIG. 4C, a lower priority event 430 won, so a lower priority event frame 436 is sent out first and at a proper time.
Thus, a higher priority event 432 waits for the next frame and is sent with an error indication to indicate that there may be an inconsistency between the latency of when the higher priority event 432 occurred and when a higher priority event frame 438 is received.

One method to correct for these inconsistencies in prioritizing events is to use transmit hardware (which may include software implementations) that creates a uniform latency by inserting a delay from the event equal to the frame length before sending the event frame, such as, for example, by using the embodiment illustrated in FIG. 3A. The delay time may be used to force a correct priority.

In FIG. 3A, the events are all delayed for one frame. When the delay duration times out, the transmitter checks if there are pending events of higher priority in the pipeline. If there are, the transmitter prioritizes the higher priority events by not starting any new frames until the higher priority event is ready to be transmitted, sends the high priority event, and then sends any events with lower priority afterward, in the proper priority order, with an error flag set.

FIG. 5 shows a logic diagram illustrating priority logic 510 as an example for prioritizing events on a serial communication link according to embodiments of the disclosure. FIG. 5 illustrates one possible implementation of this behavior; many other circuits and logical implementations may be used to accomplish the prioritization. The delay of events is illustrated in FIG. 5 as a delay circuit 520, here implemented as shift registers. Event submodules (EV0, EV1, ... EVn) are delineated in FIG. 5 by horizontal dashed lines. As stated above, counters may also be used for a delay. In this example, EV0 has the highest priority, while increasing event numbers have decreasing priority. The output from a NOR-gate for each event submodule indicates that there is no pending event at that event submodule. A vertical AND-line 512 qualifies each event by indicating at each submodule that there are no pending events with higher priority. If an event has reached the last stage of the delay circuit 520, then the qualified event output is asserted if there are no pending events with higher priority. If there are pending events with higher priority, the delay circuit 520 for that event submodule is stopped until all events with higher priority have been transmitted. Thus, regardless of the arrival time of events EV0 through EVn, within a given data frame (or sequence of event frames) the highest priority event will be sent first followed by any pending lower priority events in the proper priority order.
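The FIG. 5 structure can also be modeled in software for clarity. The following C sketch (with assumed register widths and names, and EV0 as the highest priority) mimics one shift register per event submodule, the pending indication when any stage is set, the hold applied by higher-priority submodules, and the qualified-event output at the last stage; an event arriving during a hold is simply dropped in this simplified model.

```c
#include <stdbool.h>
#include <stdint.h>

#define N_EVENTS   3u        /* assumed number of event submodules          */
#define FRAME_BITS 10u       /* shifts per event equal bits per frame time  */

static uint16_t shreg[N_EVENTS]; /* stage 0 = input, stage FRAME_BITS-1 = output */

static bool pending(unsigned ev)          /* inverse of the NOR-gate output */
{
    return shreg[ev] != 0u;
}

static bool higher_pending(unsigned ev)   /* the vertical AND-line qualifier */
{
    for (unsigned i = 0; i < ev; i++)
        if (pending(i))
            return true;
    return false;
}

static void on_clock(const bool ev_in[N_EVENTS])
{
    for (unsigned ev = 0; ev < N_EVENTS; ev++) {
        if (higher_pending(ev))
            continue;                     /* hold: the shift register stops  */
        if ((shreg[ev] >> (FRAME_BITS - 1u)) & 1u) {
            /* qualified event: transmit the event frame for 'ev' here      */
        }
        shreg[ev] = (uint16_t)(((shreg[ev] << 1) | (ev_in[ev] ? 1u : 0u))
                               & ((1u << FRAME_BITS) - 1u));
    }
}
```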
FIGS. 6A-6C are frame level timing diagrams illustrating event timings for prioritized events over a serial communication link. The solid lines represent higher priority events and the dashed lines represent lower priority events. Similarly, the boxes with solid lines illustrate the communication frames containing information about the higher priority event and the boxes with dashed lines illustrate the communication frames containing information about the lower priority event.

The timing diagrams in FIGS. 6A-6C show results that would be achieved for a higher priority event relative to a lower priority event using the uniform delay prioritization discussed with reference to FIG. 5. In FIGS. 6A-6C, EXT indicates when the event actually occurs as input to the priority logic and INT indicates when the event would be available for transmission (i.e., at the end of the shift register creating a uniform one-frame delay).

FIG. 6A illustrates a situation where a higher priority event 602 (solid lines) occurs after a lower priority event 606 (dashed lines) - but too close in time to be sent in different frames. A delayed higher priority event 604 (INT) wins in the priority logic and will be sent out as a higher priority event frame 610 when the uniform delay is over, and will thus be received with a uniform latency relative to when the higher priority event 602 occurred at the transmitter. The lower priority event 606 lost in the priority logic and will thus be sent as a lower priority event frame 612 after the higher priority event frame 610 with an error indication to indicate that there may be an inconsistency between the latency of when the lower priority event 606 occurred and when the lower priority event frame 612 is received.

FIG. 6B illustrates a situation where the two events (higher priority event 622 and lower priority event 626) happen simultaneously or almost simultaneously. Timing jitter in the sampling process determines which event is first registered. However, when the first occurring event has been delayed for one frame, the priority logic checks for pending higher priority events. This way, the higher priority event 622 always wins.

As shown in FIG. 6B, Option A indicates that at the time of sampling the events, the higher priority event 622 was sampled first and a higher priority event frame 630 for the higher priority event 622 was sent out after the uniform delay, and will thus be received with a uniform latency relative to when the higher priority event 622 occurred at the transmitter. The lower priority event 626 lost in the priority logic and will thus be sent as a lower priority event frame 632 after the higher priority event frame 630 with an error indication to indicate that there may be an inconsistency between the latency of when the lower priority event 626 occurred and when the lower priority event frame 632 is received.

Also as shown in FIG. 6B, Option B indicates that at the time of sampling the events, the lower priority event 626 was sampled first. However, because the higher priority event 622 wins in the priority logic, a higher priority event frame 634 for the higher priority event 622 was sent out after the uniform delay, and will thus be received with a uniform latency relative to when the higher priority event 622 occurred at the transmitter. The lower priority event 626 lost in the priority logic and will thus be sent as a lower priority event frame 636 after the higher priority event frame 634 with an error indication to indicate that there may be an inconsistency between the latency of when the lower priority event 626 occurred and when the lower priority event frame 636 is received.

FIG. 6C illustrates yet another issue. If a higher priority event 642 occurs one frame later relative to a lower priority event 646, there may be a situation where jitter in the sampling time will determine the transmission order. In Option A, the lower priority event 646 is sampled first and its event frame 650 may be transmitted without latency error, and a higher priority event frame 652 for the higher priority event 642 is transmitted next without latency error. In Option B, the higher priority event 642 is sampled first and its event frame 654 may be transmitted without latency error.
However, a lower priority event frame 656 will be sent after the higher priority event frame 654 with an error indication to indicate that there may be an inconsistency between the latency of when the lower priority event 646 occurred and when the lower priority event frame 656 is received. In either option, the higher priority event frame 652 or 654 is transmitted without a latency error.

In many cases the two (or more) events happen randomly and this behavior is acceptable; the higher priority event frame is always transmitted at the right time. However, in some systems where a fixed relation exists between the two events, it may be unacceptable that the lower priority event frame switches between being before or after the higher priority event frame. For such cases, an option may be included to turn off the prioritization that looks for other events in the pipe and to just prioritize among those that are ready to be transmitted.

FIG. 7 shows a flowchart of a process for prioritizing events according to an embodiment of the disclosure. In operation 700, two or more events are delayed for a delay time. In one embodiment, the delay time may correspond to a frame time. In operation 702, additional frames are prevented from being started on a serial communication link while any of the two or more events is being delayed. In operation 704, a first event frame is transmitted that corresponds to one of the two or more events. In one embodiment, the first event has the highest priority after its corresponding delay time. In operation 706, event occurrences that have a priority lower than the first event frame are held until after their corresponding delay time and no higher priority events are pending. In operation 708, the event frame(s) that correspond(s) to a highest priority event is repeatedly transmitted until event frames have been transmitted for all of the two or more events.
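As a software analogue of the FIG. 7 flow, the following C sketch (all names and the 10-clock frame time are assumptions) delays each event for a frame time (operation 700), gates new data frames while any event is waiting (operation 702), and drains pending events highest priority first at each frame boundary (operations 704 through 708).

```c
#include <stdbool.h>
#include <stdint.h>

#define N_EVENTS     3u       /* EV0 is the highest priority                */
#define FRAME_CLOCKS 10u      /* assumed uniform delay (one frame time)     */

static bool    waiting[N_EVENTS];    /* operation 700: events being delayed */
static uint8_t countdown[N_EVENTS];

static void on_event(unsigned ev)    /* an event occurrence arrives         */
{
    waiting[ev]   = true;
    countdown[ev] = FRAME_CLOCKS;
}

static bool may_start_data_frame(void)   /* operation 702 gate              */
{
    for (unsigned i = 0; i < N_EVENTS; i++)
        if (waiting[i])
            return false;
    return true;
}

static void on_clock(void)           /* count down each pending delay       */
{
    for (unsigned i = 0; i < N_EVENTS; i++)
        if (waiting[i] && countdown[i] > 0u)
            countdown[i]--;
}

static void on_frame_boundary(void)  /* operations 704-708                  */
{
    for (unsigned ev = 0; ev < N_EVENTS; ev++) {  /* highest priority first */
        if (waiting[ev] && countdown[ev] == 0u) {
            /* transmit_event_frame(ev);  hypothetical encoder; a lower
             * priority event held past this point would carry the error
             * indication described above.                                  */
            waiting[ev] = false;
            break;                   /* one event frame per frame time      */
        }
    }
}
```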
FIG. 8 is a block diagram of a touch panel system including a system controller, a touch controller, and a display panel with serial communication links according to an embodiment of the present disclosure.

In this system, a serial bus is used to distribute Vertical Synchronization (VS) and Horizontal Synchronization (HS) event information from the display controller 816 to all of the touch acquisition sub-systems via a single control line 817, which is also used for data/control transfers. As an example, the control line 817 might be the Master TxD of a USART channel, which is used to send configuration data to the source driver ICs 834, and get Analog-to-Digital Converter (ADC) samples representing touch data in return on the RxD line. The system Printed Circuit Board (PCB) 810 may be, for instance, in a mobile phone, tablet, or any other system with a display that supports touch sensing. As an example, the system PCB 810 may be connected to the TFT LCD panel 830 using a flexible printed circuit board 826, and the source driver ICs 834 may be mounted on the glass using silver epoxy. For some touch solutions, a touch acquisition front-end 838 may be split and implemented on the display source driver ICs 834. The measurements may then be transferred back to the touch controller 818 where the Central Processing Unit (CPU) 812 (and possibly a Digital Signal Processing (DSP) unit) performs a post-processing operation to filter noise and determine, for example, whether someone touches the screen with one or more fingers, or if some other touch event occurred.

The display controller 816, display source driver 836, and gate driver circuitry 832 in this embodiment may be totally unaware of the touch system. The display controller 816 controls the screen updating via control line 817. However, for the touch system it may be important to accurately synchronize its acquisition 845 to the display update 817 to avoid the noise from the source driver ICs 834 and gate driver ICs 832. The touch controller 818 IC receives the HS/VS signals (i.e., events) from the display controller 816, and the event insertion logic 820 prioritizes these events. In one embodiment, the event insertion logic 820 may implement embodiments of a delay circuit and priority logic, such as the delay circuit 520 and priority logic 510 (FIG. 5). In one embodiment, the event insertion logic 820 may implement one or more of the processes for uniform delay described with reference to FIGS. 1A-3F. Embodiments of the touch controller 818 IC may then translate the events into "frames" or "packages" before inserting these frames into a serial stream. In various embodiments, the serializer 824 (Tx) will send event frames 821 before data frames 823 (i.e., data frames have the lowest priority). The touch acquisition front-end 838 in the source driver ICs 834 will de-serialize (deserializer 840) the serial stream and recover/decode the HS/VS events (event recovery 842) before passing them to the timing and control acquisition 844 stage.

It should be noted that FIG. 8 is discussed as one example of a system in accordance with embodiments of the present disclosure. One of ordinary skill in the art will appreciate that there are many other systems where there is a need to transmit timing details or other event details as additional "side information" relative to the regular data transmitted on serial communication links, and such systems may use embodiments of the present disclosure.

As a non-limiting example of event prioritization with touch displays, VSYNC is sent for each new image update, while there are several HSYNCs between each VSYNC, representing new lines within the same image. The prioritizing logic ensures that even if VSYNC and HSYNC appear simultaneously (which is the case in some systems), VSYNC, being assigned the higher priority, will win. However, if an HSYNC appears one USART frame before the VSYNC, jitter in the sampling time will determine whether both are transmitted without errors (if the lower priority HSYNC is detected first) or the higher-priority VSYNC is transferred first, with an error identification on the later occurring event. In either option A or option B, the higher priority VSYNC is always transmitted at the right time, but it might be confusing if an HSYNC belonging to the previous image comes after the VSYNC (and hence a new image) - even if it has an error identification. The timing between HSYNC and VSYNC is application specific - in one particular display it will always behave the same way, and the same behavior should always be expected in the system.
Hence, there may be applications where the prioritizing (at least for these two events) should be turned off - in the sense that the lower-priority event (HSYNC) is discarded if it arrives while the higher-priority VSYNC is being transmitted.

Many of the functional units described in this specification may be described as modules, threads, or other segregations of programming code, in order to more particularly emphasize their implementation independence. Modules may be at least partially implemented in hardware, in one form or another. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable state machines, programmable logic devices, or the like.

Modules may also be implemented using software, stored on a physical storage device (e.g., a computer-readable storage medium), in memory, or a combination thereof for execution by various types of processors.

An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as a thread, object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.

Indeed, a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several storage or memory devices. Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. Where a module or portions of a module are implemented in software, the software portions are stored on one or more physical devices, which are referred to herein as computer-readable media.

In some embodiments, the software portions are stored in a non-transitory state such that the software portions, or representations thereof, persist in the same physical location for a period of time. Additionally, in some embodiments, the software portions are stored on one or more non-transitory storage devices, which include hardware elements capable of storing non-transitory states and/or signals representative of the software portions, even though other portions of the non-transitory storage devices may be capable of altering and/or transmitting the signals. One example of a non-transitory storage device includes a read-only memory (ROM), which may store signals and/or states representative of the software portions for a period of time. However, the ability to store the signals and/or states is not diminished by further functionality of transmitting signals that are the same as or representative of the stored signals and/or states.
For example, a processor may access the ROM to obtain signals that are representative of the stored signals and/or states in order to execute the corresponding software instructions.

While the present disclosure has been described herein with respect to certain illustrated embodiments, those of ordinary skill in the art will recognize and appreciate that the present invention is not so limited. Rather, many additions, deletions, and modifications to the illustrated and described embodiments may be made without departing from the scope of the invention as hereinafter claimed along with their legal equivalents. In addition, features from one embodiment may be combined with features of another embodiment while still being encompassed within the scope of the invention as contemplated by the inventors.

Additional non-limiting embodiments of the disclosure include:

Embodiment 1: A touch panel system, comprising: a display system; a touch acquisition front-end; and a touch controller operatively coupled to the touch acquisition front-end by a serial communication link, wherein the touch controller includes a communication interface configured to: receive two or more event occurrences and delay the two or more event occurrences by a delay time corresponding to a uniform delay; determine a priority order for the two or more event occurrences; prevent additional frames from being started while any of two or more delay circuits is delaying its corresponding event; transmit an event frame corresponding to a highest priority event after its corresponding delay time; hold event occurrences with a priority lower than the highest priority event in their corresponding delay circuit until after its corresponding delay time and no higher priority events are pending; and repeat the transmitting of the event frame corresponding to the highest priority event until event frames have been transmitted for all of the two or more event occurrences; wherein the event frame includes event identifier bits indicating the frame being transmitted is an event frame and event bits indicating which event of the two or more event occurrences is being transmitted in the event frame.

Embodiment 2: The touch panel system according to Embodiment 1, wherein the communication interface comprises two or more delay circuits, each delay circuit configured for receiving an event occurrence corresponding to that delay circuit and delaying the event occurrence by a delay time corresponding to a frame time.

Embodiment 3: The touch panel system according to any of Embodiments 1 or 2, wherein each of the two or more delay circuits comprises a shift register for shifting the event occurrence by a number of shifts equal to a number of bits in the frame time.

Embodiment 4: The touch panel system according to any of Embodiments 1 through 3, wherein each of the two or more delay circuits comprises a counter for determining the delay time.

Embodiment 5: The touch panel system according to any of Embodiments 1 through 4, wherein the communication interface is configured for communication according to a protocol selected from the group consisting of Universal Asynchronous Receiver/Transmitter, Universal Synchronous Receiver/Transmitter, and Universal Synchronous/Asynchronous Receiver/Transmitter.

Embodiment 6: The touch panel system according to any of Embodiments 1 through 5, wherein the uniform delay is equal to a frame time.
Embodiment 7: The touch panel system according to any of Embodiments 1 through 6, wherein the communication interface is configured to insert an error indication into an event frame corresponding to an event held past its delay time while an event frame corresponding to at least one higher priority event is transmitted.

Embodiment 8: The touch panel system according to any of Embodiments 1 through 7, wherein the touch acquisition front-end includes a noise cancellation circuit configured to filter noise responsive to event frames received over the communication link.

Embodiment 9: The touch panel system according to any of Embodiments 1 through 8, wherein the noise cancellation circuit is configured to filter noise responsive to, at least in part, timing information related to events corresponding to received event frames and types of events corresponding to the received event frames.

Embodiment 10: The touch panel system according to any of Embodiments 1 through 9, wherein the types of events comprise initiation of display update signals at the display system.

Embodiment 11: The touch panel system according to any of Embodiments 1 through 10, wherein the display update signals comprise horizontal synchronization and vertical synchronization signals.

Embodiment 12: The touch panel system according to any of Embodiments 1 through 11, wherein the communication interface is configured to discard an event of the two or more events.

Embodiment 13: The touch panel system according to any of Embodiments 1 through 12, wherein the discarded event corresponds to a horizontal synchronization event that is received during a vertical synchronization event.

Embodiment 14: The touch panel system according to any of Embodiments 1 through 13, wherein a higher priority event corresponds to a vertical synchronization event and a lower priority event corresponds to a horizontal synchronization event.

Embodiment 15: A serial communication link receiver, comprising: a serial interface configured to receive one or more event frames and recover event information from the one or more event frames responsive to event recovery logic, the event recovery logic comprising: determining the timing of an event corresponding to one of the one or more event frames responsive to the difference between an actual time of receipt of the event frame and a known uniform delay.

Embodiment 16: A serial communication link receiver, comprising: a serial interface configured to receive one or more event frames and recover event information from the one or more event frames responsive to event recovery logic, the event recovery logic comprising: determining the timing of an event corresponding to one of the one or more event frames responsive to the difference between an actual time of receipt of the event frame and a known uniform delay; and validating the determined timing information responsive to an error indicator encoded in the one or more event frames.
Embodiment 17: A method of prioritizing events, comprising: delaying two or more events, each event delayed for at least a delay time corresponding to a uniform delay; preventing additional frames from being started on a serial communication link while any of the two or more events is being delayed; transmitting an event frame corresponding to one of the two or more events with a highest priority after its corresponding delay time; holding events with a priority lower than the event frame being transmitted until after a corresponding delay time and no higher priority events are pending; and repeating the transmitting of the event frame corresponding to a highest priority event until event frames have been transmitted for all of the two or more events; wherein the event frame includes event identifier bits indicating the frame being transmitted is an event frame and event bits indicating which event of the two or more events is being transmitted in the event frame.

Embodiment 18: The method according to Embodiment 17, wherein the uniform delay is equal to a frame time.

Embodiment 19: The method according to any of Embodiments 17 or 18, wherein delaying the two or more events comprises shifting each event through a shift register for a number of shifts equal to a number of bits in the uniform delay.

Embodiment 20: The method according to any of Embodiments 17 through 19, wherein delaying the two or more events comprises counting a number of bits in the uniform delay.

Embodiment 21: The method according to any of Embodiments 17 through 20, further comprising inserting an error indication into an event frame corresponding to an event held past its delay time while an event frame corresponding to at least one higher priority event is transmitted.

Embodiment 22: The method according to any of Embodiments 17 through 21, further comprising: asserting one or more event indicators responsive to the two or more events; and responsive to the one or more asserted event indicators, holding at least one of the two or more events while the one or more event indicators is asserted.

Embodiment 23: The method according to any of Embodiments 17 through 22, wherein holding the at least one of the two or more events comprises one or more of stopping a shift register or pausing a counter.

Embodiment 24: The method according to any of Embodiments 17 through 23, further comprising asserting a no-qualified event indicator responsive to the one or more event indicators.

Embodiment 25: The method according to any of Embodiments 17 through 24, further comprising: de-asserting the one or more event indicators responsive to the two or more events; and responsive to the one or more de-asserted event indicators, delaying the at least one of the two or more events while the one or more event indicators are de-asserted.

Embodiment 26: The method according to any of Embodiments 17 through 25, further comprising asserting a qualified event indicator corresponding to a qualified event after its delay time and responsive to there being no asserted event indicators indicating higher priority events are pending.
Embodiment 27: The method according to any of Embodiments 17 through 26, further comprising encoding an event frame corresponding to the qualified event responsive to the qualified event indicator.
Embodiment 28: A serial communication link transmitter, comprising: two or more delay circuits, each delay circuit configured for receiving an event occurrence corresponding to that delay circuit and delaying the event occurrence by a delay time corresponding to a frame time; and priority logic configured for: determining a priority order for two or more event occurrences; preventing additional frames from being started while any of the two or more delay circuits is delaying its corresponding event; transmitting an event frame corresponding to a highest priority event after its corresponding delay time; holding event occurrences with a priority lower than the highest priority event in their corresponding delay circuit until after its corresponding delay time and no higher priority events are pending; and repeating the transmitting of the event frame corresponding to the highest priority event until event frames have been transmitted for all of the two or more event occurrences; wherein the event frame includes event identifier bits indicating the frame being transmitted is an event frame and event bits indicating which event of the two or more event occurrences is being transmitted in the event frame.
Embodiment 29: The serial communication link transmitter of Embodiment 28, wherein each of the two or more delay circuits comprises a shift register for shifting the event occurrence by a number of shifts equal to a number of bits in the frame time.
Embodiment 30: The serial communication link transmitter according to any of Embodiments 28 or 29, wherein each of the two or more delay circuits comprises a counter for determining the delay time.
Embodiment 31: The serial communication link transmitter according to any of Embodiments 28 through 30, further comprising an interface configured for communication according to a protocol selected from the group consisting of Universal Asynchronous Receiver/Transmitter, Universal Synchronous Receiver/Transmitter, and Universal Synchronous/Asynchronous Receiver/Transmitter.
Embodiment 32: A serial communication link transmitter, comprising: priority logic comprising two or more priority modules corresponding to two or more event indicators, each priority module comprising: a shift register for shifting the event indicator by a number of shifts equal to a number of bits in a frame time; logic to indicate a pending event if any bit in the shift register is asserted; logic for holding the shift register from shifting if there is a pending event in a higher priority module; and logic for indicating the event is ready to transmit if the event has reached the end of the shift register and there are no pending events in higher priority modules; transmission circuitry configured for sending an event frame, wherein the transmission circuitry is configured to include in the event frame, event bits indicating which event of the two or more event indicators is being transmitted and event identifier bits indicating the frame being transmitted is an event frame.
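By way of illustration, the shift-register priority modules of Embodiment 32 can be modeled in software. The following C sketch is a hypothetical approximation, not the claimed hardware: FRAME_BITS, prio_module and tick() are invented names, one 32-bit word stands in for each module's shift register, and frame transmission is reduced to a printout. A module's event shifts for a full frame time, holds whenever a higher priority module has any bit pending, and is "ready" only when it reaches the end of the register with nothing above it pending.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define FRAME_BITS 12  /* hypothetical frame time, in bit periods */

    typedef struct {
        uint32_t shift;    /* bit 0 = newest sample, bit FRAME_BITS-1 = oldest */
    } prio_module;

    static bool pending(const prio_module *m) {
        return (m->shift & ((1u << FRAME_BITS) - 1u)) != 0;  /* any bit asserted */
    }

    /* One bit-clock tick over a bank of modules, index 0 = highest priority.
     * Returns the index of a module whose event is ready to transmit, or -1. */
    static int tick(prio_module *mods, int n, const bool *event_in) {
        int ready = -1;
        bool higher_pending = false;
        for (int i = 0; i < n; i++) {
            bool at_end = (mods[i].shift >> (FRAME_BITS - 1)) & 1u;
            if (at_end && !higher_pending && ready < 0)
                ready = i;  /* delayed a full frame time, nothing above pending */
            else if (!higher_pending)
                mods[i].shift = (mods[i].shift << 1) | (event_in[i] ? 1u : 0u);
            else
                mods[i].shift |= (event_in[i] ? 1u : 0u);  /* held: capture, no shift */
            higher_pending = higher_pending || pending(&mods[i]);
        }
        if (ready >= 0)
            mods[ready].shift &= ~(1u << (FRAME_BITS - 1));  /* consume qualified event */
        return ready;
    }

    int main(void) {
        prio_module mods[2] = { {0}, {0} };   /* module 0 = higher priority */
        for (int t = 0; t < 2 * FRAME_BITS + 2; t++) {
            bool in[2] = { t == 3, t == 0 };  /* low-priority event arrives first */
            int r = tick(mods, 2, in);
            if (r >= 0)
                printf("t=%d: transmit event frame for module %d\n", t, r);
        }
        return 0;  /* module 0 transmits first despite arriving later */
    }

Run as written, the later-arriving high-priority event is transmitted first while the low-priority event is held in its register, which is the ordering behavior the priority logic of Embodiments 28 and 32 describes.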
An online system monitoring technique quickly and efficiently identifies failures or other system errors arising during operation of an intermediate network node, such as a network switch. The technique comprises Keep Alive Buffer packets/cells ("KABs") that exercise data and control paths extending from every ingress port to every egress port in the switch. By exercising the data and control paths, the KABs enable testing for component failures, missing modules or other types of failure, so that any such failure is detected as soon as possible, thereby preventing data flow backup or other performance degradation in the switch.
What is claimed is:
1. A system adapted to quickly and efficiently identify failures or errors arising during operation of an intermediate network node, the system comprising:
at least one source input/output module including an ingress port to receive data into the intermediate network node;
at least one destination input/output module including an egress port to transmit the data from the intermediate network node;
data and control paths extending through the intermediate network node between the ingress and egress ports;
a test packet generator to generate a test packet and inject the test packet into the data path at the source input/output module, the test packet configured to exercise the data and control paths in the intermediate network node so as to detect failures or errors arising during operation of the intermediate network node;
a control and status block (CSB) including a first register configured as a timer that determines when to generate and send the test packet and including a second register to record the source input/output module from which the test packet is sent; and
a test packet receiver adapted to extract the test packet from the data path at the destination input/output module.
2. The system of claim 1 wherein the source input/output module comprises a source input/output card (IOC) and the destination input/output module comprises a destination IOC, the system further comprising:
at least one switch fabric card (SFC) module having a switch fabric configured to switch the data received at the ingress port of the source IOC to the egress port of the destination IOC.
3. The system of claim 1 wherein the test packet generator comprises formatting logic coupled to registers of the CSB, the CSB containing a plurality of registers that hold information relating to the generation of the test packet.
4. The system of claim 3 wherein the first register configured as a timer is one of the plurality of registers.
5. The system of claim 4 wherein the formatting logic is configured to reset the first timer, format the test packet using the information stored in the registers and inject the test packet into the data path.
6. The system of claim 3 wherein the test packet receiver comprises extraction logic coupled to the registers of the CSB.
7. The system of claim 6 wherein the extraction logic is configured to intercept the test packet from the data path, examine contents of the extracted test packet and update registers of the CSB.
8. The system of claim 1 wherein the intermediate network node comprises a network switch.
9. The system of claim 1 wherein the test packet includes a local route header, a transport header, and a cyclic redundancy check.
10. The system of claim 9 wherein the local route header includes a virtual lane (VL) field containing a number that specifies the VL over which the test packet travels.
11. The system of claim 10 wherein the test packet further comprises a packet header having an input link field that contains a value of an input (source) link port originating the test packet and a switch lifetime limit (SLL) field that contains a timestamp indicating when the test packet was created.
12. The system of claim 11 wherein the test packet further comprises a cell header having a bit that identifies the test packet and an input switch port field containing a value of an input (source) switch port originating the test packet.
13. The system of claim 12 wherein the values contained in the input switch port and input link fields are used to identify a source of the test packet.
14.
The system of claim 4 wherein the test packet is manifested as a FastKAB and a SlowKAB.
15. The system of claim 14 wherein the FastKAB is generated and launched automatically by hardware such that it is regularly flowing throughout the switch to provide a periodic check of the intermediate network node.
16. The system of claim 15 wherein the first register configured as a timer is a FastKABTiming register having a registered parameter that functions as the timer, the FastKABTiming register specifying a FastCheckPeriod that indicates how often receipt of the FastKAB is checked.
17. The system of claim 16 wherein another register of the CSB is a FastKABEnable register that specifies the destination in the intermediate network node to which the FastKAB is sent and the source in the node from which to expect the FastKAB to be received.
18. The system of claim 17 wherein yet another register of the CSB is a FastKABControl register having a FastVL field that holds a value specifying a virtual lane (VL) over which the FastKAB is transmitted, a FastEnaGenerate field that enables auto generation of the FastKAB, a FastEnaIntKABs field that enables interrupt upon detection of a missing FastKAB and a FastCreditLimit field that specifies a credit limit applied to the destination.
19. The system of claim 14 wherein the SlowKAB is initiated by software to enable further diagnosis of a potential failure in the intermediate network node.
20. The system of claim 19 wherein another register of the CSB is a SlowKABControl register having a SlowInject field that triggers generation of a SlowKAB, a SlowVL field specifying a destination virtual lane (VL), a SlowDestSwfPort field specifying a destination port and a SlowDestQuill field specifying a destination link layer device.
21. The system of claim 14 wherein another register of the CSB is an AllKABResults register that provides readable state as to the number of test packets received on particular virtual lanes (VLs).
22. The system of claim 21 wherein yet another register of the CSB is a FastKABResults register that provides a summary of the AllKABResults register directed to the FastKAB.
23. The system of claim 22 wherein still yet another register of the CSB is a SlowKABControl register used to control generation and launching of the SlowKAB.
24. A method for efficiently identifying failures or errors arising during operation of a network node, the method comprising the steps of:
generating a test cell at a test cell generator of a source line card in the network node;
injecting the test cell into a data path of the network node at the source line card;
scheduling the test cell for switching at a switch fabric of the network node;
switching the test cell through the switch fabric to a destination line card specified in the test cell;
extracting the test cell from the data path at the destination line card in the network node;
recording reception of the test cell at a test cell receiver of the destination line card;
updating results registers of a control and status block (CSB) data structure to record the source line card from which the test cell is sent; and
notifying a system processor of a failure if the test cell is not received at the test cell receiver.
25. The method of claim 24 wherein the step of notifying comprises the step of automatically notifying the system processor of the failure without constant checking of status of reception of the test cell.
26.
The method of claim 24 wherein the step of generating comprises the step of generating the test cell at a frequency specified by a programmable GeneratePeriod in the CSB data structure.
27. The method of claim 26 wherein the step of recording comprises the step of checking reception of the test cell at a frequency specified by a programmable CheckPeriod in the CSB data structure.
28. The method of claim 27 wherein the programmable CheckPeriod allows the system processor to determine the frequency at which results are checked, thereby limiting a maximum response time to detecting the failure.
29. The method of claim 24 further comprising the steps of:
loading an ingress timestamp into the test cell;
generating an egress timestamp; and
calculating a difference between the ingress and egress timestamps to record a highest latency test cell.
30. The method of claim 29 wherein the step of calculating comprises determining whether there is a bottleneck in the network node.
31. The method of claim 24 wherein the step of generating comprises generating the test cell using parameters stored in appropriate registers of the CSB data structure.
32. The method of claim 24 wherein the step of scheduling comprises sending a request for switching to an arbiter of the network node and, in response to the arbiter granting the request, sending the test cell to the switch fabric.
33. The method of claim 32 wherein the step of scheduling further comprises exercising the arbiter through scheduling of the test cell for switching at the switch fabric.
34. Apparatus for efficiently identifying failures or errors arising during operation of an intermediate network node, the apparatus comprising:
means for receiving data at an ingress port of the intermediate network node;
means for transmitting the data from an egress port of the intermediate network node;
means for providing data and control paths that extend through the intermediate network node between the ingress and egress ports;
means for exercising the data and control paths in the intermediate network node using a test cell to detect failures or errors arising during operation of the intermediate network node;
means for holding information related to generation of the test cell;
means for determining when to generate and send the test cell based on a timer; and
means for recording an identity of a source line card from which the test cell is sent by updating results registers of a control and status block.
35. A computer readable medium containing executable program instructions for efficiently identifying failures or errors arising during operation of an intermediate network node, the executable program instructions comprising program instructions for:
receiving data at an ingress port of the intermediate network node;
transmitting the data from an egress port of the intermediate network node;
providing data and control paths that extend through the intermediate network node between the ingress and egress ports;
generating a test cell based on information contained in registers of a control and status block (CSB) and in response to a timer;
exercising the data and control paths in the intermediate network node using the test cell to detect failures or errors arising during operation of the intermediate network node; and
updating results registers of the CSB to record source ports from which other test cells are sent.
FIELD OF THE INVENTION
The present invention relates to communications networks and, more specifically, to a technique for efficiently detecting failures or other system errors in an intermediate network node of a communications network.
BACKGROUND OF THE INVENTION
Communication in a computer network involves the exchange of data between two or more entities interconnected by communication links. These entities are typically software programs executing on computer platforms, such as end nodes and intermediate network nodes. An example of an intermediate network node is a router or switch that interconnects the communication links to enable transmission of data between the end nodes, such as servers having processor, memory and input/output (I/O) storage resources.
Communication software executing on the end nodes correlates and manages data communication with other end nodes. The nodes typically communicate by exchanging discrete frames or packets of data according to predefined protocols. In this context, a protocol consists of a set of rules defining how the nodes interact with each other. In addition, network software executing on the intermediate nodes allows expansion of communication to other end nodes. Collectively, these hardware and software components comprise a communications network, and their interconnections are defined by an underlying architecture.
The InfiniBand Architecture (IBA) is an I/O specification that defines a point-to-point, "switched fabric" technology used to, among other things, increase the aggregate data rate between processor and storage resources of a server. The IBA is described in the InfiniBand(TM) Architecture Specification Volume 1, Release 1.0.a, by the InfiniBand Trade Association, Jun. 19, 2001, which specification is hereby incorporated by reference as though fully set forth herein. Broadly stated, the switched fabric technology may be embodied in a network switch configured to receive data traffic (IBA packets) from one or more input ports and forward that traffic over one or more output ports to an IBA communications network. A switch fabric of the network switch may interconnect a plurality of modules having input (ingress) and output (egress) ports that provide, e.g., Fibre Channel or Gigabit Ethernet link connections to the network.
Some network switches include fault tolerant features that enable single error (fault) detection and correction. These switches are typically fully redundant such that there is no single point of failure. A failure is defined as an unpredictable event that arises in the switch. The architecture of the switch may account for congestion that leads to dropping of packets; this is not typically considered a failure. Higher-level protocols executing on the switch in various parts of the network may take a long time to respond to failures detected by those protocols. This latency may result in increased traffic loss and congestion, along with other problems. The present invention is directed, in part, to detecting failures or errors as soon as possible in the switch.
In a fully redundant network switch system, any single fault disables only the module on which the fault occurs. Other modules in the switch may experience performance, but not functional, loss. Although the redundant network switch is single-fault tolerant, multiple simultaneous faults can still "cripple" the switch. To maintain a fault tolerant system, any single fault must be detected and repaired as soon as possible to avoid a multiple fault situation.
The present invention is further directed to providing an assist that detects when there may be an actual error (fault) in the network switch so that the fault can be corrected, thereby reducing the possibility of multiple faults occurring at substantially the same time.
SUMMARY OF THE INVENTION
The present invention overcomes the disadvantages of the prior art by providing an online system monitoring technique that quickly and efficiently identifies failures or other system errors arising during operation of an intermediate network node, such as a network switch. The technique comprises Keep Alive Buffer packets/cells ("KABs") that exercise data and control paths extending from every ingress port to every egress port in the switch. By exercising the data and control paths, the KABs enable testing for component failures, missing modules or other types of failure, so that any such failure is detected as soon as possible, thereby preventing data flow backup or other performance degradation in the switch.
According to the invention, the KABs are manifested in two forms: FastKABs and SlowKABs. A FastKAB is a minimum size packet that is generated by a KAB generator on an ingress path of the switch. FastKABs are preferably generated and "launched" automatically by switch hardware such that they are constantly flowing throughout the switch to provide a periodic check of the switch. SlowKABs, on the other hand, are initiated by software (executed by a processor) to enable further diagnosis of a potential failure in the switch. For example, a SlowKAB may be generated in response to a FastKAB failure, insertion of a new module within the switch, a module that is non-responsive to processor access or any other event that requires generation of such a processor-initiated diagnostic tool. A SlowKAB can be injected into the switch by software at any time.
Broadly stated, each KAB is injected into and traverses the data path between the ingress and egress ports at line rate, similar to a packet that is received at, switched through and forwarded from the switch. The KAB is injected into the data path at a low frequency that essentially "hides" the KAB behind the overhead of a link protocol and does not generally interfere with normal operating traffic. When traversing the data path, the KAB checks the ingress buffering and queuing system, the request and grant control paths, the serial links and transceivers, the switch fabric operation, the egress buffering and queuing system, and the scheduling functions of the switch.
In the illustrative embodiment, the mere existence (reception) of the KABs, i.e., whether they traversed the data path of the switch, is recorded at the egress port. If KABs are not periodically received at an egress port, an indication is provided that there may be a malfunction in the switch. The malfunction indication may not reflect an actual error (fault), but rather could reflect congestion in the switch. Non-reception of KABs as a result of congestion may indicate that there is excessive traffic destined to the port that is missing the KABs. In this context, the invention provides a low-level diagnostic that monitors the internal performance of the switch.
The KABs may also cooperate with any fault tolerant elements of the switch to enable failover operations that allow the switch to continue functioning in a manner that is transparent to high-level application software endpoints.
To that end, the KABs may function as an assist to the fault tolerant elements to detect when there may be an actual error (fault). In addition, the KABs may be used in the initial design and debug of the switch, as well as in manufacturing test, diagnostics and performance measurement. Use of the KABs obviates the need for external network equipment attached to the physical switch platform to test the internal components and functions of the switch.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and further advantages of the invention may be better understood by referring to the following description in conjunction with the accompanying drawings in which like reference numerals indicate identical or functionally similar elements:
FIG. 1 is a schematic block diagram of a communications network that may be advantageously used with the present invention;
FIG. 2 is a schematic block diagram of a network switch having a plurality of input/output card (IOC) modules coupled to a switch fabric card (SFC) module;
FIG. 3 is a schematic block diagram of an IOC module that may be advantageously used with the present invention;
FIG. 4 is a schematic block diagram of a Quad Infiniband Link Layer (QUILL) that may be advantageously used with the present invention;
FIG. 5 is a schematic block diagram of an ingress packet processor (IPP) that may be advantageously used with the present invention;
FIG. 6 is a schematic block diagram of an egress packet processor (EPP) that may be advantageously used with the present invention;
FIG. 7 is a schematic block diagram of the SFC module that may be advantageously used with the present invention;
FIG. 8 is a schematic block diagram illustrating the format of a Keep Alive Buffer (KAB) in accordance with the present invention;
FIG. 9 is a schematic block diagram of KAB generator and receiver logic in accordance with the present invention;
FIG. 10 is a schematic block diagram illustrating various KAB Control and Status Block registers in accordance with the present invention;
FIG. 11 is a schematic block diagram illustrating data path test coverage provided by the KABs within the network switch of FIG. 2; and
FIG. 12 is a flowchart illustrating a sequence of steps for implementing the online system monitoring technique in accordance with the present invention.
DETAILED DESCRIPTION OF AN ILLUSTRATIVE EMBODIMENT
FIG. 1 is a schematic block diagram of a communications network that may be advantageously used with the present invention. The communications network is illustratively embodied as an InfiniBand Architecture (IBA) system area network 100 comprising a plurality of end nodes, such as processor nodes 110, a storage subsystem node 120 and input/output (I/O) chassis nodes 130, interconnected by intermediate network nodes, such as an IBA router 150 and IBA switches 200. However, it will be understood by those skilled in the art that the inventive technique described herein may apply to other types of communications networks with end nodes and intermediate nodes that communicate by exchanging discrete packets of data according to predefined protocols. In this context, a protocol consists of a set of rules defining how the nodes interact/communicate with each other. For example, the nodes of communications network 100 communicate by exchanging IBA packets.
An IBA packet is an indivisible unit of IBA data transfer and routing consisting of one or more headers, a packet payload and one or two cyclic redundancy checks (CRCs).
Each processor node 110 includes at least one central processing unit, a memory and at least one host channel adapter coupled to a switch 200. The storage subsystem node 120 comprises a collection of storage devices organized in, e.g., a redundant array of inexpensive disks (RAID) configuration and connected to a switch 200 via a target channel adapter (TCA). Each I/O chassis node 130 comprises a collection of I/O modules adapted to provide connectivity to I/O devices and/or other computer networks, such as the Internet, coupled to, e.g., Fibre Channel and/or gigabit Ethernet links. Whereas the router 150 transports IBA packets between subnets of the network, the network switch 200 forwards those packets from one link to another of the same subnet.
Network Switch
FIG. 2 is a schematic block diagram of switch 200 including a plurality of line card or input/output card (IOC) modules 300 and switch fabric card (SFC) modules 700. An example of a network switch that may be advantageously used with the present invention is the Director Switch available from InfiniSwitch Corporation, Westborough, Mass. The network switch 200 illustratively includes eight (8) IOC modules that connect the switch to the IBA network 100 and two (2) SFC modules 700. Each SFC contains a switch control processor (SCP 720) and a switch fabric 750 organized as a crossbar switch to interconnect data paths between the IOC modules 300 of the switch. Each SFC module also contains a central clock source 710 that distributes synchronous clock signals over radial clock lines 210 throughout the switch for use by logic on the modules. However, it will be apparent to those skilled in the art that other clock distribution methods, such as asynchronous clocking, may be used in connection with the inventive technique described herein.
Both SFC modules 700 are functional and used during normal operation of the switch. The SFC modules and their co-resident system processors (SCPs) cooperate in a redundant arrangement to provide full connectivity and control for the switch in the event of a failure of either module. To that end, the SCP 720 on each SFC module communicates with its redundant SCP 720 over paths 220 to ensure the "healthiness" of each SFC module 700. In the event of a failure, the surviving SFC module assumes switching responsibilities to provide continuous, yet degraded, operation of the switch. Such continuous operation includes remapping of the data paths through the switch fabric 750, along with possible changing of the time-base clocking source, from the failed SFC module to the surviving SFC module.
Although eight IOC modules are described herein, the configuration of the switch may be scaled to accommodate thirty-two (32) IOCs. Each IOC module 300 illustratively includes eight (8) 1* IBA ports 310, wherein each port accommodates 2.0 gigabits per second (Gbps) of data. Specifically, 2.5 Gbps of information are received by an ingress port 310 and transmitted by an egress port 310; notably, 2.0 Gbps of the information are raw data with the remainder comprising encoding overhead. Therefore, 16 Gbps of data traffic flow are passed through ingress IOCs, forwarded to the SFC module 700 and switched to egress IOCs.
Such large amounts of traffic are not feasibly transported over parallel buses of a backplane. Accordingly, the switch 200 employs serializer/deserializer (SERDES 280) devices to limit the number of physical wires constituting a backplane 250 of the switch. At the interface between the IOC modules 300 and the backplane, these SERDES devices convert parallel data to serial data for transmission over high bandwidth serial links of the backplane 250 to the SFC module 700. SERDES devices located at the interface between the SFC module and backplane re-convert the serial data to parallel data for processing on the module. Serial data transported throughout the switch is converted to parallel data on each module to allow use of, e.g., field programmable gate array (FPGA) devices that are configured to operate with parallel data.
Specifically, each SCP 720 is coupled to each IOC 300 in the switch over a 781.25 megabit per second (Mbps) serial link 230. Each SCP 720 further communicates with its redundant SCP counterpart over two 10 Mbps Ethernet links 220. Data links 270 couple each SFC 700 to each IOC 300, wherein each data link 270 illustratively represents a bundle of four (4) 3.125 gigabit per second (Gbps) serial data links. Request/flow control signals flow over 3.125 Gbps control links 260 between each IOC 300 and each SFC 700. That is, requests for arbitration are passed over these serial control links 260 by the IOCs to the SFCs, and grants are returned by the SFCs to the IOCs over the links 260. In addition, flow control information provided by output queues of the IOCs to input queues of the IOCs flows over the serial links 260.
IOC Module
FIG. 3 is a schematic block diagram of an IOC module 300 that is partitioned into egress and ingress data paths for transmitting and receiving IBA packets to and from the network 100. The IOC may be embodied as one of many different "personality" line cards adapted to receive either 1* or 4* IBA links. In some cases, an IOC may accommodate a 4* and 1* link arrangement. Broadly stated, the ingress data path of each IOC comprises logic that "understands" the format of packet bits received over IBA network links, along with logic that examines headers of the packets and places those packets onto queues that are scheduled for servicing by the crossbar switch fabric. The egress data path of each IOC comprises logic configured to receive a stream of packet cells from the ingress path of an IOC and reassemble those cells into a packet for transmission from the switch. Notably, an ingress path on a particular IOC utilizes the switch fabric 750 to send information to its corresponding egress path on that IOC.
The IOC 300 comprises an egress packet processor (EPP 600) and an ingress packet processor (IPP 500) that cooperate with a plurality of Quad Infiniband Link Layer (QUILL) interface devices 400 to provide egress and ingress buffering and queuing systems for the egress and ingress data paths, respectively. A plurality of SERDES devices 280 is provided to translate data from parallel to serial (and serial to parallel) formats for transmission (and processing) throughout the switch. The QUILL devices 400 also form IBA link interfaces between IBA ports 310 of the IOC module 300 and the IBA network 100. There are illustratively two QUILL devices per IOC, wherein each QUILL 400 is configured to operate with a physical device interface, such as a TCA, that provides, e.g., Fibre Channel or gigabit Ethernet link connections to the switch.
However, native IBA links can also be coupled to the switch via each QUILL.
In the illustrative embodiment, each QUILL 400 forms either a 4*, 10 gigabit per second (Gbps) IBA link interface or four (4) 1*, 2.5 Gbps link interfaces that connect to either 4* or 1* IBA ports 310 of the switch 200. If the IOC operates in a 1* mode and data flows into the IOC over a 4* bundle of IBA links, the bundle is apportioned into four 1* data stream flows at an IBA physical device interface of the IOC. These four data streams over the four 1* links are interleaved for storage in the ingress buffering and queuing system. If the IOC operates in a 4* mode, i.e., where the 10 Gbps packet is not apportioned into four 2.5 Gbps streams, there is only a single stream presented to the buffering and queuing system and no interleaving is required.
FIG. 4 is a schematic block diagram of a QUILL 400 comprising a link function that provides IBA layer 2 operations for each data flow entering the IOC. The link function includes state machine and look-up engine logic that cooperate to provide a look-up operation on an IBA packet received at the IOC to identify a storage location within the ingress buffering and queuing system of the IOC. Each QUILL comprises a plurality of, e.g., four, link finite state machines (FSMs), each coupled to a link/port serviced by the QUILL. The link FSMs are connected to a buffering system 420 comprising a plurality of first in/first out (FIFO) buffers 425.
An ingress data path of the QUILL (i.e., the ingress QUILL) comprises a receiver (Rx) FSM 410 or "deframer" that performs error checking and CRC checking on IBA packet data received from the network. An ingress portion of the FIFO buffering system 420 is configured to store the packet data and forward that data to inputs 432 of a selector circuit 430. An output 434 of the selector is coupled to a double data rate (DDR) bus system 440 arranged to pass the data to the IPP 500. In addition, the Rx FSM 410 extracts headers from the received packets to perform lookup operations into a lookup memory 320, using destination local identifier (DLID) and protection key (PKEY) index values of the headers, in connection with a lookup table (LUT) engine FSM 450. As a result of the lookup operation, the DLID/PKEY index values are translated to a virtual output queue (VOQ) in the ingress buffering and queuing system. The ingress QUILL then forwards the received packet to the IPP on the ingress path.
FIG. 5 is a schematic block diagram of the IPP 500 comprising logic 510 configured to segment and store a received packet as fixed size, 64-byte cells. The 64-byte cell size reflects a credit used in flow control for the IB architecture. Each packet is characterized as a data flow based on the IBA input port 310 at which the packet is received at the IOC. The packet data flow is segmented into the fixed size cells and stored in an external ("off-chip") ingress cell storage memory 340. Those stored cells are then enqueued onto VOQs 535 of a queuing system 530. Specifically, the IPP maintains a free list of 64-byte buffers in free list/link list memory 345 that are linked together to form a linked list of cells of a packet context 520. A packet context is an internal (i.e., within the switch) representation of a flow of cells associated with a packet. Once the linked list is formed, a head of the list is linked onto a VOQ 535 for transmission over the switch fabric 750.
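The segmentation just described lends itself to a short software model. In the following C sketch, cell_store, next_cell, cell_alloc() and segment_packet() are invented names standing in for memories 340 and 345 and logic 510; the sizes and error handling are assumptions made for illustration, not the actual hardware implementation:

    #include <stdint.h>
    #include <string.h>

    #define CELL_BYTES 64
    #define NUM_CELLS  1024           /* hypothetical cell storage size */

    /* Cell storage and free/link list, loosely modeling memories 340 and 345. */
    static uint8_t cell_store[NUM_CELLS][CELL_BYTES];
    static int16_t next_cell[NUM_CELLS];   /* link list: next cell index, -1 = end */
    static int16_t free_head = 0;

    static void storage_init(void) {
        for (int i = 0; i < NUM_CELLS; i++)
            next_cell[i] = (int16_t)(i + 1 < NUM_CELLS ? i + 1 : -1);
        free_head = 0;
    }

    static int16_t cell_alloc(void) {
        int16_t c = free_head;
        if (c >= 0) { free_head = next_cell[c]; next_cell[c] = -1; }
        return c;                      /* -1 means no free buffers (credits) */
    }

    /* Segment a packet into a linked list of 64-byte cells (a "packet context").
     * Returns the head cell index, or -1 on allocation failure. */
    static int16_t segment_packet(const uint8_t *pkt, size_t len) {
        int16_t head = -1, tail = -1;
        for (size_t off = 0; off < len; off += CELL_BYTES) {
            int16_t c = cell_alloc();
            if (c < 0) return -1;      /* real logic would back-pressure instead */
            size_t n = len - off < CELL_BYTES ? len - off : CELL_BYTES;
            memset(cell_store[c], 0, CELL_BYTES);
            memcpy(cell_store[c], pkt + off, n);
            if (tail >= 0) next_cell[tail] = c; else head = c;
            tail = c;
        }
        return head;                   /* head is then linked onto a VOQ */
    }

The head index returned by segment_packet() corresponds to the head of the linked list that is linked onto a VOQ 535 for transmission over the switch fabric 750.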
The queuing system 530 of the IPP is flexible enough such that all buffers may be destined to a particular VOQ or apportioned among many VOQs.
Buffering and queuing on the ingress data path is based on a destination output virtual lane (VL) and output port. A VL is defined by the IB architecture as a basis for link level flow control. Each IB link preferably has 16 defined VLs; one VL is used for management traffic and the remaining 15 VLs are used for data traffic. The virtual lane concept has a significant role with respect to credits and congestion among switches in an IBA network. For example, an upstream node (such as another transmitting switch 200) within the IBA network 100 monitors buffer utilization in the switch. Within an IOC 300, credit information ("credits") flows from the IPP 500 back to each QUILL 400. In response, each QUILL generates a link packet using the credits received from the IPP and forwards that packet back to a transmitting node from which a previous packet was received at the switch. The credits contained in the link packet indicate to the transmitting node whether there are sufficient buffers (credits) for that node to send another packet.
The ingress queuing system 530 of the switch is organized into VOQs 535, which are dependent upon the VLs and output ports on each IOC in the switch. Thus, each VOQ is associated with an output VL and an output port. Notably, there is a distinction between an input VL and an output VL, and the IBA specification provides a translation process for translating an input VL to an output VL. In the illustrative embodiment, each IOC maintains VOQs for 64 output ports with 16 VLs per port, for a total of 1024 VOQs that are loaded by buffer manager logic 540 with cells destined for switching at the switch fabric. The VOQs are scheduled for servicing in the switch according to an IOC scheduling algorithm implemented by a scheduling function 1100. The scheduling function enables each IOC to arbitrate on a per VL/queue basis for access to the switch fabric 750 in order to transfer data.
Although the IOC includes output queues, the architecture of the switch is primarily directed to an input buffering and queuing system. It is desirable to keep the output queues as shallow as possible. Flow control in the switch is configured to convey flow control information from output ports back to input ports of the switch. That is, information is fed back from each output IOC (each output VL on each output port) to the ingress path of each IOC to affect arbitration and the manner in which cells are forwarded through the switch.
FIG. 6 is a schematic block diagram of the EPP 600 comprising logic configured to receive and process a stream of cells switched by the switch fabric 750. The EPP resides on the egress data path of each IOC and comprises one output queue for each output VL for each output port on the IOC. In the illustrative embodiment, there are eight output ports with 16 output VLs per port for a total of 128 output queues on each egress path of the IOC. The stream of cells is stored in selected buffers of cell storage memory 620 until the cells are linked in a particular context for transmission from the switch over an egress link. As cells are received at the IOC from a switch port of the fabric, up to eight (8) contexts (one from each IOC) may be controlled by the EPP 600.
A packet context manager 610 manages reassembly of cells into a packet context 520 using cell storage memory 620 and free list/link list memory 630, as described above for the IPP.
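The queue counts above (64 ports x 16 VLs = 1024 ingress VOQs; 8 ports x 16 VLs = 128 egress output queues) reduce to simple index arithmetic. A minimal C sketch, with voq_index() and outq_index() as illustrative helper names:

    #include <assert.h>
    #include <stdint.h>

    #define SWITCH_PORTS 64   /* output ports visible to each ingress IOC */
    #define NUM_VLS      16   /* virtual lanes per port */
    #define NUM_VOQS     (SWITCH_PORTS * NUM_VLS)   /* 1024 ingress VOQs */
    #define IOC_PORTS    8
    #define NUM_OUTQS    (IOC_PORTS * NUM_VLS)      /* 128 egress output queues */

    /* Map an (output port, output VL) pair to an ingress VOQ index. */
    static inline uint16_t voq_index(uint8_t out_port, uint8_t out_vl) {
        assert(out_port < SWITCH_PORTS && out_vl < NUM_VLS);
        return (uint16_t)(out_port * NUM_VLS + out_vl);
    }

    /* Map a local (egress port, VL) pair to an EPP output queue index. */
    static inline uint8_t outq_index(uint8_t port, uint8_t vl) {
        assert(port < IOC_PORTS && vl < NUM_VLS);
        return (uint8_t)(port * NUM_VLS + vl);
    }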
The cells of packets are fully stored in the cell storage memory 620, where they are retrieved in accordance with a VL scheduler 640 configured to perform a scheduling function using a VL arbitration table 642 (as defined by the IBA specification) and head information located on output VL queues 650 of each port. The head information pertains to packets stored in cell storage memory 620 for transmission over the egress links. Using the predetermined VL arbitration table 642, the VL scheduler 640 selects a packet for transmission over the egress link.
The selected packet is removed from the output queue 650 and transferred from the EPP 600 to the QUILL 400. Referring again to FIG. 4, a packet context is received over a DDR bus system 460 from the EPP 600 and forwarded over an egress path of the QUILL (i.e., the egress QUILL). The packet context flows over the egress path through an egress portion of the FIFO buffering system 420 to a transmitter (Tx) FSM 410 or "framer". From there, the packet is forwarded over egress links of the switch.
SFC Module
FIG. 7 is a schematic block diagram of the SFC module 700 comprising a clock source 710 and a switch fabric 750. The switch fabric 750 interfaces to the IOC modules 300, a flow control and arbiter (FLARB) device 760 and various SERDES devices 282, 284 (generally shown at 280). The switch fabric 750 preferably comprises two 10*10 cell alignment and switch engine (CASE 752) crossbar devices coupled to non-integrated receive (SERDES Rx 282) and transmit (SERDES Tx 284) devices that translate data from serial to parallel (and parallel to serial) formats. The FLARB 760 comprises a flow control mechanism 780 and a central arbiter 765 that controls both CASE devices 752 on the SFC 700 to, among other things, keep them in synchronization. Notably, the redundant SFC module 700 in the switch is not synchronized with its counterpart SFC module.
Operationally, request/grant logic 560 (FIG. 5) of an IOC 300 sends a request over a control link 260 to the arbiter core 765 embodied on the FLARB device 760. The SERDES Rx device 282 receives data over a plurality of (e.g., four) high-speed serial data links 260 and transposes it to data over a parallel bus 730 operating at a lower frequency that can be handled by conventional FPGA logic. In particular, the SERDES device 282 translates serial data into parallel data and forwards that data to the arbiter 765, which implements a conventional SLIP arbitration algorithm. The arbiter 765 renders a decision based on all the requests received from all the IOCs and resolves any conflicts that may arise. In response, the arbiter issues grants over bus 730 that are converted by the SERDES Tx device 284 for transmission over links 260 to the logic 560 on the IOCs. Subsequently, the FLARB 760 issues configuration information to each of the CASE devices 752 over independent control lines 735 between the CASE and FLARB devices.
The configuration information comprises control information that instructs each crossbar device 752 to connect an input switch port to an output switch port of the switch fabric at a particular time. The configuration information essentially synchronizes the switch such that ingress source IOCs transmit cells to the switch fabric 750 over serial links 270 for transmission to egress destination IOCs. Since the switch is based on synchronous switching, all arbitration, data transmission and switching aspects of the crossbar devices 752 are synchronized across those serial links, which are thereafter transposed into parallel links 740. The cells switched by the SFC 700 are then forwarded to the EPPs 600 of destination IOCs 300.
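The request/grant exchange with the central arbiter can be illustrated with a deliberately simplified model. The C sketch below is not the SLIP algorithm the text names; it substitutes a plain rotating-priority matcher (arbitrate(), rr_ptr and the masks are invented for this sketch) that captures only the essential contract: each output switch port is granted to at most one requesting input per cell time, and the resulting matching configures the crossbar:

    #include <stdint.h>

    #define NUM_SWITCH_PORTS 10   /* matches the 10*10 CASE crossbar */

    /* Simplified stand-in for the central arbiter 765: request[in] is a
     * bitmask of desired outputs; grant[in] receives the granted output
     * (or -1). Priority rotates so no input is starved. */
    static void arbitrate(const uint16_t request[NUM_SWITCH_PORTS],
                          int grant[NUM_SWITCH_PORTS],
                          int *rr_ptr) {
        uint16_t taken = 0;
        for (int k = 0; k < NUM_SWITCH_PORTS; k++) {
            int in = (*rr_ptr + k) % NUM_SWITCH_PORTS;
            grant[in] = -1;
            for (int out = 0; out < NUM_SWITCH_PORTS; out++) {
                if ((request[in] & (1u << out)) && !(taken & (1u << out))) {
                    grant[in] = out;               /* crossbar connects in -> out */
                    taken |= (uint16_t)(1u << out);
                    break;
                }
            }
        }
        *rr_ptr = (*rr_ptr + 1) % NUM_SWITCH_PORTS;  /* rotate priority */
    }

The grant vector corresponds to the configuration information the FLARB issues to the CASE devices: a conflict-free set of input-to-output connections for one switching time.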
Keep Alive Buffer (KAB)
The invention is directed to an online system monitoring technique that quickly and efficiently identifies failures or other system errors arising during operation of an intermediate network node, such as network switch 200. The technique comprises Keep Alive Buffer packets/cells ("KABs") that exercise data and control paths extending from every ingress port to every egress port in the switch. By exercising the data and control paths, the KABs enable testing for component failures, missing modules or other types of failure, so that any such failure is detected as soon as possible, thereby preventing data flow backup or other performance degradation in the switch.
Broadly stated, each KAB is generated and injected into the data path of the switch by ingress QUILL logic, and is subsequently extracted and checked by egress QUILL logic. The KAB is injected into the data path at a low frequency that essentially "hides" the KAB behind the overhead of a link protocol and does not generally interfere with normal operating traffic. The injected KAB traverses the data path between the ingress and egress ports (QUILLs) at line rate, similar to a packet that is received at, switched through and forwarded from the switch. By traversing the data path between the ingress and egress QUILLs, the KABs exercise all major physical data and control paths of the switch. That is, the KABs exercise the ingress buffering and queuing system, the request and grant control paths, the SERDES serial links and transceivers, the switch fabric operation, the egress buffering and queuing system, and the scheduling functions of the switch. Other than at the ingress and egress QUILLs, the logic of the switch treats the KABs as any other data packet/cell forwarded through the switch.
The KAB is a minimum size IBA packet that is generated by a KAB generator 900a of an originating ingress QUILL. The minimum size packet comprises a single cell and is processed like any other IBA packet flowing through the switch. As a valid minimum size packet defined by the IBA, the KAB has a size of 24 bytes comprising 8 bytes of local route header, 12 bytes of transport header, 0 bytes of payload and 4 bytes of packet start/end symbols and cyclic redundancy check (CRC) code. The packet start/end symbols are illustratively removed at points where the KAB exists, so the 24-byte IBA minimum size packet is essentially equivalent to a 22-byte KAB.
FIG. 8 is a schematic block diagram illustrating the format of a KAB 800. Note that only pertinent fields of the KAB are illustrated. The KAB 800 comprises a minimum size KAB packet 802 having a local route header 810 with a 4-bit virtual lane (VL) field 812 containing a number from 0 to 15 that specifies the VL over which the KAB travels. A KAB version field 814 contains a 4-bit constant that allows future KAB formats to differ from, and co-exist with, the present KAB format. A transport header 820 does not include any fields pertinent to the KAB and, as noted, there are zero (0) bytes of payload 830. CRC field 840 contains a 16-bit internal checksum that encompasses the entire contents of the packet 802.
Upon generation by the KAB generator, the minimum size packet 802 is forwarded to the IPP 500 where a packet header 850 is affixed to the packet. The packet header 850 comprises a 4-bit input link field 852 containing a value of an input (source) link port originating the KAB. In the illustrative embodiment, only 1 bit of the input link field 852 is used to identify the QUILL 400 originating the KAB 800. A switch lifetime limit (SLL) field 854 contains a 32-bit timestamp indicating when the KAB was created.
The timestamp is generated from a count value that is synchronized among all IOCs; therefore, the timestamps are equivalent for all IOCs in the system. The switch fabric 750 generates the count value and passes it to all IOCs every 4 microseconds. The count value is carried as side band bits on the control links 260, i.e., those links that carry the request/grant and flow control bit stream information to the IOCs. The control bit stream information is illustratively apportioned into "control" cells. Certain lines (or fields) of the control cell carry flow control, grant information, overhead and timestamp (or SLL counter) values.
The IPP 500 then transforms the minimum size packet 802 into a cell, as is the case for all packets processed by the switch. The IPP affixes (e.g., prepends) a cell header 860 onto the packet 802 prior to forwarding the cell (hereinafter "KAB 800") to the switch fabric 750. The minimum size packet is identified as a KAB 800 by assertion of a KAB bit 862 in the cell header 860 of the packet. The cell header 860 also includes a 5-bit input switch port field 864 containing a value of an input (source) switch port originating the KAB. In the illustrative embodiment, only 3 bits of the input switch port field 864 are used to identify an IOC 300 functioning as the source switch port of the switch. Thus, a total of 4 bits across the 3-bit input switch port field 864 and the 1-bit input link field 852 are used; the KAB receiver uses the contents (values) of these fields to identify one of the sixteen possible sources of the KAB.
According to the invention, the KABs 800 are manifested in two forms: FastKABs and SlowKABs. A FastKAB is a minimum size packet that is generated by the KAB generator 900a of an ingress QUILL on a source IOC. FastKABs are preferably generated and "launched" automatically by switch hardware such that they are constantly flowing throughout the switch to provide a periodic check of the switch. That is, the FastKAB traverses the ingress buffering and queuing system of the source IOC, and is associated with requests and grants that enable it to be switched through the switch fabric 750. The FastKAB also traverses the egress buffering and queuing system of a destination IOC and is intercepted immediately before the egress IBA links by a KAB receiver 900b of an egress QUILL of the destination IOC. Only the reception of the KAB is recorded by the egress QUILL.
SlowKABs are initiated by software (executed by the SCP 720) to enable further diagnosis of a potential failure in the switch 200. For example, a SlowKAB may be generated in response to a FastKAB failure, insertion of a new module within the switch, a module that is non-responsive to processor access or any other event that requires generation of such a processor-initiated diagnostic tool. A SlowKAB can be injected into the switch by software at any time. As with the FastKAB, the test coverage of the SlowKABs may extend to the buffer and queuing structures of the switch, in addition to the switch fabric 750 and the arbiter 765.
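For reference, the KAB header fields described above can be captured in a small C model. The field widths follow the text (fields 812, 814, 840, 852, 854, 862, 864); the struct packing, the bit order used to combine the two source fields, and the helper name kab_source_id() are assumptions made for this sketch, not the actual wire layout:

    #include <stdint.h>

    /* Software-visible view of the KAB headers; packing is illustrative. */
    typedef struct {
        uint8_t  vl;            /* 4-bit VL field 812: 0..15 */
        uint8_t  kab_version;   /* 4-bit version field 814 */
        uint16_t crc;           /* 16-bit internal checksum, field 840 */
    } kab_packet;

    typedef struct {
        uint8_t  input_link;    /* 4-bit field 852; only 1 bit used (QUILL) */
        uint32_t sll;           /* 32-bit creation timestamp, field 854 */
    } packet_header;

    typedef struct {
        uint8_t  kab_bit;       /* bit 862: identifies the cell as a KAB */
        uint8_t  input_sw_port; /* 5-bit field 864; only 3 bits used (IOC) */
    } cell_header;

    /* Combine the 3 used bits of the switch port with the 1 used bit of
     * the link field to name one of the sixteen possible KAB sources. */
    static unsigned kab_source_id(const cell_header *ch, const packet_header *ph) {
        return ((ch->input_sw_port & 0x7u) << 1) | (ph->input_link & 0x1u);
    }

With 8 IOCs (3 bits) and 2 QUILLs per IOC (1 bit), kab_source_id() yields exactly the sixteen source identities the receiver must distinguish.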
Two differences between SlowKABs and FastKABs are (1) FastKABs are generated automatically by the hardware (logic), whereas SlowKABs are generated only upon processor request, and (2) the flow or, in IBA terminology, the virtual lane (VL) over which the KABs travel. As noted, there are sixteen VLs of which fifteen, e.g., VLs 0-14, are used for data and one, e.g., VL 15, for control/management traffic. In the illustrative embodiment, FastKABs and SlowKABs can travel over any VL with the exception that they do not travel over the same VL. A switch may utilize only a subset of the data VLs, e.g., VL 0, for data traffic, as well as VL 15 for control traffic. The KABs can then be allowed to run over the unused VLs without interfering with data or control throughput.
KAB Generator and Receiver
FIG. 9 is a schematic block diagram of the KAB generator 900a and KAB receiver 900b comprising a KAB Control and Status Block (CSB) data structure 1000 containing a plurality of registers that hold information relating to the generation of, along with diagnostic information acquired by, the KAB 800. As described further herein, one of the registers is configured as a timer 910 that determines when to generate and send a KAB. The generator 900a also includes formatting logic 920 configured to generate the KAB. To that end, the formatting logic 920 is configured to reset the timer 910 when it expires, format the KAB 800 using the information stored in the registers and, in cooperation with the DDR bus system 440, inject the KAB into the data path/stream. The KAB receiver 900b, on the other hand, includes extraction logic 960 that "siphons off" a KAB received at the egress QUILL. In particular, the KAB receiver extraction logic 960 intercepts the KAB 800 from the data path instead of transmitting it over an egress link, examines the contents of the extracted KAB and updates information stored in result memory registers of the CSB 1000. Note that, in the illustrative embodiment, the KAB generator and receiver logic is implemented in an FPGA device.
KAB Registers
FIG. 10 is a schematic block diagram of the KAB CSB 1000. The CSB contains a plurality of data structures embodied as registers, including a FastKABEnable register 1010, a FastKABTiming register 1020, a FastKABControl register 1030, an AllKABResults register 1040, a FastKABResults register 1050 and a SlowKABControl register 1070. The FastKABEnable, FastKABTiming and FastKABControl registers pertain to FastKABs generated by the KAB generator, while the SlowKABControl register pertains to SlowKABs generated by the KAB generator 900a. The contents of the FastKABEnable, FastKABTiming, FastKABControl and SlowKABControl registers are all under processor (SCP) read/write access. That is, the processor reads and writes (preconfigures) the contents of those registers. Once these registers are configured (loaded) by software executed by the processor, the KAB logic automatically generates the FastKABs based on that information. The hardware logic never modifies those registers; it only uses their values.
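The generator and receiver behavior of FIG. 9 can be summarized in software. In the following C sketch, kab_generator, generator_tick() and receiver_extract() are invented names for illustration; the injection itself is left as a comment, since the actual formatting logic 920 is hardware:

    #include <stdint.h>

    /* Hypothetical model of the generator-side timer loop. */
    typedef struct {
        uint32_t fast_generate_period;   /* from the FastKABTiming register */
        uint32_t timer;                  /* timer 910, counts down to zero */
    } kab_generator;

    static void generator_tick(kab_generator *g, uint16_t dest_mask) {
        if (g->timer > 0) {
            g->timer--;
            return;
        }
        g->timer = g->fast_generate_period;  /* formatting logic resets timer 910 */
        for (unsigned d = 0; d < 16; d++) {
            if (dest_mask & (1u << d)) {
                /* format a KAB from CSB contents and inject it into the
                 * ingress data path toward destination d */
            }
        }
    }

    /* Extraction logic 960: siphon a KAB off the egress path (it is never
     * transmitted on the link) and record its reception per source. */
    static void receiver_extract(uint16_t rx_count[16], unsigned source_id) {
        if (rx_count[source_id] < UINT16_MAX)
            rx_count[source_id]++;
    }

The dest_mask parameter plays the role of the enable vector described next: one bit per QUILL/port that should receive KABs.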
The FastKABEnable register 1010 specifies the destination in the switch to which FastKABs are sent and the source in the switch from which to expect FastKABs to be received. In the illustrative embodiment, software loads this register with a vector of the QUILLs and IOCs present in the switch; those devices that are present are able to receive the KABs. As noted, there are two QUILLs per IOC and eight IOCs in the switch; thus, a vector of 16 bits is sufficient to represent all QUILLs (or ports) that can receive KABs. The FastKABEnable register 1010 is illustratively a 32-bit register of which the upper 16 bits are reserved and each of the lower 16 bits specifies a particular QUILL and port.
The FastKABTiming register 1020 is a 32-bit register that includes a 4-bit field 1022 containing a programmable, registered parameter that functions as timer 910 to specify the frequency at which the FastKABs are checked, i.e., a FastCheckPeriod. At the extraction logic 960 of the KAB receiver 900b, reception of FastKABs is recorded and, if a FastKAB is not received within the FastCheckPeriod, an interrupt "flag" is generated to provide automatic failure notification of this event to the system processor (SCP). The FastCheckPeriod allows the system processor to determine the frequency at which the results are checked, thereby limiting the maximum response time to detecting a failure. Note that the interrupt is generated only if no KABs are received during the FastCheckPeriod; more than one KAB can be received during that period and, as a result, a logical ORing function may be provided to record that event.
The FastKABTiming register 1020 also includes a 24-bit field 1024 containing a programmable, registered parameter that specifies the frequency at which FastKABs are generated, i.e., a FastGeneratePeriod. Note that it is possible to set the FastCheckPeriod equal to the FastGeneratePeriod to ensure reception and checking of every KAB; that is, every time a KAB is generated, a check is performed to ensure that the KAB was received. If the FastCheckPeriod is set to twice the FastGeneratePeriod, then reception of one KAB is checked for every two KABs generated. This approach can be extended to check reception of KABs in a manner that accommodates unpredictable latency within the switch, e.g., bursts of packets causing various latencies.
The FastKABControl register 1030 is a 32-bit register that includes a 4-bit FastVL field 1034 holding a value that specifies the VL over which FastKABs are transmitted. A 1-bit FastEnaGenerate field 1038 enables auto generation of FastKABs, while a 1-bit FastEnaIntKABs field 1036 enables an interrupt upon detection of missing FastKABs. The FastKABControl register 1030 also includes a 6-bit FastCreditLimit field 1032 that specifies a credit limit applied to each link (or destination). The credit limit applies to the ingress buffering system and essentially reserves a plurality of buffers for each destination so that each destination has its credits accounted for independently. Separate credit accounting per destination prevents a fault at a single destination from quickly consuming all destinations' credits, which would result in all destinations immediately declaring missing KABs, thereby hiding the original single fault.
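The FastKABTiming layout above reduces to simple field packing. In this C sketch the field widths (4-bit FastCheckPeriod, 24-bit FastGeneratePeriod) follow the text, but the bit positions and the helper names (timing_pack() and friends) are assumptions made for illustration:

    #include <stdint.h>

    #define TIMING_CHECK_SHIFT 24        /* 4-bit FastCheckPeriod, field 1022 */
    #define TIMING_CHECK_MASK  0xFu
    #define TIMING_GEN_SHIFT   0         /* 24-bit FastGeneratePeriod, field 1024 */
    #define TIMING_GEN_MASK    0xFFFFFFu

    static uint32_t timing_pack(uint32_t check_period, uint32_t gen_period) {
        return ((check_period & TIMING_CHECK_MASK) << TIMING_CHECK_SHIFT) |
               ((gen_period   & TIMING_GEN_MASK)   << TIMING_GEN_SHIFT);
    }

    static uint32_t timing_check_period(uint32_t reg) {
        return (reg >> TIMING_CHECK_SHIFT) & TIMING_CHECK_MASK;
    }

    static uint32_t timing_gen_period(uint32_t reg) {
        return (reg >> TIMING_GEN_SHIFT) & TIMING_GEN_MASK;
    }

For example, software could call timing_pack(p, p) to check every KAB, or timing_pack(2 * p, p) to check one KAB for every two generated, as the text describes.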
The AllKABResults register 1040 provides readable state as to the number of KABs received on particular VLs. The content of the AllKABResults register is an absolute count of the number of KABs received on a VL, apportioned per source (KAB generator 900a). This register 1040 can be read by the SCP 720 to determine the number of KABs received at a KAB receiver 900b from each source KAB generator 900a on each VL at any time.
In the illustrative embodiment, there are sixteen AllKABResults registers 1040, one for each VL, wherein each register records the number of KABs received on that VL since the last time the register was cleared. Each AllKABResults register is illustratively 16 words (64 bytes) in length, wherein each word is 4 bytes and each word has two fields (and two reserved fields). For each switch port and QUILL, a 10-bit field 1042 indicates credits used. A fault in the ingress buffer system that is not returning credits can be diagnosed using field 1042. A 10-bit field 1044, for the same switch port and QUILL, indicates a received KAB count. That is, the content of field 1044 indicates how many KABs have been received.
The fields described above are illustratively implemented as a saturating count, wherein the contents of the register increment from 0 to the maximum and then stay at the maximum count, e.g., 2^10. The remaining fields of the register are organized for the different sources, up to sixteen. Note that the register is associated with one VL from all sources and the other 15 registers are identical copies, just for different VLs (for a total of 16 registers). As noted, the KAB generator and receiver logic is illustratively implemented in an FPGA device; this device includes one memory that is shared between the ingress and egress data paths. Accordingly, the received KAB count field 1044 is located in the KAB receiver 900b, while the credits used field 1042 is located in the KAB generator 900a.
The FastKABResults register 1050 provides a summary of the AllKABResults register 1040 directed to FastKABs (which typically travel over VL 15, but could travel over any VL). The FastKABResults register provides FastKAB results in a summary register that is much shorter than the AllKABResults register, i.e., the FastKABResults register is only one word in length instead of the 16 words of the AllKABResults register. Specifically, the FastKABResults register is a 32-bit register wherein the upper 16 bits are reserved and each of the lower 16 bits is associated with a port and QUILL of the switch. As a result, the FastKABResults register 1050 is generally accessed in response to an interrupt that is generated because, e.g., FastKABs are missing at a particular receiver. The register 1050 can be quickly and efficiently accessed (read) to indicate exactly from which source KABs are not received.
As FastKABs are received at a particular egress IOC, the KAB receiver 900b records the source (KAB generator 900a) from which the KABs are sent in the appropriate bit position of the register 1050. If any KABs are missing, the FastKABResults register 1050 can be read to determine from which source the missing KABs were expected. These missing KABs are recorded for a particular period, e.g., the FastCheckPeriod as specified by the timer 910. Thus, when the timer 910 expires, the contents of the FastKABResults register 1050 are examined to determine the number of FastKABs that have arrived since the last time the timer expired. If any KABs are missing from any sources, a corresponding bit is set in the FastKABResults register. The contents of the FastKABResults register are then cleared. The contents of the AllKABResults register 1040 are cumulative for SlowKABs to ensure that every SlowKAB is forwarded through the switch. The AllKABResults register 1040 is generally not used for FastKABs.
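The results bookkeeping just described, with the hidden-register detail elaborated in the next paragraph, can be sketched in C. The names sat_inc, fast_results, on_kab_arrival() and on_check_period() are invented for this model; it approximates the behavior, not the FPGA implementation:

    #include <stdint.h>

    #define COUNT_MAX ((1u << 10) - 1u)   /* 10-bit saturating field maximum */

    /* Saturating increment for the credits-used (1042) and received-KAB
     * count (1044) fields: counts up from 0, then holds at the maximum. */
    static uint16_t sat_inc(uint16_t count) {
        return count < COUNT_MAX ? (uint16_t)(count + 1) : (uint16_t)COUNT_MAX;
    }

    /* Sticky FastKABResults (1050) behind a hidden register (1060):
     * arrivals set bits in the hidden copy; each FastCheckPeriod the
     * hidden copy is folded into the processor-visible summary so the
     * cause of an interrupt is not overwritten before the SCP reads it. */
    typedef struct {
        uint16_t hidden;    /* register 1060: one bit per source, set on arrival */
        uint16_t results;   /* register 1050: sticky missing-KAB summary */
    } fast_results;

    static void on_kab_arrival(fast_results *r, unsigned source_id) {
        r->hidden |= (uint16_t)(1u << source_id);
    }

    /* Called once per FastCheckPeriod; enabled_mask marks the sources from
     * which FastKABs are expected. Returns nonzero to raise an interrupt. */
    static int on_check_period(fast_results *r, uint16_t enabled_mask) {
        uint16_t missing = (uint16_t)(enabled_mask & ~r->hidden);
        r->results |= missing;    /* sticky until the SCP reads and clears */
        r->hidden = 0;            /* start the next check period fresh */
        return missing != 0;
    }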
The AllKABResults register 1040 is generally not used for FastKABs.

In essence, the FastKABResults register 1050 holds the result of the FastCheckPeriod, and the bits in the register are "sticky". A hidden register 1060 is located "behind" the register 1050; this hidden register is not accessible by the processor. The format of the hidden register 1060 is identical to that of the FastKABResults register 1050. The contents of the hidden register 1060 are cleared every FastCheckPeriod, and each bit is set when a KAB arrives. The results of the hidden register 1060 are parsed every FastCheckPeriod and updated into the FastKABResults register 1050 because there may be some latency associated with the processor accessing the register; i.e., it may be more than a FastCheckPeriod before the processor accesses the register. The hidden register 1060 thus prevents the results that caused the interrupt from being overwritten by the next results.

The SlowKABControl register 1070 is used to control generation and launching of SlowKABs, and is generally similar to the FastKABControl register 1030 with a couple of exceptions. For example, there is no automatic generation; rather, a 1-bit SlowInject field 1072 is a "write 1 to trigger" bit, meaning that if the bit is written as a "1", a SlowKAB is generated and sent with the parameters in this register. These parameters are stored in the lower eight bits of the register and include a 4-bit SlowVL field 1075 specifying a destination (internal) VL, a 3-bit SlowDestSwfPort field 1076 specifying a destination port, and a 1-bit SlowDestQuill field 1078 specifying a destination QUILL. The SlowCreditFree and SlowCreditLimit fields 1073, 1074 are similar to the corresponding parameters in the FastKABControl register. The content of the SlowCreditLimit field 1074 is identical to that of register 1030, i.e., 64 credit blocks. The SlowCreditFree field 1073 contains a summary indicating the number of credits that are available overall for all SlowKABs on all VLs.

The FastKABs and SlowKABs have different credit pools so as to avoid interfering with the automatic FastKABs that are running while SlowKABs are being used for diagnosis. Therefore, these KABs work from different credit pools, or different credit counts. Each pool reserves its own credits, and each is independent. As noted, the credits (or buffers) are reserved at the ingress buffering system in the IPP 500.

Operation

FIG. 11 is a schematic block diagram illustrating the data path test coverage 1100 of KABs within the switch 200. KABs are injected into the data path inside the ingress QUILL immediately before the IPP interface, and are extracted from that path inside the egress QUILL immediately after the EPP interface. The internal paths of the QUILLs 400 are generally not part of the test coverage provided by the KABs 800. That is, diagnostic coverage does not extend to the link engines and forwarding lookup engines/tables, as those components are located in portions of the QUILL where the presence of KABs would interfere with line-rate traffic. KABs are injected at a rate guaranteed to be less than the incoming link flow control packet rate, so they do not cause any ingress line-rate storage issues. KABs are extracted in the over-speed domain egress logic, so their presence does not cause "holes" in egress output link utilization.

Specifically, an ingress QUILL launches the FastKABs at a periodic, low rate that is less than the flow control overhead on an IBA link.
Introduction of FastKABs into the switch generally does not interfere with the data throughput of the switch. For example, flow control packets received at the switch terminate at the ingress QUILL. These flow control packets are replaced by FastKABs that flow throughout the switch without affecting the data traffic received and forwarded throughout the switch. An egress QUILL is enabled to receive FastKABs from every ingress QUILL in the switch. If the enabled egress QUILL fails to receive a FastKAB for a predetermined time, e.g., a few consecutive cycles as defined by the FastCheckPeriod, it generates an interrupt to the processor (e.g., the SCP 720). In response, the processor may generate a SlowKAB, or it may further investigate the switch by accessing certain statistics registers.

Similar to a FastKAB, the SlowKAB is injected at an ingress QUILL and extracted at an egress QUILL. Unlike the FastKAB, however, the SlowKAB can be directed to run on a particular queuing structure in the switch. The function of the SlowKAB is identical to that of the FastKAB; namely, to have its existence recorded by the egress QUILL. A SlowKAB may interfere with data throughput; however, this is anticipated and expected given the nature of the SlowKAB. That is, since the SlowKAB is employed for diagnostic purposes in response to an error or failure, performance is not a major concern with this KAB.

FIG. 12 is a flowchart illustrating a sequence of steps for implementing the online system monitoring technique in accordance with the present invention. The sequence starts at Step 1200 and proceeds to Step 1202, where the KAB generator 900a at an ingress QUILL of a source IOC generates a minimum-size KAB packet 802 using parameters stored in the appropriate registers of the KAB CSB 1000. The KAB packet is injected into the data path of the switch at the ingress QUILL of the source IOC in Step 1204 and passed to the IPP 500, where a timestamp is loaded into the SLL field 854 of the packet header 850 at Step 1206. At Step 1208, the IPP enqueues the KAB packet for switching to a proper destination IOC and, at Step 1210, schedules the KAB packet for switching at the switch fabric 750; i.e., a request for switching is sent to the arbiter 765.

The IPP then transforms the KAB packet into a KAB cell (KAB) at Step 1212 and, in response to the arbiter granting the request, sends the KAB over the SERDES to the switch fabric at Step 1214. Note that although the KAB never traverses a data path within the arbiter, the arbiter 765 is "exercised" through scheduling of the KAB for switching at the switch fabric 750. In Step 1216, the SFC 700 switches the KAB through the switch fabric to the proper destination IOC and, to that end, sends the KAB over the SERDES to an egress path on the destination IOC. When the KAB is received at the egress path, the EPP 600 transforms the KAB into a KAB packet at Step 1218 and sends the KAB packet to an egress QUILL for transmission from the destination IOC of the switch (Step 1220).

At Step 1222, the KAB receiver 900b at the egress QUILL extracts (siphons off) the KAB packet from the data path on the destination IOC. The KAB receiver then subtracts the timestamp loaded in the packet header from the current time at Step 1224. The difference between those times may be used to record the highest-latency KAB to determine, e.g., whether there is (or was) a bottleneck somewhere in the switch.
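The latency computation of Steps 1224 and 1226 amounts to a timestamp subtraction followed by a running-maximum update. A minimal sketch in C, assuming a free-running switch-wide time base; current_switch_time() and the receiver state shown here are hypothetical.

#include <stdint.h>

extern uint32_t current_switch_time(void);  /* assumed switch-wide time base */

/* Per-receiver state: running maximum latency per VL and per source. */
typedef struct {
    uint32_t max_latency[16][16];  /* [vl][source] */
} kab_rx_state;

/* Record the latency of a received KAB (Steps 1224-1226). */
void record_kab_latency(kab_rx_state *rx, unsigned vl, unsigned src,
                        uint32_t sll_timestamp)
{
    uint32_t latency = current_switch_time() - sll_timestamp;
    if (latency > rx->max_latency[vl][src])
        rx->max_latency[vl][src] = latency;  /* only the maximum is kept */
}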
Where appropriate, the latency information is recorded in an 8-bit field (using some of the reserved bits) of the AllKABResults register. In an embodiment of the invention, only the maximum latency recorded from a particular source on a particular VL is stored in this field. The contents of this field are read-only for access by the processor (SCP 720).

At Step 1226, the KAB receiver 900b updates the results registers, e.g., the AllKABResults and FastKABResults registers, as appropriate, to record the source from which the KAB was sent. As noted, if any KABs are missing at the KAB receiver, the SCP 720 can read those registers to determine from which source the missing KABs were sent. The sequence ends at Step 1228.

In the illustrative embodiment, the mere existence (reception) of the KABs, i.e., whether they traversed the data path of the switch, is recorded at the egress QUILL. If KABs are not periodically received at an egress port, an indication is provided that there may be a malfunction in the switch. The malfunction indication may not reflect an actual error (fault), but rather could reflect congestion in the switch. Non-reception of KABs as a result of congestion may indicate that there is excessive traffic destined to the port that is missing the KABs. In this context, the invention provides a low-level diagnostic that monitors the internal performance of the switch.

While there has been shown and described an illustrative embodiment of an online system monitoring technique that quickly and efficiently identifies failures or other system errors arising during operation of a network switch, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. For example, the KABs 800 may also cooperate with any fault-tolerant elements of the switch 200 to enable failover operations that allow the switch to continue functioning in a manner that is transparent to high-level application software endpoints. To that end, the KABs may function as an assist to the fault-tolerant elements to detect when there may be an actual error (fault). In addition, the KABs may be used in the initial design and debug of the switch, as well as in manufacturing test, diagnostics, and performance measurement. For this purpose, the KABs can be generated at line rate (faster than the illustrative rate described herein) to exercise all relevant components of the switch. Use of the KABs obviates the need for external network equipment attached to the physical switch platform to test the internal components and functions of the switch.

The foregoing description has been directed to specific embodiments of this invention. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. For example, it is understood that the various data structures described herein can include additional information while remaining within the scope of the present invention. While this description has been written with reference to the IBA specification, it should be noted that the principles of the invention apply to other "switched fabric" technologies. Further, it is expressly contemplated that the teachings of this invention can be implemented as software, including a computer-readable medium having program instructions executing on a computer, hardware, firmware, or a combination thereof.
Accordingly, this description is to be taken only by way of example and not to otherwise limit the scope of the invention. It is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention. |
Technologies for media protection policy enforcement include a computing device having multiple operating systems and a data storage device partitioned into a number of regions. During execution of each of the operating systems, a policy enforcement module may intercept media access requests and determine whether to allow the media access requests based on platform media access policies. The media access policies may allow requests based on the identity of the executing operating system, the region of the data storage device, or the requested storage operation. Prior to loading a selected operating system, a firmware policy enforcement module may determine a region of the data storage device to protect from the selected operating system. The firmware policy enforcement module may configure the data storage device to prevent access to that region. The media access policies may be stored in one or more firmware variables. Other embodiments are described and claimed. |
1. A computing device for media protection policy enforcement in a multi-operating-system environment, the computing device comprising:
a data storage device including a plurality of regions;
a boot option module established by a firmware environment of the computing device, the boot option module to select an operating system from a plurality of operating systems installed on the computing device; and
a policy enforcement module established by the firmware environment, the policy enforcement module to: (i) determine a region of the data storage device to be protected based on the selected operating system, wherein to determine the region of the data storage device comprises to identify a region of the data storage device that is not owned by the selected operating system; and (ii) configure the data storage device to prevent access to the region of the data storage device;
wherein the boot option module is further to load the selected operating system in response to configuration of the data storage device.
2. The computing device of claim 1, wherein the region of the data storage device comprises a partition of the data storage device.
3. The computing device of claim 1, wherein the region of the data storage device comprises a partition table of the data storage device.
4. The computing device of claim 1, further comprising a second policy enforcement module to:
intercept a media access request during execution of the selected operating system, wherein the media access request specifies a storage operation and a storage address;
determine an identity of the selected operating system;
identify a second region of the data storage device that includes the storage address of the media access request; and
determine whether to allow the media access request based on the second region of the data storage device, the identity of the selected operating system, and the storage operation of the media access request.
5. The computing device of claim 1, wherein to determine the region of the data storage device comprises to determine the region based on a media access policy of the computing device.
6. The computing device of claim 5, wherein to determine the region based on the media access policy comprises to read the media access policy from non-volatile storage of the computing device.
7. A method for media protection policy enforcement in a multi-operating-system environment, the method comprising:
loading, by a computing device, a pre-operating-system firmware execution environment;
selecting, by the computing device using the firmware execution environment, an operating system from a plurality of operating systems installed on the computing device;
determining, by the computing device using the firmware execution environment, a region of a data storage device to be protected based on the selected operating system, wherein determining the region of the data storage device includes identifying a region of the data storage device that is not owned by the selected operating system;
configuring, by the computing device using the firmware execution environment, the data storage device to prevent access to the region of the data storage device; and
loading, by the computing device using the firmware execution environment, the selected operating system in response to configuring the data storage device.
8.
The method of claim 7, wherein determining the region of the data storage device comprises determining a partition of the data storage device.
9. The method of claim 7, wherein determining the region of the data storage device comprises determining a partition table of the data storage device.
10. The method of claim 7, wherein determining the region of the data storage device comprises determining the region based on a media access policy of the computing device.
11. The method of claim 10, wherein determining the region based on the media access policy comprises reading the media access policy from non-volatile storage of the computing device.
12. One or more computer-readable storage media comprising a plurality of instructions that, in response to being executed, cause a computing device to:
load a pre-operating-system firmware execution environment;
select, using the firmware execution environment, an operating system from a plurality of operating systems installed on the computing device;
determine, using the firmware execution environment, a region of a data storage device to be protected based on the selected operating system, wherein to determine the region of the data storage device comprises to identify a region of the data storage device that is not owned by the selected operating system;
configure, using the firmware execution environment, the data storage device to prevent access to the region of the data storage device; and
load, using the firmware execution environment, the selected operating system in response to configuring the data storage device.
13. The one or more computer-readable storage media of claim 12, wherein to determine the region of the data storage device comprises to determine a partition of the data storage device.
14. The one or more computer-readable storage media of claim 12, wherein to determine the region of the data storage device comprises to determine a partition table of the data storage device.
15. The one or more computer-readable storage media of claim 12, further comprising a plurality of instructions that, in response to being executed, cause the computing device to:
intercept a media access request during execution of the selected operating system, the media access request specifying a storage operation and a storage address;
determine an identity of the selected operating system;
identify a second region of the data storage device that includes the storage address of the media access request; and
determine whether to allow the media access request based on the second region of the data storage device, the identity of the selected operating system, and the storage operation of the media access request.
16. The one or more computer-readable storage media of claim 12, wherein to determine the region of the data storage device comprises to determine the region based on a media access policy of the computing device.
17. The one or more computer-readable storage media of claim 16, wherein to determine the region based on the media access policy comprises to read the media access policy from non-volatile storage of the computing device. |
Media Protection Policy Enforcement for Multiple-Operating-System Environments

This application is a divisional of the invention patent application for a computing device entitled "Media Protection Policy Enforcement for Multiple-Operating-System Environments", having PCT international application number PCT/US2015/013786, an international filing date of January 30, 2015, and application number 201580003846.3 upon entry into the Chinese national phase.

Cross-Reference to Related U.S. Patent Applications

This application claims priority to U.S. Provisional Patent Application S/N. 61/936,614, filed on February 6, 2014, entitled "PARTITION PROTECTION SCHEME FOR A DUAL OS ENVIRONMENT", and to U.S. Utility Patent Application S/N. 14/298,312, filed on June 6, 2014, entitled "Media Protection Policy Enforcement for Multiple-Operating-System Environments".

Background

Some computing devices are shipped by their manufacturers with multiple operating systems installed. For example, a computing device may include a general-purpose operating system such as Microsoft Windows™ and a mobile-oriented operating system such as Android™. In such a multi-operating-system environment, products shipped with this configuration run the risk that users may inadvertently delete "unneeded" partitions, or interfere with data, not owned by the currently active operating system.

Many computing devices include firmware responsible for hardware initialization, low-level hardware management, and management of the boot process. Specifically, in a device with multiple operating systems, during pre-boot, the platform firmware may select and load a boot target corresponding to one of the installed operating systems. The main platform firmware responsible for booting the computing device may be implemented according to the Unified Extensible Firmware Interface ("UEFI") specification, which has several versions published by the Unified EFI Forum. The UEFI specification specifies an interface between the firmware of a computing device and the operating systems installed on the computing device.

Brief Description of the Drawings

The concepts described herein are illustrated in the drawings by way of example and not by way of limitation. For simplicity and clarity of presentation, elements shown in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels are repeated among the figures to indicate corresponding or analogous elements.

FIG. 1 is a simplified block diagram of at least one embodiment of a computing device for media protection policy enforcement;

FIG. 2 is a simplified block diagram of at least one embodiment of an environment of the computing device of FIG. 1;

FIG. 3 is a schematic diagram of an illustrative partitioning scheme for a data storage device of the computing device of FIGS. 1 and 2;

FIG. 4 is a simplified flowchart of at least one embodiment of a method for media protection policy enforcement that may be executed by the computing device of FIGS. 1 and 2;

FIG. 5 is a schematic diagram of a storage driver stack that may be established by the computing device of FIGS. 1 and 2; and

FIG. 6 is a simplified flowchart of at least one embodiment of a firmware method for media protection policy enforcement that may be executed by the computing device of FIGS. 1 and 2.
Detailed Description

While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.

References in the specification to "one embodiment", "an embodiment", "an illustrative embodiment", etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is within the knowledge of one of ordinary skill in the art to effect such a feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one of A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).

In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be implemented as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or another media device).

In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments, and, in some embodiments, the feature may not be included or may be combined with other features.

Referring now to FIG. 1, in one embodiment, a computing device 100 may boot multiple operating systems, switch between them, or otherwise execute those operating systems. The computing device 100 also includes a data storage device that is partitioned or otherwise divided into regions that may be assigned to each of those operating systems. The computing device 100 controls access to the data storage partitions according to one or more platform media access policies, which may be established by the platform manufacturer or, in some embodiments, configured by the end user.
In some embodiments, during execution, the computing device 100 may intercept media access requests and determine whether to allow or deny those requests based on the media access policy. Additionally or alternatively, in some embodiments, before booting an operating system, the computing device 100 may protect one or more data storage partitions by configuring a data storage controller based on the media access policy. Enforcing a media access policy may protect the computing device 100 from accidental (or malicious) damage caused by user modification of data partitions not controlled by the currently running operating system. Accordingly, enforcing media access policies may improve the user experience and/or reduce manufacturer support costs, while allowing manufacturers to ship devices with multiple operating systems.

The computing device 100 may be implemented as any type of device capable of performing the functions described herein. For example, the computing device 100 may be implemented as, without limitation, a smart phone, a tablet computer, a laptop computer, a notebook computer, a mobile computing device, a cellular telephone, a handset, a messaging device, a wearable computing device, an in-vehicle communication device, a desktop computer, a server computer, a workstation, a distributed computing system, a multiprocessor system, a consumer electronic device, and/or any other computing device configured to perform the functions described herein. As shown in FIG. 1, the illustrative computing device 100 includes a processor 120, an input/output subsystem 122, a memory 124, and a data storage device 126. Of course, in other embodiments, the computing device 100 may include other or additional components, such as those commonly found in mobile and/or stationary computers (e.g., various input/output devices). Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, in some embodiments, the memory 124, or portions thereof, may be incorporated in the processor 120.

The processor 120 may be implemented as any type of processor capable of performing the functions described herein. For example, the processor 120 may be implemented as a single- or multi-core processor, a digital signal processor, a microcontroller, or another processor or processing/control circuit. Similarly, the memory 124 may be implemented as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 124 may store various data and software used during operation of the computing device 100, such as operating systems, applications, programs, libraries, and drivers. The memory 124 is communicatively coupled to the processor 120 via the I/O subsystem 122, which may be implemented as circuitry and/or components to facilitate input/output operations with the processor 120, the memory 124, and other components of the computing device 100. For example, the I/O subsystem 122 may be implemented as, or otherwise include, a memory controller hub, an input/output control hub, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations.
In some embodiments, the I/O subsystem 122 may form a portion of a system-on-a-chip ("SoC") and be incorporated, along with the processor 120, the memory 124, and other components of the computing device 100, on a single integrated circuit chip.

The data storage device 126 may be implemented as any type of device configured for short-term or long-term storage of data, such as memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. The data storage device 126 may be logically divided into a number of partitions. For example, the data storage device 126 may include a system partition that stores data and firmware code for the computing device 100. The data storage device 126 may also include a number of operating system partitions that store data files and executables for the operating systems of the computing device 100. Additionally, the data storage device 126 includes a controller 128. The controller 128 may be implemented as a microcontroller, a microprocessor, or any other control logic capable of controlling or enabling access to the data storage device 126 by the remainder of the computing device 100. In some embodiments, the controller 128 may be capable of denying access to particular logical block addresses, sectors, tracks, or other address ranges within the data storage device 126. It should be noted that although the illustrative computing device 100 includes a single data storage device 126 divided into several logical partitions, the present disclosure is also applicable to a computing device 100 that includes multiple data storage devices 126.

The computing device 100 further includes a communication circuit 130, which may be implemented as any communication circuit, device, or collection thereof capable of enabling communication between the computing device 100 and other remote devices. The communication circuit 130 may be configured to use any one or more communication technologies (e.g., wireless or wired communication) and associated protocols (e.g., Ethernet, Bluetooth, WiMAX, etc.) to effect such communication. The communication circuit 130 may be implemented as a network adapter, including a wireless network adapter.

The computing device 100 also includes a non-volatile ("NV") storage device 132. The NV storage device 132 may be implemented as any device configured to persistently store data when the computing device 100 is powered down or disconnected from a power supply. In the illustrative embodiment, the NV storage device 132 is a flash memory chip. In other embodiments, the NV storage device 132 may be implemented as a small amount of complementary metal-oxide-semiconductor ("CMOS") memory coupled with a backup battery, or as another non-volatile memory. The NV storage device 132 may be used to store platform firmware for the computing device 100, as well as firmware configuration variables such as configuration settings, boot targets, and other information that should persist across reboots. The NV storage device 132 typically has a relatively small storage capacity compared to the data storage device 126, but is available to the computing device 100 upon initial boot, during the pre-boot firmware execution environment. In some embodiments, the NV storage device 132 may be incorporated into one or more other components of the computing device 100, for example, into the I/O subsystem 122.

In some embodiments, the computing device 100 may also include a security processor 134.
The security processor 134 may be implemented as any hardware and associated firmware or software configured to enhance the security and/or trustworthiness of the computing device 100. For example, the security processor 134 may be implemented as a trusted platform module ("TPM"). In some embodiments, the security processor 134 may form a portion of the I/O subsystem 122. The security processor 134 may be used to securely authenticate and/or verify platform firmware variables or other configuration data of the computing device 100.

Referring now to FIG. 2, in some embodiments, the computing device 100 establishes an environment 200 during operation. The illustrative environment 200 includes platform firmware 202 and a number of operating systems 210. The illustrative environment 200 includes two operating systems 210a and 210b; however, in other embodiments, additional operating systems 210 may be included.

The platform firmware 202 includes a boot option module 204, a policy management module 206, and a policy enforcement module 208. The various modules of the platform firmware 202 may be implemented as hardware, firmware, software, or a combination thereof. The boot option module 204 is configured to execute in a pre-boot firmware environment. The boot option module 204 selects an operating system 210 from among the installed operating systems 210 and loads the selected operating system 210. The boot option module 204 may select and load a boot target according to the UEFI specification.

The policy management module 206 is configured to store, manage, and control access to one or more media access policies 214. The media access policies 214 may define platform-specific rules for access to the partitions of the data storage device 126, including whether media access protection is enabled, whether an operating system 210 that does not own a partition is denied access to that partition, whether an operating system 210 that does not own a partition is selectively allowed to access that partition, whether access to shared partitions is allowed, and other access rules. The media access policies 214 may be embodied using any non-volatile storage of the computing device 100. For example, in some embodiments, the media access policies 214 may be embodied as one or more firmware variables stored in the NV storage device 132. The policy management module 206 may be configured to allow the currently executing operating system 210 to access the media access policies 214, for example by establishing a firmware variable interface in accordance with the UEFI specification. In some embodiments, the policy management module 206 may be configured to verify, authenticate, or otherwise secure access to the media access policies 214. For example, the policy management module 206 may store the media access policies 214 as one or more UEFI authenticated variables (as described in the UEFI specification promulgated by the UEFI Forum), or as trusted platform module ("TPM") NV data as described in the TPM specification promulgated by the Trusted Computing Group.

The policy enforcement module 208 of the platform firmware 202 is configured to identify one or more regions of the data storage device 126 to be protected from the selected operating system 210. For example, the policy enforcement module 208 may identify partitions owned by different operating systems 210 that are to be protected from the selected operating system 210.
The policy enforcement module 208 is further configured to configure the data storage device 126 to prevent access to the protected region or regions before the operating system 210 is booted. For example, the policy enforcement module 208 may program, configure, or otherwise direct the controller 128 of the data storage device 126 to prevent access to certain storage addresses or storage address ranges.

Each operating system 210 includes a policy enforcement module 212. Each policy enforcement module 212 may be implemented as hardware, firmware, software, or a combination thereof. Each policy enforcement module 212 is configured to intercept media access requests issued during execution of the operating system 210, determine whether to allow each media access request based on the media access policies 214, and then allow or deny the media access request accordingly. Each media access request may specify a storage operation (e.g., read, write, create, delete, stat, etc.) and a storage address or address range. The policy enforcement module 212 may determine whether to allow or deny a media access request based on any combination of: the identity of the currently executing operating system 210; the identity, format, or ownership of the affected region of the data storage device 126; the specified storage operation; or any other criteria specified by the media access policies 214. In some embodiments, the policy enforcement module 212 may be implemented as a filter driver embedded in the storage driver stack of the operating system 210, as further described below in connection with FIGS. 4 and 5.

Referring now to FIG. 3, a diagram 300 illustrates one embodiment of a partitioning scheme that may be employed by the computing device 100 for the data storage device 126. Of course, in many embodiments, the data storage device 126 may include a different number or type of partitions and/or a different partitioning scheme. In the illustrative example, the data storage device 126 may include a number of blocks, each of which is individually addressable by a logical block address ("LBA"). Each LBA may be implemented as a simple integer in the range from zero to the maximum number of blocks of the data storage device 126. Each block may include a predetermined amount of data, such as 512 bytes or 4096 bytes. The illustrative data storage device 126 is divided into a partition table 302 and three data partitions 314, 316, and 318.

The partition table 302 starts at the first block of the data storage device 126, that is, the block with an LBA of zero. The illustrative partition table 302 includes a protective master boot record ("PMBR") 304, a partition table header 306, and a number of partition table entries 308. The PMBR 304 may be implemented as a legacy master boot record included in the first block (i.e., LBA zero) of the data storage device 126 for compatibility purposes. The partition table header 306, starting at the second block (e.g., LBA one) of the data storage device 126, may define the size, type, and other attributes of the partition table 302. Each partition table entry 308 may define the location, size, type, and other attributes of one of the partitions 314, 316, 318. For example, each partition table entry 308 may include pointers containing the LBAs corresponding to the start and end of each data partition 314, 316, 318.
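A partition table entry of the kind described for FIG. 3 may be expressed in C roughly as follows. The layout loosely follows a GUID Partition Table ("GPT") entry; the struct and field names are illustrative assumptions, not the claimed format.

#include <stdint.h>

/* Illustrative partition table entry 308 (GPT-style layout assumed). */
typedef struct {
    uint8_t  type_guid[16];    /* partition type; may identify the owning OS */
    uint8_t  unique_guid[16];  /* unique identifier for this partition */
    uint64_t first_lba;        /* starting LBA (cf. pointer 310) */
    uint64_t last_lba;         /* ending LBA, inclusive (cf. pointer 312) */
    uint64_t attributes;       /* attribute flags */
    uint16_t name[36];         /* partition name, UTF-16LE */
} partition_entry;

/* Returns nonzero if the given LBA falls within the partition. */
static int lba_in_partition(const partition_entry *e, uint64_t lba)
{
    return lba >= e->first_lba && lba <= e->last_lba;
}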
The illustrative pointers 310, 312 include the LBA values corresponding to the start address and the end address, respectively, of the partition 314. The partition table entries 308 may further include pointers corresponding to the partitions 316, 318, which are omitted from the diagram 300 for clarity. As another example, each partition table entry 308 may include a globally unique identifier corresponding to the type of the associated data partition 314, 316, 318. The data partition type may indicate which operating system 210a, 210b owns the data partition 314, 316, 318.

Each data partition may be owned by a particular operating system 210 or shared by multiple operating systems 210. In the illustrative example, the partition 314 is owned by the operating system 210a, the partition 316 is owned by the operating system 210b, and the partition 318 is shared by the two operating systems 210a, 210b. For example, consider that the operating system 210a is Microsoft Windows™ and the operating system 210b is Android™. In that example, the partition 314 may be implemented as a Windows™ data partition, the partition 316 may be implemented as an Android™ system partition, and the partition 318 may be implemented as a data partition shared by Windows™ and Android™.

As described further below, the computing device 100 may control access to the partition table 302 and/or the partitions 314, 316, 318 based on the media access policies 214. For example, a media access policy 214 may indicate whether media access protection should be enabled. If enabled, in some embodiments, the media access policy 214 may specify that an operating system 210 can access only partitions owned by, or shared with, that operating system 210. In the illustrative example, the operating system 210a may be allowed to access the partitions 314, 318 but not the partition 316, and the operating system 210b may be allowed to access the partitions 316, 318 but not the partition 314. As another example, both operating systems 210a, 210b may be denied access to the partition table 302. Additionally or alternatively, in some embodiments, the media access policy 214 may allow an operating system 210 selective or read-only access to partitions not owned by that operating system 210. In the illustrative example, the operating system 210a may be allowed read-only access to the partition 316, and the operating system 210b may be allowed read-only access to the partition 314. Of course, those media access policies 214 are merely illustrative, and other policies may be employed in other embodiments.

Referring now to FIG. 4, in use, the computing device 100 may execute a method 400 for media protection policy enforcement. The method 400 begins at block 402, in which the computing device 100 boots an operating system 210. As further described below in connection with FIG. 6, the computing device 100 may select the operating system 210 to be booted from within the pre-boot firmware execution environment. The firmware execution environment may be terminated, for example, by calling the UEFI function ExitBootServices(). After booting the operating system 210, the computing device 100 is controlled by the operating system 210. In some embodiments, the platform firmware 202 may still provide limited services after the operating system 210 has booted. For example, the platform firmware 202 may provide read-only access to firmware variables maintained in the NV storage device 132.

In block 404, the computing device 100 may load the policy enforcement module 212.
As described above, each operating system 210 may be configured to load a particular policy enforcement module 212. The computing device 100 may use any technique to prepare the policy enforcement module 212 to monitor and/or intercept media access requests, such as loading a kernel driver module or loading a user-mode executable. In some embodiments, in block 406, the computing device 100 may load the policy enforcement module 212 as a filter driver in the storage driver stack of the operating system 210. An illustrative storage driver stack 500 including multiple filter drivers is shown in FIG. 5.

Many operating systems 210 access the data storage device 126 through a layered storage stack, in which each layer is responsible for a particular level of abstraction of the data storage device 126. The illustrative storage driver stack 500 includes, from the highest level to the lowest level, an application 502, a higher-level filter driver 504, a storage class driver 506, a lower-level filter driver 508, and a storage port driver 510. The application 502 may be implemented as a user-level application, a kernel process, a higher-level driver, or another process that may access data on the data storage device 126. The application 502 issues media access requests to the lower-level members of the storage stack. For example, the application 502 may issue requests using a file access API, a block access API, or any other storage interface established by the operating system 210.

The storage class driver 506 may receive media access requests from the application 502 and translate the media access requests into lower-level requests, for example, into requests specifying storage addresses in terms of logical block addresses. The operating system 210 may establish a storage class driver 506 for each type of data storage device 126 usable by the computing device 100, for example, a storage class driver 506 for each of magnetic disks, removable optical discs, tape drives, or other device types. The storage port driver 510 may receive the lower-level media access requests from the storage class driver 506 and translate them into still-lower-level media access requests that can be transmitted to the data storage device 126 over a suitable interconnection bus. For example, the storage port driver 510 may generate media access requests appropriate for the particular bus protocol implemented by the data storage device 126. The operating system 210 may establish a storage port driver 510 for each interconnection bus (e.g., an ATA bus, a SCSI bus, or a USB bus) by which a data storage device 126 may be attached to the computing device 100.

The filter drivers 504, 508 may intercept media access requests issued by higher-level members of the storage driver stack. After interception, the filter drivers 504, 508 may allow an intercepted media access request to pass to the lower members of the storage driver stack, may modify the media access request before passing it to the lower members of the storage stack, or may reject the media access request by returning an error to the higher-level members of the storage driver stack. In general, each filter driver 504, 508 may implement the same interface as another member of the storage driver stack, allowing functionality to be added to the storage driver stack transparently. In the illustrative embodiment, the policy enforcement module 212 is implemented as a lower-level filter driver 508.
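The pass-or-reject behavior of such a filter driver can be sketched in C. This is an OS-agnostic sketch: the request structure, status codes, and pass-down hook are hypothetical stand-ins rather than the driver model of any particular operating system, and allow_media_access() is the policy check of FIG. 4, sketched further below.

#include <stdint.h>

typedef enum { OP_READ, OP_WRITE, OP_CREATE, OP_DELETE, OP_STAT } media_op;

/* Hypothetical lower-level media access request, already expressed in LBAs. */
typedef struct media_request {
    media_op op;          /* requested storage operation */
    uint64_t first_lba;   /* first block affected */
    uint64_t block_count; /* number of blocks affected */
} media_request;

typedef int (*next_driver_fn)(media_request *req);

extern int allow_media_access(const media_request *req);  /* policy check */

#define STATUS_OK            0
#define STATUS_ACCESS_DENIED (-13)

/* Filter entry point: pass allowed requests down the stack; reject others
 * by returning an informative error to the members above. */
int filter_dispatch(media_request *req, next_driver_fn pass_down)
{
    if (!allow_media_access(req))
        return STATUS_ACCESS_DENIED;  /* never reaches the port driver */
    return pass_down(req);            /* continue toward storage port driver 510 */
}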
Implemented at that level, the policy enforcement module 212 can intercept the lower-level media access requests, including the logical block address of each request. Additionally or alternatively, in other embodiments, the policy enforcement module 212 may be implemented in any layer of the storage driver stack below the application 502, that is, as the higher-level filter driver 504, the storage class driver 506, the lower-level filter driver 508, and/or the storage port driver 510, or a portion thereof.

Referring back to FIG. 4, after the policy enforcement module 212 is loaded, in block 408, the computing device 100 runs the selected operating system 210 and any associated applications 502. In block 410, the computing device 100 monitors for media access requests. A media access request may specify a storage operation (e.g., read, write, create a file or block, delete a file or block, request information, etc.) and a particular storage address or storage address range associated with the storage operation (e.g., one or more logical block addresses). A media access request may be generated by any application, process, thread, higher-level driver, or other entity executing on the computing device 100. A media access request may be generated by an interactive application in response to a user request, for example, or may be generated without user interaction. As described above, a media access request traveling through the storage driver stack may be intercepted, for example, by the filter drivers 504, 508. In block 412, the computing device 100 determines whether any media access requests have been received. If not, the method 400 loops back to block 410 to continue monitoring for media access requests. If one or more media access requests have been received, the method 400 advances to block 414.

In block 414, the computing device 100 determines whether the media access request is allowed by the media access policies 214. In some embodiments, in block 416, the computing device 100 may retrieve one or more of the media access policies 214 using the platform firmware 202. For example, the NV storage device 132 may be used to store the media access policies 214 as one or more firmware variables. The computing device 100 may use the runtime firmware variable interface provided by the platform firmware 202 to access those firmware variables. In some embodiments, the media access policies 214 may be embodied as encrypted, signed, or otherwise secured firmware variables. For example, the media access policies 214 may be embodied as one or more UEFI authenticated variables as specified by the UEFI specification.

Securing or otherwise authenticating access to the media access policies 214 may allow the device provider to retain control over configuration changes to the computing device 100 after the device 100 is delivered to the end user. For example, consider an embodiment in which the media access policies 214 are stored in the NV storage device 132 as firmware variables. In that example, the media access policies 214 may be modifiable only through a physically present local user interface ("UI") provided by the platform firmware 202, such as a graphical BIOS setup system. Thus, the platform firmware 202 may provide physical-presence-authenticated configuration of the media access policies 214. In such embodiments, the platform firmware 202 may lock the media access policies 214 prior to executing any untrusted firmware drivers, firmware applications, option ROMs, or operating system loaders.
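For illustration, retrieving such a policy through the UEFI runtime variable interface might look as follows in EDK II-style C. GetVariable() is the interface defined by the UEFI specification; the variable name, vendor GUID, and policy layout here are purely hypothetical.

#include <Uefi.h>
#include <Library/UefiRuntimeServicesTableLib.h>

/* Hypothetical layout of a media access policy 214 stored as a variable. */
typedef struct {
    UINT8 ProtectionEnabled;  /* nonzero: media access protection is on */
    UINT8 AllowForeignRead;   /* read-only access to unowned partitions */
    UINT8 AllowSharedAccess;  /* access to shared partitions */
    UINT8 Reserved;
} MEDIA_ACCESS_POLICY;

/* Placeholder vendor GUID; a real platform would define its own. */
STATIC EFI_GUID mMediaPolicyGuid =
    { 0x12345678, 0x1234, 0x1234, { 0, 1, 2, 3, 4, 5, 6, 7 } };

EFI_STATUS ReadMediaAccessPolicy(OUT MEDIA_ACCESS_POLICY *Policy)
{
    UINTN Size = sizeof(*Policy);
    /* Runtime firmware variable interface provided by platform firmware 202. */
    return gRT->GetVariable(L"MediaAccessPolicy", &mMediaPolicyGuid,
                            NULL, &Size, Policy);
}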
In embodiments that lock the media access policies 214 in that manner, the policies may be read-only during the untrusted portions of the pre-boot firmware execution environment (e.g., after the end of the UEFI Driver Execution Environment ("DXE")) and during runtime of an untrusted operating system 210.

As another example, consider an embodiment in which the media access policies 214 are stored in the NV storage device 132 as authenticated firmware variables, such as UEFI variables with the authenticated-write attribute bit set. In that example, the media access policies 214 may be accessed and updated at runtime, during execution of an untrusted operating system 210. However, only authenticated users or other authenticated entities are allowed to update the media access policies 214. In those embodiments, public key cryptography may be used to authenticate, verify, or otherwise secure the media access policies 214, for example by signing the authenticated firmware variables using the creator's public/private key pair.

As a third example, in some embodiments, the security processor 134 of the computing device 100 may be used to secure the media access policies 214. For example, the media access policies 214 may be stored in a secure storage area of the computing device 100 controlled by the TPM.

Still referring to FIG. 4, in block 418, the computing device 100 may determine whether media access protection is enabled. If media access protection is not enabled, media access requests may be allowed without further analysis. In some embodiments, one or more media access policies 214 may specify whether media access protection is enabled. In block 420, the computing device 100 may determine the identity of the currently executing operating system 210. The determination of whether to allow a media access request may depend on the identity of the currently executing operating system 210. In some embodiments, the identity of the currently executing operating system 210 may be known implicitly by the policy enforcement module 212. For example, in many embodiments, each operating system 210 may include a dedicated policy enforcement module 212, such as specific filter drivers 504, 508. In those embodiments, the identity of the operating system 210 may be hard-coded, assumed, or otherwise implicit in the particular policy enforcement module 212.

In block 422, the computing device 100 may determine the storage partition associated with the particular media access request. For example, the computing device 100 may determine the identity of the storage partition that contains the storage address included in the media access request. As described above, each storage partition may be owned by, or assigned to, one or more operating systems 210. To illustrate, referring again to FIG. 3, the computing device 100 may determine whether the media access request refers to the operating system 210a partition 314, the operating system 210b partition 316, the shared partition 318, or, in some embodiments, the partition table 302. In block 424, the computing device 100 may determine the storage operation associated with the media access request. For example, the computing device 100 may determine whether the storage request is a read request, a write request, a create request, a delete request, an information request, or another request.
In some embodiments, determining whether to allow a media access request may depend on the specified storage operation.

In block 426, the computing device 100 determines whether to allow the media access request based on the media access policies 214. The determination of whether to allow the media access request may be based on any combination of: the identity of the currently executing operating system 210, the storage partition associated with the request, the storage operation associated with the request, and/or other criteria specified by the media access policies 214. For example, the computing device 100 may allow all access to storage partitions owned by the currently executing operating system 210. As another example, the computing device 100 may deny all access to storage partitions not owned by the currently executing operating system 210. As a third example, the computing device 100 may allow all access to storage partitions shared between the currently executing operating system 210 and one or more other operating systems 210. As a fourth example, the computing device 100 may allow read-only access to the partition table of the data storage device 126. In some embodiments, the determination of whether to allow access to storage partitions not owned by the currently executing operating system 210 may depend on the media access policies 214, which may be defined by the platform manufacturer, the end user, or others. For example, based on the content of the media access policies 214, the computing device 100 may allow read access to storage partitions not owned by the currently executing operating system 210 but deny write access to those partitions. If the media access request is allowed, the method 400 branches to block 428. If the media access request is not allowed, the method 400 branches to block 432, as described below.

In block 428, the computing device 100 allows the media access request to proceed. In some embodiments, in block 430, the computing device 100 may pass the media access request to the lower members of the storage driver stack. For example, if the policy enforcement module 212 is implemented as a lower-level filter driver 508, the media access request may be passed down to the storage port driver 510, thereby allowing the media access request to proceed normally. After the request is allowed, the data storage device 126 may perform the requested storage operation and ultimately return data or status information up the storage driver stack to the application 502. After allowing the media access request, the method 400 loops back to block 410 to continue monitoring for media access requests.

Referring back to block 426, if the media access request is not allowed, the method 400 branches to block 432. In block 432, the computing device 100 denies the media access request. The computing device 100 may, for example, prevent the media access request from being passed down to the lower members of the storage driver stack. In some embodiments, in block 434, the computing device 100 may return an informative error message to a higher-level member of the storage driver stack, such as the application 502. For example, the computing device 100 may return "access denied", "write protected", "invalid operation", or another appropriate error message.
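One possible realization of the decision logic of blocks 418 through 426 is sketched below in C, reusing the hypothetical media_request and MEDIA_ACCESS_POLICY types introduced above together with an assumed partition-ownership lookup. It expresses only the example rules given in the text; the media access policies 214 could encode others.

/* Who owns the partition containing a request's storage address (block 422). */
enum partition_owner { OWNER_SELF, OWNER_OTHER_OS, OWNER_SHARED, PARTITION_TABLE };

extern enum partition_owner lookup_partition_owner(uint64_t lba);

/* Decide whether to allow a media access request (blocks 418-426). */
int allow_media_access(const media_request *req)
{
    MEDIA_ACCESS_POLICY policy;

    if (EFI_ERROR(ReadMediaAccessPolicy(&policy)))
        return 0;               /* fail closed if the policy is unreadable (an assumption) */
    if (!policy.ProtectionEnabled)
        return 1;               /* block 418: protection disabled, allow all */

    switch (lookup_partition_owner(req->first_lba)) {
    case OWNER_SELF:
        return 1;                                    /* own partition: allow all */
    case OWNER_SHARED:
        return policy.AllowSharedAccess;             /* shared partition */
    case PARTITION_TABLE:
        return req->op == OP_READ;                   /* read-only partition table */
    case OWNER_OTHER_OS:
        return policy.AllowForeignRead && req->op == OP_READ;
    }
    return 0;                                        /* default deny */
}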
When a request is denied, an error message may, in some embodiments, be displayed to the user, thereby allowing the user to determine why the media access request failed. After the media access request is denied, the method 400 loops back to block 410 to continue monitoring for media access requests.

Referring now to FIG. 6, in use, the computing device 100 may execute a method 600 for media protection policy enforcement. The method 600 begins at block 602, in which the computing device 100 boots. The computing device 100 may boot in response to a user powering on or rebooting the computing device 100, in response to the computing device 100 resuming from a low-power sleep or hibernation state, or in other circumstances in which the computing device 100 is initialized. In some embodiments, the computing device 100 may be booted or rebooted in response to the user directing the computing device 100 to switch operating systems 210 (e.g., by selecting a software or hardware toggle).

In block 604, the computing device 100 loads a pre-operating-system firmware execution environment. The firmware execution environment may be implemented, for example, as a driver execution environment as specified by the UEFI specification. In the firmware execution environment, the platform firmware 202 is in full control of the computing device 100. Within the firmware execution environment, the computing device 100 initializes platform hardware and otherwise prepares the computing device 100 for use. The computing device 100 may load and start firmware images for one or more firmware drivers or firmware applications. Firmware drivers and applications may be implemented as binary images that may provide pre-boot initialization and other services. Additionally, in some embodiments, firmware drivers and applications may install firmware protocol interfaces or other services that remain resident during execution of an operating system 210 and provide services to the computing device 100. For example, a firmware protocol interface may be installed to allow access to one or more firmware variables stored in the NV storage device 132.

In block 606, the computing device 100 selects the operating system 210 to be loaded from among the operating systems 210 installed on the computing device 100. The particular operating system 210 selected may depend on a user selection, a default boot order, or any other criteria for selection. In some embodiments, the computing device 100 may select a boot target associated with the selected operating system 210. The boot target may be implemented as a firmware application to be loaded and started by the computing device 100, such as an operating system loader, or a diagnostic, maintenance, or management application.

In block 608, the computing device 100 determines, based on the media access policies 214, one or more media address ranges to be protected from the selected operating system 210. In particular, the computing device 100 may determine one or more logical block address ranges, associated with storage partitions that are not owned by and/or shared with the selected operating system 210, that the operating system 210 is not to access. For example, the computing device 100 may determine the logical block addresses associated with storage partitions owned by different operating systems 210 that should be protected from the selected operating system 210.
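Block 608 essentially walks the partition table and collects the LBA ranges of every entry not owned by, or shared with, the selected operating system 210. A sketch in C, reusing the hypothetical partition_entry type from above; the ownership test is an assumption.

#include <stddef.h>
#include <stdint.h>

#define MAX_PROTECTED_RANGES 16

typedef struct { uint64_t first_lba, last_lba; } lba_range;

/* Assumed: nonzero if the entry is owned by, or shared with, the selected OS. */
extern int entry_owned_or_shared(const partition_entry *e, int selected_os);

/* Collect the media address ranges to protect from the selected OS (block 608). */
size_t determine_protected_ranges(const partition_entry *entries, size_t count,
                                  int selected_os, lba_range *out)
{
    size_t n = 0;
    for (size_t i = 0; i < count && n < MAX_PROTECTED_RANGES; i++) {
        if (!entry_owned_or_shared(&entries[i], selected_os)) {
            out[n].first_lba = entries[i].first_lba;
            out[n].last_lba  = entries[i].last_lba;
            n++;
        }
    }
    return n;  /* these ranges may then be programmed into controller 128 (block 612) */
}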
Of course, in some embodiments, the computing device 100 may additionally or alternatively determine the media address ranges that the operating system 210 will be allowed to access. For example, the computing device 100 may determine that a data partition owned by the operating system 210 may be accessed by the operating system 210. In some embodiments, in block 610, the computing device 100 may retrieve the media access policy 214 from one or more firmware variables. As described above, the firmware variables may be stored in the NV storage device 132. The firmware variables may specify, for example, whether media access protection is enabled, whether to allow access to partitions not owned by the selected operating system 210, or any other access policy.

As an illustration, referring again to FIG. 3, suppose the computing device 100 selects the operating system 210a as described above in connection with block 606. The computing device 100 may determine that the operating system 210a cannot access data addresses in the partition 316 owned by the operating system 210b. In some embodiments, the computing device 100 may determine that the operating system 210a may be allowed to access data addresses in the partition 314.

In block 612, the computing device 100 sets data access controls on the data storage device 126 based on the protected media address ranges determined in block 608 as described above. In some embodiments, the computing device 100 may program, command, or otherwise instruct the controller 128 of the data storage device 126 to deny access within the protected media address ranges. For example, the computing device 100 may program a sector range protection feature of the controller 128 to prevent access to the protected media address ranges. Of course, in some embodiments, the computing device 100 may program, command, or otherwise instruct the controller 128 of the data storage device 126 to allow access within specified media address ranges. The computing device 100 may determine whether to specify protected media address ranges or allowed media address ranges based on the capabilities of the controller 128. After the data access controls are set, any process (including any operating system 210) executing on the computing device 100 may be unable to access media addresses within the protected media address ranges. The data access controls on the data storage device 126 may be reset when the computing device 100 is rebooted, restarted, or otherwise reset.

In block 614, the computing device 100 boots the selected operating system 210. As described above, the firmware execution environment may load and execute the boot target associated with the selected operating system 210. During execution, the selected operating system 210 may be unable to access media addresses within the protected media address ranges. Thus, in use, the operating system 210 may be entirely unaware of storage partitions owned by other operating systems 210.

After booting the operating system 210, the computing device 100 may monitor media access requests using the policy enforcement module 212 associated with that operating system 210, for example by executing the method 400 described above in connection with FIG. 4. Thus, the methods 400 and 600 are complementary. The computing device 100 may execute each method 400, 600 independently, or may execute the two methods 400, 600 jointly.
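The firmware-variable retrieval of block 610, described above, could look like the following EDK II-style C sketch. The UEFI runtime service GetVariable() is real, but the variable name "MediaAccessPolicy", the vendor GUID, and the policy layout are illustrative assumptions, not values given in the disclosure.

    #include <Uefi.h>
    #include <Library/UefiRuntimeServicesTableLib.h>  /* provides gRT */

    /* Hypothetical policy layout; the on-variable format is not specified. */
    typedef struct {
        BOOLEAN ProtectionEnabled;   /* is media access protection enabled? */
        BOOLEAN AllowForeignAccess;  /* allow access to unowned partitions? */
    } MEDIA_ACCESS_POLICY;

    /* Placeholder vendor GUID; a real platform would define its own. */
    STATIC EFI_GUID mPolicyGuid = { 0x00000000, 0x0000, 0x0000,
        { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 } };

    /* Reads the policy from a firmware variable in NV storage
     * (e.g., the NV storage device 132). */
    EFI_STATUS
    ReadMediaAccessPolicy (OUT MEDIA_ACCESS_POLICY *Policy)
    {
      UINTN Size = sizeof (*Policy);
      return gRT->GetVariable (L"MediaAccessPolicy", &mPolicyGuid,
                               NULL, &Size, Policy);
    }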
For example, a computing device 100 having a hardware controller 128 that supports sector range protection may execute the method 600 in the pre-boot firmware environment and execute the method 400 after the operating system 210 is booted. Additionally or alternatively, such a computing device 100 having a hardware controller 128 that supports sector range protection may execute only the method 600 and not the method 400, for example when an operating system 210 is executed without the policy enforcement module 212. Additionally or alternatively, a computing device 100 that does not have a hardware controller 128 supporting sector range protection may execute the method 400 during execution of the operating system 210 and not execute the method 600.

Additionally or alternatively, when executed on the same computing device 100, each method 400, 600 may enforce a different media access policy 214. For example, the method 600 may use the hardware controller 128 to enforce a media access policy 214 that completely denies access to partitions not owned by the selected operating system 210. Continuing that example, the method 400 may enforce a media access policy 214 that allows selective access or read-only access to shared partitions of the data storage device 126.

Examples

Illustrative examples of the technology disclosed herein are provided below. Embodiments of these techniques may include any one or more of the examples described below, and any combination thereof.

Example 1 includes a computing device for media protection policy enforcement in a multi-operating system environment, the computing device comprising: a data storage device including a plurality of regions; and a policy enforcement module to: intercept a media access request during execution of an operating system of the computing device, wherein the media access request specifies a storage operation and a storage address; determine an identification of the operating system of the computing device; identify a region of the data storage device that includes the storage address of the media access request; and determine whether to allow the media access request based on (i) the identified region of the data storage device, (ii) the identification of the operating system, and (iii) the storage operation of the media access request.

Example 2 includes the subject matter of Example 1, and wherein to determine whether to allow the media access request comprises to: determine whether the region of the data storage device is owned by the operating system; and determine that the media access request is allowed in response to a determination that the region is owned by the operating system.

Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to determine whether to allow the media access request further comprises to determine that the media access request is not allowed in response to a determination that the region is not owned by the operating system.

Example 4 includes the subject matter of any of Examples 1-3, and wherein to determine whether to allow the media access request further comprises to determine whether to allow the media access request based on a media access policy of the computing device in response to a determination that the region is not owned by the operating system.

Example 5 includes the subject matter of any of Examples 1-4, and wherein to determine whether to allow the media access request based on the media access policy comprises to read the media access policy from a non-volatile storage device of the computing device using a firmware environment of the computing device.

Example 6 includes the subject matter of any of Examples 1-5, and wherein to determine whether to allow the media access request based on the media access policy comprises to: determine whether the region of the data storage device is shared by the operating system and a second operating system, wherein the second operating system is not executed during the determination; and determine that the media access request is allowed in response to a determination that the region is shared.

Example 7 includes the subject matter of any of Examples 1-6, and wherein to determine whether to allow the media access request based on the media access policy comprises to determine whether to allow the media access request based on the storage operation of the media access request.

Example 8 includes the subject matter of any of Examples 1-7, and wherein to determine whether to allow the media access request based on the storage operation of the media access request comprises to: determine that the media access request is allowed in response to the media access request including a read operation; and determine that the media access request is not allowed in response to the media access request including a write operation.

Example 9 includes the subject matter of any of Examples 1-8, and wherein to determine whether to allow the media access comprises to read a media access policy from a non-volatile storage device of the computing device using a firmware environment of the computing device.

Example 10 includes the subject matter of any of Examples 1-9, and wherein to determine whether to allow the media access comprises to: determine whether media access protection is enabled based on the media access policy; and determine whether to allow the media access in response to a determination that media access protection is enabled.

Example 11 includes the subject matter of any of Examples 1-10, and further comprising a second policy enforcement module to: intercept a second media access request during execution of a second operating system of the computing device, wherein the second media access request specifies a second storage operation and a second storage address; determine an identification of the second operating system; identify a second region of the data storage device that includes the second storage address of the second media access request; and determine whether to allow the second media access request based on the second region of the data storage device, the identification of the second operating system, and the second storage operation of the second media access request.

Example 12 includes the subject matter of any of Examples 1-11, and wherein the policy enforcement module comprises a filter driver of the computing device.

Example 13 includes the subject matter of any of Examples 1-12, and wherein the policy enforcement module is further to pass the media access request to a lower-level driver of the computing device in response to a determination that the media access request is allowed.

Example 14 includes the subject matter of any of Examples 1-13, and wherein the policy enforcement module is further to deny the media access request in response to a determination that the media access request is not allowed.

Example 15 includes the subject matter of any of Examples 1-14, and wherein the region of the data storage device comprises a partition of the data storage device.

Example 16 includes the subject matter of any of Examples 1-15, and wherein the region of the data storage device comprises a partition table of the data storage device.

Example 17 includes the subject matter of any of Examples 1-16, and further comprising: a boot option module established by a firmware environment of the computing device, the boot option module to select the operating system from a plurality of operating systems installed on the computing device; and a second policy enforcement module established by the firmware environment, the second policy enforcement module to (i) determine a second region of the data storage device to be protected based on the selected operating system and (ii) configure the data storage device to prevent access to the second region of the data storage device; wherein the boot option module is further to load the selected operating system in response to the configuration of the data storage device.

Example 18 includes a computing device for media protection policy enforcement in a multi-operating system environment, the computing device comprising: a data storage device including a plurality of regions; a boot option module established by a firmware environment of the computing device, the boot option module to select an operating system from a plurality of operating systems installed on the computing device; and a policy enforcement module established by the firmware environment, the policy enforcement module to (i) determine a region of the data storage device to be protected based on the selected operating system and (ii) configure the data storage device to prevent access to the region of the data storage device; wherein the boot option module is further to load the selected operating system in response to the configuration of the data storage device.

Example 19 includes the subject matter of Example 18, and wherein the region of the data storage device comprises a partition of the data storage device.

Example 20 includes the subject matter of any of Examples 18 and 19, and wherein the region of the data storage device comprises a partition table of the data storage device.
Example 21 includes the subject matter of any of Examples 18-20, and wherein to determine the region of the data storage device comprises to identify a region of the data storage device that is not owned by the selected operating system.

Example 22 includes the subject matter of any of Examples 18-21, and wherein to determine the region of the data storage device comprises to determine the region based on a media access policy of the computing device.

Example 23 includes the subject matter of any of Examples 18-22, and wherein to determine the region based on the media access policy comprises to read the media access policy from a non-volatile storage device of the computing device.

Example 24 includes the subject matter of any of Examples 18-23, and further comprising a second policy enforcement module to: intercept a media access request during execution of the selected operating system, wherein the media access request specifies a storage operation and a storage address; determine an identification of the selected operating system; identify a second region of the data storage device that includes the storage address of the media access request; and determine whether to allow the media access request based on the second region of the data storage device, the identification of the selected operating system, and the storage operation of the media access request.

Example 25 includes a method for media protection policy enforcement in a multi-operating system environment, the method comprising: intercepting, by a computing device, a media access request during execution of an operating system of the computing device, the media access request specifying a storage operation and a storage address; determining, by the computing device, an identification of the operating system of the computing device; identifying, by the computing device, a region of a data storage device of the computing device that includes the storage address of the media access request; and determining, by the computing device, whether to allow the media access request based on (i) the identified region of the data storage device, (ii) the identification of the operating system, and (iii) the storage operation of the media access request.

Example 26 includes the subject matter of Example 25, and wherein determining whether to allow the media access request comprises: determining whether the region of the data storage device is owned by the operating system; and determining that the media access request is allowed in response to determining that the region is owned by the operating system.

Example 27 includes the subject matter of any of Examples 25 and 26, and wherein determining whether to allow the media access request further comprises determining that the media access request is not allowed in response to determining that the region is not owned by the operating system.

Example 28 includes the subject matter of any of Examples 25-27, and wherein determining whether to allow the media access request further comprises determining whether to allow the media access request based on a media access policy of the computing device in response to determining that the region is not owned by the operating system.

Example 29 includes the subject matter of any of Examples 25-28, and wherein determining whether to allow the media access request based on the media access policy comprises reading the media access policy from a non-volatile storage device of the computing device using a firmware environment of the computing device.

Example 30 includes the subject matter of any of Examples 25-29, and wherein determining whether to allow the media access request based on the media access policy comprises: determining whether the region of the data storage device is shared by the operating system and a second operating system, wherein the second operating system is not currently executed; and determining that the media access request is allowed in response to determining that the region is shared.

Example 31 includes the subject matter of any of Examples 25-30, and wherein determining whether to allow the media access request based on the media access policy comprises determining whether to allow the media access request based on the storage operation of the media access request.

Example 32 includes the subject matter of any of Examples 25-31, and wherein determining whether to allow the media access request based on the storage operation of the media access request comprises: determining that the media access request is allowed in response to the media access request including a read operation; and determining that the media access request is not allowed in response to the media access request including a write operation.

Example 33 includes the subject matter of any of Examples 25-32, and wherein determining whether to allow the media access comprises reading a media access policy from a non-volatile storage device of the computing device using a firmware environment of the computing device.

Example 34 includes the subject matter of any of Examples 25-33, and wherein determining whether to allow the media access comprises: determining whether media access protection is enabled based on the media access policy; and determining whether to allow the media access in response to determining that media access protection is enabled.

Example 35 includes the subject matter of any of Examples 25-34, and further comprising: intercepting, by the computing device, a second media access request during execution of a second operating system of the computing device, the second media access request specifying a second storage operation and a second storage address; determining, by the computing device, an identification of the second operating system; identifying, by the computing device, a second region of the data storage device that includes the second storage address of the second media access request; and determining, by the computing device, whether to allow the second media access request based on the second region of the data storage device, the identification of the second operating system, and the second storage operation of the second media access request.

Example 36 includes the subject matter of any of Examples 25-35, and wherein intercepting the media access request comprises intercepting the media access request using a filter driver of the computing device.

Example 37 includes the subject matter of any of Examples 25-36, and further comprising passing, by the computing device, the media access request to a lower-level driver of the computing device in response to determining that the media access request is allowed.

Example 38 includes the subject matter of any of Examples 25-37, and further comprising denying, by the computing device, the media access request in response to determining that the media access request is not allowed.

Example 39 includes the subject matter of any of Examples 25-38, and wherein identifying the region of the data storage device comprises identifying a partition of the data storage device.

Example 40 includes the subject matter of any of Examples 25-39, and wherein identifying the region of the data storage device comprises identifying a partition table of the data storage device.

Example 41 includes the subject matter of any of Examples 25-40, and further comprising: loading, by the computing device, a firmware execution environment before executing the operating system of the computing device; selecting, by the computing device with the firmware execution environment, the operating system from a plurality of operating systems installed on the computing device; determining, by the computing device with the firmware execution environment, a second region of the data storage device to be protected based on the selected operating system; configuring, by the computing device with the firmware execution environment, the data storage device to prevent access to the second region of the data storage device; and loading, by the computing device with the firmware execution environment, the selected operating system in response to configuring the data storage device.

Example 42 includes a method for media protection policy enforcement in a multi-operating system environment, the method comprising: loading, by a computing device, a pre-operating-system firmware execution environment; selecting, by the computing device with the firmware execution environment, an operating system from a plurality of operating systems installed on the computing device; determining, by the computing device with the firmware execution environment, a region of a data storage device to be protected based on the selected operating system; configuring, by the computing device with the firmware execution environment, the data storage device to prevent access to the region of the data storage device; and loading, by the computing device with the firmware execution environment, the selected operating system in response to configuring the data storage device.

Example 43 includes the subject matter of Example 42, and wherein determining the region of the data storage device comprises determining a partition of the data storage device.

Example 44 includes the subject matter of any of Examples 42 and 43, and wherein determining the region of the data storage device comprises determining a partition table of the data storage device.

Example 45 includes the subject matter of any of Examples 42-44, and wherein determining the region of the data storage device comprises identifying a region of the data storage device that is not owned by the selected operating system.

Example 46 includes the subject matter of any of Examples 42-45, and wherein determining the region of the data storage device comprises determining the region based on a media access policy of the computing device.

Example 47 includes the subject matter of any of Examples 42-46, and wherein determining the region based on the media access policy comprises reading the media access policy from a non-volatile storage device of the computing device.
Example 48 includes the subject matter of any of Examples 42-47, and further comprising: intercepting, by the computing device, a media access request during execution of the selected operating system, the media access request specifying a storage operation and a storage address; determining, by the computing device, an identification of the selected operating system; identifying, by the computing device, a second region of the data storage device that includes the storage address of the media access request; and determining, by the computing device, whether to allow the media access request based on the second region of the data storage device, the identification of the selected operating system, and the storage operation of the media access request.

Example 49 includes a computing device comprising: a processor; and a memory having stored therein a plurality of instructions that, when executed by the processor, cause the computing device to perform the method of any of Examples 25-48.

Example 50 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a computing device to perform the method of any of Examples 25-48.

Example 51 includes a computing device comprising means for performing the method of any of Examples 25-48.

Example 52 includes a computing device for media protection policy enforcement in a multi-operating system environment, the computing device comprising: means for intercepting a media access request during execution of an operating system of the computing device, the media access request specifying a storage operation and a storage address; means for determining an identification of the operating system of the computing device; means for identifying a region of a data storage device of the computing device that includes the storage address of the media access request; and means for determining whether to allow the media access request based on (i) the identified region of the data storage device, (ii) the identification of the operating system, and (iii) the storage operation of the media access request.

Example 53 includes the subject matter of Example 52, and wherein the means for determining whether to allow the media access request comprises: means for determining whether the region of the data storage device is owned by the operating system; and means for determining that the media access request is allowed in response to determining that the region is owned by the operating system.

Example 54 includes the subject matter of any of Examples 52 and 53, and wherein the means for determining whether to allow the media access request further comprises means for determining that the media access request is not allowed in response to determining that the region is not owned by the operating system.

Example 55 includes the subject matter of any of Examples 52-54, and wherein the means for determining whether to allow the media access request further comprises means for determining whether to allow the media access request based on a media access policy of the computing device in response to determining that the region is not owned by the operating system.

Example 56 includes the subject matter of any of Examples 52-55, and wherein the means for determining whether to allow the media access request based on the media access policy comprises means for reading the media access policy from a non-volatile storage device of the computing device using a firmware environment of the computing device.

Example 57 includes the subject matter of any of Examples 52-56, and wherein the means for determining whether to allow the media access request based on the media access policy comprises: means for determining whether the region of the data storage device is shared by the operating system and a second operating system, wherein the second operating system is not currently executed; and means for determining that the media access request is allowed in response to determining that the region is shared.

Example 58 includes the subject matter of any of Examples 52-57, and wherein the means for determining whether to allow the media access request based on the media access policy comprises means for determining whether to allow the media access request based on the storage operation of the media access request.

Example 59 includes the subject matter of any of Examples 52-58, and wherein the means for determining whether to allow the media access request based on the storage operation of the media access request comprises: means for determining that the media access request is allowed in response to the media access request including a read operation; and means for determining that the media access request is not allowed in response to the media access request including a write operation.

Example 60 includes the subject matter of any of Examples 52-59, and wherein the means for determining whether to allow the media access comprises means for reading a media access policy from a non-volatile storage device of the computing device using a firmware environment of the computing device.

Example 61 includes the subject matter of any of Examples 52-60, and wherein the means for determining whether to allow the media access comprises: means for determining whether media access protection is enabled based on the media access policy; and means for determining whether to allow the media access in response to determining that media access protection is enabled.

Example 62 includes the subject matter of any of Examples 52-61, and further comprising: means for intercepting a second media access request during execution of a second operating system of the computing device, the second media access request specifying a second storage operation and a second storage address; means for determining an identification of the second operating system; means for identifying a second region of the data storage device that includes the second storage address of the second media access request; and means for determining whether to allow the second media access request based on the second region of the data storage device, the identification of the second operating system, and the second storage operation of the second media access request.

Example 63 includes the subject matter of any of Examples 52-62, and wherein the means for intercepting the media access request comprises means for intercepting the media access request using a filter driver of the computing device.

Example 64 includes the subject matter of any of Examples 52-63, and further comprising means for passing the media access request to a lower-level driver of the computing device in response to determining that the media access request is allowed.

Example 65 includes the subject matter of any of Examples 52-64, and further comprising means for denying the media access request in response to determining that the media access request is not allowed.

Example 66 includes the subject matter of any of Examples 52-65, and wherein the means for identifying the region of the data storage device comprises means for identifying a partition of the data storage device.

Example 67 includes the subject matter of any of Examples 52-66, and wherein the means for identifying the region of the data storage device comprises means for identifying a partition table of the data storage device.

Example 68 includes the subject matter of any of Examples 52-67, and further comprising: means for loading a firmware execution environment before executing the operating system of the computing device; means for selecting, with the firmware execution environment, the operating system from a plurality of operating systems installed on the computing device; means for determining, with the firmware execution environment, a second region of the data storage device to be protected based on the selected operating system; means for configuring, with the firmware execution environment, the data storage device to prevent access to the second region of the data storage device; and means for loading, with the firmware execution environment, the selected operating system in response to configuring the data storage device.

Example 69 includes a computing device for media protection policy enforcement in a multi-operating system environment, the computing device comprising: means for loading a pre-operating-system firmware execution environment; means for selecting, with the firmware execution environment, an operating system from a plurality of operating systems installed on the computing device; means for determining, with the firmware execution environment, a region of a data storage device to be protected based on the selected operating system; means for configuring, with the firmware execution environment, the data storage device to prevent access to the region of the data storage device; and means for loading, with the firmware execution environment, the selected operating system in response to configuring the data storage device.

Example 70 includes the subject matter of Example 69, and wherein the means for determining the region of the data storage device comprises means for determining a partition of the data storage device.

Example 71 includes the subject matter of any of Examples 69 and 70, and wherein the means for determining the region of the data storage device comprises means for determining a partition table of the data storage device.

Example 72 includes the subject matter of any of Examples 69-71, and wherein the means for determining the region of the data storage device comprises means for identifying a region of the data storage device that is not owned by the selected operating system.

Example 73 includes the subject matter of any of Examples 69-72, and wherein the means for determining the region of the data storage device comprises means for determining the region based on a media access policy of the computing device.

Example 74 includes the subject matter of any of Examples 69-73, and wherein the means for determining the region based on the media access policy comprises means for reading the media access policy from a non-volatile storage device of the computing device.

Example 75 includes the subject matter of any of Examples 69-74, and further comprising: means for intercepting a media access request during execution of the selected operating system, the media access request specifying a storage operation and a storage address; means for determining an identification of the selected operating system; means for identifying a second region of the data storage device that includes the storage address of the media access request; and means for determining whether to allow the media access request based on the second region of the data storage device, the identification of the selected operating system, and the storage operation of the media access request.
Embodiments herein relate to a system, apparatus, and/or process for producing a spin orbit torque (SOT) electrode (102) that includes a first layer (102a) with a first side to couple with a free layer (110) of a magnetic tunnel junction (MTJ) (108) and a second layer (102b) coupled with a second side of the first layer opposite the first side, where a value of an electrical resistance in the first SOT layer is lower than a value of an electrical resistance in the second SOT layer and where a current applied to the SOT electrode is to cause current to preferentially flow in the first SOT layer to cause a magnetic polarization of the free layer to change directions. During production of the SOT electrode, the second layer may act as an etch stop. |
A spin orbit torque (SOT) electrode comprising: a first layer with a first side to couple with a free layer of a magnetic tunnel junction (MTJ); and a second layer coupled with a second side of the first layer opposite the first side, wherein a value of an electrical resistance in the first SOT layer is lower than a value of an electrical resistance in the second SOT layer.

The SOT electrode of claim 1, wherein a value of spin conductivity in the first SOT layer is higher than a value of spin conductivity in the second SOT layer.

The SOT electrode of claim 1, wherein a current applied to the SOT electrode is to cause current to preferentially flow in the first SOT layer to cause a magnetic polarization of the free layer to change directions.

The SOT electrode of claim 3, wherein the magnetic polarization of the free layer is substantially perpendicular to the first side of the first layer.

The SOT electrode of claim 1, 2, 3 or 4, wherein the first layer and the second layer include one or more of: graphene, TiS2, WS2, MoS2, TiSe2, WSe2, MoSe2, B2S3, Sb2S3, Ta2S, Re2S7, LaCPS2, LaOAsS2, ScOBiS2, GaOBiS2, AlOBiS2, LaOSbS2, BiOBiS2, YOBiS2, InOBiS2, LaOBiSe2, TiOBiS2, CeOBiS2, PrOBiS2, NdOBiS2, LaOBiS2, or SrFBiS2.

The SOT electrode of claim 1, wherein the first side of the first layer has a smaller area than a side of the second layer opposite a side of the second layer coupled with the second side of the first layer.

A method for creating a package, comprising: coupling a first side of a first layer of a spin orbit torque (SOT) electrode to a first side of a second layer of the SOT electrode, wherein a value of an electrical resistance in the second layer is lower than a value of an electrical resistance in the first layer.

The method of claim 7, further comprising coupling a first side of a free layer of a magnetic tunnel junction (MTJ) to a second side of the second layer opposite the first side.

The method of claim 8, further comprising etching the package.

The method of claim 9, wherein the second layer of the SOT electrode is an etch stop.

The method of claim 8, further comprising, before etching the package: coupling a first side of an MTJ coupling layer to a second side of the free layer opposite the first side; and coupling a first side of an MTJ fixed layer to a second side of the MTJ coupling layer.

The method of claim 7, wherein a value of a spin conductivity in the first SOT layer is higher than a value of a spin conductivity in the second SOT layer.

The method of claim 7, 8, 9, 10, 11 or 12, wherein the first layer or the second layer include one or more of: graphene, TiS2, WS2, MoS2, TiSe2, WSe2, MoSe2, B2S3, Sb2S3, Ta2S, Re2S7, LaCPS2, LaOAsS2, ScOBiS2, GaOBiS2, AlOBiS2, LaOSbS2, BiOBiS2, YOBiS2, InOBiS2, LaOBiSe2, TiOBiS2, CeOBiS2, PrOBiS2, NdOBiS2, LaOBiS2, or SrFBiS2.
Field

Embodiments of the present disclosure generally relate to the field of magnetic random access memory (MRAM), and in particular to the composition of spin orbit torque (SOT) electrodes.

Background

The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.

For in-plane polarized magnetic films, electron spin currents arising from the spin-Hall effect (SHE) within heavy metals have been shown to apply spin-transfer torques to a magnet. The SHE may be used to change a magnetic polarity of a free layer of a magnetic tunnel junction (MTJ) that may be used to implement MRAM.

Brief Description of the Drawings

Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.

Figure 1 illustrates a stage during the manufacture of an MRAM stack with an MTJ that includes multiple layers of an SOT electrode, in accordance with one implementation of the invention.

Figure 2 illustrates a stage during the manufacture of an MRAM stack with an MTJ that includes a layer of the SOT electrode as an etch stop, in accordance with one implementation of the invention.

Figure 3 illustrates an example process that uses a high resistivity SOT layer as an etch stop during the manufacture of an MRAM stack, in accordance with one implementation of the invention.

Figure 4 shows a complementary metal-oxide-semiconductor (CMOS) stack that integrates an MRAM, in accordance with various embodiments.

Figure 5 illustrates a computing device 500 in accordance with one implementation of the invention.

Figure 6 illustrates an interposer 600 that includes one or more embodiments of the invention.

Detailed Description

Embodiments of the present disclosure generally relate to apparatuses, processes, or systems to manufacture or use MRAM. In legacy implementations, the MRAM may include an SOT electrode that may include a heavy metal, a two-dimensional (2D) material, an antiferromagnet (AFM), or a topological insulator (TI). The SOT electrode may facilitate switching the magnetic field within a free layer of an MTJ magnetically coupled to the SOT electrode. The SOT may enable the use of complex magnetic stacks developed with a synthetic antiferromagnet (SAF) to implement spin transfer torque memory by changing the polarity direction of the magnetic field in the magnetic free layer of the MTJ.

In embodiments, the SOT electrode may be implemented as a multilayer SOT electrode having different layers with different values of electrical resistivity. In embodiments, a low resistivity SOT material, which may have high spin conductivity, may be used as a top SOT layer that may be connected to a magnetic free layer of an MTJ, while the bottom SOT layer may be a high resistivity material, which may have low spin conductivity. In embodiments, the material that may make up the bottom SOT layer may generally be thicker and may be used as an etch stop during manufacturing.

In legacy implementations, patterning an SOT electrode may present challenges.
First, the SOT electrode is typically only a few nanometers thick, for example between 0.5 nanometers and 20 nanometers thick, and lies at the bottom of a large magnetic stack. In such a configuration, stopping an etching process on an exact film layer can be imprecise and may result in over-etching the layer. Over-etching may adversely affect manufacturing yield and may increase the SOT electrode interconnect resistance. For example, the legacy SOT electrode may be a local interconnect under the MTJ between two vias that connect to transistors. If the legacy SOT layer exceeds a resistance threshold, a higher voltage may need to be applied to achieve enough current density to switch the free layer magnet in the MTJ, which may affect the operating efficiency of the MRAM device.

Embodiments described herein may facilitate the MRAM manufacturing process by relaxing constraints on etching the MRAM and allowing for etching into the SOT electrode. In embodiments, the etching process may continue into the SOT electrode, with a lower layer of the SOT electrode, which has a higher electrical resistance, acting as an etch stop. When a current is applied to the SOT electrode, the high spin conductivity of the top SOT electrode layer may allow more current to flow in that top layer, adjacent to the magnetic free layer, and generate spin current for SOT switching of a magnetic free layer adjacent to the SOT electrode. Although part of the SOT electrode may be etched away, a low impedance interconnect from the SOT to the magnetic free layer of the MTJ is still available.

In the following detailed description, reference is made to the accompanying drawings which form a part hereof, wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments in which the subject matter of the present disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).

The description may use perspective-based descriptions such as top/bottom, in/out, over/under, and the like. Such descriptions are merely used to facilitate the discussion and are not intended to restrict the application of embodiments described herein to any particular orientation.

The description may use the phrases "in an embodiment," or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.

The term "coupled with," along with its derivatives, may be used herein. "Coupled" may mean one or more of the following. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements indirectly contact each other, but yet still cooperate or interact with each other, and may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other.
The term "directly coupled" may mean that two or more elements are in direct contact.Various operations may be described as multiple discrete operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent.As used herein, the term "module" may refer to, be part of, or include an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.Figure 1 illustrates a stage during the manufacture of a MRAM stack with a MTJ that includes multiple layers of an SOT electrode, in accordance with one implementation of the invention. Diagram 100 is an embodiment of a MRAM stack that may include a multilayer SOT electrode 102. In embodiments, the SOT electrode 102 may include a low resistivity SOT layer 102a and a high resistivity SOT layer 102b. The low resistivity SOT layer 102a may be coupled with a magnetic free layer 104 which in turn may be coupled to a coupling layer 106. In embodiments, the coupling layer 106 may be coupled to an MTJ 108, that may include a magnetic free layer 110, a tunneling barrier 112, and a magnetic fixed layer 114. In embodiments, the magnetic free layer 110 may have high tunnel magnetoresistance (TMR) properties, and the magnetic fixed layer 114 may have high TMR properties. In embodiments, the high TMR layer may be implemented by one or more Ferromagnetic layer.In embodiments, the magnetic fixed layer 114 may be a fixed magnet having a fixed polarity. In embodiments, the polarity may be perpendicular to the SOT electrode 102. The tunneling barrier 112 may be a magnesium oxide (MgO) tunneling oxide.In embodiments, the magnetic fixed layer 114 of the MTJ 108 may be coupled with a coupling layer 116 that may be coupled to a synthetic anti-Ferro-magnet (SAF) layer 118. The SAF layer 118 may have a polarity direction 118a that may be perpendicular to a plane of the SOT electrode 102. The SAF layer 118 may facilitate maintaining a polarity direction 114a of the magnetic fixed layer 114. In embodiments, one or more capping metals 120 may be applied to the SAF layer 118 that may complete the layers of the MRAM stack 100. The MRAM stack 100 is in a partial etching process where the stack is being etched 100a, 100b, toward the magnetic free layer 104.In embodiments, the composition of the SOT electrode 102 may include one or more heavy metals, AFM, or topological insulator (TI). In embodiments, SOT electrode 102 may include spin orbit TI, 2D or 3D materials which may include, but are not limited to, one or more of: graphene, TiSe2, WSe2, MoS2, WSe2, MoSe2, B2S3, Sb2S3, Ta2S, Re2S7, LaCPS2, LaOAsS2, ScOBiS2, GaOBiS2, AlOBiS2, LaOSbS2, BiOBiS2, YOBiS2, InOBiS2, LaOBiSe2, TiOBiS2, CeOBiS2, PrOBiS2, NdOBiS2, LaOBiS2, or SrFBiS2. 
In embodiments, the SOT electrode 102 may include spin orbit material that may exhibit a Rashba-Bychkov effect in materials of the form ROCh2, where 'R' includes, but is not limited to, one or more of: La, Ce, Pr, Nd, Sr, Sc, Ga, Al, or In, and where 'Ch' may be a chalcogenide, which may include, but is not limited to, one or more of: S, Se, or Te.

An AFM may include, but is not limited to, Co/antiferromagnet, Fe/antiferromagnet, Ni/antiferromagnet, MnGa/antiferromagnet, MnGeGa/antiferromagnet, or Bct-Ru/antiferromagnet. A TI may also include, but is not limited to, Bi2Se3, BixTeySe1-x-y, BixSb1-x, WSe2, WTe2, PtSe2, PtTe2, MoSe2, MoS2, MoTe2, TiS2, WS2, TiSe2, B2S3, Sb2S3, Ta2S, Re2S7, LaCPS2, LaOAsS2, ScOBiS2, GaOBiS2, AlOBiS2, LaOSbS2, BiOBiS2, YOBiS2, InOBiS2, LaOBiSe2, TiOBiS2, CeOBiS2, PrOBiS2, NdOBiS2, LaOBiS2, or SrFBiS2.

In embodiments, the SOT materials may be combined so that the material in the low resistivity SOT layer 102a has a lower electrical resistance than the high resistivity SOT layer 102b, to achieve a higher efficiency of switching the polarity of the magnetic free layer 104. In embodiments, the lower resistance may also be achieved by mixing the materials with, and/or doping them with, Cu, Al, or similar highly conductive materials.

In embodiments, the SOT electrode 102, as well as the SOT layers 102a, 102b, may be magnetically doped using a magnetic material (not shown) that may include ferromagnets such as cobalt (Co), iron (Fe), nickel (Ni), MnGa, MnGeGa, Bct-Ru, Gd, or Tb. The magnetic material (not shown) may include material with perpendicular magnetic anisotropy (PMA) with an anisotropy axis perpendicular to a plane of the SOT electrode 102.

As a result, the SOT electrode 102 may have a net magnetic moment that may interact with the adjacent magnetic free layer, such as the magnetic free layer 104 of Figure 1. This may apply an effective field on the free layer magnet in a direction opposite to the internal magnetic moment. This effective field may then break the symmetry of the spin orbit switching of the free layer, thereby enabling repeatable bidirectional current switching. The doped SOT layer may create an in-plane exchange bias or a dipole field. The resulting effective field may generate an in-plane magnetic field on the perpendicular magnetic free layer of the MTJ. This may then facilitate deterministic bidirectional switching of the MRAM by flipping the polarity of the magnetic free layer 110 depending on the direction of current flow through the SOT electrode 102. This may enable repeatable bidirectional switching of a perpendicular magnetic polarity within magnetic free layers, such as the magnetic free layer 110, within the MRAM.

In embodiments, the partial MRAM stack 100 may be etched, for example on sides 100a, 100b, to form a nanopillar. In embodiments, the etching process may include ion beam etching (IBE) or reactive ion etching (RIE).
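A first-order way to see why combining materials in this manner concentrates the drive current in the low resistivity layer is to treat the two SOT layers as resistors in parallel; this is standard circuit analysis rather than a formula given in this disclosure. For layers of equal length and width, with resistivities \rho_a, \rho_b and thicknesses t_a, t_b,

    \frac{I_a}{I_b} = \frac{R_b}{R_a} = \frac{\rho_b \, t_a}{\rho_a \, t_b}

so when \rho_a is much smaller than \rho_b, most of the applied current flows in the low resistivity SOT layer 102a adjacent to the magnetic free layer 104, even if the high resistivity SOT layer 102b is somewhat thicker.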
Figure 2 illustrates a stage during the manufacture of an MRAM stack with an MTJ that includes a layer of the SOT electrode as an etch stop, in accordance with one implementation of the invention. Diagram 200 is an embodiment of an MRAM stack that may be etched down to the high resistivity SOT layer 202b, which may be similar to the high resistivity SOT layer 102b of Figure 1.

In embodiments, when a current 224 is applied to the SOT electrode 202, the current 224 may first flow through the second layer 202b along a current path 224a until the current reaches the low resistivity SOT layer 202a. At this point, a majority of the current 224 may preferentially flow along the lower resistivity current path 224b through the low resistivity SOT layer 202a. This may generate electron spins via the high spin conductivity of the low resistivity SOT layer 202a. These spins may impinge on the magnetic free layer 204 and thereby switch the polarity of the magnetic free layer 204. For example, the polarity may switch from 204a to 204b, or from 204b to 204a, depending upon the direction of the current flow 224.

In embodiments, the current flow 224 may also switch the polarity of the high TMR magnetic free layer 210. For example, the polarity may switch from 210a to 210b, or from 210b to 210a, depending upon the direction of the current flow 224.

Figure 3 illustrates an example process that uses a high resistivity SOT layer as an etch stop during the manufacture of an MRAM stack, in accordance with one implementation of the invention. The process 300 may be implemented using the techniques and materials described in Figures 1-2.

At block 302, the process may include coupling a first side of a first layer of an SOT electrode to a first side of a second layer of the SOT electrode, wherein a value of an electrical resistance in the second layer is lower than a value of an electrical resistance in the first layer. In embodiments, the first layer of the SOT electrode may correspond to the high resistivity SOT layer 102b of Figure 1, and the second layer of the SOT electrode may correspond to the low resistivity SOT layer 102a. As a result, when a current is applied to the SOT electrode 102, the current will preferentially flow in the low resistivity SOT layer 102a.

In addition, the process may include coupling a first side of a free layer of a magnetic tunnel junction (MTJ) to a second side of the second layer opposite the first side. In embodiments, the free layer may be similar to the magnetic free layer 104 of Figure 1 that may be coupled with the MTJ 108. In other embodiments, the free layer may be similar to the magnetic free layer 110, where there is no magnetic free layer 104 or coupling layer 106. In embodiments, the magnetic free layer 104 and the coupling layer 106 may be coupled with the low resistivity SOT layer 102a.

The process may include etching the package. In embodiments, the etching may be similar to the etching of the sides 100a, 100b of Figure 1. In embodiments, the etching may continue down to the second layer of the SOT electrode as an etch stop, as may be shown in Figure 2. The etching may continue through the low resistivity SOT layer 202a to expose the high resistivity SOT layer 202b.
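For context, the charge-to-spin conversion underlying the switching described above is commonly characterized by the spin Hall angle \theta_{SH}; the following relation is standard spin-orbitronics background, not a formula from this disclosure:

    J_s = \theta_{SH} \, \frac{\hbar}{2e} \, J_c

where J_c is the charge current density in the SOT layer, J_s is the transverse spin current density impinging on the free layer, \hbar is the reduced Planck constant, and e is the elementary charge. Concentrating J_c in the layer adjacent to the free layer (e.g., the low resistivity SOT layer 202a) therefore maximizes the spin current available for switching.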
Implementations of embodiments of the invention may be formed or carried out on a substrate, such as a semiconductor substrate. In one implementation, the semiconductor substrate may be a crystalline substrate formed using a bulk silicon or a silicon-on-insulator substructure. In other implementations, the semiconductor substrate may be formed using alternate materials, which may or may not be combined with silicon, that include but are not limited to germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, indium gallium arsenide, gallium antimonide, or other combinations of group III-V or group IV materials. Although a few examples of materials from which the substrate may be formed are described here, any material that may serve as a foundation upon which a semiconductor device may be built falls within the spirit and scope of the present invention.

A plurality of transistors, such as metal-oxide-semiconductor field-effect transistors (MOSFET or simply MOS transistors), may be fabricated on the substrate. In various implementations of the invention, the MOS transistors may be planar transistors, nonplanar transistors, or a combination of both. Nonplanar transistors include FinFET transistors such as double-gate transistors and tri-gate transistors, and wrap-around or all-around gate transistors such as nanoribbon and nanowire transistors. Although the implementations described herein may illustrate only planar transistors, it should be noted that the invention may also be carried out using nonplanar transistors.

Each MOS transistor includes a gate stack formed of at least two layers, a gate dielectric layer and a gate electrode layer. The gate dielectric layer may include one layer or a stack of layers. The one or more layers may include silicon oxide, silicon dioxide (SiO2) and/or a high-k dielectric material. The high-k dielectric material may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that may be used in the gate dielectric layer include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, an annealing process may be carried out on the gate dielectric layer to improve its quality when a high-k material is used.

The gate electrode layer is formed on the gate dielectric layer and may consist of at least one P-type workfunction metal or N-type workfunction metal, depending on whether the transistor is to be a PMOS or an NMOS transistor. In some implementations, the gate electrode layer may consist of a stack of two or more metal layers, where one or more metal layers are workfunction metal layers and at least one metal layer is a fill metal layer.

For a PMOS transistor, metals that may be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, e.g., ruthenium oxide. A P-type metal layer will enable the formation of a PMOS gate electrode with a workfunction that is between about 4.9 eV and about 5.2 eV. For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals such as hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide.
An N-type metal layer will enable the formation of an NMOS gate electrode with a workfunction that is between about 3.9 eV and about 4.2 eV. In some implementations, the gate electrode may consist of a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate. In another implementation, at least one of the metal layers that form the gate electrode may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In further implementations of the invention, the gate electrode may consist of a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode may consist of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers. In some implementations of the invention, a pair of sidewall spacers may be formed on opposing sides of the gate stack that bracket the gate stack. The sidewall spacers may be formed from a material such as silicon nitride, silicon oxide, silicon carbide, silicon nitride doped with carbon, and silicon oxynitride. Processes for forming sidewall spacers are well known in the art and generally include deposition and etching process steps. In an alternate implementation, a plurality of spacer pairs may be used; for instance, two pairs, three pairs, or four pairs of sidewall spacers may be formed on opposing sides of the gate stack. As is well known in the art, source and drain regions are formed within the substrate adjacent to the gate stack of each MOS transistor. The source and drain regions are generally formed using either an implantation/diffusion process or an etching/deposition process. In the former process, dopants such as boron, aluminum, antimony, phosphorous, or arsenic may be ion-implanted into the substrate to form the source and drain regions. An annealing process that activates the dopants and causes them to diffuse further into the substrate typically follows the ion implantation process. In the latter process, the substrate may first be etched to form recesses at the locations of the source and drain regions. An epitaxial deposition process may then be carried out to fill the recesses with material that is used to fabricate the source and drain regions. In some implementations, the source and drain regions may be fabricated using a silicon alloy such as silicon germanium or silicon carbide. In some implementations the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorous. In further embodiments, the source and drain regions may be formed using one or more alternate semiconductor materials such as germanium or a group III-V material or alloy. And in further embodiments, one or more layers of metal and/or metal alloys may be used to form the source and drain regions. One or more interlayer dielectrics (ILD) are deposited over the MOS transistors. The ILD layers may be formed using dielectric materials known for their applicability in integrated circuit structures, such as low-k dielectric materials.
Examples of dielectric materials that may be used include, but are not limited to, silicon dioxide (SiO2), carbon doped oxide (CDO), silicon nitride, organic polymers such as perfluorocyclobutane or polytetrafluoroethylene, fluorosilicate glass (FSG), and organosilicates such as silsesquioxane, siloxane, or organosilicate glass. The ILD layers may include pores or air gaps to further reduce their dielectric constant. Figure 4 shows a CMOS stack that integrates an MRAM, in accordance with various embodiments. The MTJ 452, which may be in metal layer 3, may be similar to MTJ 108 of Figure 1, and may be coupled to the SOT 456, which may be in metal layer 2, and may be similar to SOT 102 of Figure 1 or SOT 202 of Figure 2. Magnetic via 458 may include magnetically active material in the via 458 that may apply an in-plane magnetic field to a magnetic free layer of the MTJ 452. The magnetic free layer may be similar to magnetic free layer 104 of Figure 1 or magnetic free layers 204, 210 of Figure 2. Sources for current flow through the SOT 456 may be through metal layer 1 via 462 and/or through metal layer 1 via 460. Bit line 450, which may be in metal layer 4, may provide current to the MTJ 452 that may be used to read a bit of the MRAM. Metal layer 0 468 may be at the bottom of the CMOS stack. Figure 5 illustrates a computing device 500 in accordance with one implementation of the invention. The computing device 500 houses a board 502. The board 502 may include a number of components, including but not limited to a processor 504 and at least one communication chip 506. The processor 504 is physically and electrically coupled to the board 502. In some implementations the at least one communication chip 506 is also physically and electrically coupled to the board 502. In further implementations, the communication chip 506 is part of the processor 504. Depending on its applications, computing device 500 may include other components that may or may not be physically and electrically coupled to the board 502. These other components include, but are not limited to, volatile memory (e.g., DRAM), nonvolatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). The communication chip 506 enables wireless communications for the transfer of data to and from the computing device 500. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 506 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond.
The computing device 500 may include a plurality of communication chips 506. For instance, a first communication chip 506 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 506 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others. The processor 504 of the computing device 500 includes an integrated circuit die packaged within the processor 504. In some implementations of the invention, the integrated circuit die of the processor includes one or more devices, such as MOS-FET transistors built in accordance with implementations of the invention. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The communication chip 506 also includes an integrated circuit die packaged within the communication chip 506. In accordance with another implementation of the invention, the integrated circuit die of the communication chip includes one or more devices, such as MOS-FET transistors built in accordance with implementations of the invention. In further implementations, another component housed within the computing device 500 may contain an integrated circuit die that includes one or more devices, such as MOS-FET transistors built in accordance with implementations of the invention. In various implementations, the computing device 500 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 500 may be any other electronic device that processes data. Figure 6 illustrates an interposer 600 that includes one or more embodiments of the invention. The interposer 600 is an intervening substrate used to bridge a first substrate 602 to a second substrate 604. The first substrate 602 may be, for instance, an integrated circuit die. The second substrate 604 may be, for instance, a memory module, a computer motherboard, or another integrated circuit die. Generally, the purpose of an interposer 600 is to spread a connection to a wider pitch or to reroute a connection to a different connection. For example, an interposer 600 may couple an integrated circuit die to a ball grid array (BGA) 606 that can subsequently be coupled to the second substrate 604. In some embodiments, the first and second substrates 602/604 are attached to opposing sides of the interposer 600. In other embodiments, the first and second substrates 602/604 are attached to the same side of the interposer 600. And in further embodiments, three or more substrates are interconnected by way of the interposer 600. The interposer 600 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide.
In further implementations, the interposer may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials. The interposer may include metal interconnects 608 and vias 610, including but not limited to through-silicon vias (TSVs) 612. The interposer 600 may further include embedded devices 614, including both passive and active devices. Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, and electrostatic discharge (ESD) devices. More complex devices such as radio-frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices may also be formed on the interposer 600. In accordance with embodiments of the invention, apparatuses or processes disclosed herein may be used in the fabrication of interposer 600.
EXAMPLES
Example 1 may be a SOT electrode comprising: a first layer with a first side to couple with a free layer of a MTJ; and a second layer coupled with a second side of the first layer opposite the first side, wherein a value of an electrical resistance in the first SOT layer is lower than a value of an electrical resistance in the second SOT layer.
Example 2 may include the SOT electrode of example 1, wherein a value of spin conductivity in the first SOT layer is higher than a value of spin conductivity in the second SOT layer.
Example 3 may include the SOT electrode of example 1, wherein a current applied to the SOT electrode is to cause current to preferentially flow in the first SOT layer to cause a magnetic polarization of the free layer to change directions.
Example 4 may include the SOT electrode of example 3, wherein the magnetic polarization of the free layer is substantially perpendicular to the first side of the first layer.
Example 5 may include the SOT electrode of any one of examples 1-4, wherein the first layer and the second layer include one or more of: graphene, TiS2, WS2, MoS2, TiSe2, WSe2, MoSe2, B2S3, Sb2S3, Ta2S, Re2S7, LaCPS2, LaOAsS2, ScOBiS2, GaOBiS2, AlOBiS2, LaOSbS2, BiOBiS2, YOBiS2, InOBiS2, LaOBiSe2, TiOBiS2, CeOBiS2, PrOBiS2, NdOBiS2, LaOBiS2, or SrFBiS2.
Example 6 may include the SOT electrode of example 1, wherein the first side of the first layer has a smaller area than a side of the second layer opposite a side of the second layer coupled with the second side of the first layer.
Example 7 may be an apparatus comprising: a MTJ having a free layer; a first layer of a first side of a SOT electrode coupled with the free layer; and a second layer of the SOT electrode coupled with a second side of the first layer opposite the first side, wherein a value of an electrical resistance in the first SOT layer is lower than a value of an electrical resistance in the second SOT layer.
Example 8 may include the apparatus of example 7, wherein a value of a spin conductivity in the first SOT layer is higher than a value of a spin conductivity in the second SOT layer.
Example 9 may include the apparatus of example 7, wherein current applied to the SOT electrode is to cause current to preferentially flow in the first SOT layer to cause a magnetic polarization of the free layer to change direction.
Example 10 may include the apparatus of example 9, wherein the magnetic polarization of the free layer is substantially perpendicular to the first side of the first layer.
Example 11 may include the apparatus of example 9, wherein the current applied is a first current; and wherein a second current applied to the SOT electrode in an opposite direction to the first current will cause current to preferentially flow in the first SOT layer to cause a magnetic polarization of the free layer to change directions.
Example 12 may include the apparatus of any one of examples 7-11, wherein the first layer of the SOT electrode and the second layer of the SOT electrode include one or more of: graphene, TiS2, WS2, MoS2, TiSe2, WSe2, MoSe2, B2S3, Sb2S3, Ta2S, Re2S7, LaCPS2, LaOAsS2, ScOBiS2, GaOBiS2, AlOBiS2, LaOSbS2, BiOBiS2, YOBiS2, InOBiS2, LaOBiSe2, TiOBiS2, CeOBiS2, PrOBiS2, NdOBiS2, LaOBiS2, or SrFBiS2.
Example 13 may include the apparatus of example 7, wherein the first side of the first layer of the SOT electrode has a smaller area than a side of the second layer of the SOT electrode opposite a side of the second layer of the SOT electrode coupled with the second side of the first layer.
Example 14 may be a method for creating a package, comprising: coupling a first side of a first layer of a SOT electrode to a first side of a second layer of the SOT electrode, wherein a value of an electrical resistance in the second layer is lower than a value of an electrical resistance in the first layer.
Example 15 may include the method of example 14, further comprising coupling a first side of a free layer of a MTJ to a second side of the second layer opposite the first side.
Example 16 may include the method of example 15, further comprising etching the package.
Example 17 may include the method of example 16, wherein the second layer of the SOT electrode is an etch stop.
Example 18 may include the method of example 15, further comprising, before etching the package: coupling a first side of an MTJ coupling layer to a second side of the free layer opposite the first side; and coupling a first side of an MTJ fixed layer to a second side of the MTJ coupling layer.
Example 19 may include the method of example 14, wherein a value of a spin conductivity in the first SOT layer is higher than a value of a spin conductivity in the second SOT layer.
Example 20 may include the method of any one of examples 14-19, wherein the first layer or the second layer include one or more of: graphene, TiS2, WS2, MoS2, TiSe2, WSe2, MoSe2, B2S3, Sb2S3, Ta2S, Re2S7, LaCPS2, LaOAsS2, ScOBiS2, GaOBiS2, AlOBiS2, LaOSbS2, BiOBiS2, YOBiS2, InOBiS2, LaOBiSe2, TiOBiS2, CeOBiS2, PrOBiS2, NdOBiS2, LaOBiS2, or SrFBiS2.
Various embodiments may include any suitable combination of the above-described embodiments including alternative (or) embodiments of embodiments that are described in conjunctive form (and) above (e.g., the "and" may be "and/or"). Furthermore, some embodiments may include one or more articles of manufacture (e.g., non-transitory computer-readable media) having instructions, stored thereon, that when executed result in actions of any of the above-described embodiments. Moreover, some embodiments may include apparatuses or systems having any suitable means for carrying out the various operations of the above-described embodiments. The above description of illustrated embodiments, including what is described in the Abstract, is not intended to be exhaustive or to limit embodiments to the precise forms disclosed.
While specific embodiments are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the embodiments, as those skilled in the relevant art will recognize. These modifications may be made to the embodiments in light of the above detailed description. The terms used in the following claims should not be construed to limit the embodiments to the specific implementations disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
A clock gating system (CGS) includes a digital power estimator configured to generate indications of a predicted energy consumption per cycle of a clock signal and a maximum energy consumption per cycle of the clock signal. The CGS further includes a voltage-clock gate (VCG) circuit coupled to the digital power estimator. The VCG circuit is configured to gate and un-gate the clock signal based on the indications prior to occurrence of a voltage droop event and using hardware voltage model circuitry of the VCG circuit. The VCG circuit is further configured to gate the clock signal based on an undershoot phase associated with the voltage droop event and to un-gate the clock signal based on an overshoot phase associated with the voltage droop event. |
CLAIMS:
1. A clock gating system (CGS) comprising: a digital power estimator configured to generate indications of a predicted energy consumption per cycle of a clock signal and a maximum energy consumption per cycle of the clock signal; and a voltage-clock gate (VCG) circuit coupled to the digital power estimator and configured to gate and un-gate the clock signal based on the indications prior to occurrence of a voltage droop event and using hardware voltage model circuitry of the VCG circuit, wherein the VCG circuit is further configured to gate the clock signal based on an undershoot phase associated with the voltage droop event and to un-gate the clock signal based on an overshoot phase associated with the voltage droop event.
2. The CGS of claim 1, wherein the VCG circuit is configured to gate the clock signal to reduce or eliminate voltage droop in a supply voltage, and further comprising a primary device and at least one other device configured to operate with or without monitoring a voltage level of the supply voltage, wherein the primary device and the at least one other device are configured to share a same power distribution network voltage supply or are each coupled to a respective dedicated and private power distribution network having an independent voltage supply.
3. The CGS of claim 2, further comprising a VCG configuration register (VCR) coupled to the VCG circuit and configured to store indications of characteristics of a power delivery network (PDN) that is configured to generate the supply voltage.
4. The CGS of claim 3, wherein the VCR is further configured to store indications of a center frequency of the PDN and a bandwidth of the PDN.
5. The CGS of claim 3, wherein the VCR is further configured to store the indications of the predicted energy consumption per cycle of the clock signal and the maximum energy consumption per cycle of the clock signal.
6. The CGS of claim 1, wherein the VCG circuit includes event-per-cycle first-in, first-out (FIFO) retimed circuitry (EFRC) configured to sample, hold, and release predicted energy values during a clock gating and un-gating operation performed by the VCG circuit.
7. The CGS of claim 1, wherein the hardware voltage model circuitry includes a voltage threshold multiplexer (VTM) selector circuit configured to select a configurable voltage threshold for clock gating by the VCG circuit from among a first voltage threshold and a second voltage threshold.
8. The CGS of claim 7, further comprising a VCG configuration register (VCR) coupled to the VCG circuit and configured to store indications of the first voltage threshold and the second voltage threshold.
9. The CGS of claim 7, wherein the hardware voltage model circuitry includes a digital filter configured to determine a predicted voltage-response based on the indications of the predicted energy consumption per cycle of the clock signal and the maximum energy consumption per cycle of the clock signal.
10. The CGS of claim 9, wherein the hardware voltage model circuitry further includes a comparison circuit configured to initiate clock-gating of a first cycle of the clock signal in response to a determination that the predicted voltage-response exceeds the configurable voltage threshold.
11. The CGS of claim 10, wherein the VCG circuit is further configured to adjust the configurable voltage threshold, in response to gating the first cycle of the clock signal, from the first voltage threshold to the second voltage threshold for a second cycle of the clock signal following the first cycle.
12. The CGS of claim 1, wherein the digital power estimator is further configured to provide the indications to the VCG circuit multiple cycles of the clock signal prior to occurrence of the voltage droop event to enable the VCG circuit to gate the clock signal during a clock cycle associated with the undershoot phase.
13. The CGS of claim 1, wherein the digital power estimator comprises one or more weighted event indication generators (WEIGs) configured to determine the predicted energy consumption per cycle and the maximum energy consumption per cycle.
14. The CGS of claim 13, wherein at least one WEIG of the one or more WEIGs comprises: a first multiplication circuit configured to multiply an event count per cycle of the clock signal by a set of energy weights to determine a first set of weighted event indications; and a second multiplication circuit configured to multiply a maximum event count per cycle of the clock signal by the set of energy weights to determine a second set of weighted event indications.
15. The CGS of claim 14, wherein the digital power estimator further comprises a first addition circuit coupled to each of the one or more WEIGs and configured to determine the predicted energy consumption per cycle based on first sets of weighted event indications from the one or more WEIGs.
16. The CGS of claim 15, wherein the digital power estimator further comprises a second addition circuit coupled to each of the one or more WEIGs and configured to determine the maximum energy consumption per cycle based on second sets of weighted event indications from the one or more WEIGs.
17. The CGS of claim 1, further comprising a VCG performance buffer (VPB) coupled to the VCG circuit and configured to generate an output signal indicating an accumulated count of gated cycles of the clock signal.
18. The CGS of claim 17, further comprising a performance monitoring unit (PMU) coupled to the VPB and configured to track performance of a processor based on the accumulated count of gated cycles.
19. The CGS of claim 18, wherein the PMU is configured to track the performance of the processor during gating of the clock signal by the VCG circuit.
20. A method comprising: receiving, at a voltage-clock gate (VCG) circuit from a digital power estimator, indications of a predicted energy consumption per cycle of a clock signal and a maximum energy consumption per cycle of the clock signal; in response to the indications, gating the clock signal, wherein the clock signal is gated prior to occurrence of a voltage droop event and using hardware voltage model circuitry of the VCG circuit, and wherein the clock signal is gated based on an undershoot phase associated with the voltage droop event; and un-gating the clock signal based on an overshoot phase associated with the voltage droop event.
21. The method of claim 20, further comprising selecting, by a voltage threshold multiplexer (VTM) selector circuit of the hardware voltage model circuitry, a configurable voltage threshold for clock gating by the VCG circuit from among a first voltage threshold and a second voltage threshold.
22. The method of claim 21, further comprising accessing, from a VCG configuration register (VCR), indications of the first voltage threshold and the second voltage threshold.
23. The method of claim 22, further comprising determining, by a digital filter of the hardware voltage model circuitry, a predicted voltage-response based on the indications.
24. The method of claim 23, further comprising initiating, by a comparison circuit of the hardware voltage model circuitry, clock-gating of a first cycle of the clock signal in response to a determination that the predicted voltage-response exceeds the configurable voltage threshold.
25. The method of claim 24, further comprising, in response to gating the first cycle of the clock signal, adjusting the configurable voltage threshold from the first voltage threshold to the second voltage threshold for a second cycle of the clock signal following the first cycle.
26. A computer-readable medium storing instructions executable by a processor to initiate, perform, or control operations, the operations comprising: receiving, at a voltage-clock gate (VCG) circuit from a digital power estimator, indications of a predicted energy consumption per cycle of a clock signal and a maximum energy consumption per cycle of the clock signal; in response to the indications, gating the clock signal, wherein the clock signal is gated prior to occurrence of a voltage droop event and using hardware voltage model circuitry of the VCG circuit, and wherein the clock signal is gated based on an undershoot phase associated with the voltage droop event; and un-gating the clock signal based on an overshoot phase associated with the voltage droop event.
27. The computer-readable medium of claim 26, wherein the operations further comprise selecting, by a voltage threshold multiplexer (VTM) selector circuit of the hardware voltage model circuitry, a configurable voltage threshold for clock gating by the VCG circuit from among a first voltage threshold and a second voltage threshold.
28. The computer-readable medium of claim 27, wherein the operations further comprise accessing, from a VCG configuration register (VCR), indications of the first voltage threshold and the second voltage threshold.
29. An apparatus comprising: means for generating indications of a predicted energy consumption per cycle of a clock signal and a maximum energy consumption per cycle of the clock signal; and means for gating and for un-gating the clock signal based on the indications prior to occurrence of a voltage droop event and using hardware voltage model circuitry, wherein the means for gating and un-gating is configured to gate the clock signal based on an undershoot phase associated with the voltage droop event and to un-gate the clock signal based on an overshoot phase associated with the voltage droop event.
30. The apparatus of claim 29, further comprising means for executing instructions coupled to the means for generating and the means for gating and un-gating.
PROACTIVE CLOCK GATING SYSTEM TO MITIGATE SUPPLY VOLTAGE DROOPS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority from U.S. Prov. Pat. App. No. 62/728,972, filed September 10, 2018 and entitled “ELECTRONIC DEVICE AND METHOD TO ESTIMATE A VOLTAGE BASED ON EXPECTED CURRENT OR ENERGY CONSUMPTION,” from U.S. Prov. Pat. App. No. 62/728,982, filed September 10, 2018 and entitled “ELECTRONIC DEVICE AND METHOD TO SELECTIVELY GATE A CLOCK SIGNAL IN RESPONSE TO AN ESTIMATED VOLTAGE DROP,” from U.S. Prov. Pat. App. No. 62/728,990, filed September 10, 2018 and entitled “ELECTRONIC DEVICE AND METHOD TO INDICATE A COUNT OF GATED CLOCK CYCLES,” from U.S. Prov. Pat. App. No. 62/729,001, filed September 10, 2018 and entitled “ELECTRONIC DEVICE AND METHOD TO SELECT VOLTAGE THRESHOLDS,” and from U.S. Pat. App. No. 16/563,563, filed September 6, 2019 and entitled “PROACTIVE CLOCK GATING SYSTEM TO MITIGATE SUPPLY VOLTAGE DROOPS,” each of which is incorporated herein by reference in its entirety.
FIELD
[0002] This disclosure is generally related to the field of supply voltage droop mitigation in processing systems. More specifically, some aspects are directed to proactive clock gating and dynamically reconfiguring activation thresholds to mitigate supply voltage droops.
DESCRIPTION OF RELATED ART
[0003] An electronic device may include a processor that executes instructions to perform operations. For example, an electronic device may include a vector processor that executes instructions to perform operations, such as modulation and demodulation, machine-learning, and image processing.
[0004] In some circumstances, a processor may be associated with a transition from a low-power state to a high-power state, resulting in a voltage “droop.” For example, a vector processor may transition from a low-power state to a high-power state by executing a very-wide data vector instruction. Fast and large current transients (di/dt) in a power delivery network (PDN) result in supply voltage droops (voltage noise) that can degrade processor performance. In this case, power consumption by the processor may “spike” in response to execution of the very-wide data vector instruction, resulting in supply voltage droop. In other cases, the voltage droop is induced due to regular periodic and alternating transitions between low power and high power states, thus creating a resonating condition at the processing system and the PDN. The voltage droops reduce energy efficiency of the electronic device. In some cases, the reliability of operation of the electronic device is compromised due to incorrect operation.
[0005] Certain electronic devices use voltage guard bands to compensate for voltage droop. For example, a supply voltage may be increased to protect the supply voltage from falling below a particular value. Such a technique increases power consumption. Other electronic devices may perform other operations, such as by stalling a processor in response to detecting (or predicting) a voltage droop. However, this technique may not mitigate voltage droop due to clock timing margin (e.g., the inverse of clock frequency minus path delay) remaining the same.
In another example, latency of activation of the mitigation technique impacts instruction scheduling at an electronic device, making the technique less effective.
[0006] Certain electronic devices use analog circuit or digital circuit-based voltage sensors or monitors to track supply voltage variations and use various mitigation mechanisms, such as frequency reduction or slowdown of processor execution, such as by introducing stall during processor execution. Such techniques are reactive in nature and are performed a few clock cycles after the voltage-droop has occurred. These techniques are less effective in reducing the voltage guard band of the system, since the voltage droop is already introduced in the PDN. In other implementations, voltage droop inducers (or “aggressors”) may share voltage power supply rails with other processor components (e.g., “victims” of the voltage droop inducers), which may further degrade the guard band due to voltage rail sharing between the aggressor and victim processor entities. In some implementations, such circuit-based techniques are slow to transition the electronic device from a mitigation state to a full performance state of the electronic device.
[0007] Hence, there is a need for a solution that can mitigate voltage droops without loss in processor performance, while activating quickly and effectively reducing voltage degradation.
SUMMARY
[0008] In a particular example, a clock gating system (CGS) includes a digital power estimator configured to generate indications of a predicted energy consumption per cycle of a clock signal and a maximum energy consumption per cycle of the clock signal. The CGS further includes a voltage-clock gate (VCG) circuit coupled to the digital power estimator. The VCG circuit is configured to gate and un-gate the clock signal based on the indications prior to occurrence of a voltage droop event and using hardware voltage model circuitry of the VCG circuit. The VCG circuit is further configured to gate the clock signal based on an undershoot phase associated with the voltage droop event and to un-gate the clock signal based on an overshoot phase associated with the voltage droop event.
[0009] In another particular example, a method includes receiving, at a voltage-clock gate (VCG) circuit from a digital power estimator, indications of a predicted energy consumption per cycle of a clock signal and a maximum energy consumption per cycle of the clock signal. The method further includes, in response to the indications, gating the clock signal. The clock signal is gated prior to occurrence of a voltage droop event and using hardware voltage model circuitry of the VCG circuit, and the clock signal is gated based on an undershoot phase associated with the voltage droop event. The method further includes un-gating the clock signal based on an overshoot phase associated with the voltage droop event.
[0010] In another particular example, a computer-readable medium stores instructions executable by a processor to initiate, perform, or control operations. The operations include receiving, at a voltage-clock gate (VCG) circuit from a digital power estimator, indications of a predicted energy consumption per cycle of a clock signal and a maximum energy consumption per cycle of the clock signal. The operations further include, in response to the indications, gating the clock signal.
The clock signal is gated prior to occurrence of a voltage droop event and using hardware voltage model circuitry of the VCG circuit, and the clock signal is gated based on an undershoot phase associated with the voltage droop event. The operations further include un-gating the clock signal based on an overshoot phase associated with the voltage droop event.
[0011] In another particular example, an apparatus includes means for generating indications of a predicted energy consumption per cycle of a clock signal and a maximum energy consumption per cycle of the clock signal. The apparatus further includes means for gating and for un-gating the clock signal based on the indications and using hardware voltage model circuitry. The means for gating and un-gating is configured to gate the clock signal based on an undershoot phase associated with the voltage droop event and to un-gate the clock signal based on an overshoot phase associated with the voltage droop event.
[0012] One particular advantage provided by at least one of the disclosed embodiments is reduced latency or reduced power consumption associated with voltage droop mitigation. Another particular advantage provided by at least one of the disclosed embodiments is enhanced operation of devices of a shared power delivery network (PDN). Other aspects, advantages, and features of the present disclosure will become apparent after review of the entire application, including the following sections: Brief Description of the Drawings, Detailed Description, and the Claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] FIG. 1A is a diagram of a processor that includes a proactive clock-gating system (PCGS) containing a voltage-clock-gating (VCG) circuit, a digital power estimator (DPE), a VCG performance buffer (VPB), a performance monitoring unit (PMU), and a VCG configuration register (VCR).
[0014] FIG. 1B illustrates certain examples of a set of energy weights used by the DPE and a set of other values, such as a voltage threshold and a relaxed threshold associated with the VCG circuit of FIG. 1A.
[0015] FIG. 1C illustrates an example of the DPE of FIG. 1A.
[0016] FIG. 2A is a schematic diagram illustrating an example of the VCG circuit of FIG. 1A.
[0017] FIG. 2B is a diagram illustrating aspects of event-per-cycle first-in, first-out (FIFO) retimed circuitry (EFRC) that may be included in the VCG circuit of FIG. 1A.
[0018] FIG. 3A is a diagram illustrating aspects of the VPB of FIG. 1A, which may be used to measure a number of VCG clock gating cycles.
[0019] FIG. 3B is a diagram illustrating aspects of a voltage threshold multiplexer (VTM) selector that may be used inside the VCG circuit of FIG. 1A to determine or modify a voltage violation threshold for clock gating.
[0020] FIG. 4 is a diagram illustrating the processor in FIG. 1A sharing a power delivery network (PDN) voltage supply with other devices and processors.
[0021] FIG. 5A is a flow chart of an example of a method of operation of the PCGS of FIG. 1A.
[0022] FIG. 5B is a flow chart of an example of a method of operation of the PCGS of FIG. 1A.
[0023] FIG. 6 is a block diagram of an electronic device including the processor of FIG. 1A.
DETAILED DESCRIPTION
[0024] A proactive clock gating system (PCGS) may include a voltage-clock gate (VCG) circuit configured to proactively perform supply voltage droop mitigation operations by gating and un-gating a clock domain.
The VCG circuit may perform or initiate the voltage droop mitigation operations before occurrence of a voltage droop event and based on voltage estimation prediction performed using a hardware voltage model of the VCG circuit. The VCG circuit may clock gate a root clock signal (also referred to as a global clock signal) when a voltage droop is in an “undershoot” phase as predicted by the hardware voltage model. The VCG circuit may un-gate the root clock signal when the voltage droop is in an “overshoot” phase.
[0025] In some examples, the PCGS tracks voltage droop without requiring system components (e.g., different devices and processors) to track a voltage level of the system components. Thus, in connection with operation of the PCGS, a device need not monitor its operational voltage when running at a specific programmed clock frequency, either on a shared rail or using a dedicated private rail (e.g., where each device receives an independent supply of voltage from a power delivery network (PDN)). In some examples, a VCG configuration register (VCR) is configured to store indications of one or more characteristics of the PDN, such as a center frequency and bandwidth of the PDN. The VCR may be configured to store indications of an energy weight used by the system, which may be sent through a digital power estimator (DPE) to the VCG circuit and used to predict voltage used by the system. In some examples, the center frequency and bandwidth are time invariant for the PDN.
[0026] In some aspects, the PCGS and the VCG circuit provide effective voltage droop mitigation in connection with a “shared” rail (e.g., where the PDN is shared by multiple homogeneous and heterogeneous devices). In some examples, the PCGS is integrated in a voltage droop inducing device (e.g., a processor or another device that may be referred to herein as an “aggressor”). The PCGS may suppress or mitigate voltage noise (e.g., a voltage droop event) at a source of the voltage noise by clock-gating before occurrence of the voltage droop event, thus preventing or reducing negative impact on the “victims” sharing the same rail as the aggressor.
[0027] In some examples, the VCG circuit includes event-per-cycle first-in, first-out (FIFO) retimed circuitry (EFRC) (e.g., as part of a front-end pipeline of the VCG circuit where the VCG circuit is coupled to a high frequency clocked processor). The EFRC may be configured to sample, hold, and release each power trace during clock gating and un-gating performed by the VCG circuit. A VCG clock enable signal may be provided to the VCG circuit and may be “timing critical” (and hence the retiming may be performed to gate a high frequency clock and to clock gate the EFRC). In some examples, the EFRC “freezes” certain values (e.g., energy and maximum energy per cycle transmission and other normalized values) that are used during operation by the VCG circuit.
[0028] In some examples, a voltage threshold multiplexer (VTM) selector is used to dynamically reconfigure a voltage droop mitigation threshold used by the hardware voltage model of the VCG circuit. The VCG circuit may trigger voltage droop mitigation in response to a predicted voltage droop exceeding the voltage droop mitigation threshold. In some examples, the VTM selector inhibits clock gating of consecutive clock cycles (e.g., so that a clock cycle following a gated clock cycle is ungated). The VTM may provide flexibility for a processing system to obtain improved performance at the cost of higher power by enabling a relaxed voltage threshold setting a cycle after a voltage droop violation (and after clock cycle gating by the VCG circuit). In a particular example, indications of the voltage threshold and the relaxed voltage threshold are stored in the VCR.
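To make the threshold-reconfiguration behavior of paragraph [0028] concrete, the following Python sketch models one plausible VTM selection policy under the stated assumptions: a normal threshold, a relaxed threshold, and the relaxed setting applied for the cycle immediately after a gated cycle. The class and method names are hypothetical, not reference numerals from the figures.

```python
# Minimal sketch of a VTM-style threshold selection policy, assuming a
# "normal" and a "relaxed" voltage threshold as described above. Names
# and values are illustrative assumptions only.

class VtmSelector:
    def __init__(self, normal_threshold: float, relaxed_threshold: float):
        self.normal_threshold = normal_threshold
        self.relaxed_threshold = relaxed_threshold
        self.gated_last_cycle = False

    def select_threshold(self) -> float:
        # The cycle immediately after a gated cycle uses the relaxed
        # threshold, trading some power for performance and tending to
        # avoid gating of consecutive cycles.
        if self.gated_last_cycle:
            return self.relaxed_threshold
        return self.normal_threshold

    def record_decision(self, gated: bool) -> None:
        # Remember this cycle's outcome for the next cycle's selection.
        self.gated_last_cycle = gated
```

In use, select_threshold() would be queried once per cycle before the voltage comparison, and record_decision() would be called with that cycle's gating outcome.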
[0029] In some implementations, one or more DPEs are used to provide the VCG circuit with indications of a normalized energy per cycle and a maximum energy per cycle (e.g., where the energy is computed a few cycles ahead of certain processor execution operations associated with high power in a processor pipeline stage). The DPE may include one or more weighted event indication generators configured to provide the VCG circuit with indications of predicted and maximum current or energy per cycle.
[0030] The VCG circuit may achieve zero-cycle latency in response to a voltage-droop event by predicting a future consumption of energy. Thus, clock gating can be performed to mitigate effects of voltage droop, which may improve performance as compared to systems that reactively address effects of voltage droop after occurrence of the voltage droop.
[0031] In some examples, the VCG circuit achieves the zero-cycle latency by predicting the future consumption of energy early enough to account for the difference between i) the number of pipeline stages from the DPE in a processor logic pipeline path to the execution of a high power voltage droop inducing stage and ii) the sum of the pipeline stages from the DPE to transmit the energy and maximum energy information to the VCG circuit and the number of VCG pipeline stages.
[0032] In some examples, the PCGS includes a VCG performance buffer (VPB) and gated clock count circuitry configured for performance measurement. In some examples, the VPB and the gated clock count circuitry are configured to measure system performance when a global clock of the system is clock-gated.
[0033] Referring to FIG. 1A, a processor is depicted and generally designated 100. The processor 100 is configured to perform clock gating to mitigate voltage droop.
[0034] The processor 100 includes multiple devices, such as a device 190 and a device 192. In some examples, the devices 190, 192 share a power supply source (e.g., a voltage rail). In some examples, one of the devices 190 corresponds to an “aggressor” device (e.g., a device that induces voltage droops), and the other of the devices 192 corresponds to a “victim” device (e.g., a device that suffers performance degradation as a result of voltage droops induced by the aggressor device).
[0035] FIG. 1A illustrates that the device 190 includes processor logic 108 and a digital power estimator (DPE) 104. FIG. 1A also illustrates that the device 192 includes processor logic 194, a voltage clock gate (VCG) circuit 106, a VCG configuration register (VCR) 102, a VCG performance buffer (VPB) 110, and a performance monitoring unit (PMU) 118. In some examples, the VCR 102, the DPE 104, the VCG circuit 106, the VPB 110, the PMU 118, and clock circuitry 198 are included in a clock gating system (CGS), such as a proactive clock-gating system (PCGS) 199. One or more components of the processor 100 may include one or more flip-flop (FF) circuits, as illustrated in the example of FIG. 1A.
[0036] In a particular example, the VCG circuit 106 is coupled to the VCR 102 and to the DPE 104.
The DPE 104 is coupled to the VCR 102.
[0037] In one example, the VCG circuit 106 is coupled to the processor logic 108 and to the VPB 110. In some examples, the processor logic 108 and the VPB 110 are coupled to the PMU 118. In some implementations, the processor logic 108 includes digital signal processor (DSP) logic. In some implementations, the processor logic 108 includes a vector processor pipeline. In some implementations, the processor logic 108 includes a superscalar general-purpose processor pipeline or a very long instruction word (VLIW) processor. In some implementations, the processor logic 108 includes a slave co-processor associated with a master processor (e.g., the processor logic 194). In some examples, the processor logic 194 includes superscalar processor logic, such as a superscalar processor pipeline.
[0038] In some examples, the VCG circuit 106 is configured to operate based on a zero latency voltage response loop and has a zero cycle response time. The VCG circuit 106 may be configured to gate a clock signal 112 (e.g., a “root” clock or “global” clock of the processor 100). In some examples, the clock signal 112 is generated using a phase locked loop (PLL), as an illustrative example. The VCG circuit 106 may be configured to gate the clock signal 112 to generate a gated clock signal 137. The VCG circuit 106 may be configured to gate an entire clock domain by gating the clock signal 112 (e.g., instead of controlling a single stall point, as performed by certain conventional devices).
[0039] In some examples, the DPE 104 is configured to detect events at the processor 100, such as a set of events 116. To illustrate, the set of events 116 may include processor pipeline events, memory access events, arithmetic events, one or more other events, or a combination thereof. One or more events of the set of events 116 may include an operation performed by the processor logic 108, an operation performed by the processor logic 194, or a combination thereof. The DPE 104 may be configured to detect (or search for) the set of events 116 during each cycle of the gated clock signal 137. In contrast to some stall-based voltage droop mitigation techniques, predicted current (or energy per cycle) may be computed by the DPE 104 and sent to the VCG circuit 106 earlier than a cycle causing the voltage-droop event.
[0040] The DPE 104 may be configured to estimate the energy consumption every clock cycle based on a weighted count of the set of events 116. The weighted count is based on a set of energy weights 120, where each energy weight is associated with a corresponding event type of the set of events 116. In some implementations, the VCR 102 is configured to store indications of the set of energy weights 120 and to output the indications of the set of energy weights 120 to the DPE 104. In some examples, the set of energy weights 120 is determined based on pre-silicon gate-level simulation or post-silicon characterization of benchmarks associated with the processor logic 108 and is scaled (or quantized) to fit a range (e.g., a bit length) of an output of the DPE 104.
[0041] For each cycle of the clock signal 112, the DPE 104 may output a first indication of a predicted energy 122 (e.g., a projection of actual energy consumption) associated with the cycle and a second indication of a maximum energy 124 (e.g., a projection of the maximum likely energy consumption) associated with the cycle.
For example, the first indication may specify the predicted energy 122 that is to be used by the processor logic 108 during a particular cycle of the gated clock signal 137, and the second indication may specify the maximum energy 124 that is to be used by the processor logic 108 during the particular cycle. In a particular example, the DPE 104 is configured to represent the first indication and the second indication using a plurality of bits, such as a string of nine bits, as an illustrative example.
[0042] In some examples, the DPE 104 is configured to determine one or more of the predicted energy 122 per cycle or the maximum energy 124 per cycle based on a weighted sum. In some examples, the weighted sum is based on counts of event types of the set of events 116 and is further based on the set of energy weights 120. For example, the DPE 104 may be configured to determine the predicted energy 122 per cycle as a summation of a count of event types per cycle multiplied by a corresponding energy weight (of the set of energy weights 120) for each event type. Alternatively or in addition, the DPE 104 may be configured to determine the maximum energy 124 per cycle by multiplying a maximum count per clock cycle for each event type (e.g., determined based on a theoretical maximum possible energy during processor execution, which may exclude certain events that cannot occur simultaneously in a particular cycle) with a corresponding energy weight of the event type. In a particular example, the DPE 104 is configured to provide indications of the predicted energy 122 per cycle and the maximum energy 124 per cycle to the VCG circuit 106.
[0043] FIG. 1B illustrates certain examples of information that may be stored by the VCR 102. In FIG. 1B, the VCR 102 stores the set of energy weights 120 used by the DPE 104 to perform certain operations herein, such as estimation of the predicted energy 122 per cycle and the maximum energy 124 per cycle.
[0044] The VCR 102 can also store other values used to estimate voltage, such as values 130 illustrated in the example of FIG. 1B. For example, the values 130 include voltage thresholds, PDN information (e.g., configuration data associated with a PDN bandwidth and a PDN frequency), etc.
[0045] In some implementations, the VCR 102 includes two 32-bit memory mapped configuration registers storing indications of the set of energy weights 120 and characteristics associated with a PDN of the processor 100. The characteristics of the PDN may be configured using fields of the VCR 102. The PDN may be associated with a center frequency (CF) range, such as 48-600 megahertz (MHz), as an illustrative example. The PDN may be associated with a bandwidth (BW) range, such as a BW range of 6-300 MHz, as an illustrative example. The CF range and the BW range may be indicated in settings stored at the VCR 102. The CF range and the BW range may be tuned post-silicon to match characteristics of the PDN. The PDN settings may be different based on particular device characteristics (e.g., package and board characteristics) and may be reconfigured after fabrication of a system-on-chip (SoC).
[0046] FIG. 1C is a diagram illustrating certain aspects of an example of the DPE 104 of FIG. 1A. The DPE 104 may include one or more weighted event indication generators (WEIGs), such as a representative WEIG 117. The example of FIG. 1C also depicts that the DPE 104 includes a WEIG 115 and a WEIG 119.
In other implementations, the DPE 104 may include a different number of WEIGs.
[0047] The DPE 104 is configured to determine the predicted energy 122 per cycle and the maximum energy 124 per cycle. In the example of FIG. 1C, the WEIGs 115, 117, and 119 may each include one or more flip-flop (FF) circuits configured to store indications of the set of events 116, the set of energy weights 120, and a maximum event count per cycle 123.
[0048] In some examples, the set of events 116 includes microarchitecture events associated with the processor 100 of FIG. 1A. As illustrative examples, the set of events 116 may include one or more wide-data vector arithmetic events, one or more pipeline events, one or more memory operation events, one or more other events, or a combination thereof. The maximum event count per cycle 123 is the theoretical maximum possible event count per cycle during processor execution, which may exclude certain events that cannot occur simultaneously in a particular cycle.
[0049] In the example of FIG. 1C, the WEIG 117 includes a multiplication circuit 182 configured to multiply a count of the number of the set of events 116 per cycle by a corresponding weight of the set of energy weights 120 (e.g., a pipeline or flip-flop staged version of generated energy weights). In FIG. 1C, the WEIG 117 further includes a multiplication circuit 184 configured to multiply the maximum event count per cycle 123 of each event type of the set of events 116 by a corresponding weight of the set of energy weights 120.
[0050] In some examples, the set of energy weights 120 is based on pre-silicon simulation of the processor 100. For example, operation of the processor 100 can be simulated via a simulation program that tracks events at the processor 100 during simulation. Alternatively or in addition, in some examples, the set of energy weights 120 can be determined by tracking post-silicon operation of the processor 100 or another processor. In some implementations, the processor 100 corresponds to a processing system having a relatively predictable sequence of operations.
[0051] The WEIG 117 is configured to multiply (e.g., using the multiplication circuit 182) the count of each event type per cycle by a corresponding weight of the set of energy weights 120 to generate a first set of weighted event indications 170. The WEIG 117 is further configured to multiply (e.g., using the multiplication circuit 184) the maximum event count per cycle 123 of each event type by a corresponding weight of the set of energy weights 120 to generate a second set of weighted event indications 172. In the example of FIG. 1C, the DPE 104 includes a first addition circuit 186 configured to sum the first set of weighted event indications 170 from each WEIG (e.g., the WEIGs 115, 117, and 119) to determine the predicted energy 122 per cycle. FIG. 1C also depicts that the DPE 104 includes a second addition circuit 188 configured to sum the second set of weighted event indications 172 from each WEIG (e.g., the WEIGs 115, 117, and 119) to determine the maximum energy 124 per cycle.
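The per-cycle arithmetic of paragraphs [0047]-[0051] reduces to two weighted sums over the event counts. The Python sketch below models that computation; the event types, weights, and maximum counts are invented for illustration and are not taken from the disclosure.

```python
# Minimal sketch of the per-cycle weighted-sum computation performed by
# the WEIGs and the addition circuits. Event types, counts, and weights
# are illustrative assumptions only.

EVENT_WEIGHTS = {          # quantized energy weight per event type
    "vector_alu": 7,
    "memory_access": 5,
    "pipeline_issue": 2,
}

MAX_EVENTS_PER_CYCLE = {   # theoretical maximum occurrences per cycle
    "vector_alu": 4,
    "memory_access": 2,
    "pipeline_issue": 4,
}

def predicted_energy_per_cycle(event_counts: dict) -> int:
    """Sum of (observed event count x energy weight) over all event types."""
    return sum(EVENT_WEIGHTS[e] * n for e, n in event_counts.items())

def max_energy_per_cycle() -> int:
    """Sum of (maximum event count x energy weight) over all event types."""
    return sum(EVENT_WEIGHTS[e] * n for e, n in MAX_EVENTS_PER_CYCLE.items())

# Example: two vector ALU ops and one memory access observed this cycle.
counts = {"vector_alu": 2, "memory_access": 1, "pipeline_issue": 0}
print(predicted_energy_per_cycle(counts))  # 2*7 + 1*5 + 0*2 = 19
print(max_energy_per_cycle())              # 4*7 + 2*5 + 4*2 = 46
```

The predicted sum plays the role of the predicted energy 122 per cycle and the maximum sum plays the role of the maximum energy 124 per cycle, which can then be normalized before being supplied to the voltage model.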
[0052] One or more aspects of FIGS. 1A, 1B, and 1C improve device performance. For example, by determining and providing the predicted energy 122 per cycle and the maximum energy 124 per cycle to the VCG circuit 106, a voltage droop can be predicted and mitigated proactively. As a result, latency may be improved as compared to other devices that use digital critical timing path monitors and analog voltage sensors to trigger voltage droop mitigation.
[0053] FIG. 2A depicts certain illustrative aspects of examples of the DPE 104, the VCG circuit 106, and the clock circuitry 198. In FIG. 2A, the VCG circuit 106 is coupled to the DPE 104 and to the clock circuitry 198. For example, in FIG. 2A, the VCG circuit 106 includes one or more input flip-flops coupled to the DPE 104 and further includes one or more output flip-flops (e.g., a flip-flop 209) coupled to the clock circuitry 198.
[0054] In the example of FIG. 2A, the VCG circuit 106 has a hardware voltage model enabled by a digital filter 230, such as a digital band-pass second-order infinite impulse response (IIR) filter. Alternatively or in addition, the hardware voltage model of the VCG circuit 106 may be implemented in a variety of devices and circuits for sensing voltage dynamically, and is not limited to the particular implementation of FIG. 2A.
[0055] In some examples, for each cycle of the clock signal 112, the VCG circuit 106 provides an indication of a clock control signal 210 to a global clock gater 216. In some examples, the VCG circuit 106 performs global clock gating and un-gating based on a voltage-prediction decision generated by the VCG circuit 106. In some aspects, operation of the VCG circuit 106 has certain benefits over conventional techniques, such as stall-based voltage droop mitigation. For example, conventional voltage droop mitigation techniques may be unable to introduce stall during VLIW instruction packet execution and may be ineffective for processors executing instructions that are issued and executed over multiple cycles. As another example, conventional voltage droop mitigation techniques may wait to mitigate voltage droop over a few cycles (e.g., 4 cycles) before stalling. As an additional example, clock gating by the VCG circuit 106 may preserve a relationship within and across instruction packets executed by the processor 100. As a further example, use of the VCG circuit 106 may reduce or eliminate scheduler dependencies encountered in stall-based techniques and may not involve changing instruction scheduling. As another example, use of the VCG circuit 106 may provide a greater power benefit (due to voltage droop reduction effectiveness) compared to stall-based techniques.
[0056] In some implementations, the VCG circuit 106 is configured to proactively mitigate voltage droop by gating the clock signal 112 in response to determining that a voltage-response is predicted to be equal to or less than a value of a configurable voltage threshold 202. In some examples, the configurable voltage threshold 202 is received from a voltage threshold multiplexer (VTM) selector circuit 203. In some examples, the VCG circuit 106 clock gates the clock signal 112 when a voltage droop is in an undershoot phase as predicted by the hardware voltage model. The VCG circuit 106 may be configured to un-gate the clock signal 112 when the voltage droop is in an overshoot phase.
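The disclosure identifies the digital filter 230 as a second-order band-pass IIR filter driven by the energy values, but does not give its coefficients. The sketch below shows one conventional software realization of such a filter (the widely used RBJ "audio EQ cookbook" band-pass biquad), with the center frequency and bandwidth standing in for the PDN configuration settings; the coefficient formulas and all numbers are assumptions for illustration, not the hardware design.

```python
import math

def bandpass_biquad(center_hz: float, bandwidth_hz: float, sample_hz: float):
    """Second-order IIR band-pass coefficients (RBJ cookbook form,
    0 dB peak gain). One conventional realization; the actual hardware
    filter coefficients are not given in the disclosure."""
    w0 = 2.0 * math.pi * center_hz / sample_hz
    q = center_hz / bandwidth_hz
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    b = (alpha / a0, 0.0, -alpha / a0)                      # feedforward
    a = (1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0) # feedback
    return b, a

def filter_step(b, a, x, state):
    """Direct-form-II transposed update for one input sample x."""
    s1, s2 = state
    y = b[0] * x + s1
    s1 = b[1] * x - a[1] * y + s2
    s2 = b[2] * x - a[2] * y
    return y, (s1, s2)

# Illustrative numbers: 100 MHz PDN resonance, 50 MHz bandwidth,
# evaluated once per cycle of a 1 GHz clock.
b, a = bandpass_biquad(100e6, 50e6, 1e9)
state = (0.0, 0.0)
for energy in [0.1, 0.9, 0.9, 0.1, 0.1]:  # normalized energy per cycle
    response, state = filter_step(b, a, energy, state)
    print(f"{response:+.4f}")
```

Because the band-pass response peaks near the PDN resonance, an abrupt step in the per-cycle energy input produces a ringing output, which can stand in for the predicted voltage-response that is fed to the comparison logic.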
The VCG circuit 106 may be configured to determine, based on the predicted energy 122, a normalized predicted energy 206 per cycle and to use the normalized predicted energy 206 per cycle and the PDN configuration settings 204 to determine a predicted voltage-response 208.[0058] When the predicted voltage-response 208 is equal to or less than the configurable voltage threshold 202, the clock control signal 210 is combined with one or more global clock enables 212 to generate the gated clock signal 137 through the global clock gater 216. Thus, in some examples, the clock control signal 210 enables clock-gating of the processor 100 of FIG. 1A in response to an estimated voltage droop event that is predicted to occur. The VCG circuit 106 may be configured to un-gate the processor 100 in response to detecting an overshoot phase associated with a particular voltage margin, thus avoiding a voltage-droop event during an un-gating event. In some cases, techniques in accordance with FIG. 2A improve performance of the system as a whole due to the quick reaction time of engaging and disengaging voltage droop mitigation as compared to other devices that use digital and analog circuit monitors to trigger droop mitigation.[0059] FIG. 2B depicts an illustrative example of a device 200 that is included in the processor 100 of FIG. 1A. The device 200 includes a processor fetch, issue, control, memory access, and execution pipeline 270 (e.g., a processor pipeline of the processor 100), the VCG circuit 106, and the clock circuitry 198.[0060] In FIG. 2B, the VCG circuit 106 includes event-per-cycle first-in, first-out (FIFO) retimed circuitry (EFRC) 299. In some examples, the EFRC 299 is included in a front-end pipeline of the VCG circuit 106 that is coupled to one or more DPEs, such as the DPE 104.[0061] In FIG. 2B, the VCG circuit 106 is configured to generate or receive a digital signal 254 indicative of expected energy to be consumed by an electronic device (e.g., the processor 100) during a time period (e.g., during a particular cycle of the clock signal 112). For example, the digital signal 254 may be generated by or provided from the DPE 104.[0062] In some implementations, multiple DPEs (e.g., the DPE 104 and a DPE 104b) are coupled to an adder 255 of the device 200, and a sum of indications of energy per cycle from the multiple DPEs may be computed by the adder 255 to generate the digital signal 254 representing current per cycle as estimated by the multiple DPEs. The digital signal 254 hence may indicate one or more of the predicted energy 122 or the maximum energy 124 as estimated by more than one DPE.[0063] The VCG circuit 106 is configured to generate the clock control signal 210 in response to an estimated voltage drop of the electronic device exceeding the configurable voltage threshold 202 during the time period.[0064] The device 200 further includes the clock circuitry 198 coupled to an output of the VCG circuit 106. The clock circuitry 198 is configured to receive the clock signal 112 and to selectively gate the clock signal 112 responsive to the clock control signal 210, generating the gated clock signal 137.[0065] The EFRC 299 may be configured to sample, hold, and release predicted energy and maximum energy values during a clock gating operation performed by the VCG circuit 106. To illustrate, in some implementations, the EFRC 299 is configured to store first values corresponding to energy per cycle consumption associated with scheduled instructions or scheduled packets of the electronic device.
The energy per cycle may be determined based on the set of energy weights 120. The EFRC 299 may be configured to store second values based on the clock control signal 210. In a particular example, the EFRC 299 may also store other transformed and other normalized values generated based on the set of energy weights 120 and the values 130 in FIG. 1B (e.g., one or more threshold configuration settings, a center frequency configuration setting, a bandwidth configuration setting, one or more other settings, or a combination thereof).[0066] During operation, the device 200 may receive, at the VCG circuit 106, the digital signal 254. The digital signal 254 is indicative of expected energy (e.g., one or more of the predicted energy 122 per cycle or the maximum energy 124 per cycle) to be consumed by an electronic device during a time period. The device 200 may be configured to generate, based on the expected energy per cycle, the clock control signal 210 in response to an estimated voltage drop of the electronic device exceeding the configurable voltage threshold 202 during the time period. The device 200 may be configured to selectively gate the clock signal 112 using the clock circuitry 198 responsive to the clock control signal 210.[0067] In some examples, the EFRC 299 includes FIFO storage circuitry, which may be implemented using retiming logic to ensure cycle accuracy of the clock control signal 210. In some examples, the EFRC 299 is configured to receive (e.g., via the digital signal 254) and hold (e.g., "freeze") values of energy per cycle, maximum energy per cycle, and computed values of PDN configurations that are to be sent to the digital filter 230.[0068] The EFRC 299 may include data and control buses each having a bus width of one or more bits. Data from the buses may propagate through one or more multiplexers of the EFRC 299 (e.g., a representative multiplexer 281) based on the value of the clock control signal 210. In a particular example, the clock control signal 210 is provided to each of the multiplexers as a select signal. A logic one value of the clock control signal 210 may cause the multiplexer 281 to output a digital signal 283.[0069] Each of the multiplexers may be configured to receive its own output signal as a feedback input signal, such as by receiving the digital signal 283 at a first input of the multiplexer 281, and other multiplexers of the EFRC 299 may be configured to receive an output of the previous multiplexer as an input. One or more multiplexer outputs may be stored at flip-flop circuits of the EFRC 299, such as illustrated in the example of FIG. 2B.[0070] In FIG. 2B, multiple gated clock signals 137 may be generated by the clock circuitry 198 for each of a plurality of N processors (where N indicates a positive integer greater than one). In some examples, a first gated clock signal 137 is provided to a first processor (e.g., to the processor fetch, issue, control, memory access, and execution pipeline 270), a second gated clock signal 138 is provided to a second processor, and a third gated clock signal 139 is provided to the Nth processor. [0071] By using one or more aspects of FIG. 2B, voltage droop can be predicted rather than reactively detected. As a result, overall performance may be enhanced, voltage may be reduced, or a combination thereof.[0072] FIG. 3A depicts certain aspects of an example of the VCG performance buffer (VPB) 110.
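Before turning to the VPB of FIG. 3A, the hold-and-release behavior of the EFRC 299 described in paragraphs [0067]-[0069] can be sketched as a chain of multiplexed registers. This Python model is a simplified, assumption-laden sketch rather than the disclosed circuit: each stage either captures new data or recirculates its stored value based on the clock control signal 210.

    class FreezeStage:
        # One EFRC stage: a 2:1 multiplexer (like the representative
        # multiplexer 281) whose output feeds a flip-flop, with the flip-flop
        # output fed back to one multiplexer input so the value can be held.
        def __init__(self, initial=0):
            self.q = initial

        def clock(self, d, hold):
            # hold models the clock control signal 210 used as the mux select:
            # recirculate ("freeze") the stored value while the core clock is
            # gated; otherwise capture new data from the bus.
            self.q = self.q if hold else d
            return self.q

    # A small FIFO of stages, each taking the previous stage's output as input.
    stages = [FreezeStage() for _ in range(3)]

    def efrc_clock(new_value, hold):
        d = new_value
        for stage in stages:
            d = stage.clock(d, hold)
        return d  # value presented to the digital filter 230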
The VPB 110 is coupled to the VCG circuit 106, such as via the flip-flop 209 of the VCG circuit 106. The VPB 110 is also coupled to the PMU 118 and to the clock circuitry 198.[0073] The VPB 110 may be configured to count gated clock cycles for performance measurement. The VPB 110 may be configured to measure performance when the global clock of the system (e.g., the clock signal 112) is clock-gated.[0074] In the example of FIG. 3A, the VPB 110 includes a gate 301 (e.g., an AND gate), a counter 302 (e.g., an increment and decrement counter), a sequential buffer 303, and a comparison circuit 304.[0075] The counter 302 may be configured to increment in response to the clock signal 112. In some implementations, the processor logic 108 is deactivated in response to gating of the clock signal 112 by the VCG circuit 106. Upon disengagement by the VCG circuit 106 and restarting of the gated clock signal 137, the VPB 110 may be configured to begin decrementing the counter 302 and to transmit an enable signal (e.g., a 1-bit indicator) to the PMU 118 to cause the PMU 118 to begin counting cycles of the clock signal 112.[0076] In some examples, the global clock gater 216 is responsive to the one or more other global clock enables 212. In a particular example, the global clock gater 216 is configured to generate, based on the clock signal 112 and the one or more other global clock enables 212, the gated clock signal 137.[0077] The sequential buffer 303 is configured to store an accumulated count of gated clock cycles of the gated clock signal 137. The comparison circuit 304 is coupled to an output of the sequential buffer 303 and is configured to generate an output signal 306 indicative of whether the accumulated count exceeds zero. [0078] The gate 301 is configured to receive the clock control signal 210 and the output signal 306. The counter 302 is configured to selectively increment or decrement the accumulated count based on an output of the gate 301 and based on an output of the sequential buffer 303. In a particular example, when the clock control signal 210 indicates no clock gating, the accumulated count is decremented by one, and when the clock control signal 210 indicates clock gating, the accumulated count is incremented by one. In a particular example, decrementing of the accumulated count is disabled when the output signal 306 indicates that the accumulated count is zero or when the clock control signal 210 indicates clock gating.[0079] Thus, in some examples, the counter 302 is configured to increment the accumulated count while the clock control signal 210 indicates clock gating. The counter 302 may be configured to decrement the accumulated count, until the accumulated count equals zero, while the clock control signal 210 does not indicate clock gating.[0080] The global clock gater 216 is configured to generate the gated clock signal 137 based on selectively gating cycles of the clock signal 112 responsive to the clock control signal 210 and to the one or more global clock enables 212. In the example of FIG. 3A, the global clock gater 216 includes a gate 320 (e.g., an AND gate) configured to receive the one or more global clock enables 212 and the clock control signal 210. A latch 322 is coupled to receive an output signal from the gate 320. A gate 324 (e.g., an AND gate) is configured to selectively pass or gate the clock signal 112 based on an output of the latch 322. The latch 322 and the sequential buffer 303 are clocked by the clock signal 112. In the example of FIG.
3A, the global clock gater 216 includes a buffer 326 configured to buffer an output of the gate 324 to generate the gated clock signal 137.[0081] In some implementations, the PMU 118 is coupled to an output of the clock circuitry 198. The performance monitoring unit 118 monitors performance of a processor (e.g., the processor 100). The performance monitoring unit 118 and the processor 100 are responsive to the gated clock signal 137. Processor cycles and processor performance events in the PMU 118 are used to calculate performance statistics. However, if the clocks to the PMU 118 are gated, the PMU counts cannot account for the cycles during which the clocks are gated (e.g., due to voltage droop mitigation). In an example, the performance monitoring unit 118 is in a clock domain of the clock signal 112 and, when the clock signal 112 is gated to mitigate voltage droop at the processor 100, the performance monitoring unit 118 (which is sampling on the gated clock signal 137) does not detect one or more cycles of the clock signal 112.[0082] The counter 302 may be configured to count a number of gated clock cycles. The comparison circuit 304 may be configured to provide the output signal 306 (e.g., a 1-bit signal) to the PMU 118, from which the number of gated clock cycles can be determined.[0083] The PMU 118 is responsive to the output signal 306 of the comparison circuit 304 to determine a number of cycles of the clock signal 112 that have elapsed while the clock signal 112 is gated. To illustrate, the PMU 118 is configured to determine a number of gated clock cycles during which the output signal 306 indicates that the accumulated count exceeds zero. The PMU 118 is configured to adjust a performance measurement of the processor at least partially based on the number of gated clock cycles during which the output signal 306 indicates that the accumulated count exceeds zero.[0084] Operation of the VPB 110 is described in accordance with a particular implementation in which a PMU event is used to calculate the number of clock cycles of the clock signal 112 that are clock-gated by a voltage-droop mitigation mechanism (e.g., the VCG circuit 106) that generates the clock control signal 210. In one example, the VCG circuit 106 generates the gated clock signal 137 by gating the root clock (e.g., the clock signal 112) of the master processor and a vector coprocessor including the PMU 118. The count of gated clock cycles in the sequential buffer 303 is reset on boot-up. When the VCG circuit 106 engages due to a voltage droop event (e.g., the clock control signal 210 indicates voltage clock gating, or that the clock is disabled), the gated clock signal 137 is generated, and the counter 302 is incremented.[0085] When the count is greater than 0, the comparison circuit 304 asserts the output signal 306 indicating that clock gating is engaged. During this time, the PMU 118 cannot sample this indicator because the gated clock signal 137 used for sampling by the PMU 118 is off. After the clock gating disengages (e.g., the clock control signal 210 does not indicate clock gating, or that the clock is enabled), the clock signal 112 is no longer gated, and the VPB 110 starts decrementing the counter 302 responsive to the un-gated tap of the clock signal 112. Concurrently, the PMU 118 starts sampling the output signal 306 indicator of a non-zero accumulated count responsive to the clock signal 112, which is also now un-gated.
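The increment/decrement behavior just described lends itself to a compact software model. The following Python sketch is an illustrative assumption rather than the disclosed RTL: it models one VPB update per cycle of the clock signal 112 and shows how the PMU 118 can reconstruct the number of gated cycles from the 1-bit output signal 306.

    def vpb_step(count, gating_indicated):
        # Increment the accumulated count while the clock control signal 210
        # indicates gating; otherwise decrement, saturating at zero.
        if gating_indicated:
            count += 1
        elif count > 0:
            count -= 1
        # The comparison circuit 304 asserts the output signal 306 while the
        # accumulated count exceeds zero.
        return count, count > 0

    def pmu_adjusted_cycles(sampled_cycles, output_306_samples):
        # After un-gating, the PMU adds one cycle for each sample in which the
        # output signal 306 still indicates a non-zero accumulated count.
        return sampled_cycles + sum(1 for high in output_306_samples if high)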
The output signal 306 indicator of a non-zero accumulated count remains high until the decremented value in the sequential buffer 303 reaches zero. Because the number of cycles during processor execution when the processor remains un-gated (i.e., no clock gating due to the clock control signal 210 having a value of "1") is greater than the number of gated clock cycles (i.e., clock gating due to the clock control signal 210 having a value of "0"), the accumulated count does not overflow the sequential buffer 303.[0086] By generating the output signal 306 to indicate a number of gated clock cycles, the PMU 118 can adjust performance monitoring to account for such gated clock cycles. As a result, performance may be monitored with increased accuracy.[0087] FIG. 3B depicts an illustrative example of components that may be included in the VCG circuit 106. The components illustrated in FIG. 3B include the VTM selector circuit 203 and the digital filter 230.[0088] In some examples, the VTM selector circuit 203 is configured to select among multiple different voltage thresholds, such as a first voltage threshold 370 and a second voltage threshold 372. The VTM selector circuit 203 may dynamically select, during operation of the processor 100 and based on the first voltage threshold 370 and the second voltage threshold 372, between performance improvement at the cost of increased power and power savings at the cost of reduced performance. In some examples, the first voltage threshold 370 corresponds to "threshold config" illustrated in FIG. 2A, and the second voltage threshold 372 corresponds to "relaxed threshold config" illustrated in FIG. 2A.[0089] The VTM selector circuit 203 and the digital filter 230 are coupled to inputs of a comparison circuit 356. A logic circuit 358 is coupled to the comparison circuit 356 and to the flip-flop 209.[0090] The VTM selector circuit 203 is responsive to the clock control signal 210 to selectively output the first voltage threshold 370 or the second voltage threshold 372 as the configurable voltage threshold 202 to the comparison circuit 356. As a result, a criterion used to determine whether clock gating by the VCG circuit 106 is to be performed for a particular clock cycle can be adjusted based on whether clock gating was performed in the immediately preceding clock cycle. This reduces or eliminates a likelihood that clock gating is performed for two consecutive clock cycles.[0091] A first buffer 380 is coupled to a first input of the VTM selector circuit 203 and is configured to store the first voltage threshold 370. A second buffer 382 is coupled to a second input of the VTM selector circuit 203 and is configured to store the second voltage threshold 372. The buffers 380, 382 may correspond to registers, such as the VCR 102, and may store the first voltage threshold 370 and the second voltage threshold 372 in the values 130, as shown in FIG. 1B.[0092] The VTM selector circuit 203 is configured to select one of the first voltage threshold 370 or the second voltage threshold 372 as the configurable voltage threshold 202 responsive to the clock control signal 210. For example, the VTM selector circuit 203 may include a multiplexer that receives the clock control signal 210 as a control input. The VTM selector circuit 203 outputs the configurable voltage threshold 202 to the comparison circuit 356.[0093] The digital filter 230 is configured to generate the predicted voltage-response 208.
In an example, the digital filter 230 is an infinite impulse response (IIR)-type filter configured to generate estimated voltage values based on estimated current or energy per cycle values indicative of expected current or energy to be consumed by an electronic device, such as the processor 100. In a particular implementation, the digital filter 230 includes an IIR-type second order band pass filter.[0094] The comparison circuit 356 is configured to receive the predicted voltage-response 208 and to generate a comparison signal 374 that indicates whether the predicted voltage-response 208 is less than or equal to the configurable voltage threshold 202. In an illustrative example, in response to the predicted voltage-response 208 being less than or equal to the configurable voltage threshold 202, the comparison circuit 356 outputs the comparison signal 374 having a first value (e.g., "1"), and in response to the predicted voltage-response 208 being greater than the configurable voltage threshold 202, the comparison circuit 356 outputs the comparison signal 374 having a second value (e.g., "0").[0095] The logic circuit 358 and the flip-flop 209 are configured to generate the clock control signal 210 based on the comparison signal 374. To illustrate, the logic circuit 358 may be configured to generate an output signal based on an inverted version of the comparison signal 374. The output signal is received by the flip-flop 209 and is output as the clock control signal 210.[0096] During operation, in a first clock cycle, the clock control signal 210 may have a "1" value, indicating no clock gating. As a result, the first voltage threshold 370 is selected by the VTM selector circuit 203 and compared to the predicted voltage-response 208 in the second clock cycle. In response to the predicted voltage-response 208 being less than or equal to the first voltage threshold 370, the comparison signal 374 has a "1" value, causing the clock control signal 210 to have a "0" value for the next clock cycle (e.g., a third clock cycle) that follows the second clock cycle.[0097] During the third clock cycle, the comparison circuit 356 is configured to compare the predicted voltage-response 208 during the third clock cycle to the second voltage threshold 372 in response to the clock control signal 210 having the "0" value, indicating clock gating during the previous clock cycle (i.e., during the second clock cycle).[0098] The second voltage threshold 372 may have a value set to reduce or eliminate clock gating during consecutive clock cycles. For example, the second voltage threshold 372 may have a different value than the first voltage threshold 370 to prevent the clock control signal 210 from indicating that clock gating is to be applied for that clock cycle. As a result, in such implementations, the VTM selector circuit 203 allows for clock gating only every other clock cycle. The second voltage threshold 372 takes effect through the VTM selector circuit 203 a cycle immediately following the clock cycle where the predicted voltage-response 208 is less than or equal to the first voltage threshold 370 and prevents the processor (e.g., the processor 100) from being gated for two consecutive cycles during voltage-droop mitigation. [0099] The first voltage threshold 370 and the second voltage threshold 372 can be adjusted post-silicon during operation of the processor 100 based on targeted performance and power-savings.
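The alternating-threshold behavior of paragraphs [0090]-[0098] reduces to a small selection function. The Python sketch below is illustrative only; the threshold values are placeholders, and the VTM selector circuit 203 realizes this selection with a multiplexer driven by the clock control signal 210.

    def select_threshold(prev_clock_control, threshold_config, relaxed_threshold_config):
        # A "0" on the clock control signal 210 means the previous cycle was
        # gated, so the relaxed (second) threshold 372 is applied, discouraging
        # gating on two consecutive cycles; otherwise the first threshold 370
        # ("threshold config") is used.
        return relaxed_threshold_config if prev_clock_control == 0 else threshold_config

    # Example: with a relaxed threshold of 0, a cycle that follows a gated
    # cycle is gated again only if the predicted voltage-response is <= 0.
    threshold = select_threshold(prev_clock_control=0,
                                 threshold_config=12, relaxed_threshold_config=0)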
In some examples, the first voltage threshold 370 and the second voltage threshold 372 have equal values, which may keep the clock signal 112 gated for two consecutive clock cycles during voltage-droop mitigation. In some implementations, a voltage droop threshold may be set "aggressively" by selecting values of zero for the first voltage threshold 370 and the second voltage threshold 372. A greater value (e.g., 127) may be selected to disable the VCG circuit 106 from engaging during any voltage-droop event. The second voltage threshold 372 may take effect, through the VTM selector circuit 203, one cycle after the cycle in which the VCG circuit 106 engages.[00100] Thus, the second voltage threshold 372 may take effect, through the VTM selector circuit 203, a cycle immediately after voltage-droop mitigation engages following a first cycle that violates a threshold for voltage droop mitigation. The second voltage threshold 372 enables the VCG circuit 106 to clock gate every other cycle, thus preventing consecutive clock gating and limiting performance degradation. In this example, the second voltage threshold 372 prevents the processor 100 from being gated for consecutive cycles during voltage-droop mitigation.[00101] FIG. 4 depicts an example of a system 400 having a shared power delivery network (PDN) 406. In some examples, the PDN 406 includes a voltage rail that is common to multiple homogeneous or heterogeneous devices or processors, such as the processor 100, a device or processor 410, a device or processor 412, a device or processor 418, a device or processor 422, and a device or processor 426.[00102] The processor 100 includes the PCGS 199 that includes any of the VCG circuit 106, the EFRC 299, the VTM selector circuit 203, the DPE 104, the VPB 110, the PMU 118, the clock circuitry 198, and the global clock gater 216.[00103] The processor 100 is configured to receive a core power supply voltage from the PDN 406. In some examples, the device or processor 410 is coupled to and shorted to the PDN 406. For example, the device or processor 410 may include one or more input/output (I/O) devices coupled to and shorted to one or more I/O terminals of the system 400.[00104] In some examples, the processor 100 is coupled to or includes a head switch 414. The head switch 414 may be configured to generate a supply voltage, such as a power-gated core supply 425, that is based on a parent power supply voltage provided by the PDN 406. Further, the processor 100 can share the same parent supply among multiple processors or devices each having a corresponding head switch configured to receive the power supply voltage from the PDN 406. For example, in FIG. 4, the devices or processors 418, 422, and 426 include head switches 416, 420, and 424, respectively, and share the same PDN 406 as the processor 100. The head switches 416, 420, and 424 are configured to receive the power supply voltage from the PDN 406, and the devices or processors 418, 422, and 426 are configured to receive the power-gated core supply 425. In some examples, the devices or processors 418, 422, and 426 are included in the processor logic 108 and 194 of FIG. 1A.[00105] In a particular example, when a predicted voltage-response of the processor 100 violates the configurable voltage threshold, indicating that a voltage droop event is about to occur, the PCGS 199 proactively clock-gates the processor 100.
In some cases, techniques described in this disclosure reduce latency as compared to other devices that use digital circuit monitors such as critical path monitors or analog voltage sensors to trigger droop mitigation. Using a technique in accordance with the disclosure, an aggressor (e.g., the processor 100) with a PCGS 199 may improve voltage noise immunity for other victims (e.g., any of the devices 410, 412, 418, 422, and 426) that may not include a PCGS 199 and that may be connected to the same shared rail as the aggressor. The PCGS 199 may prevent the voltage noise inducer (or “aggressor”) from injecting noise by using proactive and predictive clock gating, thus reducing or eliminating voltage noise propagation through either the PDN 406 (which could affect the device 410) or through the power-gated core supply 425 (which could affect the devices 410, 412, 418, 422, 426). In certain conventional devices that do not use a PCGS technique, a victim is unprotected from the voltage noise propagated on the shared rail, which can result in a failed processing operation by the victim due to pipeline execution slow down or due to functional failures caused by voltage noise. In some cases, the failed processing operation increases voltage on the entire shared rail, increasing power to devices supplied by the same voltage rail and causing the devices to consume more power via the shared rail. If the aggressor uses a conventional reactive voltage droop mitigation technique, then voltage reduction may be associated with latency. In this example, the reaction time of a conventional technique may permit voltage noise to propagate to a victim device prior to voltage droop mitigation taking effect.[00106] The system 400 includes a primary device (e.g., the processor 100) and at least one other device (e.g., any of the devices 410, 412, 418, 422, 426) configured to operate with or without monitoring a voltage level of the supply voltage of the shared PDN 406 (or a voltage level of the power-gated core supply 425). For example, in some implementations, by using the PCGS 199, the processor 100 and the devices 410, 412, 418, 422, 426 are not required to monitor voltage level of the supply voltage of the shared PDN 406 (or a voltage level of the power-gated core supply 425). In some examples, the primary device and the at least one other device are configured to share a same power distribution network voltage supply, such as a voltage supply of the shared PDN 406. In some examples, one or both of the primary device and the at least one other device are each coupled to a respective dedicated and private power distribution network having an independent voltage supply. Thus, certain circuitry associated with voltage level monitoring can be omitted, increasing power savings and circuit area available for other device components.[00107] Referring to FIG. 5A, a particular illustrative example of a method is depicted and generally designated 500. In some examples, operations of the method 500 are performed by the processor 100.[00108] The method 500 includes receiving, at a voltage-clock gate (VCG) circuit from a digital power estimator, indications of a predicted energy consumption per cycle of a clock signal and a maximum energy consumption per cycle of the clock signal, at 502. 
For example, the DPE 104 may be configured to generate an indication of the predicted energy 122 per cycle of the clock signal 112 and an indication of the maximum energy 124 per cycle of the clock signal 112, and the VCG circuit 106 may be configured to receive the indications from the DPE 104. In some examples, the DPE 104 is configured to provide the indications to the VCG circuit 106 multiple cycles of the clock signal 112 prior to occurrence of the voltage droop event to enable the VCG circuit 106 to gate the clock signal 112 during a clock cycle associated with the undershoot phase.[00109] The method 500 further includes, in response to the indications, gating the clock signal, at 504. The clock signal is gated prior to occurrence of a voltage droop event and using hardware voltage model circuitry of the VCG circuit, and the clock signal is gated based on an undershoot phase associated with the voltage droop event. To illustrate, the VCG circuit 106 may be configured to gate the clock signal 112 so that the clock signal 112 is gated while the predicted voltage-response 208 is less than or equal to the configurable voltage threshold 202 (e.g., while a voltage droop event is in an undershoot phase). The VCG circuit 106 includes hardware voltage model circuitry, such as any of the digital filter 230, the VTM selector circuit 203, and the comparison circuit 356, as illustrative examples.[00110] The method 500 further includes un-gating the clock signal based on an overshoot phase associated with the voltage droop event, at 506. To illustrate, the VCG circuit 106 may be configured to un-gate the clock signal 112 so that the clock signal 112 is un-gated while the predicted voltage-response 208 is greater than the configurable voltage threshold 202 (e.g., while a voltage droop event is in an overshoot phase).[00111] Referring to FIG. 5B, a particular illustrative example of a method is depicted and generally designated 550. In some examples, operations of the method 550 are performed by the processor 100.[00112] The method 550 includes using one or more digital power estimators (DPE) to generate a sequence of normalized energy per cycle and maximum energy per cycle, where the energy is computed one or more cycles ahead of processor execution, at 552. The method 550 further includes, in response to the sequence, configuring a voltage-clock gate (VCG) circuit of a proactive clock gating system (PCGS) to mitigate a voltage-droop event through gating during undershoot and un-gating during overshoot of the entire clock domain, at 554. [00113] The method 550 further includes achieving zero-cycle latency of mitigation of the voltage-droop event by predicting a future consumption of energy prior to the voltage-droop event, at 556. The method 550 further includes dynamically reconfiguring voltage-droop mitigation thresholds through a voltage threshold multiplexer (VTM) selector to obtain various performance and power tradeoffs, at 558.[00114] The method 550 further includes retiming to sample, hold, and release data in event-per-cycle first-in, first-out retimed circuitry (EFRC), a DPE power trace, and other normalized values from a voltage configuration register (VCR) each clock cycle, accounting for global clock gating and un-gating by the VCG circuit, at 560. The method 550 further includes measuring a count of VCG clock gating cycles for accurate performance tallying using a VCG performance buffer (VPB), at 562.[00115] Referring to FIG.
6, a block diagram of a particular illustrative example of an electronic device is depicted and generally designated 600. The electronic device 600 may include a system-in-package or system-on-chip device and may include the processor 100. In some examples, a voltage supply for the electronic device 600 and the processor 100 may be supplied from a power supply 644. The processor 100 may include a proactive clock gating system (PCGS) including the VCG circuit 106. The VCG circuit 106 may be coupled to the DPE 104 and to the VCR 102.[00116] The PCGS may be configured to compute an estimated voltage to predict occurrence of a voltage droop event. The VCG circuit 106 includes the EFRC 299 to retime and pipeline stage control and data signals each clock cycle prior to internal computation and decision making by the VCG circuit 106.[00117] The VCG circuit 106 may be configured to generate clock enables to gate and un-gate clock signals provided to the processor logic 108, 194 during voltage droop events in the processor 100. The VTM selector circuit 203 may dynamically reconfigure the VCG circuit 106 to provide different programmable options of voltage thresholds that determine clock gating by the VCG circuit 106.[00118] The PCGS may be configured to detect voltage droops in the voltage supply and provide a gated clock signal. The performance loss due to voltage droop mitigation may be accurately measured using the VPB 110, which may provide such information to the PMU 118. Various details of the PCGS have been omitted from the example depicted in FIG. 6, but aspects of the PCGS may be implemented using one or more aspects described above. Although not shown, one or more other processors 100 may share the same power supply and may also be included in the electronic device 600.[00119] Depending on the particular implementation, the electronic device 600 may correspond to a mobile device (e.g., a cellular phone), a computer (e.g., a server, a laptop computer, a tablet computer, or a desktop computer), an access point, a base station, a wearable electronic device (e.g., a personal camera, a head-mounted display, or a watch), a vehicle control system or console, an autonomous vehicle (e.g., a robotic car or a drone), a home appliance, a set top box, an entertainment device, a navigation device, a personal digital assistant (PDA), a television, a monitor, a tuner, a radio (e.g., a satellite radio), a music player (e.g., a digital music player or a portable music player), a video player (e.g., a digital video player, such as a digital video disc (DVD) player or a portable digital video player), a robot, a healthcare device, an Internet of Things (IoT) device, another electronic device, or a combination thereof.[00120] The electronic device 600 may further include one or more memories, such as a memory 632. The memory 632 may be coupled to the processor 100, to another memory, or to both. The memory 632 may be configured to store instructions 633 that are executable by the processor 100, by another processor, or both. The memory 632 may include random access memory (RAM), magnetoresistive random access memory (MRAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), one or more registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), another memory device, or a combination thereof.[00121] FIG.
6 also shows a display controller 626 that is coupled to the processor 100 and to a display 628. A coder/decoder (CODEC) 634 can also be coupled to the processor 100. A speaker 636 and a microphone 638 can be coupled to the CODEC 634. [00122] FIG. 6 also indicates that a wireless controller 640 can be coupled to the processor 100 and to an antenna 642. In a particular example, the processor 100, the display controller 626, the memory 632, the CODEC 634, and the wireless controller 640 are included in a system-in-package or system-on-chip device 622. In a particular example, an input device 630 and the power supply 644 are coupled to the system-on-chip device 622. Moreover, in a particular example, as illustrated in FIG. 6, the display 628, the input device 630, the speaker 636, the microphone 638, the antenna 642, and the power supply 644 are external to the system-on-chip device 622. However, each of the display 628, the input device 630, the speaker 636, the microphone 638, the antenna 642, and the power supply 644 can be coupled to a component of the system-on-chip device 622, such as to an interface or to a controller.[00123] In conjunction with the described embodiments, a computer-readable medium (e.g., the memory 632) stores instructions (e.g., the instructions 633) executable by a processor (e.g., the processor 100) to initiate, perform, or control operations. The operations include receiving, at a voltage-clock gate (VCG) circuit (e.g., the VCG circuit 106) from a digital power estimator (e.g., the DPE 104), indications of a predicted energy consumption (e.g., the predicted energy 122) per cycle of a clock signal (e.g., the clock signal 112) and a maximum energy consumption (e.g., the maximum energy 124) per cycle of the clock signal. In some examples, the indications are received from a VCG configuration register (e.g., the VCR 102) and include PDN characteristics, such as a PDN center frequency and a PDN bandwidth. The operations further include, in response to the indications, gating the clock signal. The clock signal is gated prior to occurrence of a voltage droop event and using hardware voltage model circuitry of the VCG circuit, and the clock signal is gated based on an undershoot phase associated with the voltage droop event. The operations further include un-gating the clock signal based on an overshoot phase associated with the voltage droop event.[00124] In conjunction with the described embodiments, an apparatus includes means (e.g., the DPE 104) for generating indications of a predicted energy consumption (e.g., the predicted energy 122) per cycle of a clock signal (e.g., the clock signal 112) and a maximum energy consumption (e.g., the maximum energy 124) per cycle of the clock signal. The apparatus further includes means (e.g., the VCG circuit 106) for gating and for un-gating the clock signal based on the indications prior to occurrence of a voltage droop event and using hardware voltage model circuitry. The means for gating and un-gating is configured to gate the clock signal based on an undershoot phase associated with the voltage droop event and to un-gate the clock signal based on an overshoot phase associated with the voltage droop event.
In some implementations, the apparatus further includes means (e.g., the processor fetch, issue, control, memory access, and execution pipeline 270) for executing instructions coupled to the means for generating and the means for gating and un-gating.[00125] The foregoing disclosed devices and functionalities may be designed and configured into computer files (e.g., RTL, GDSII, GERBER, etc.) stored on computer readable media. Some or all such files may be provided to fabrication handlers who fabricate devices based on such files. Resulting products include semiconductor wafers that are then cut into semiconductor die and packaged into a semiconductor chip. The chips are then employed in devices described above.[00126] Those of skill would further appreciate that the various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[00127] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary non-transitory (e.g. tangible) storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). The ASIC may reside in a computing device or a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a computing device or user terminal.[00128] The previous description of the disclosed embodiments is provided to enable a person skilled in the art to make or use the disclosed embodiments. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other embodiments without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope possible consistent with the principles and novel features as defined by the following claims. |
A method of manufacturing an integrated circuit may include the steps of annealing a gate structure and a halo section disposed over a substrate using a first temperature, implanting dopants to form drain and source regions, and annealing drain and source regions at a second temperature. The second temperature is substantially less than the first temperature. |
What is claimed is: 1. A method of manufacturing an integrated circuit, the method comprising:annealing a halo section and a gate structure disposed over a substrate, the annealing using a first temperature; implanting dopants to form drain and source regions; and annealing the drain and source regions at a second temperature, the second temperature between 200[deg.] C. and 300[deg.] C. less than the first temperature. 2. The method of claim 1, wherein the step of implanting dopants comprises providing a dopant implant of between 1*10<15> cm<-2> and 5*10<15> cm<-2>.3. The method of claim 1, wherein the step of annealing drain and source regions at a second temperature comprises providing a furnace anneal.4. The method of claim 1, wherein the gate structure comprises a gate insulator of a high-k material.5. The method of claim 1, wherein the step of implanting dopants comprises providing dopants so as to self-amorphize drain and source regions.6. The method of claim 5, wherein the step of annealing at a second temperature comprises recrystallizing the self-amorphized drain and source regions.7. The method of claim 1, further comprising a solid-phase epitaxy process in which dopants in drain and source regions are activated.8. A method of manufacturing an integrated circuit, the method comprising:annealing a halo section and a gate structure disposed over a substrate, the annealing using a first temperature; implanting dopants to form drain and source regions; and annealing the drain and source regions at a second temperature, the second temperature being at least 200[deg.] C. less than the first temperature; wherein the first temperature is at least 800[deg.] C. and the second temperature is approximately between 500[deg.] and 600[deg.] C. 9. A method of manufacturing an integrated circuit having source and drain regions, the method comprising:providing a gate structure over a substrate; implanting dopants into the substrate to produce a halo section; annealing the gate structure and the halo section at a first temperature; forming a source region and a drain region after annealing the gate structure and the halo section; and annealing the source and drain regions at a second temperature, the second temperature being less than the first temperature, wherein the second temperature is between 500[deg.] C. and 600[deg.] C. 10. The method of claim 9, wherein the implanting dopants step produces a halo section having a dopant concentration of approximately 1*10<18> cm<-3>.11. The method of claim 9, wherein the gate structure includes a polysilicon gate electrode.12. The method of claim 9, wherein the first temperature is greater than approximately 800[deg.] C.13. The method of claim 9, wherein the step of forming a source region and a drain region comprises implanting a dopant into the substrate to form self-amorphized source and drain regions.14. The method of claim 13, wherein annealing the source region and drain region at a second temperature acts to recrystallize the source and drain regions.15. The method of claim 9, further comprising removing the gate structure and providing a second gate structure over the substrate, the second gate structure including a high-k gate insulator.16. 
A method for producing an ultra-large scale integrated circuit comprising:performing a high temperature rapid thermal anneal on a gate structure and a halo section disposed over a semiconductor substrate; implanting dopants into the substrate to form self-amorphized source and drain regions; and performing a low temperature rapid thermal anneal on the self-amorphized source and drain regions; wherein the step of performing a high temperature rapid thermal anneal comprises annealing the gate structure and halo section at a temperature greater than approximately 800[deg.] C. and the step of performing a low temperature rapid thermal anneal comprises annealing the self-amorphized source and drain regions at a temperature between approximately 500[deg.] C. and 600[deg.] C. 17. The method of claim 16, further comprising implanting dopants into the substrate to form self-amorphized source and drain extensions and performing a low temperature rapid thermal anneal on the self-amorphized source and drain extensions.18. The method of claim 16, wherein the step of implanting dopants comprises implanting heavy dopants into the substrate at a dose of between approximately 1*10<15> cm<-2> and 5*10<15> cm<-2>.19. The method of claim 16, wherein the step of performing a low temperature rapid thermal anneal recrystallizes the self-amorphized source and drain regions.20. The method of claim 16, further comprising removing the gate structure and providing a second gate structure disposed over the substrate, the second gate structure including a high-k gate insulator. |
CROSS REFERENCE TO RELATED APPLICATIONSThis patent application is related to U.S. Pat. No. 6,200,869, by Yu et al., entitled "A Method of Fabricating an Integrated Circuit with Ultra-Shallow Drain/Source Extensions"; U.S. Pat. No. 6,180,476, by Yu, entitled "Dual Amorphization Implant Process for Ultra-Shallow Drain and Source Extensions"; U.S. Pat. No. 5,985,726, by Yu et al., entitled "A Damascene Process for Forming Ultra-Shallow Source/Drain Extensions and Pocket in ULSI MOSFET"; and U.S. Pat. No. 6,225,173, by Yu, entitled "Recessed Channel Structure for Manufacturing Shallow Source/Drain Extensions" all filed on Nov. 6, 1998 and assigned to the assignee of the present invention. In addition, this patent application is related to U.S. Pat. No. 6,271,095, by Yu, entitled "A Locally Confined Deep Pocket Process for ULSI MOSFETS"; U.S. Pat. No. 6,184,097, by Yu, entitled "A Process for Forming Ultra-Shallow Source/Drain Extensions"; and U.S. Pat. No. 6,225,176, by Yu, entitled "Step Drain and Source Junction Formation", all filed on Feb. 22, 1999 and assigned to the assignee of the present invention. The present application is also related to U.S. Pat. No. 6,265,293, by Yu, entitled "CMOS Transistors Fabricated in Optimized RTA Scheme", filed on Aug. 27, 1999 and assigned to the assignee of the present invention. The present application is also related to U.S. Pat. No. 6,333,244, by Yu, entitled "CMOS Fabrication Process with Differential Rapid Thermal Anneal Scheme", filed on Jan. 26, 2000 and assigned to the assignee of the present invention. The present application is also related to U.S. application Ser. No. 09/597,623, by Yu, entitled "Dual Amorphization Process Optimized to Reduce Gate Line Over-Melt" and U.S. application Ser. No. 09/597,098, by Yu, entitled "Process Utilizing a Cap Layer Optimized to Reduce Gate Line Over-Melt" filed on Jun. 20, 2000 and assigned to the assignee of the present invention.FIELD OF THE INVENTIONThe present invention relates generally to the field of integrated circuits and to methods of manufacturing integrated circuits. More particularly, the present invention relates to a low thermal budget method of manufacturing an integrated circuit with self-amorphized source/drain junctions and extensions.BACKGROUND OF THE INVENTIONIntegrated circuits (ICs), such as, ultra-large scale integrated (ULSI) circuits, can include as many as one million transistors or more. The ULSI circuit can include complementary metal oxide semiconductor (CMOS) field effect transistors (FETs) or MOSFETs. The transistors can include semiconductor gates disposed between drain and source regions. The drain and source regions are typically heavily doped with a P-type dopant (boron) or an N-type dopant (phosphorous).The drain and source regions generally include a thin extension that is disposed partially underneath the gate to enhance the transistor performance. Shallow source and drain extensions help to achieve immunity to short-channel effects which degrade transistor performance for both N-channel and P-channel transistors. Short-channel effects can cause threshold voltage roll-off and drain-induced barrier-lowering. Thus, controlling short channel effects is important to assuring proper semiconductor operation.Conventional techniques utilize a double implant process to form shallow source and drain extensions.
According to the conventional process, the source and drain extensions are formed by providing a transistor gate structure without sidewall spacers on a top surface of a silicon substrate. The silicon substrate is doped on both sides of the gate structure via a conventional doping process, such as, a diffusion process or ion implantation process. Without the sidewall spacers, the doping process introduces dopants into a thin region (i.e., just below the top surface of the substrate) to form the drain and source extensions as well as to partially form the drain and source regions.After the drain and source extensions are formed, silicon dioxide spacers, which abut lateral sides of the gate structure, are provided over the source and drain extensions. The substrate is doped a second time to form the deeper source and drain regions. The source and drain extensions are not further doped due to the blocking capability of the silicon dioxide spacers.As the critical dimensions of transistors continue to shrink, control of thermal budget in IC fabrication is very important. The formation of ultra-shallow source/drain extensions and a super localized halo profile is critical to control short-channel effects. In conventional CMOS processes, high temperature (e.g., >1000[deg.] C.) rapid thermal annealing (RTA) is used to activate the dopant in the source, drain, halo, etc. With continually shrinking MOSFET dimensions, high-k materials (e.g., Al2O3, TiO2, ZrO2, etc.) may be used as gate insulators. Unfortunately, high-k materials tend to react with silicon at high temperatures. As such, the processing temperature has to be kept low (e.g., <800[deg.] C.) if high-k materials are to be used as gate dielectrics.Thus, there is a need for a manufacturing process for CMOS integrated circuits in which post-gate processing temperatures are lower such that high-k materials used as gate insulators do not react with silicon. Further, there is a need for a transistor fabrication process which uses a differential anneal strategy. Even further, there is a need for using a heavy dose dopant implant for the shallow source/drain extension and deep source/drain contact junctions such that self-amorphization is possible. Even further still, there is a need for an IC manufacturing process in which a steep source/drain junction is obtained.SUMMARY OF THE INVENTIONOne aspect of one embodiment relates to a method of manufacturing an integrated circuit. The method includes annealing a gate structure and a halo section disposed over a substrate using a first temperature, implanting dopants to form drain and source regions, and annealing drain and source regions at a second temperature. The second temperature is substantially less than the first temperature.Briefly, another aspect of an exemplary embodiment is related to a process of forming source and drain regions in an integrated circuit. The process includes providing a heavy-dose shallow source and drain extension implant which forms a self-amorphized shallow source and drain extension, providing a heavy-dose deep source and drain implant which forms a self-amorphized deep source and drain, and recrystallizing the shallow source and drain extension and deep source and drain.Briefly, another aspect of an exemplary embodiment is related to a method of manufacturing a transistor on an ultra-large scale integrated circuit. The transistor has active regions including a source and a drain and a gate insulator made of a high-k material. 
The method includes the steps of implanting a dopant into a substrate to form a source and drain, in which the dopant has a dosage which causes the source and drain to be self-amorphized, and recrystallizing the self-amorphized source and drain by applying a furnace anneal.Other principal features and advantages of the present invention will become apparent to those skilled in the art upon review of the following drawings, the detailed description, and the appended claims.BRIEF DESCRIPTION OF THE DRAWINGSThe exemplary embodiments will hereafter be described with reference to the accompanying drawings, wherein like numerals denote like elements, and:FIG. 1 is a cross-sectional view of a portion of an integrated circuit fabricated in accordance with an exemplary embodiment of the present invention;FIG. 2 is a cross-sectional view of a portion of the integrated circuit illustrated in FIG. 1, showing a halo implant step;FIG. 3 is a cross-sectional view of a portion of the integrated circuit illustrated in FIG. 2, showing a self-amorphization shallow source/drain extension implant step; andFIG. 4 is a cross-sectional view of a portion of the integrated circuit illustrated in FIG. 3, showing a spacer formation step and a self-amorphization deep source/drain junction implant step.DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTSReferring to FIG. 1, a portion 10 of an integrated circuit (IC) or chip includes a substrate 12, a gate stack 14, a source region 16, a drain region 18, a source extension 20, a drain extension 22, and halo sections 24. Portion 10 is preferably part of an ultra-large-scale integrated (ULSI) circuit having millions of transistors or more. Portion 10 is manufactured as part of the IC on a semiconductor wafer, such as, a silicon wafer.Substrate 12 is any of a variety of semiconductor materials, such as, silicon. Substrate 12 is preferably a P-type substrate. In an exemplary embodiment, gate stack 14 includes a polysilicon gate electrode or conductor 30 disposed over a gate dielectric or insulator 28, such as thermally grown silicon dioxide. Gate stack 14 is aligned between active regions in substrate 12. Active regions are areas in portion 10 including impurities or dopants such as a p-type dopant (e.g., boron) or an n-type dopant (e.g., phosphorous). Gate stack 14 is located between spacers 26. Spacers 26 are preferably silicon dioxide (SiO2) structures which abut lateral sides of gate stack 14 and are provided at least partially over source region 16 and drain region 18.Source region 16 and drain region 18 are formed by ion implantation or doping. The dopant is activated by thermal activation or annealing as described below. Source extension 20 is a shallower extension of source region 16. Drain extension 22 is a shallower extension of drain region 18. Preferably, source extension 20 and drain extension 22 extend at least partially below gate stack 14. In one embodiment, these extensions are 20-40 nm deep. In one embodiment, source/drain regions 16 and 18 are 60-100 nm deep. In one embodiment, the width of each extension region is 30-50 nm.In one method of forming portion 10, different temperature annealing processes are used. A high temperature (e.g., 800[deg.] C.) rapid thermal annealing (RTA) is used for annealing gate conductor 30 and halo sections 24. A very low temperature (e.g., 500-600[deg.] C.)
RTA or furnace anneal is used to activate the dopant in source region 16, drain region 18, source extension 20, and drain extension 22.A heavy dose (e.g., between 1*10<15> cm<-2> and 5*10<15> cm<-2>) dopant implant is used for the shallow S/D extensions 20, 22 and deep S/D contact junctions. The dopant used can be arsenic (As) or antimony (Sb) for an n-channel MOSFET and boron difluoride (BF2) for a p-channel MOSFET. Different from conventional processes, the heavy dose implant creates self-amorphization in both the shallow S/D extension area and the deep S/D contact junction area.A very low temperature (e.g., 500-600[deg.] C.) anneal is sufficient to recrystallize the self-amorphized source region 16 and drain region 18, including extensions 20 and 22. Dopant inside regions 16, 18, 20, and 22 becomes well activated through the mechanism of solid-phase epitaxy. Due to the low thermal budget used, steep junctions 32 and 34 are obtained, which is desirable for transistors with small dimensions.The method of forming portion 10 is described below with reference to FIGS. 1-4. The method advantageously forms portion 10 including self-amorphized source/drain junctions and extensions. Self-amorphization means that without using the traditional species for an amorphization implant (e.g., Si, Ge), the dopant itself (e.g., Sb, As, etc.) can create an amorphous region during implantation. When the dopant mass is heavy enough, it rearranges the crystalline structure of the silicon substrate during the implantation, leaving an amorphous layer in the silicon substrate.In FIG. 2, a cross-sectional view of portion 10 illustrates portion 10 after a conventional CMOS fabrication process is followed to form gate insulator 28 and gate conductor 30. Conductor 30 can be polysilicon or polysilicon germanium material which is doped in the process to be highly conductive, or a metal material. Conductor 30 can be deposited by chemical vapor deposition.After conductor 30 is provided, a halo implant is performed to form halo sections 24. Gate stack 14 and halo sections 24 are annealed. In one embodiment, a high temperature (e.g., 800[deg.] C.) RTA is used for proper activation of the dopants without causing gate insulator 28 to react with silicon. Halo sections 24 are 30-70 nm deep and 10-50 nm wide. Preferably, sections 24 are formed by implanting a P-type dopant at a 1*10<13>-1*10<14> cm<-2> dose to achieve an approximately 1*10<18> cm<-3> concentration for an N-channel transistor. N-type dopants are utilized for a P-channel transistor. Ion implantation devices which accelerate boron (for an N-channel transistor) to an energy of 5-20 keV can form regions 24.Preferably, conductor 30 is 1,000-2,000 Å thick and 50-200 nm wide. Insulator 28 is 1,000-2,000 Å wide and 15-30 Å thick. Insulator 28 can be silicon dioxide, nitride, or a high-k dielectric material. Conductor 30 can be covered by a cap material such as silicon nitride or silicon oxynitride to protect conductor 30 from the implantation of ions for regions 24. Alternatively, conductor 30 can be doped during the formation of regions 24.In FIG. 3, shallow source/drain extension layers 40 and 42 of portion 10 are implanted under a high dose (e.g., between 1*10<15> cm<-2> and 5*10<15> cm<-2>). Layers 40 and 42 are shown as stippled areas in FIG. 3. In an exemplary embodiment, heavy dopants, such as, BF2, As, or Sb are used. Due to the high dose of dopants, shallow source/drain extension layers 40 and 42 become self-amorphized.
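For reference, the exemplary process parameters recited so far can be collected in one place. The following Python mapping is only a reading aid that restates values from this description (ranges are inclusive); it is not part of the disclosed process flow, and the key names are illustrative.

    EXEMPLARY_PROCESS = {
        "gate_halo_anneal_C": 800,             # high-temperature RTA for gate/halo
        "source_drain_anneal_C": (500, 600),   # low-temperature RTA or furnace anneal
        "extension_implant_dose_cm2": (1e15, 5e15),
        "halo_implant_dose_cm2": (1e13, 1e14),
        "halo_depth_nm": (30, 70),
        "n_channel_dopants": ["As", "Sb"],     # self-amorphizing heavy dopants
        "p_channel_dopant": "BF2",
    }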
Self-amorphization occurs because heavy mass dopants displace silicon atoms in substrate 12. Amorphization allows dopants to be activated at lower temperatures. In an exemplary embodiment, shallow source/drain extension layers 40 and 42 have a thickness of 10-30 nm. In FIG. 4, dielectric spacers 26 are formed by deposition and etch-back processes. Spacers 26 can be an oxide or nitride material. A deep source/drain junction implant is provided under a heavy dose (e.g., between 5*10^15 cm^-2 and 1*10^16 cm^-2). Under the heavy dose, deep source/drain junction regions 44 and 46 become self-amorphized, shown in FIG. 4 as stippled areas. In one embodiment, dopants such as BF2, As, or Sb are used. The amorphization implant in FIG. 4 is deeper than that in FIG. 3. In an exemplary embodiment, the depth of this deep amorphous region is 50-100 nm. The purpose of the amorphization implant in FIG. 4 is to define the deep S/D junctions. When both the shallow S/D extensions (layers 40 and 42) and the deep S/D junction regions (regions 44 and 46) are amorphized, the dopant can be activated by the same low-temperature anneal for recrystallization (or solid-phase epitaxy). A low temperature (e.g., 500-600° C.) RTA is applied to recrystallize the whole amorphous layer (regions 40, 42, 44, and 46) and form regions 16, 18, 20, and 22 (FIG. 1). In one embodiment, a solid-phase epitaxy process occurs in which all dopants in the deep source/drain regions (regions 44 and 46 in FIG. 4) and the shallow source/drain extensions (regions 40 and 42 in FIG. 4) are activated. Conventional CMOS fabrication process steps may then be taken to complete the manufacturing of the IC. In one embodiment, conductor 30 can also be amorphized during the doping steps. The process described with reference to FIGS. 1-4 is particularly advantageous in light of the need for smaller integrated circuits. As smaller MOSFETs are designed, high-k materials, such as Al2O3, TiO2, and ZrO2, are used as the gate insulator. Unfortunately, high-k materials react with silicon at high temperatures. As such, lower temperatures must be used. In an exemplary embodiment of the circuit fabrication process described above, the reaction of high-k materials with silicon at high temperatures is avoided by the use of a damascene or sacrificial gate structure. The gate structure is removed after regions 24 are formed and the high temperature anneal is performed according to the process described with respect to FIG. 2. A new gate structure is formed after the high temperature anneal. The new gate structure includes a high-k gate insulator, and the transistor is completed utilizing the process steps described with reference to FIGS. 3, 4, and 1. A very low thermal budget CMOS manufacturing process with self-amorphized source/drain junctions and extensions allows integrated circuits and transistors to be manufactured at ever smaller dimensions. While the embodiments illustrated in the FIGURES and described above are presently preferred, it should be understood that these embodiments are offered by way of example only. Other embodiments may include, for example, different techniques for selectively annealing various integrated circuit structures. The invention is not limited to a particular embodiment, but extends to various modifications, combinations, and permutations that nevertheless fall within the scope and spirit of the appended claims.
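As a rough consistency check on the halo implant figures above (an editor's note, not part of the original text; it treats the implanted dose as spread uniformly over a 50 nm section depth, which the source does not state): a dose \Phi distributed over a junction depth x_j yields a concentration of roughly

N \approx \frac{\Phi}{x_j} = \frac{1 \times 10^{13}\ \mathrm{cm}^{-2}}{50 \times 10^{-7}\ \mathrm{cm}} = 2 \times 10^{18}\ \mathrm{cm}^{-3},

which is the same order of magnitude as the approximately 1*10^18 cm^-3 concentration quoted for halo sections 24.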
Technologies for shadow stack management include a computing device that, when executing a translated call routine in a translated binary, pushes a native return address onto a native stack of the computing device, adds a constant offset to a stack pointer of the computing device, executes a native call instruction to a translated call target, and, after executing the native call instruction, subtracts the constant offset from the stack pointer. Executing the native call instruction pushes a translated return address onto a shadow stack of the computing device. The computing device may map two or more virtual memory pages of the shadow stack onto a single physical memory page. The computing device may execute a translated return routine that pops the native return address from the native stack, adds the constant offset to the stack pointer, and executes a native return instruction. Other embodiments are described and claimed.
1. A computing device for shadow stack management, the computing device comprising: a call module to: push a local return address onto a local stack of the computing device; add a constant offset to a stack pointer of the computing device in response to pushing the local return address onto the local stack; execute a local call instruction to a converted call target in response to the constant offset being added to the stack pointer; and subtract the constant offset from the stack pointer in response to the execution of the local call instruction; and a processor configured to push a converted return address onto a shadow stack of the computing device in response to the execution of the local call instruction. 2. The computing device according to claim 1, further comprising a memory management module to map multiple virtual memory pages of the shadow stack to a first physical memory page. 3. The computing device of claim 1, further comprising: a binary file conversion module to execute a converted calling routine of a converted binary file, wherein the converted calling routine corresponds to a local call instruction of a local binary file, and the converted call target corresponds to the local call target of the local call instruction; wherein pushing the local return address includes pushing the local return address in response to the execution of the converted calling routine. 4. The computing device according to claim 3, wherein the binary file conversion module is further to: generate the converted binary file according to the local binary file, wherein the converted binary file includes the converted calling routine; and execute the converted binary file; wherein executing the converted calling routine includes executing the converted calling routine in response to execution of the converted binary file. 5. The computing device according to claim 1, further comprising a binary file conversion module to check, in response to the constant offset being added to the stack pointer, whether the stack pointer exceeds a pre-allocated virtual address range associated with the shadow stack. 6. The computing device according to any one of claims 1-5, further comprising: a return module to: pop the local return address from the local stack of the computing device in response to subtracting the constant offset from the stack pointer; add the constant offset to the stack pointer in response to the local return address being popped from the local stack; execute a local return instruction in response to the constant offset being added to the stack pointer, wherein the constant offset is added to the stack pointer in response to the local return address being popped from the local stack; and subtract the constant offset from the stack pointer in response to the execution of the local return instruction; wherein the processor is further configured to pop the converted return address from the shadow stack in response to the execution of the local return instruction. 7. The computing device of claim 6, further comprising: a memory management module to map a plurality of virtual memory pages of the shadow stack to a first physical memory page; wherein the return module is further to confirm the converted return address in response to the execution of the local return instruction. 8. The computing device of claim 7, wherein: popping the local return address from the local
stack includes popping the local return address into a first register of the computing device; and confirming the converted return address includes: determining a temporary local return address associated with the converted return address; determining whether the temporary local return address matches the first register of the computing device; in response to a determination that the temporary local return address does not match the first register, determining a corrected converted return address based on the contents of the first register; and in response to the determination of the corrected converted return address, jumping to the corrected converted return address. 9. The computing device of claim 8, wherein determining the corrected converted return address comprises: determining whether the converted binary file includes a converted return address for the local return address represented by the contents of the temporary storage register; and in response to a determination that the converted binary file does not include a converted return address for the local return address represented by the contents of the temporary storage register, generating, according to the local binary file, the converted binary file including the converted return address. 10. The computing device of claim 6, further comprising: a binary file conversion module to execute a converted return routine of the converted binary file, wherein the converted return routine corresponds to a local return instruction of the local binary file; wherein popping the local return address includes popping the local return address in response to execution of the converted return routine. 11. The computing device of claim 10, further comprising: a binary file conversion module to: (i) generate the converted binary file according to the local binary file, wherein the converted binary file includes the converted return routine, and (ii) execute the converted binary file; wherein executing the converted return routine includes executing the converted return routine in response to execution of the converted binary file. 12. A method for shadow stack management, the method comprising: pushing, by a computing device, a local return address onto a local stack of the computing device; adding, by the computing device, a constant offset to a stack pointer of the computing device in response to pushing the local return address onto the local stack; executing, by the computing device, a local call instruction to a converted call target in response to adding the constant offset to the stack pointer, wherein executing the local call instruction includes pushing a converted return address onto a shadow stack of the computing device; and subtracting, by the computing device, the constant offset from the stack pointer in response to executing the local call instruction. 13. The method according to claim 12, further comprising: mapping, by the computing device, multiple virtual memory pages of the shadow stack to a first physical memory page. 14. The method according to claim 12, further comprising: executing, by the computing device, a converted calling routine of a converted binary file, wherein the converted calling routine corresponds to a local call instruction of a local binary file, and the converted call target corresponds to the local call target of the local call instruction; wherein pushing the local return address includes pushing the local return address in response to executing
the converted calling routine. 15. The method according to claim 14, further comprising: generating, by the computing device, the converted binary file according to the local binary file, wherein the converted binary file includes the converted calling routine; and executing, by the computing device, the converted binary file; wherein executing the converted calling routine includes executing the converted calling routine in response to executing the converted binary file. 16. The method of claim 12, further comprising: checking, by the computing device, whether the stack pointer exceeds a pre-allocated virtual address range associated with the shadow stack in response to adding the constant offset to the stack pointer. 17. The method according to claim 12, further comprising: popping, by the computing device, the local return address from the local stack of the computing device in response to subtracting the constant offset from the stack pointer; adding, by the computing device, the constant offset to the stack pointer in response to popping the local return address from the local stack; executing, by the computing device, a local return instruction in response to adding the constant offset to the stack pointer, wherein the constant offset is added to the stack pointer in response to popping the local return address from the local stack, and wherein executing the local return instruction includes popping the converted return address from the shadow stack; and subtracting, by the computing device, the constant offset from the stack pointer in response to executing the local return instruction. 18. The method according to claim 17, further comprising: mapping, by the computing device, a plurality of virtual memory pages of the shadow stack to a first physical memory page; and confirming, by the computing device, the converted return address in response to executing the local return instruction. 19. The method of claim 18, wherein: popping the local return address from the local stack includes popping the local return address into a first register of the computing device; and confirming the converted return address includes: determining a temporary local return address associated with the converted return address; determining whether the temporary local return address matches the first register of the computing device; in response to determining that the temporary local return address does not match the first register, determining a corrected converted return address based on the contents of the first register; and in response to determining the corrected converted return address, jumping to the corrected converted return address. 20. The method of claim 19, wherein determining the corrected converted return address comprises: determining whether the converted binary file includes a converted return address for the local return address represented by the contents of the temporary storage register; and in response to determining that the converted binary file does not include a converted return address for the local return address represented by the contents of the temporary storage register, generating, according to the local binary file,
the converted binary file including the converted return address. 21. The method according to claim 17, further comprising: executing, by the computing device, a converted return routine of the converted binary file, wherein the converted return routine corresponds to a local return instruction of the local binary file; wherein popping the local return address includes popping the local return address in response to executing the converted return routine. 22. The method according to claim 21, further comprising: generating, by the computing device, the converted binary file according to the local binary file, wherein the converted binary file includes the converted return routine; and executing, by the computing device, the converted binary file; wherein executing the converted return routine includes executing the converted return routine in response to executing the converted binary file. 23. A computing device comprising: a processor; and a memory in which a plurality of instructions are stored, the instructions, when executed by the processor, causing the computing device to perform the method according to any one of claims 12-22. 24. One or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a computing device to perform the method according to any one of claims 12-22. 25. A computing device comprising a module for executing the method according to any one of claims 12-22.
Shadow stack manipulation technology for binary file conversion systems. This application is a divisional application of the patent application of the same name with application number 201680030120.3, filed on May 24, 2016.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to U.S. utility patent application serial number 14/748,363, filed on June 24, 2015, entitled "TECHNOLOGIES FOR SHADOW STACK MANIPULATION FOR BINARY TRANSLATION SYSTEMS".
BACKGROUND
A typical computing device supports the execution of binary code including instructions for a specific instruction set architecture (ISA). A binary file conversion system generates a converted binary file based on an original or local binary file. Binary file conversion can be used to execute ISA-specific binary files on computing devices that support different ISAs without recompiling the original binary files. Additionally or alternatively, binary file conversion can be used to take advantage of new instructions or other features that are supported by a specific computing device but not included in the original binary file, to improve performance through dynamic optimization, to enforce security policies, or for other purposes. Most processors support local call and return instructions, which are used to perform subroutine calls and returns and are very common in compiled binary files. Many processors include dedicated hardware for optimizing calls and returns, for example, stack-based return prediction hardware (e.g., a return stack buffer). Many binary file conversion systems cannot directly use local call and return instructions without breaking compatibility, and therefore emulate call and return instructions with jump instructions. However, the use of jump instructions may not take advantage of the optimized call/return hardware of the processor. To allow the use of local call and return instructions, some binary file conversion systems maintain a shadow stack in memory. However, a typical shadow stack implementation requires several expensive memory load and/or store instructions to switch between the local stack and the shadow stack. For example, a typical implementation of a converted calling routine can perform four load/store operations: store the value of the stack pointer to the local stack save area, load the value of the stack pointer from the shadow stack save area, execute the call instruction, store the new value of the stack pointer to the shadow stack save area, and load the value of the stack pointer from the local stack save area.
BRIEF DESCRIPTION OF THE DRAWINGS
The concepts described herein are illustrated in the drawings by way of example and not by way of limitation. For simplicity and clarity of explanation, the elements shown in the figures are not necessarily drawn to scale. Where deemed appropriate, reference numerals are repeated among the figures to indicate corresponding or similar elements. FIG. 1 is a simplified block diagram of at least one embodiment of a computing device for shadow stack manipulation; FIG. 2 is a simplified block diagram of at least one embodiment of an environment that can be established by the computing device of FIG. 1; FIG. 3 is a simplified flowchart of at least one embodiment of a method for shadow stack manipulation that can be performed by the computing device of FIGS. 1 and 2; FIG. 4 is a schematic diagram showing a memory management layout that can be established by the computing devices of FIGS. 1 and 2; and FIG.
5 is a simplified flowchart of at least one embodiment of a method for converted return address confirmation that can be performed by the computing device of FIGS. 1 and 2.
DETAILED DESCRIPTION
While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail herein. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims. References in the specification to "one embodiment", "an embodiment", "an illustrative embodiment", etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of those skilled in the art to implement such a feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one of A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a machine-readable form (e.g., a volatile or non-volatile memory, a media disc, or another media device). In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be understood that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative drawings. Additionally, the inclusion of a structural or method feature in a particular drawing is not meant to imply that such a feature is required in all embodiments, and in some embodiments such a feature may not be included or may be combined with other features. Referring now to FIG. 1, in an illustrative embodiment, a computing device 100 for shadow stack manipulation includes a binary file conversion system. In use, as described in more detail below, the computing device 100 generates and executes a converted binary file based on a local binary file.
The local binary file includes one or more call and/or return instructions, and the converted binary file includes corresponding converted calling routines and converted return routines, respectively. The computing device 100 executes the converted calls and returns using local call and return instructions that reference a shadow stack in virtual memory. The shadow stack is located in virtual memory at a constant offset from the local stack of the computing device 100. In some embodiments, the computing device 100 may map the virtual memory pages of the shadow stack to a reduced number of physical pages. The computing device 100 can improve the performance of calling and returning routines in the binary file conversion system by avoiding the execution of several memory load and store instructions. In addition, the computing device 100 can reduce memory consumption by mapping the shadow stack to a reduced number of physical memory pages. Mapping the shadow stack to a reduced number of physical pages can also improve binary file conversion performance by improving the cache hit rate of shadow stack memory references. The computing device 100 may be embodied as any type of computing or computer device capable of performing the functions described herein, including but not limited to a computer, a desktop computer, a workstation, a laptop computer, a notebook computer, a tablet computer, a mobile computing device, a wearable computing device, a network device, a web device, a distributed computing system, a processor-based system, and/or a consumer electronics device. As shown in FIG. 1, the computing device 100 illustratively includes a processor 120, an input/output subsystem 122, a memory 124, a data storage device 126, and a communication circuit 128. Of course, in other embodiments, the computing device 100 may include other or additional components, such as those commonly found in desktop computers (e.g., various input/output devices). Additionally, in some embodiments, one or more of the illustrative components may be incorporated into, or otherwise form a part of, another component. For example, in some embodiments, the memory 124, or portions thereof, may be incorporated into the processor 120. The processor 120 may be embodied as any type of processor capable of performing the functions described herein. The processor 120 may be embodied as a single-core or multi-core processor(s), a digital signal processor, a microcontroller, or another processor or processing/control circuit. Similarly, the memory 124 may be embodied as any type of volatile or non-volatile memory or data storage device capable of performing the functions described herein. In operation, the memory 124 may store various data and software used during the operation of the computing device 100, for example, an operating system, applications, programs, libraries, and drivers. The memory 124 is communicatively coupled to the processor 120 via the I/O subsystem 122, which may be embodied as circuitry and/or components to facilitate input/output operations with respect to the processor 120, the memory 124, and other components of the computing device 100. For example, the I/O subsystem 122 may be embodied as, or otherwise include, a memory controller hub, an input/output control hub, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems that facilitate the input/output operations.
In some embodiments, the I/O subsystem 122 may form part of a system on chip (SoC) and be incorporated into a single integrated circuit chip along with the processor 120, memory 124, and other components of the computing device 100.The data storage device 126 may be embodied as any type of device or multiple devices configured for short-term or long-term storage of data, for example, memory devices and circuits, memory cards, hard disk drives, solid state drives, or other data storage devices. The data storage device 126 may store binary executable files, local binary files, or other binary data used to encode computer programs.The communication circuit 128 of the computing device 100 may be embodied as any communication circuit, device, or collection thereof that can realize communication between the computing device 100 and other remote devices through a network. The communication circuit 128 may be configured to use any one or more communication technologies (for example, wired or wireless communication) and associated protocols (for example, Ethernet, WiMAX, etc.) to achieve such communication.In some embodiments, the computing device 100 may also include one or more peripheral devices 130. The peripheral device 130 may include any number of additional input/output devices, interface devices, and/or other peripheral devices. For example, the peripheral device 130 may include typical input/output devices, such as a display, a keyboard, a mouse, a touch screen, and/or other peripheral devices.Referring now to FIG. 2, in an illustrative embodiment, the computing device 100 establishes an environment 200 during operation. The illustrative environment 200 includes a binary file conversion module 202, a call module 204, a return module 206, and a memory management module 208. The various modules of the environment 200 may be embodied as hardware, firmware, software, or a combination thereof. For example, various modules, logic, and other components of the environment 200 may form part of the processor 120 or other hardware components of the computing device 100 or be established by the processor 120 or other hardware components of the computing device 100 in other ways. Therefore, in some embodiments, any one or more of the modules of the environment 200 may be embodied as a circuit or collection of electronic devices (for example, a binary file conversion circuit, a calling circuit, etc.).The binary file conversion module 202 is configured to process the local binary file 210 and generate and execute the converted binary file 212 based on the local binary file 210. The converted binary file 212 may include one or more converted call routines and converted return routines, which correspond to the local call instructions and the local return instructions of the local binary file 210, respectively. Each converted call routine is associated with a converted call target in the converted binary file 212, and each converted call target corresponds to a local call target of a corresponding local call instruction. In some embodiments, as described further below, the binary file conversion module 202 may be configured to check whether the stack pointer of the computing device 100 exceeds the pre-allocated virtual address range associated with the shadow stack.The calling module 204 is configured to execute a calling routine of the converted binary file 212. 
In particular, the calling module 204 is configured to push the local return address onto the local stack of the computing device 100, add a constant offset to the stack pointer of the computing device 100 in response to pushing the local return address onto the local stack, and execute a local call instruction to the converted call target in response to the constant offset being added to the stack pointer. The stack pointer may be embodied as a register defined by the architecture of the processor 120, for example, RSP or ESP. Executing the local call instruction causes the processor 120 to push the converted return address onto the shadow stack of the computing device 100. The calling module 204 is also configured to subtract the constant offset from the stack pointer in response to the execution of the local call instruction. The return module 206 is configured to execute a return routine of the converted binary file 212. In particular, the return module 206 is configured to pop the local return address from the local stack, add the constant offset to the stack pointer in response to popping the local return address from the local stack, and execute a local return instruction in response to the constant offset being added to the stack pointer. Executing the local return instruction causes the processor 120 to pop the converted return address from the shadow stack and jump to the converted return address. The return module 206 is also configured to subtract the constant offset from the stack pointer in response to the execution of the local return instruction. In addition, in some embodiments, the return module 206 may be configured to confirm the converted return address in response to the execution of the local return instruction. Confirming the converted return address verifies that the converted return address corresponds to the local return address previously popped from the local stack. The memory management module 208 is configured to map multiple virtual memory pages of the shadow stack to a smaller number of physical memory pages. For example, all virtual memory pages of the shadow stack can be mapped to a single physical memory page. Conflicts between shadow stack entries can be detected and corrected by confirming the converted return address as described above. Referring now to FIG. 3, in use, the computing device 100 may perform a method 300 for shadow stack manipulation. The method 300 begins in block 302, in which the computing device 100 may map a series of multiple virtual memory pages of the shadow stack to a reduced number of physical pages. For example, the computing device 100 may map all virtual pages associated with the shadow stack to a single physical page; one possible realization of such a mapping is sketched below. By mapping multiple virtual pages to a single physical page, the computing device 100 can reduce the amount of physical memory 124 required to store the shadow stack. Of course, mapping multiple virtual pages to a single physical page introduces the risk of conflict; that is, the risk that multiple entries in the shadow stack will occupy the same physical memory location. However, the shadow stack is usually sparsely filled with data, and the risk of collisions may be low. A conflict can be detected and/or corrected through the return address confirmation process described below in conjunction with block 326 of FIG. 3 and with FIG. 5.
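One way to realize the block 302 mapping on a POSIX system (an editor's sketch, not text from the source; it assumes Linux's memfd_create(2), and the sixteen-page span and region placement are arbitrary) is to back every virtual page of a shadow stack region with the same physical page of a shared anonymous file:

    /* map_shadow.c - sketch: back 16 virtual pages with one physical page.
     * Assumes Linux memfd_create(2); build with: gcc map_shadow.c -o map_shadow
     */
    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);
        size_t npages = 16;                       /* assumed shadow stack span */
        int fd = memfd_create("shadow", 0);       /* one page of shared backing */
        if (fd < 0 || ftruncate(fd, page) != 0)
            return 1;
        /* Reserve a contiguous virtual range for the shadow stack... */
        uint8_t *base = mmap(NULL, npages * page, PROT_NONE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED)
            return 1;
        /* ...then alias every virtual page to file offset 0, i.e., to the
         * same physical page. */
        for (size_t i = 0; i < npages; i++)
            if (mmap(base + i * page, page, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_FIXED, fd, 0) == MAP_FAILED)
                return 1;
        strcpy((char *)base, "aliased");
        /* A store through one virtual page is visible through all of them. */
        printf("%s == %s\n", (char *)base, (char *)(base + 7 * page));
        return 0;
    }

A store through any one of the aliased pages is visible through all of them; this is exactly the aliasing, and hence the conflict risk, that the confirmation process of FIG. 5 guards against.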
In addition, although described as mapping virtual memory pages as part of the method 300, it should be understood that the computing device 100 may map virtual memory pages at another time or as part of another process. For example, the virtual memory pages may be mapped by the operating system of the computing device 100 in response to a page fault or at any other suitable time. The computing device 100 may map the virtual memory pages before storing any data in the shadow stack to prevent potential data loss. In block 304, the computing device 100 executes converted code from the converted binary file 212. As described above, the computing device 100 may convert part or all of the local binary file 210 into the converted binary file 212, and then execute the code from the converted binary file 212. The converted code may include binary code suitable for execution on the processor 120, for example, binary code suitable for the particular processor architecture of the processor 120, or binary code that uses dedicated processor instructions or other features supported by the processor 120. In block 306, the computing device 100 determines whether a converted call operation is being performed. The computing device 100 may use any method to determine whether a call operation is being performed. For example, in some embodiments, the computing device 100 may determine at conversion time that a call operation should be performed, and then include the calling routine or other instructions in the converted binary file 212 at the location of the call operation. In some embodiments, the computing device 100 may dynamically detect calling routines. If a call operation is not being performed, the method 300 jumps forward to block 316, which is described below. If a call operation is being performed, the method 300 proceeds to block 308. In block 308, the computing device 100 pushes the local return address for the converted call operation onto the local stack of the computing device 100. The local return address is the return address that the corresponding call instruction of the local binary file 210 would push onto the local stack. For example, the return address may be the address of the next instruction after the call instruction in the local binary file 210 (for example, the next sequential value of the instruction pointer register of the processor 120). The computing device 100 may determine the local return address at conversion time. The computing device 100 may push the local return address onto the local stack by writing the value of the local return address into memory at the memory location identified by the stack pointer register of the processor 120, for example, by executing a PUSH instruction with the processor 120. In block 310, the computing device 100 adds a constant offset to the stack pointer register of the processor 120 (e.g., RSP or ESP). After adding the constant offset, the stack pointer register points to the location in memory corresponding to the shadow stack. The constant offset can be embodied as any constant integer value representing the distance between the local stack and the shadow stack in virtual memory, and may be chosen based on the virtual memory layout used by the operating system, applications, or other executable code of the computing device 100. The computing device 100 can use an arithmetic instruction to add the constant offset to the stack pointer without requiring additional memory loads or stores (for example, by using an ADD instruction that includes the constant offset as an immediate value). In some embodiments, the computing device 100 may perform a stack boundary check operation to ensure that the new value of the stack pointer does not exceed the pre-allocated virtual address range of the shadow stack; one form such a check might take is sketched below.
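A minimal sketch of that boundary check (an editor's illustration; the structure and names are assumptions, and a translator would typically inline the equivalent compare-and-branch rather than call a helper):

    #include <stdbool.h>
    #include <stdint.h>

    /* Pre-allocated virtual address range of the shadow stack. */
    typedef struct {
        uintptr_t shadow_lo;   /* lowest shadow stack address         */
        uintptr_t shadow_hi;   /* one past the highest shadow address */
    } shadow_range;

    /* After the constant offset is added to the stack pointer (block 310,
     * and again in the return routine described below), verify that the
     * adjusted pointer still falls within the shadow stack range. */
    static bool shadow_sp_in_range(uintptr_t sp, uintptr_t constant_offset,
                                   const shadow_range *range)
    {
        uintptr_t shadow_sp = sp + constant_offset;
        return shadow_sp >= range->shadow_lo && shadow_sp < range->shadow_hi;
    }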
In block 312, the computing device 100 executes a local call instruction on the address of the converted call target. Because the stack pointer register of the processor 120 has been updated to point to the shadow stack, the execution of the local call instruction causes the processor 120 to push the converted return address onto the shadow stack. The converted return address corresponds to the next instruction after the local call instruction in the converted binary file 212 (for example, the next sequential value of the instruction pointer register of the processor 120). After executing the call instruction, the processor 120 continues executing the method 300 from the converted call target, and in block 314 the computing device 100 subtracts the constant offset from the stack pointer register (e.g., RSP or ESP). Therefore, after subtracting the constant offset, the stack pointer register points to the local stack of the computing device 100. The computing device 100 can use an arithmetic instruction to subtract the constant offset from the stack pointer without requiring additional memory loads or stores (for example, by using a SUB instruction that includes the constant offset as an immediate value). After restoring the local stack, the method 300 proceeds to block 316, where the computing device 100 may continue to execute the converted binary file 212, as described further below. Referring now to FIG. 4, a schematic diagram 400 shows one potential embodiment of a memory management layout that can be established by the computing device 100. As shown, the computing device 100 establishes a virtual memory space 402 and a physical memory space 404. The virtual memory space 402 includes a local stack 406. As shown, the local stack 406 includes a number of virtual pages 408. In use, the stack pointer register of the processor 120 may contain the address 410 of the top of the local stack 406. The computing device 100 maintains a set of page maps 412 to map memory pages between the virtual memory space 402 and the physical memory space 404. The page maps 412 may be embodied as, for example, page table entries in a page table maintained by the operating system of the computing device 100. As shown, each of the virtual pages 408 of the local stack 406 is mapped to a physical page 414 in the physical memory space 404. The local stack 406 may therefore occupy the same amount of memory in both the virtual memory space 402 and the physical memory space 404. As shown in FIG. 4, the virtual memory space 402 also includes a shadow stack 416. The shadow stack 416 is located in the virtual memory space 402 at a constant offset 418 from the local stack 406. Therefore, adding the constant offset 418 to the address 410 at the top of the local stack 406 results in the address 420 at the top of the shadow stack 416, and the shadow stack 416 can occupy the same amount of virtual memory space as the local stack 406. The call and return sequences that exercise this layout are sketched below.
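As an editor's illustration of the pointer arithmetic just described (not text from the source), the following compile-only C file sketches the x86-64 sequences a translator might emit for blocks 308-314 and, looking ahead, for the return routine of blocks 318-324 described below. The 0x200000 offset and the 0x401000 local return address are assumed placeholder values; a real system would first map the shadow stack pages and choose the offset to match its virtual memory layout.

    /* emit_sketch.c - compile-only sketch (gcc -c emit_sketch.c) of the
     * instruction sequences a binary translator might emit (AT&T syntax).
     */
    __asm__ (
        ".text\n"
        "converted_call_site:\n"
        "    pushq $0x401000\n"              /* block 308: push (assumed) local return address onto local stack */
        "    addq  $0x200000, %rsp\n"        /* block 310: stack pointer now addresses the shadow stack */
        "    call  converted_call_target\n"  /* block 312: CPU pushes converted return address onto shadow stack */
        "    subq  $0x200000, %rsp\n"        /* block 324: runs when the callee's RET lands here; back on local stack */
        /* ...caller's converted code continues here... */
        "converted_call_target:\n"
        "    subq  $0x200000, %rsp\n"        /* block 314: callee immediately restores the local stack */
        /* ...converted callee body elided; it ends with the return routine: */
        "converted_return_site:\n"
        "    popq  %r11\n"                   /* block 318: local return address -> temporary register for confirmation */
        "    addq  $0x200000, %rsp\n"        /* block 320: switch to the shadow stack again */
        "    ret\n"                          /* block 322: pops converted return address from shadow stack and jumps */
    );

Note that the constant offset is subtracted on two distinct paths: once at the converted call target (block 314) and once at the converted return address following the call (block 324), so execution is back on the local stack on both sides of every converted call, with no memory loads or stores beyond the push, pop, call, and return themselves.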
Illustratively, each of the virtual pages 408 of the shadow stack 416 maps to a single physical page 414 in the physical memory space 404. Therefore, compared to the virtual memory space 402, the shadow stack 416 occupies less memory in the physical memory space 404. Referring again to FIG. 3, in block 316, the computing device 100 determines whether a converted return operation is being performed. The computing device 100 may use any method to determine whether a return operation is being performed. For example, in some embodiments, the computing device 100 may determine at conversion time that a return operation should be performed, and then include the return routine or other instructions in the converted binary file 212 at the location of the return operation. In some embodiments, the computing device 100 may dynamically detect return routines. If a return operation is not being performed, the method 300 loops back to block 304 to continue executing the converted binary file 212. If a return operation is being performed, the method 300 proceeds to block 318. In block 318, the computing device 100 pops the local return address from the local stack into a temporary storage register of the processor 120. As described above in connection with block 308, the local return address may have been previously pushed onto the local stack by the converted calling routine. The computing device 100 may pop the local return address from the local stack by reading the value of the local return address from memory at the memory location identified by the stack pointer register of the processor 120 (for example, by executing a POP instruction with the processor 120). The temporary storage register may be embodied as any temporary storage location accessible to the processor 120. To improve performance, the contents of the temporary storage register may be accessible without performing additional memory loads and/or stores. In block 320, the computing device 100 adds the constant offset to the stack pointer register of the processor 120 (e.g., RSP or ESP). The computing device 100 adds the same constant offset as described above in conjunction with block 310. Therefore, after adding the constant offset, the stack pointer points to the location in memory corresponding to the shadow stack. The computing device 100 can use an arithmetic instruction to add the constant offset to the stack pointer without requiring additional memory loads or stores (for example, by using an ADD instruction that includes the constant offset as an immediate value). In block 322, the computing device 100 executes a local return instruction. Because the stack pointer register of the processor 120 has been updated to point to the shadow stack, the execution of the local return instruction causes the processor 120 to pop the converted return address from the shadow stack. After popping the converted return address, executing the local return instruction causes the processor 120 to jump to the converted return address. After executing the local return instruction, the processor 120 continues executing the method 300 in block 324, in which the computing device 100 subtracts the constant offset from the stack pointer register (e.g., RSP or ESP). Therefore, after subtracting the constant offset, the stack pointer register points to the local stack of the computing device 100.
The computing device 100 can use an arithmetic instruction to subtract the constant offset from the stack pointer without requiring additional memory loads or stores (for example, by using a SUB instruction that includes the constant offset as an immediate value). In block 326, the computing device 100 confirms the converted return address. As described above, in some embodiments, mapping multiple virtual pages of the shadow stack to a single physical page may cause conflicts between shadow stack entries. If there is a conflict, execution of the local return instruction may cause the computing device 100 to jump to an incorrect converted return address. The converted return address is confirmed to determine whether it matches the local return address popped from the local stack and stored in the temporary storage register as described above in conjunction with block 318. If the converted return address does not match, the computing device 100 jumps to the correct converted return address. The computing device 100 may use any suitable return target confirmation mechanism provided by the binary file conversion system. A potential embodiment of a method for return target confirmation is described below in conjunction with FIG. 5. As another example, the computing device 100 may use the conversion-time branch target confirmation techniques described in International Patent Application Publication No. WO 2014/189510 A1. After confirming the converted return address, the method 300 loops back to block 304 to continue executing the converted binary file 212. Referring now to FIG. 5, in use, the computing device 100 may execute a method 500 for converted return address confirmation. The method 500 begins in block 502, in which the computing device 100 determines a temporary local return address associated with the current converted return address. The current converted return address corresponds to the return address popped from the shadow stack as described above in connection with block 322 of FIG. 3. For example, the current converted return address may be determined from the contents of the instruction pointer register of the processor 120. The temporary local return address is the address in the local binary file 210 corresponding to the converted return address. The relationship between the converted return address and the local return address may be determined by the computing device 100 at conversion time. In block 504, the computing device 100 compares the temporary local return address with the contents of the temporary storage register. As described above in connection with block 318 of FIG. 3, at the beginning of the converted return operation, the temporary storage register stores the data popped from the local stack. In block 506, the computing device 100 determines whether the temporary local return address matches the contents of the temporary storage register. If so, the converted return address has been successfully confirmed, and the method 500 is completed. As described above in conjunction with FIG. 3, the computing device 100 may continue to execute the converted binary file 212 starting from the converted return address. If the temporary local return address does not match the contents of the temporary storage register, the method 500 proceeds to block 508. In block 508, the computing device 100 finds or creates the corrected converted return address based on the contents of the temporary storage register.
The computing device 100 uses the binary file conversion system to find, in the converted binary file 212, the converted return address corresponding to the local return address stored in the temporary storage register. If no such converted return address exists, the computing device 100 can generate appropriate converted code in the converted binary file 212. In block 510, the computing device 100 jumps to the corrected converted return address determined as described above in connection with block 508. After jumping to that address, the converted return address has been successfully confirmed, and the method 500 is completed. As described above in connection with FIG. 3, the computing device 100 may continue to execute the converted binary file 212 starting from the corrected converted return address.
EXAMPLES
Illustrative examples of the technology disclosed herein are provided below. Embodiments of the technology may include any one or more of the examples described below, and any combination thereof. Example 1 includes a computing device for shadow stack management. The computing device includes: a call module to: push a local return address onto a local stack of the computing device; add a constant offset to a stack pointer of the computing device in response to pushing the local return address onto the local stack; execute a local call instruction to a converted call target in response to the constant offset being added to the stack pointer; and subtract the constant offset from the stack pointer in response to the execution of the local call instruction; and a processor to push a converted return address onto a shadow stack of the computing device in response to the execution of the local call instruction. Example 2 includes the subject matter of Example 1, and further includes a memory management module to map a plurality of virtual memory pages of the shadow stack to a first physical memory page. Example 3 includes the subject matter of any of Examples 1 and 2, and further includes: a binary file conversion module to execute a converted calling routine of a converted binary file, wherein the converted calling routine corresponds to a local call instruction of a local binary file, and the converted call target corresponds to the local call target of the local call instruction; wherein pushing the local return address includes pushing the local return address in response to the execution of the converted calling routine. Example 4 includes the subject matter of any of Examples 1-3, and wherein the binary file conversion module is further to: generate the converted binary file according to the local binary file, wherein the converted binary file includes the converted calling routine; and execute the converted binary file; wherein executing the converted calling routine includes executing the converted calling routine in response to the execution of the converted binary file. Example 5 includes the subject matter of any of Examples 1-4, and further includes a binary file conversion module to check whether the stack pointer exceeds the pre-allocated virtual address range associated with the shadow stack in response to the constant offset being added to the stack pointer. Example 6 includes the subject matter of any of Examples 1-5, and further includes: a return module to: pop the local return address from the local stack of the computing device in response to subtracting the constant offset from the stack pointer; in response to the local return address
being popped from the local stack, add the constant offset to the stack pointer; execute a local return instruction in response to the constant offset being added to the stack pointer, wherein the constant offset is added to the stack pointer in response to the local return address being popped from the local stack; and subtract the constant offset from the stack pointer in response to the execution of the local return instruction; wherein the processor is further to pop the converted return address from the shadow stack in response to the execution of the local return instruction. Example 7 includes the subject matter of any of Examples 1-6, and further includes: a memory management module to map a plurality of virtual memory pages of the shadow stack to a first physical memory page; wherein the return module is further to confirm the converted return address in response to the execution of the local return instruction. Example 8 includes the subject matter of any of Examples 1-7, and wherein popping the local return address from the local stack includes popping the local return address into a first register of the computing device; and confirming the converted return address includes: determining a temporary local return address associated with the converted return address; determining whether the temporary local return address matches the first register of the computing device; in response to a determination that the temporary local return address does not match the first register, determining a corrected converted return address based on the contents of the first register; and in response to the determination of the corrected converted return address, jumping to the corrected converted return address. Example 9 includes the subject matter of any of Examples 1-8, and wherein determining the corrected converted return address includes: determining whether the converted binary file includes a converted return address for the local return address represented by the contents of the temporary storage register; and in response to a determination that the converted binary file does not include such a converted return address, generating, according to the local binary file, the converted binary file including the converted return address. Example 10 includes the subject matter of any of Examples 1-9, and further includes a binary file conversion module to execute a converted return routine of the converted binary file, wherein the converted return routine corresponds to a local return instruction of the local binary file; wherein popping the local return address includes popping the local return address in response to the execution of the converted return routine. Example 11 includes the subject matter of any of Examples 1-10, and further includes a binary file conversion module to: (i) generate the converted binary file according to the local binary file, wherein the converted binary file includes the converted return routine; and (ii) execute the converted binary file; wherein executing the converted return routine includes executing the converted return routine in response to the execution of the converted binary file. Example 12 includes a method for shadow stack management.
The method includes: pushing, by a computing device, a local return address onto a local stack of the computing device; adding, by the computing device, a constant offset to a stack pointer of the computing device in response to pushing the local return address onto the local stack; executing, by the computing device, a local call instruction to a converted call target in response to adding the constant offset to the stack pointer, wherein executing the local call instruction includes pushing a converted return address onto a shadow stack of the computing device; and subtracting, by the computing device, the constant offset from the stack pointer in response to executing the local call instruction. Example 13 includes the subject matter of Example 12, and further includes mapping, by the computing device, a plurality of virtual memory pages of the shadow stack to a first physical memory page. Example 14 includes the subject matter of any of Examples 12 and 13, and further includes executing, by the computing device, a converted calling routine of a converted binary file, wherein the converted calling routine corresponds to a local call instruction of a local binary file, and the converted call target corresponds to the local call target of the local call instruction; wherein pushing the local return address includes pushing the local return address in response to executing the converted calling routine. Example 15 includes the subject matter of any of Examples 12-14, and further includes: generating, by the computing device, the converted binary file according to the local binary file, wherein the converted binary file includes the converted calling routine; and executing, by the computing device, the converted binary file; wherein executing the converted calling routine includes executing the converted calling routine in response to executing the converted binary file. Example 16 includes the subject matter of any of Examples 12-15, and further includes checking, by the computing device, whether the stack pointer exceeds the pre-allocated virtual address range associated with the shadow stack in response to adding the constant offset to the stack pointer. Example 17 includes the subject matter of any of Examples 12-16, and further includes: popping, by the computing device, the local return address from the local stack of the computing device in response to subtracting the constant offset from the stack pointer; adding, by the computing device, the constant offset to the stack pointer in response to popping the local return address from the local stack; executing, by the computing device, a local return instruction in response to adding the constant offset to the stack pointer, wherein the constant offset is added to the stack pointer in response to popping the local return address from the local stack, and wherein executing the local return instruction includes popping the converted return address from the shadow stack; and subtracting, by the computing device, the constant offset from the stack pointer in response to executing the local return instruction. Example 18 includes the subject matter of any of Examples 12-17, and further includes: mapping, by the computing device, a plurality of virtual memory pages of the shadow stack to a first physical memory page; and confirming, by the computing device, the converted return address in response to executing the local return instruction. Example 19 includes the subject matter of any of Examples 12-18, and wherein popping the local return address from the local stack includes popping the local return address into a first register of the computing
Example 23 includes a computing device including: a processor; and a memory having stored therein a plurality of instructions that, when executed by the processor, cause the computing device to perform the method of any one of Examples 12-22. Example 24 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a computing device to perform the method of any one of Examples 12-22. Example 25 includes a computing device comprising modules for performing the method of any one of Examples 12-22. Example 26 includes a computing device for shadow stack management. The computing device includes: a module for pushing a local return address onto a local stack of the computing device; a module for adding a constant offset to a stack pointer of the computing device; a module for executing a local call instruction to a converted call target in response to adding the constant offset to the stack pointer, wherein executing the local call instruction includes pushing a converted return address onto a shadow stack of the computing device; and a module for subtracting the constant offset from the stack pointer in response to executing the local call instruction. Example 27 includes the subject matter of Example 26, and further includes a module for mapping a plurality of virtual memory pages of the shadow stack to a first physical memory page. Example 28 includes the subject matter of any one of Examples 26 and 27, and further includes a module for executing a converted call routine of a converted binary file, wherein the converted call routine corresponds to a local call instruction of a local binary file, and the converted call target corresponds to a local call target of the local call instruction; wherein the module for pushing the local return address includes a module for pushing the local return address in response to execution of the converted call routine. Example 29 includes the subject matter of any one of Examples 26-28, and further includes: a module for generating the converted binary file based on the local binary file, wherein the converted binary file includes the converted call routine; and a module for executing the converted binary file; wherein the module for executing the converted call routine includes a module for executing the converted call routine in response to execution of the converted binary file. Example 30 includes the subject matter of any one of Examples 26-29, and further includes a module for checking whether the stack pointer exceeds a pre-allocated virtual address range associated with the shadow stack in response to adding the constant offset to the stack pointer. Example 31 includes the subject matter of any one of Examples 26-30, and further includes: a module for popping the local return address from the local stack of the computing device in response to subtracting the constant offset from the stack pointer; a module for adding the constant offset to the stack pointer in response to popping the local return address from the local stack; a module for executing a local return instruction in response to adding the constant offset to the stack pointer, the constant offset having been added to the stack pointer in response to popping the local return address from the local stack, wherein executing the local return instruction includes popping the converted return address from the shadow stack; and a module for subtracting the constant offset from the stack pointer in response to executing the local return instruction. Example 32 includes the subject matter of any one of Examples 26-31, and further includes: a module for mapping a plurality of virtual memory pages of the shadow stack to a first physical memory page; and a module for confirming the converted return address in response to execution of the local return instruction. Example 33 includes the subject matter of any one of Examples 26-32, and wherein the module for popping the local return address from the local stack includes a module for popping the local return address into a first register of the computing device; and the module for confirming the converted return address includes: a module for determining a temporary local return address associated with the converted return address; a module for determining whether the temporary local return address matches the first register of the computing device; a module for determining a corrected converted return address based on the contents of the first register in response to determining that the temporary local return address does not match the first register; and a module for jumping to the corrected converted return address in response to determining the corrected converted return address. Example 34 includes the subject matter of any one of Examples 26-33, and wherein the module for determining the corrected converted return address includes: a module for determining whether the converted binary file includes a converted return address for the local return address represented by the contents of the first register; and a module for generating, based on the local binary file, a converted binary file that includes that converted return address in response to determining that the converted binary file does not include a converted return address for the local return address represented by the contents of the first register. Example 35 includes the subject matter of any one of Examples 26-34, and further includes a module for executing a converted return routine of the converted binary file, wherein the converted return routine corresponds to a local return instruction of the local binary file; wherein the module for popping the local return address includes a module for popping the local return address in response to execution of the converted return routine. Example 36 includes the subject matter of any one of Examples 26-35, and further includes: a module for generating the converted binary file based on the local binary file, wherein the converted binary file includes the converted return routine; and a module for executing the converted binary file; wherein the module for executing the converted return routine includes a module for executing the converted return routine in response to execution of the converted binary file. |
Some novel features pertain to an integrated device that includes a substrate, a first interconnect coupled to the substrate, and a second interconnect surrounding the first interconnect. The second interconnect may be configured to provide an electrical connection to ground. In some implementations, the second interconnect includes a plate. In some implementations, the integrated device also includes a dielectric material between the first interconnect and the second interconnect. In some implementations, the integrated device also includes a mold surrounding the second interconnect. In some implementations, the first interconnect is configured to conduct a power signal in a first direction. In some implementations, the second interconnect is configured to conduct a grounding signal in a second direction. In some implementations, the second direction is different from the first direction. In some implementations, the integrated device may be a package-on-package (PoP) device. |
CLAIMS
1. An integrated device comprising:
a substrate;
a first interconnect coupled to the substrate; and
a second interconnect surrounding the first interconnect, the second interconnect configured to provide an electrical connection to ground.
2. The integrated device of claim 1, wherein the second interconnect comprises a plate.
3. The integrated device of claim 1, further comprising a dielectric material between the first interconnect and the second interconnect.
4. The integrated device of claim 1, further comprising a mold surrounding the second interconnect.
5. The integrated device of claim 1, wherein the first interconnect is configured to provide an electrical path for a power signal in a first direction.
6. The integrated device of claim 5, wherein the second interconnect is configured to provide an electrical path for a grounding signal in a second direction.
7. The integrated device of claim 1, wherein the first interconnect is one of at least a plated interconnect and/or a wire bond.
8. The integrated device of claim 1, wherein the integrated device comprises one of at least an interposer, a package device, and/or a package-on-package (PoP) device.
9. The integrated device of claim 1, wherein the integrated device is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer.
10. An apparatus comprising:
a substrate; and
an interconnect means coupled to the substrate, the interconnect means configured to provide an electrical connection to ground.
11. The apparatus of claim 10, wherein the interconnect means comprises:
a first interconnect; and
a second interconnect surrounding the first interconnect, wherein the second interconnect comprises a plate.
12. The apparatus of claim 11, further comprising a dielectric material between the first interconnect and the second interconnect.
13. The apparatus of claim 10, further comprising a mold surrounding the interconnect means.
14. The apparatus of claim 11, wherein the first interconnect is configured to provide an electrical path for a power signal in a first direction.
15. The apparatus of claim 14, wherein the second interconnect is configured to provide an electrical path for a grounding signal in a second direction.
16. The apparatus of claim 11, wherein the first interconnect is one of at least a plated interconnect and/or a wire bond.
17. The apparatus of claim 10, wherein the apparatus comprises one of at least an interposer, a package device, and/or a package-on-package (PoP) device.
18. The apparatus of claim 10, wherein the apparatus is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer.
19. A method of fabricating an integrated device, the method comprising:
forming a first interconnect on a substrate; and
providing a second interconnect on the substrate, the second interconnect surrounding the first interconnect and configured to provide an electrical connection to ground.
20. The method of claim 19, wherein forming the first interconnect on the substrate comprises plating the first interconnect on the substrate.
21. The method of claim 19, wherein forming the first interconnect on the substrate comprises wire-bonding the first interconnect on the substrate.
22. The method of claim 19, wherein the second interconnect comprises a plate.
23. The method of claim 19, further comprising forming a dielectric layer between the first interconnect and the second interconnect.
24. The method of claim 19, further comprising forming a mold surrounding the second interconnect.
25. The method of claim 19, wherein the first interconnect is configured to provide an electrical path for a power signal in a first direction.
26. The method of claim 25, wherein the second interconnect is configured to provide an electrical path for a grounding signal in a second direction.
27. The method of claim 26, wherein the second direction is different from the first direction.
28. The method of claim 19, wherein the integrated device comprises one of at least an interposer, a package device, and/or a package-on-package (PoP) device.
29. The method of claim 19, wherein the integrated device is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer. |
INTEGRATED DEVICE COMPRISING COAXIAL INTERCONNECT
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to and the benefit of U.S. Non-Provisional Application No. 14/329,646, filed in the U.S. Patent Office on July 11, 2014, the entire content of which is incorporated herein by reference.
BACKGROUND
Field
[0002] Various features relate generally to an integrated device and, more specifically, to an integrated device including an interconnect surrounding another interconnect and providing a connection to ground.
Background
[0003] FIG. 1 illustrates a first cross-sectional view of a conventional integrated device 100 (e.g., a package-on-package (PoP) integrated device). The conventional integrated device 100 includes a first package 102 and a second package 104. The first package 102 may include a first substrate 106, a first die 108, a first set of solder balls 110, and a first set of interconnects 112. The first set of solder balls 110 may electrically connect the first substrate 106 with the first die 108. The first substrate 106 may include electrical interconnects 114 and dielectric layers 116. The electrical interconnects 114 may traverse horizontally and/or vertically throughout the first substrate 106 to electrically connect various components contacting the first substrate 106. For example, the electrical interconnects 114 may electrically connect one or more solder balls 110 with one or more interconnects 112. The electrical interconnects 114 may be (at least) partially surrounded by the dielectric layers 116.
[0004] The second package 104 may include a second substrate 118, a second die 120, and a second set of solder balls 122. The second set of solder balls 122 may electrically connect the second substrate 118 with the second die 120. The second substrate 118 may include electrical interconnects 124 and dielectric layers 128. A mold 124 may exist in any portion of the space between the first substrate 106 and the second substrate 118. For example, the mold 124 may encapsulate (at least) a portion of the first set of interconnects 112, the first set of solder balls 110, and/or the first die 108.
[0005] The first set of interconnects 112 may electrically connect the first substrate 106 with the second substrate 118. Each interconnect 112 may carry a power signal or a ground signal (e.g., a signal connected to ground).
[0006] FIG. 2 is a second cross-sectional view of the conventional integrated device 100. The second cross-sectional view illustrated in FIG. 2 is taken along line 126 in FIG. 1. As illustrated in FIG. 2, a number (e.g., eight) of interconnects (e.g., interconnects 112-1 to 112-8) may electrically connect the first substrate 106 with the second substrate 118. However, such designs have limitations. Any two interconnects carrying a power signal must be separated by at least one interconnect carrying a ground signal; otherwise, the power signals may interfere with each other, thereby causing unacceptable levels of insertion loss and/or inadequate isolation. Of the eight interconnects 112-1 to 112-8 shown in FIG. 2, four alternating interconnects (e.g., interconnects 112-1, 112-3, 112-5, 112-7) may carry a power signal while the other four alternating interconnects (e.g., interconnects 112-2, 112-4, 112-6, 112-8) may carry a ground signal. Such designs do not allow for a power signal to be transmitted through every interconnect 112 (e.g., all of the interconnects 112-1 to 112-8).
For example, if more than four power signal connections are needed between the first substrate 106 and the second substrate 118, additional interconnects 112 must be added (beyond the eight interconnects 112-1 to 112-8 already illustrated in FIG. 2). Additional interconnects would undesirably expand the size of the overall conventional integrated device 100. Therefore, existing designs may benefit from enhancements that allow power signals to be conducted through every interconnect while maintaining acceptable levels of isolation and/or insertion loss.
SUMMARY
[0007] The following presents a simplified summary of one or more examples and/or aspects of the present disclosure, in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated features of the disclosure, and is intended neither to identify key or critical elements of all aspects of the disclosure nor to delineate the scope of any or all aspects of the disclosure. Its sole purpose is to present some concepts of one or more aspects of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.
[0008] Various features, apparatus and methods described herein provide an integrated device that includes a substrate, a first interconnect coupled to the substrate, and a second interconnect surrounding the first interconnect and configured to provide an electrical connection to ground.
[0009] A first example provides an integrated device that includes a substrate, a first interconnect coupled to the substrate, and a second interconnect surrounding the first interconnect and configured to provide an electrical connection to ground. According to some aspects, the second interconnect includes a plate. According to some aspects, the integrated device includes a dielectric material between the first interconnect and the second interconnect. According to some aspects, a mold surrounds the second interconnect. According to some aspects, the first interconnect is configured to conduct a power signal in a first direction. According to some aspects, the second interconnect is configured to conduct a grounding signal in a second direction. According to some aspects, the second direction is different from the first direction. According to some aspects, the integrated device includes one of at least an interposer, a package device, and/or a PoP device. In some aspects, the integrated device is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer.
[0010] A second example provides an apparatus that includes a substrate, a first interconnect coupled to the substrate, and a second interconnect surrounding the first interconnect and configured to provide an electrical connection to ground. According to some aspects, the second interconnect includes a plate. According to some aspects, the apparatus includes a dielectric material between the first interconnect and the second interconnect. According to some aspects, a mold surrounds the second interconnect. According to some aspects, the first interconnect is configured to conduct a power signal in a first direction. According to some aspects, the second interconnect is configured to conduct a grounding signal in a second direction.
According to some aspects, the second direction is different from the first direction. According to some aspects, the apparatus includes one of at least an interposer, a package device, and/or a PoP device. In some aspects, the apparatus is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer.
[0011] A third example provides a method that includes providing a first interconnect above a substrate, and providing a second interconnect above the substrate, wherein the second interconnect surrounds the first interconnect and is configured to provide an electrical connection to ground. According to some aspects, the second interconnect includes a plate. According to some aspects, the integrated device includes a dielectric material between the first interconnect and the second interconnect. According to some aspects, a mold surrounds the second interconnect. According to some aspects, the first interconnect is configured to conduct a power signal in a first direction. According to some aspects, the second interconnect is configured to conduct a grounding signal in a second direction. According to some aspects, the second direction is different from the first direction. According to some aspects, the integrated device includes one of at least an interposer, a package device, and/or a PoP device. In some aspects, the integrated device is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer.
[0012] These and other examples and/or aspects of the disclosure will become more fully understood upon a review of the detailed description, which follows. Other aspects, features, and embodiments of the present disclosure will become apparent to those of ordinary skill in the art upon reviewing the following description of specific, exemplary embodiments of the present disclosure in conjunction with the accompanying figures.
DRAWINGS
[0013] Various features, nature and advantages may become apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout.
[0014] FIG. 1 illustrates a first cross-sectional view of a conventional integrated device.
[0015] FIG. 2 illustrates a second cross-sectional view of the conventional integrated device.
[0016] FIG. 3 illustrates a cross-sectional view of a first exemplary integrated device.
[0017] FIG. 4 illustrates a side perspective view of exemplary coaxial connections in the first exemplary integrated device.
[0018] FIG. 5 illustrates a cross-sectional view of a second exemplary integrated device.
[0019] FIG. 6 illustrates a side perspective view of exemplary coaxial connections in the second exemplary integrated device.
[0020] FIGs. 7A-7D illustrate various aspects of an exemplary coaxial connection.
[0021] FIG. 8 illustrates a first exemplary sequence for providing / fabricating the exemplary coaxial connections in the first exemplary integrated device.
[0022] FIG. 9 illustrates an exemplary sequence for providing / fabricating the first exemplary integrated device.
[0023] FIG. 10 illustrates a top cross-sectional view of the first exemplary integrated device.
[0024] FIG. 11 illustrates a top cross-sectional view of the second exemplary integrated device.
[0025] FIG. 12 illustrates an exemplary sequence for providing / fabricating exemplary coaxial connections in a third exemplary integrated device.
[0026] FIG. 13 illustrates an exemplary sequence for providing / fabricating the third exemplary integrated device.
[0027] FIG. 14 illustrates a top cross-sectional view of the third exemplary integrated device.
[0028] FIG. 15 illustrates a top cross-sectional perspective view of a fourth exemplary integrated device.
[0029] FIG. 16 illustrates an exemplary flow diagram of a method for providing / fabricating an integrated device.
[0030] FIG. 17 illustrates various electronic devices that may integrate an integrated device, a semiconductor device, a die, an integrated circuit, and/or a printed circuit board (PCB) described herein.
DETAILED DESCRIPTION
[0031] In the following description, specific details are given to provide a thorough understanding of the various aspects of the disclosure. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For example, circuits may be shown in block diagrams in order to avoid obscuring the aspects in unnecessary detail. In other instances, well-known circuits, structures and techniques may not be shown in detail in order not to obscure the aspects of the disclosure.
Overview
[0032] Some novel features pertain to an integrated device (e.g., a package-on-package (PoP) integrated device) that includes a substrate, a first interconnect coupled to the substrate, and a second interconnect surrounding the first interconnect and configured to provide an electrical connection to ground. The second interconnect may include a plate. The integrated device may include a dielectric material between the first interconnect and the second interconnect. A mold may surround the second interconnect. The first interconnect may be configured to conduct a power signal in a first direction. The second interconnect may be configured to conduct a grounding signal in a second direction. The second direction may be different from the first direction. The integrated device may include one of at least an interposer, a package device, and/or a PoP device.
Terms and Definitions
[0033] An interconnect is an element or component that allows or facilitates an electrical connection between two points, elements and/or components. In some implementations, an interconnect may include a trace, a via, a pad, a pillar, a redistribution metal layer, and/or an under bump metallization (UBM) layer. In some implementations, an interconnect is an electrically conductive material that provides an electrical path for a signal (e.g., a data signal, a ground signal, a power signal). An interconnect may include more than one element / component.
[0034] A netlist is defined as a set of interconnects, a set of active elements (e.g., a transistor) and/or a set of passive elements (e.g., a resistor, a capacitor) that form and/or define the connectivity of a circuit in an integrated device.
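As a rough, purely illustrative sketch of the netlist definition in [0034] (none of this code comes from the disclosure; all names are invented), a netlist can be represented as a list of elements, each recording the nets it connects:

/* netlist_sketch.c -- toy model of a netlist per [0034]: a set of
 * interconnects and active/passive elements that define the
 * connectivity of a circuit. All identifiers are hypothetical. */
#include <stdio.h>

enum element_kind { TRACE, VIA, PAD, RESISTOR, CAPACITOR, TRANSISTOR };

struct element {
    enum element_kind kind;
    const char *name;
    int net_a, net_b;  /* the net numbers this element connects */
};

/* A two-element "ground netlist": a via carrying a grounding signal
 * from net 1 to net 2, and a pad tying net 2 to ground (net 0). */
static const struct element ground_netlist[] = {
    { VIA, "V1", 1, 2 },
    { PAD, "P1", 2, 0 },
};

int main(void)
{
    for (size_t i = 0; i < sizeof ground_netlist / sizeof *ground_netlist; ++i)
        printf("%s connects net %d to net %d\n", ground_netlist[i].name,
               ground_netlist[i].net_a, ground_netlist[i].net_b);
    return 0;
}

Iterating over such a structure is, for example, how a design-rule checker might verify that every power net in a device is accompanied by a ground return path.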
First Exemplary Integrated Device
[0035] FIG. 3 illustrates an integrated device 300 (e.g., a PoP integrated device) that includes a first package 302 and a second package 304. The first package 302 may include a first substrate 306, a first die 308, a first set of solder balls 310, and at least one coaxial connection 312. The first set of solder balls 310 may electrically connect the first substrate 306 with the first die 308. The first substrate 306 may include various materials without deviating from the scope of the present disclosure. As non-limiting examples, the first substrate 306 may include silicon, glass, ceramic, a wafer, and/or various organic materials. The first substrate 306 may include electrical interconnects 314 and 315, and dielectric layers 316. The electrical interconnects 314 and/or 315 may include various materials without deviating from the scope of the present disclosure. As a non-limiting example, the electrical interconnects 314 and/or 315 may include copper. The interconnects 314 and/or 315 may include one or more traces, vias and/or pads. The electrical interconnects 314 and/or 315 may traverse horizontally and/or vertically throughout the first substrate 306 to electrically connect various components contacting the first substrate 306. For example, the electrical interconnects 314 and/or 315 may electrically connect one or more solder balls 310 and one or more coaxial connections 312. The electrical interconnects 314 and/or 315 may be (at least) partially surrounded by the dielectric layers 316. The dielectric layers 316 may include various materials without deviating from the scope of the present disclosure. As a non-limiting example, the dielectric layers 316 may include silicon nitride (SiN).
[0036] The second package 304 may include a second substrate 318, a second die 320, and a second set of solder balls 322. The second set of solder balls 322 may electrically connect the second substrate 318 with the second die 320. The second substrate 318 may include various materials without deviating from the scope of the present disclosure. As non-limiting examples, the second substrate 318 may include silicon, glass, ceramic, a wafer, and/or various organic materials. The second substrate 318 may include electrical interconnects 324 and dielectric layers 326. The electrical interconnects 324 may include various materials without deviating from the scope of the present disclosure. As a non-limiting example, the electrical interconnects 324 may include Al. The electrical interconnects 324 may traverse horizontally and/or vertically throughout the second substrate 318 to electrically connect various components contacting the second substrate 318. For example, the electrical interconnects 324 may electrically connect one or more solder balls 322 and one or more coaxial connections 312. The electrical interconnects 324 may be (at least) partially surrounded by the dielectric layers 326. The dielectric layers 326 may include various materials without deviating from the scope of the present disclosure. As a non-limiting example, the dielectric layers 326 may include SiN.
[0037] A mold 334 may exist in any portion of the space between the first substrate 306 and the second substrate 318. For example, the mold 334 may (at least) partially surround the coaxial connections 312, the first set of solder balls 310, and/or the first die 308.
[0038] The coaxial connection 312 (e.g., a coaxial transmission line) may connect the first substrate 306 with the second substrate 318. The coaxial connection 312 may include a first interconnect 328 (e.g., a signal interconnect configured to transmit a power signal), an insulation material 330, and a second interconnect 332 (e.g., an interconnect providing an electrical connection to ground).
The insulation material 330 may include various materials without deviating from the scope of the present disclosure. As a non-limiting example, the insulation material 330 may include SiN. In some implementations, the insulation material 330 is a dielectric layer. In some implementations, the insulation material 330 may be an encapsulation layer (e.g., mold, epoxy). In some implementations, the insulation material 330 may be the same material as the mold 334.
[0039] The insulation material 330 may surround (at least) a portion of the first interconnect 328. The insulation material 330 may electrically insulate the first interconnect 328 from the second interconnect 332, thereby preventing signals in the first interconnect 328 from shorting through the second interconnect 332.
[0040] The first interconnect 328 may electrically connect the first substrate 306 with the second substrate 318. For example, the first interconnect 328 may electrically connect the electrical interconnects 314 of the first substrate 306 with the electrical interconnects 324 of the second substrate 318. The first interconnect 328 may also be electrically coupled to the interconnect 315. The first interconnect 328 may be configured to conduct a power signal in a first direction, such as from the first substrate 306 to the second substrate 318.
[0041] The second interconnect 332 may be a plate. The plate may include metal (e.g., Al). The second interconnect 332 may be configured to provide an electrical connection to ground. The second interconnect 332 may surround (at least) a portion of the insulation material 330. As such, the second interconnect 332 may surround (at least) a portion of the first interconnect 328. The second interconnect 332 may be configured to provide an electrical path for a grounding signal (e.g., a signal destined to ground) in a second direction. The second direction may be different from the first direction (described supra). For example, the grounding signal may be conducted from the second substrate 318 to the first substrate 306. The grounding signal may be conducted in other directions that will be readily apparent to one of ordinary skill in the art. The second interconnect 332 may be electrically coupled to the interconnect 314.
[0042] In some implementations, the interconnect 314 and/or the second interconnect 332 are part of a first netlist for a power distribution network (PDN) of the integrated device. For example, the interconnect 314 and/or the second interconnect 332 may be part of a ground netlist for a PDN of the integrated device.
[0043] In some implementations, the interconnect 315 and/or the first interconnect 328 are part of a second netlist for a power distribution network (PDN) of the integrated device. For example, the interconnect 315 and/or the first interconnect 328 may be part of a power netlist or a data signal netlist for a PDN of the integrated device.
[0044] Although the cross-sectional view illustrated in FIG. 3 shows two coaxial connections 312 (e.g., a left-hand-side coaxial connection 312 and a right-hand-side coaxial connection 312), the integrated device 300 may also include additional coaxial connections (e.g., one or more coaxial connections behind and/or in front of the right-hand-side coaxial connection 312 and/or the left-hand-side coaxial connection 312), as illustrated in FIG. 4.
[0045] FIG. 4 illustrates an angled perspective view of exemplary coaxial connections 400 in the first exemplary integrated device 300.
In some implementations, the coaxial connections 400 are an interconnect means (e.g., a coaxial interconnect means). The exemplary coaxial connections 400 may include a number (e.g., eight) of individual coaxial connections (e.g., coaxial connections 312, 404) in a row (e.g., row 410). However, one of ordinary skill in the art will understand that the row 410 may include as few as one coaxial connection (e.g., only coaxial connection 312) or as many as hundreds, thousands, or millions of coaxial connections (or more) without deviating from the scope of the present disclosure. As described in greater detail supra, each coaxial connection 312, 404 may include a first interconnect 328, 406 (e.g., a signal interconnect), an insulation material 330, 408, and a second interconnect 332 (e.g., a grounding interconnect). The second interconnect 332 may be a metal plate that is shared among one or more coaxial connections (e.g., coaxial connections 312, 404 share the same second interconnect 332). The second interconnect 332 surrounds the first interconnect 328 of coaxial connection 312 as well as the first interconnect 406 of coaxial connection 404. As described in greater detail supra, the second interconnect 332 may be configured to provide an electrical connection or path to ground.
[0046] Each coaxial connection (e.g., coaxial connection 312, 404) may conduct both a power signal as well as a grounding signal (e.g., a signal destined to ground). As described supra with reference to FIGs. 1-2, existing integrated devices (e.g., the conventional integrated device 100) include connections (e.g., interconnects 112, 112-1 to 112-8) between substrates (e.g., substrates 106, 118) that can transmit only a power signal or a grounding signal. As such, existing integrated devices may require at least two connections (e.g., interconnect 112-1 and interconnect 112-2) to transmit both a power signal as well as a grounding signal. However, the present disclosure provides various examples and aspects of a coaxial connection (e.g., coaxial connection 312) that can transmit both a power signal as well as a grounding signal.
[0047] In some implementations, the first interconnect 328 and the interconnect 406 may be part of the same netlist, or of different netlists, of a power distribution network (PDN) of the integrated device.
[0048] In some implementations, at least one or more of the first interconnects (e.g., interconnects 328, 406) surrounded by the insulation material are inner interconnects of a coaxial interconnect / an interconnect means (e.g., a coaxial interconnect means). In some implementations, the second interconnect 332 is an outer interconnect of a coaxial interconnect / an interconnect means (e.g., a coaxial interconnect means). In some implementations, the one or more inner interconnects provide a first electrical path for a power signal, and the outer interconnect provides a second electrical path for a ground signal. In some implementations, the combination of at least one or more of the first interconnects (e.g., interconnects 328, 406), the insulation material, and/or the second interconnect 332 is configured to operate as a coaxial interconnect / an interconnect means (e.g., a coaxial interconnect means).
Second Exemplary Integrated Device
[0049] FIG. 5 illustrates a cross-sectional view of a second exemplary integrated device 500. The second exemplary integrated device 500 may include a coaxial connection 502. The coaxial connection 502 may connect the first substrate 306 with the second substrate 318.
The substrate 306 may include interconnects 314, 315 and 515. The interconnects 314, 315 and/or 515 may include one or more traces, vias and/or pads. The coaxial connection 502 may include two (or more) first interconnects 328, 504 (e.g., signal interconnects configured to transmit a power signal), an insulation material 330, and a second interconnect 332 (e.g., an interconnect providing an electrical connection to ground). The first interconnects 328, 504 may electrically connect the first substrate 306 with the second substrate 318. For example, the first interconnects 328, 504 may respectively electrically connect the electrical interconnects 315 and 515 of the first substrate 306 with the electrical interconnects 324 of the second substrate 318. The first interconnects 328, 504 may be configured to conduct power signals in a first direction, such as from the first substrate 306 to the second substrate 318.
[0050] The insulation material 330 may surround (at least) a portion of the first interconnects 328, 504. The insulation material 330 may electrically insulate the first interconnects 328, 504 from the second interconnect 332, thereby preventing signals in the first interconnects 328, 504 from shorting through the second interconnect 332. The insulation material 330 surrounding each of the first interconnects 328, 504 may vary based on various design parameters. For example, the power and/or amperage of the signal in the first interconnects 328, 504 may affect the type and/or amount of the insulation material 330 surrounding the first interconnects 328, 504.
[0051] The second interconnect 332 may be configured to provide an electrical connection to ground. The second interconnect 332 may be a plate. The plate may include metal (e.g., Al). The second interconnect 332 may surround (at least) a portion of the insulation material 330. As such, the second interconnect 332 may surround (at least) a portion of the first interconnects 328, 504. The second interconnect 332 may be configured to conduct a grounding signal (e.g., a signal destined to ground) in a second direction. The second direction may be different from the first direction (described supra). For example, the grounding signal may be conducted from the second substrate 318 to the first substrate 306. The grounding signal may be conducted in other directions that will be readily apparent to one of ordinary skill in the art. The second interconnect 332 may be electrically coupled to the interconnect 314.
[0052] The mold 334 may exist in any portion of the space between the first substrate 306 and the second substrate 318. For example, the mold 334 may surround the coaxial connection 502, the first set of solder balls 310, and/or the first die 308.
[0053] Each coaxial connection 502 may conduct two (or more) power signals (e.g., a power signal in the first interconnect 328 and another power signal in the first interconnect 504) as well as a grounding signal (e.g., a signal in the second interconnect 332 and destined to ground). As described supra with reference to FIGs. 1-2, existing integrated devices (e.g., the conventional integrated device 100 in FIGs. 1-2) include connections (e.g., interconnects 112, 112-1 to 112-8 in FIGs. 1-2) between substrates (e.g., substrates 106, 118 in FIGs. 1-2) that transmit only a power signal or a grounding signal. As such, existing integrated devices may require at least two connections (e.g., interconnect 112-1 and interconnect 112-2 in FIG. 2) to transmit both a power signal as well as a grounding signal.
However, the present disclosure provides various examples of a coaxial connection (e.g., coaxial connection 502 in FIG. 5) that transmits two (or more) power signals as well as a grounding signal.
[0054] In some implementations, the interconnect 314 and/or the second interconnect 332 are part of a first netlist for a power distribution network (PDN) of the integrated device. For example, the interconnect 314 and/or the second interconnect 332 may be part of a ground netlist for a PDN of the integrated device.
[0055] In some implementations, the interconnect 315 and/or the first interconnect 328 are part of a second netlist for a power distribution network (PDN) of the integrated device. For example, the interconnect 315 and/or the first interconnect 328 may be part of a power netlist or a data signal netlist for a PDN of the integrated device.
[0056] In some implementations, the interconnect 515 and/or the interconnect 504 are part of a third netlist for a power distribution network (PDN) of the integrated device. For example, the interconnect 515 and/or the interconnect 504 may be part of a power netlist or a data signal netlist for a PDN of the integrated device.
[0057] In some implementations, the second netlist and the third netlist are part of the same netlist, while in some instances the second netlist and the third netlist are different netlists.
[0058] Although the cross-sectional view illustrated in FIG. 5 shows two coaxial connections 502 (e.g., a left-hand-side coaxial connection 502 and a right-hand-side coaxial connection 502), the integrated device 500 may also include additional coaxial connections (e.g., one or more coaxial connections behind and/or in front of the right-hand-side coaxial connection 502 and/or the left-hand-side coaxial connection 502), as illustrated in FIG. 6.
[0059] FIG. 6 illustrates a side perspective view of exemplary coaxial connections 600 in the second exemplary integrated device 500. The exemplary coaxial connections 600 may include a number (e.g., eight) of individual coaxial connections (e.g., coaxial connections 502, 602) in rows (e.g., row 604 and row 606). However, one of ordinary skill in the art will understand that each row (e.g., row 604 and/or row 606) may include as few as one coaxial connection or as many as hundreds of coaxial connections (or more) without deviating from the scope of the present disclosure. As described in greater detail supra, each coaxial connection (e.g., coaxial connection 502) may include two (or more) first interconnects 328, 504 (e.g., signal interconnects), an insulation material 330, and a second interconnect 332 (e.g., a grounding interconnect). The second interconnect 332 may be a metal plate that is shared among one or more coaxial connections (e.g., coaxial connections 502, 602 share the same second interconnect 332). The second interconnect 332 may surround the first interconnects 328, 504 of coaxial connection 502 as well as the first interconnects 608, 610 of coaxial connection 602. As described in greater detail supra, the second interconnect 332 may be configured to provide an electrical connection to ground.
[0060] In some implementations, at least one or more of the first interconnects (e.g., interconnects 328, 504, 608, 610) surrounded by the insulation material are inner interconnects of a coaxial interconnect / an interconnect means (e.g., a coaxial interconnect means).
In some implementations, the second interconnect 332 is an outer interconnect of a coaxial interconnect / an interconnect means (e.g., a coaxial interconnect means). In some implementations, the one or more inner interconnects provide a first electrical path for a power signal, and the outer interconnect provides a second electrical path for a ground signal. In some implementations, the combination of at least one or more of the first interconnects (e.g., interconnects 328, 406), the insulation material, and/or the second interconnect 332 is configured to operate as a coaxial interconnect / an interconnect means (e.g., a coaxial interconnect means).
Exemplary Aspects of a Coaxial Connection / Interconnects
[0061] Generally, FIGs. 7A-7D illustrate various aspects of a coaxial connection (e.g., coaxial connection 312). Specifically, FIG. 7A shows a top cross-sectional view of the exemplary coaxial connection 312. The first interconnect 328 may have a diameter 702. For example, to meet a characteristic impedance value of approximately 50 ohms, the diameter 702 may have exemplary values of about 10 μm-100 μm. The first interconnect 328 and the insulation material 330, collectively, may have a diameter 704. An exemplary range of values for the diameter 704 is about 40 μm-400 μm. The insulation material 330 may have a thickness equal to the difference between the diameter 704 and the diameter 702. An exemplary range of values for the thickness of the insulation material 330 (e.g., the difference between the diameter 704 and the diameter 702) is about 30 μm-300 μm (depending upon the dielectric constant of the insulation material 330).
[0062] FIG. 7B shows a side perspective view of the exemplary coaxial connection 312. As described in greater detail supra, the first exemplary coaxial connection 312 may include the first interconnect 328 and the insulation material 330. In some configurations, the first exemplary coaxial connection 312 may also include a shield 712 that surrounds (at least) a portion of the insulation material 330. The shield 712 may provide structural and/or mechanical support to the first interconnect 328 and/or the insulation material 330. For example, the shield 712 may hold the insulation material 330 around the first interconnect 328.
[0063] FIG. 7C shows a top view of various electrical aspects of the exemplary coaxial connection 312. As described in greater detail supra, the insulation material 330 may be located between two interconnects (e.g., the first interconnect 328 and the second interconnect 332). Accordingly, the dielectric material 330 may have a capacitance 722. Generally, capacitance is directly proportional to the surface area of the interconnect plates (e.g., the circumference of the first interconnect 328 and the circumference of the second interconnect 332) and inversely proportional to the separation distance between the plates (e.g., the thickness of the dielectric material 330, as described in greater detail supra). Also, the capacitance may be a function of the permittivity of the dielectric (e.g., the dielectric material 330).
[0064] Because the dielectric material 330 is located between two interconnects (e.g., the first interconnect 328 and the second interconnect 332), a magnetic field 724 may exist between the two interconnects (e.g., between the first interconnect 328 and the second interconnect 332). For example, if the current in the first interconnect 328 is traveling downwards (e.g., into the page), then the magnetic field 724 will be in a clockwise direction, as illustrated in FIG. 7C. The value of the magnetic field 724 may be determined using various methods known to one of ordinary skill in the art, such as Ampere's Law.
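For reference, the relationships described in paragraphs [0061], [0063], and [0064] follow the standard coaxial transmission-line relations, summarized below in LaTeX notation. This summary is an editorial aid, not part of the disclosure, and the permittivity used in the numerical check is an assumed value:

\[
Z_0 \approx \frac{60~\Omega}{\sqrt{\varepsilon_r}}\,\ln\frac{D}{d},
\qquad
C' = \frac{2\pi\varepsilon_0\varepsilon_r}{\ln(D/d)},
\qquad
B(r) = \frac{\mu_0 I}{2\pi r},
\]

where d corresponds to the diameter 702 of the first interconnect 328, D to the diameter 704 at the outer edge of the insulation material 330, \varepsilon_r to the relative permittivity of the insulation material, C' to the capacitance 722 per unit length, and B(r) to the magnetic field 724 at radius r produced by a current I in the first interconnect 328. As an illustrative check, a ratio D/d = 4 (e.g., d = 25 μm and D = 100 μm, within the ranges given in [0061]) with an assumed \varepsilon_r of about 2.8 gives Z_0 ≈ (60/1.67) × ln 4 ≈ 50 Ω, consistent with the approximately 50-ohm characteristic impedance mentioned in [0061].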
[0065] FIG. 7D shows a side view of various electrical aspects of the exemplary coaxial connection 312. The first interconnect 328 may be configured to conduct an electrical signal (e.g., a power signal) in a first direction 732 (e.g., from top to bottom). The second interconnect 332 (see FIGs. 7A, 7C) may be configured to conduct an electrical signal (e.g., a grounding signal, such as a signal destined to ground) in a second direction 734 (e.g., from bottom to top). As illustrated in FIG. 7D, the second direction 734 may be different from the first direction 732. As described in greater detail supra, the dielectric material 330 may have a capacitance 722, and a magnetic field 724 may exist between the first interconnect 328 and the second interconnect 332 (see FIGs. 7A, 7C).
Exemplary Sequence for Providing / Fabricating Exemplary Coaxial Connections in the First Exemplary Integrated Device
[0066] FIG. 8 illustrates a first exemplary sequence 800 for providing / fabricating the exemplary coaxial connections 400 in the first exemplary integrated device 300. The sequence 800 may include various stages. One of ordinary skill in the art will understand that the order of some of the stages illustrated in FIG. 8 may be changed without deviating from the scope of the present disclosure. In some implementations, several stages may be combined into a single stage. Detailed descriptions of various elements mentioned infra are provided supra and therefore will not be repeated.
[0067] Stage 1 of FIG. 8 illustrates a state after an interconnect (e.g., the second interconnect 332) is provided. The second interconnect 332 may include one or more holes 802.
[0068] Stage 2 illustrates a state after another interconnect (e.g., the first interconnect 328) is provided above (e.g., on top of) a substrate (e.g., the substrate 306) using a plating process. The plating process may include providing multiple layers of an electrically-conductive material on top of one another to produce a column- or pillar-like shape extending upwards from the substrate 306. Although this example refers to a plating process, one of ordinary skill in the art will understand that various techniques may be used to provide the first interconnect 328 above the substrate 306 without deviating from the scope of the present disclosure. In some implementations, a substrate is provided (e.g., formed) and a plating process is performed to form the interconnect 328. The substrate that is provided may include one or more interconnects (e.g., traces, vias, pads).
[0069] Stage 3 illustrates a state after the second interconnect 332 is provided above (e.g., on top of) the substrate 306 such that the first interconnects 328 are placed through / inside the holes 802 of the second interconnect 332. Afterwards, at least some space 804 may exist between the first interconnect 328 and the second interconnect 332.
[0070] Stage 4 illustrates a state after a dielectric material 330 is provided in the space 804 between the first interconnect 328 and the second interconnect 332.
[0071] Stage 5 illustrates a state after a mold 334 (e.g., an encapsulation mold) is provided.
The mold 334 may surround (at least) a portion of the second interconnect 332. The mold 334 may provide structural / mechanical support to the first interconnect 328, the insulation material 330, and/or the second interconnect 332. In some implementations, stage 4 may be optional and the mold 334 may be provided in the space 804.
Exemplary Sequence for Providing / Fabricating First and Second Exemplary Integrated Devices
[0072] FIG. 9 illustrates an exemplary sequence 900 for providing / fabricating the first exemplary integrated device 300. The sequence 900 may include various stages. One of ordinary skill in the art will understand that the order of some of the stages illustrated in FIG. 9 may be changed without deviating from the scope of the present disclosure. Moreover, in some implementations, several stages may be combined into a single stage. Detailed descriptions of various elements mentioned infra are provided supra and therefore will not be repeated.
[0073] Stage 1 of FIG. 9 illustrates a state after a substrate (e.g., substrate 306) is provided. The substrate includes dielectric layers and interconnects (e.g., traces, vias, pads).
[0074] Stage 2 illustrates a state after an interconnect (e.g., the first interconnect 328) is provided above (e.g., on top of) the substrate 306 using a plating process, as described in greater detail supra. One of ordinary skill in the art will understand that various techniques may be used to provide the first interconnect 328 above the substrate 306 without deviating from the scope of the present disclosure.
[0075] Stage 3 illustrates a state after another interconnect (e.g., the second interconnect 332) is provided above (e.g., on top of) the substrate 306. In some configurations, the second interconnect 332 is a metal plate with holes. At least some space 804 may exist between the first interconnect 328 and the second interconnect 332.
[0076] Stage 4 illustrates a state after a dielectric material 330 is provided in the space 804 between the first interconnect 328 and the second interconnect 332.
[0077] Stage 5 illustrates a state after a die 308 is provided (e.g., coupled) to the substrate. As shown in stage 5, the die 308 is coupled to a set of solder balls 310. The set of solder balls 310 are coupled to the interconnects of the substrate 306. The die 308 may form an electrical connection with the set of solder balls 310 and the interconnects of the substrate 306. In some implementations, the die 308 may be provided and coupled to the substrate before the interconnects 328 and/or 332 are provided (e.g., formed) on the substrate.
[0078] Stage 6 illustrates a state after a mold 334 (e.g., an encapsulation mold) is provided. The mold 334 may surround (at least) a portion of the first interconnect 328, the second interconnect 332, the dielectric material 330, the set of solder balls 310, and/or the die 308. The mold 334 may provide structural and/or mechanical support to the first interconnect 328, the second interconnect 332, the dielectric material 330, the set of solder balls 310, and/or the die 308.
[0079] FIG. 10 illustrates a top cross-sectional view of the first exemplary integrated device 300. The integrated device 300 may include one or more coaxial connections 312 in a row 410 on one or more sides of the die 308. Each coaxial connection 312 may include a first interconnect 328, a second interconnect 332 surrounding the first interconnect 328, and an insulation material 330 between the first interconnect 328 and the second interconnect 332.
The row 410 of coaxial connection(s) 312 may be surrounded by the mold 334. Detailed descriptions of various elements mentioned supra have already been provided herein and therefore will not be repeated.
[0080] FIG. 11 illustrates a top cross-sectional view of the second exemplary integrated device 500. The integrated device 500 may include one or more sets of coaxial connections 502 in rows 604, 606 on one or more sides of the die 308. Each coaxial connection 502 may include first interconnects 328, 504, a second interconnect 332 surrounding the first interconnects 328, 504, and an insulation material 330 between the first interconnects 328, 504 and the second interconnect 332. The rows 604, 606 of coaxial connection(s) 502 may be surrounded by the mold 334. Detailed descriptions of various elements mentioned supra have already been provided herein and therefore will not be repeated.
Exemplary Sequence for Providing / Fabricating Exemplary Coaxial Connections in a Third Exemplary Integrated Device
[0081] FIG. 12 illustrates an exemplary sequence 1200 for providing / fabricating exemplary coaxial connections in a third exemplary integrated device (e.g., the integrated device 1400 illustrated in FIG. 14). The sequence 1200 may include various stages. In some implementations, several stages may be combined into a single stage. One of ordinary skill in the art will understand that the order of some of the stages illustrated in FIG. 12 may be changed without deviating from the scope of the present disclosure. Detailed descriptions of various elements mentioned infra are provided supra and therefore will not be repeated.
[0082] Stage 1 of FIG. 12 illustrates a state after an interconnect (e.g., the second interconnect 332) is provided. The second interconnect 332 may include one or more holes 802.
[0083] Stage 2 illustrates a state after another interconnect (e.g., the first interconnect 1202) is provided on a substrate (e.g., the substrate 306). In some implementations, the interconnect 1202 is a wire bond. The first interconnect 1202 may be provided above (e.g., on top of) the substrate 306 using a wire-bonding process. Various types of wire-bonding may be implemented without deviating from the scope of the present disclosure. For example, "wire-bonding" may refer to ball bonding, wedge bonding, and/or compliant bonding. The wire-bonding process may produce a round end of the first interconnect 1202 (e.g., a solder ball-like portion at the bottom end of the first interconnect 1202) and a vertical portion extending above the round end. Although this example refers to a wire-bonding process, one of ordinary skill in the art will understand that various other techniques may be used to provide the first interconnect 1202 above the substrate 306 without deviating from the scope of the present disclosure.
[0084] Stage 3 illustrates a state after the second interconnect 332 is provided above (e.g., on top of) the substrate 306 such that the first interconnects 1202 are placed through / inside the holes 802 of the second interconnect 332. Afterwards, at least some space 804 may exist between the first interconnect 1202 and the second interconnect 332.
[0085] Stage 4 illustrates a state after a dielectric material 330 is provided in the space 804 between the first interconnect 1202 and the second interconnect 332. The exemplary coaxial connection 1204 includes the first interconnect 1202, the insulation material 330, and the second interconnect 332.
In some implementations, the coaxial connection 1204 may include several first interconnects.[0086] Stage 5 illustrates a state after a mold 334 (e.g., an encapsulation mold) is provided. The mold 334 may surround (at least) a portion of the second interconnect 332. The mold 334 may provide structural / mechanical support to the first interconnect 1202, the insulation material 330, and/or the second interconnect 332.[0087] In some implementations, at least one or more of the first interconnects (e.g., interconnects 1202) surrounded by the insulation material 330 are inner interconnects of a coaxial interconnect / an interconnect means (e.g., coaxial interconnect means). In some implementations, the second interconnect 332 is an outer interconnect of a coaxial interconnect / an interconnect means (e.g., coaxial interconnect means). In some implementations, the one or more inner interconnects provide a first electrical path for a power signal, and the outer interconnect provides a second electrical path for a ground signal. In some implementations, the combination of at least one or more of the first interconnects (e.g., interconnects 1202), the insulation material 330, and/or the second interconnect 332 is configured to operate as a coaxial interconnect / an interconnect means (e.g., coaxial interconnect means).Exemplary Sequence for Providing / Fabricating Third and Fourth Exemplary Integrated Devices[0088] FIG. 13 illustrates an exemplary sequence 1300 for providing / fabricating the third exemplary integrated device (e.g., the integrated device 1400 illustrated in FIG. 14). The sequence 1300 may include various stages. One of ordinary skill in the art will understand that the order of some of the stages illustrated in FIG. 13 may be changed without deviating from the scope of the present disclosure. Detailed descriptions of various elements mentioned infra are provided supra and therefore will not be repeated.[0089] Stage 1 of FIG. 13 illustrates a state after a substrate (e.g., substrate 306) is provided. The substrate may include dielectric layers and one or more interconnects (e.g., traces, vias, pads).[0090] Stage 2 illustrates a state after an interconnect (e.g., the first interconnect 1202) is provided above (e.g., on top of) the substrate 306 using a wire-bonding process, as described in greater detail supra. In some implementations, the interconnect 1202 is a wire bond. One of ordinary skill in the art will understand that various techniques may be used to provide the first interconnect 1202 above the substrate 306 without deviating from the scope of the present disclosure.[0091] Stage 3 illustrates a state after another interconnect (e.g., the second interconnect 332) is provided above (e.g., on top of) the substrate 306. The second interconnect 332 may be a metal plate with holes. At least some space 804 may exist between the first interconnect 1202 and the second interconnect 332.[0092] Stage 4 illustrates a state after a dielectric material 330 is provided in the space 804 between the first interconnect 1202 and the second interconnect 332. [0093] Stage 5 illustrates a state after a die 308 is provided (e.g., coupled) to the substrate. As shown in stage 5, the die 308 is coupled to a set of solder balls 310. The set of solder balls 310 is coupled to the interconnects of the substrate 306.
The die 308 may form an electrical connection with the set of solder balls 310 and interconnects of the substrate 306.[0094] Stage 6 illustrates a state after a mold 334 (e.g., an encapsulation mold) is provided. The mold 334 may surround (at least) a portion of the second interconnect 332. The mold 334 may provide structural / mechanical support to the first interconnect 1202, the second interconnect 332, the dielectric material 330, the set of solder balls 310, and/or the die 308.[0095] In some implementations, at least one or more of the first interconnects (e.g., interconnects 1202) surrounded by the insulation material 330 are inner interconnects of a coaxial interconnect / an interconnect means (e.g., coaxial interconnect means). In some implementations, the second interconnect 332 is an outer interconnect of a coaxial interconnect / an interconnect means (e.g., coaxial interconnect means). In some implementations, the one or more inner interconnects provide a first electrical path for a power signal, and the outer interconnect provides a second electrical path for a ground signal. In some implementations, the combination of at least one or more of the first interconnects (e.g., interconnects 1202), the insulation material 330, and/or the second interconnect 332 is configured to operate as a coaxial interconnect / an interconnect means (e.g., coaxial interconnect means).[0096] In some implementations, the interconnect 332 and/or an interconnect in the substrate 306 are part of a first netlist for a power distribution network (PDN) of the integrated device. For example, the interconnect 332 and/or an interconnect in the substrate 306 may be part of a ground netlist for a PDN of the integrated device.[0097] In some implementations, the interconnect 1202 and/or another interconnect in the substrate 306 are part of a second netlist for a power distribution network (PDN) of the integrated device. For example, the interconnect 1202 and/or another interconnect in the substrate 306 may be part of a power netlist or data signal netlist for a PDN of the integrated device.[0098] In some implementations, at least some of the interconnects 1202 (e.g., wire bond) may be part of the same netlist of a PDN. [0099] FIG. 14 illustrates a top cross-sectional view of the third exemplary integrated device 1400. The third exemplary integrated device 1400 may include one or more coaxial connections 1204 in a row 1402 on one or more sides of the die 308. Each coaxial connection 1204 may include a first interconnect 1202, a second interconnect 332 surrounding the first interconnect 1202, and an insulation material 330 between the first interconnect 1202 and the second interconnect 332. The row 1402 of coaxial connection(s) 1204 may be surrounded by the mold 334. Detailed descriptions of various elements mentioned supra have already been provided herein and therefore will not be repeated.[00100] FIG. 15 illustrates a top cross-sectional view of a fourth exemplary integrated device 1500. The fourth exemplary integrated device 1500 may include one or more sets of coaxial connections 1502 in rows 1506, 1508 on one or more sides of the die 308.
Each coaxial connection 1502 may include first interconnects 1202, 1504, a second interconnect 332 surrounding the first interconnects 1202, 1504, and an insulation material 330 between the first interconnects 1202, 1504 and the second interconnect 332. The rows 1506, 1508 of coaxial connection(s) 1502 may be surrounded by the mold 334. Detailed descriptions of various elements mentioned supra have already been provided herein and therefore will not be repeated.Exemplary Methods for Providing / Fabricating an Integrated Device Including a Coaxial Connection[00101] FIG. 16 illustrates an exemplary flow diagram of exemplary methods for providing / fabricating an integrated device including a coaxial connection. The exemplary methods may provide / fabricate any one or more of the integrated devices illustrated supra. One of ordinary skill in the art will understand that the order of some of the blocks illustrated in FIG. 16 may be changed without deviating from the scope of the present disclosure. One of ordinary skill in the art will also understand that any one or more of the blocks illustrated in FIG. 16 may be combined without deviating from the scope of the present disclosure. Optional blocks are illustrated in dashed lines. Detailed descriptions of various elements mentioned infra are provided supra and therefore will not be repeated. The exemplary methods described herein may be performed by an apparatus (e.g., a manufacturing device).[00102] At block 1602, the apparatus may provide (e.g., form) a first interconnect above (e.g., on top of) a substrate. For example, referring to stage 2 in FIG. 8, the first interconnect 328 may be provided (e.g., formed) on the substrate 306. As described in greater detail supra, the first interconnect 328 may be provided (e.g., formed) on the substrate 306 using various techniques. An example of such a technique is a plating process, as described in greater detail supra. Accordingly, the providing of the first interconnect above the substrate may include plating the first interconnect 328 on the substrate 306 (see e.g., FIG. 8). Another example of such a technique is a wire-bonding process, as described in greater detail supra. Accordingly, the providing of the first interconnect above the substrate may include wire-bonding the first interconnect 1202 on the substrate 306 (see e.g., FIG. 12). Alternative techniques for providing a first interconnect on a substrate are known to one of ordinary skill in the art and therefore are within the scope of the present disclosure.[00103] At block 1604, the apparatus may provide (e.g., form) a second interconnect above the substrate. The second interconnect may surround the first interconnect and be configured to provide a connection to ground. For example, referring to stage 3 in FIG. 8, the apparatus may provide (e.g., form) the second interconnect 332 on the substrate 306. In some configurations, the second interconnect 332 may be a metal plate including holes 802. The first interconnects 328 may be provided (e.g., positioned) inside / through the holes 802 of the second interconnect 332. As illustrated in FIGs. 7A, 7C, the second interconnect 332 surrounds the first interconnect 328. The second interconnect 332 may be configured to provide an electrical connection to ground.[00104] At block 1606, the apparatus may provide (e.g., form) a dielectric material between the first interconnect and the second interconnect. For example, referring (again) to stage 3 in FIG.
8, at least some space 804 may exist between the first interconnect 328 and the second interconnect 332. The dielectric material 330 may be provided (e.g., formed) into the space 804 between the first interconnect 328 and the second interconnect 332, as illustrated in stage 4 in FIG. 8.[00105] At block 1608, the apparatus may provide (e.g., form) an encapsulation mold surrounding the second interconnect. For example, referring to stage 5 in FIG. 8, the mold 334 may be provided (e.g., formed) surrounding the second interconnect 332. The mold 334 may provide mechanical and/or structural support to the first interconnect 328, the second interconnect 332 surrounding the first interconnect 328, and/or the insulation material 330 between the first interconnect 328 and the second interconnect 332. Exemplary Electronic Devices[00106] FIG. 17 illustrates various electronic devices that may be integrated with any of the aforementioned integrated device, semiconductor device, integrated circuit, die, interposer and/or package. For example, a mobile telephone 1702, a laptop computer 1704, and a fixed location terminal 1706 may include an integrated device 1700 described herein. The integrated device 1700 may be, for example, any of the integrated circuits, dies, interposer, or packages described herein. The devices 1702, 1704, 1706 illustrated in FIG. 17 are merely exemplary. Other electronic devices may also feature the integrated device 1700 including, but not limited to, mobile devices, hand-held personal communication systems (PCS) units, portable data units such as personal digital assistants, GPS enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, communications devices, smartphones, tablet computers or any other device that stores or retrieves data or computer instructions, or any combination thereof.[00107] One or more of the components, steps, features, and/or functions illustrated in FIGS. 3, 4, 5, 6, 7A, 7B, 7C, 7D, 8, 9, 10, 11, 12, 13, 14, 15 and/or 16 may be rearranged and/or combined into a single component, step, feature or function or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from the disclosure. FIGS. 3, 4, 5, 6, 7A, 7B, 7C, 7D, 8, 9, 10, 11, 12, 13, 14, 15 and/or 16 and their corresponding descriptions in the present disclosure are not limited to dies and/or integrated circuits (ICs). In some implementations, FIGS. 3, 4, 5, 6, 7A, 7B, 7C, 7D, 8, 9, 10, 11, 12, 13, 14, 15 and/or 16 and their corresponding descriptions may be used to manufacture, create, provide, and/or produce integrated devices. In some implementations, an integrated device may include a die package, an IC, a wafer, a semiconductor device, and/or an interposer.[00108] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term "aspects" does not require that all aspects of the disclosure include the discussed feature, advantage or mode of operation. The term "coupled" is used herein to refer to the direct or indirect coupling between two objects.
For example, if object A physically touches object B, and object B touches object C, then objects A and C may still be considered coupled to one another, even if they do not directly physically touch each other.[00109] Also, it is noted that the embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed.[00110] The various features of the disclosure described herein can be implemented in different systems without departing from the disclosure. It should be noted that the foregoing aspects of the disclosure are merely examples and are not to be construed as limiting the disclosure. The description of the aspects of the present disclosure is intended to be illustrative, and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses and many alternatives, modifications, and variations will be apparent to those skilled in the art.
A system (100) comprising a processing logic adapted to activate multiple security levels for the system and a storage coupled to the processing logic via a bus (11), the bus adapted to transfer information between the storage and the processing logic. The system also comprises a monitoring logic coupled to the processing logic and comprising a range of addresses associated with a predetermined security level of the system. The monitoring logic obtains an address associated with the information. If a current security level matches the predetermined security level and if the address does not correspond to the range of addresses, the monitoring logic restricts usage of the system. |
CLAIMS What is claimed is: 1. A method, comprising: obtaining an address associated with information transferred between a storage and a processing logic, said processing logic associated with a current security level; determining whether said address corresponds to a range of addresses associated with a predetermined security level; determining whether a current security level associated with said processing logic corresponds to said predetermined security level; and wherein, if the current security level corresponds to said predetermined security level, and if said address does not correspond to said range of addresses, generating an alert signal. 2. The method of Claim 1, wherein, if the current security level does not correspond to said predetermined security level, and if said address corresponds to said range of addresses, generating the alert signal. 3. The method of Claim 1 or 2, wherein obtaining the address associated with the information comprises obtaining an address associated with an instruction fetched from the storage. 4. The method of Claim 1 or 2, wherein obtaining said address comprises obtaining a virtual address associated with an instruction executed by the processing logic, and further comprising generating the alert signal if the instruction does not match said information. 5. A system, comprising: a processing logic adapted to activate multiple security levels for the system; a storage coupled to the processing logic via a bus, said bus adapted to transfer information between said storage and said processing logic; and a monitoring logic coupled to the processing logic and comprising a range of addresses associated with a predetermined security level of the system; wherein the monitoring logic obtains an address associated with said information; wherein, if a current security level matches said predetermined security level and if said address does not correspond to said range of addresses, the monitoring logic restricts usage of the system. 6. The system of claim 5, wherein, if the current security level does not match said predetermined security level and if said address corresponds to said range of addresses, the monitoring logic restricts usage of the system. 7. The system of Claim 6, wherein said information comprises data written to said storage, and wherein said address comprises a destination address to which the data is written. 8. The system of Claim 7, wherein said destination address corresponds to a memory stack in said storage, the memory stack dedicated to the predetermined security level. 9. The system of Claim 5, wherein said information comprises an instruction fetched from the storage, and wherein said address comprises a memory address from which the instruction is fetched; and wherein the monitoring logic uses execution data received from the processing logic to determine whether the instruction is executed in accordance with predetermined requirements. 10. The system of Claim 5, wherein said address comprises a virtual address provided by said processing logic upon executing an instruction associated with said virtual address; and wherein, if said monitoring logic determines that the instruction does not match said information, the monitoring logic restricts usage of the system. 11. 
A system, comprising: a check logic adapted to obtain an address associated with information transferred between a first storage and a processor; and a second storage comprising a range of addresses associated with a predetermined security level of the system; wherein, if the check logic determines that a current security level of the system matches the predetermined security level, and if the check logic determines that said address does not match said range of addresses, the check logic generates an alert signal. 12. The system of Claim 11, wherein, if the check logic determines that the current security level of the system does not match the predetermined security level, and if the check logic determines that said address matches said range of addresses, the check logic generates the alert signal. |
MONITOR MODE INTEGRITY VERIFICATIONThis relates to secure mode operation in data processing. BACKGROUNDMobile electronic devices such as personal digital assistants (PDAs) and digital cellular telephones are increasingly used for electronic commerce (e-commerce) and mobile commerce (m-commerce). It is desired for the programs that execute on the mobile devices to implement the e-commerce and m-commerce functionality in a secure mode to reduce the likelihood of attacks by malicious programs and to protect sensitive data.For security reasons, most processors provide two levels of operating privilege: a lower level of privilege for user programs; and a higher level of privilege for use by the operating system. The higher level of privilege may or may not provide adequate security for m-commerce and e-commerce, however, given that this higher level relies on proper operation of operating systems with vulnerabilities that may be publicized. In order to address security concerns, some mobile equipment manufacturers implement a third level of privilege, or secure mode, that places less reliance on corruptible operating system programs, and more reliance on hardware-based monitoring and control of the secure mode. U.S. Patent Publication No. 2003/0140245, entitled "Secure Mode for Processors Supporting MMU and Interrupts," incorporated herein by reference, describes a hardware-monitored secure mode for processors. A flexible architecture providing a third level of privilege, such as that described above, may be exploitable by software attacks. Thus, there exists a need for methods and related systems to eliminate the potential for malicious software to manipulate the system into entering a secure mode and executing non-secure instructions.SUMMARYDisclosed herein are techniques for verifying the integrity of a secure mode (e.g., monitor mode) of a system. An illustrative embodiment includes a system comprising a processing logic adapted to activate multiple security levels for the system and a storage coupled to the processing logic via a bus, the bus adapted to transfer information between the storage and the processing logic. The system also comprises a monitoring logic coupled to the processing logic and comprising a range of addresses associated with a predetermined security level of the system. The monitoring logic obtains an address associated with the information. If a current security level matches the predetermined security level and if the address does not correspond to the range of addresses, the monitoring logic restricts usage of the system.Another embodiment includes a system comprising a check logic adapted to obtain an address associated with information transferred between a first storage and a processor, and a second storage comprising a range of addresses associated with a predetermined security level of the system. If the check logic determines that a current security level of the system matches the predetermined security level, and if the check logic determines that the address does not match the range of addresses, the check logic generates an alert signal.Yet another embodiment includes a method that comprises obtaining an address associated with information transferred between a storage and a processing logic, the processing logic associated with a current security level. 
The method also includes determining whether the address corresponds to a range of addresses associated with a predetermined security level, and determining whether a current security level associated with the processing logic corresponds to the predetermined security level. The method also includes, if the current security level corresponds to the predetermined security level, and if the address does not correspond to the range of addresses, generating an alert signal. BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 shows a computing system constructed in accordance with at least some embodiments of the invention; FIG. 2 shows a portion of the megacell of FIG. 1 in greater detail, and in accordance with embodiments of the invention;FIG. 3 shows various security modes used by the system of FIG. 1, in accordance with embodiments of the invention;FIG. 4A shows a detailed view of the megacell of FIG. 2, in accordance with preferred embodiments of the invention;FIG. 4B shows a storage associated with the megacell of FIG. 4A, in accordance with embodiments of the invention; andFIG. 5 shows a flow diagram of an exemplary method in accordance with embodiments of the invention. DETAILED DESCRIPTION OF THE EMBODIMENTSFIG. 1 shows a computing system 100 constructed in accordance with at least some embodiments of the invention. The computing system 100 preferably comprises the ARM® TrustZone® architecture, but the scope of disclosure is not limited to any specific architecture. The computing system 100 may comprise a multiprocessing unit (MPU) 10 coupled to various other system components by way of a bus 11. The MPU 10 may comprise a processor core 12 that executes applications, possibly by having one or more processing pipelines. The MPU 10 may further comprise a security state machine (SSM) 56 which, as will be more fully discussed below, aids in allowing the computer system 100 to enter a secure mode for execution of secure software, such as m-commerce and e-commerce software.The computing system 100 may further comprise a digital signal processor (DSP) 16 that aids the MPU 10 by performing task-specific computations, such as graphics manipulation and speech processing. A graphics accelerator 18 may couple both to the MPU 10 and DSP 16 by way of the bus 11. The graphics accelerator 18 may perform necessary computations and translations of information to allow display of information, such as on display device 20. The computing system 100 may further comprise a memory management unit (MMU) 22 coupled to random access memory (RAM) 24 by way of the bus 11. The MMU 22 may control access to and from the RAM 24 by any of the other system components such as the MPU 10, the DSP 16 and the graphics accelerator 18. The RAM 24 may be any suitable random access memory, such as synchronous RAM (SRAM) or RAMBUS™-type RAM.The computing system 100 may further comprise a USB interface 26 coupled to the various system components by way of the bus 11. The USB interface 26 may allow the computing system 100 to couple to and communicate with external devices.The SSM 56, preferably a hardware-based state machine, monitors system parameters and allows the secure mode of operation to initiate such that secure programs may execute from and access a portion of the RAM 24. Having this secure mode is valuable for any type of computer system, such as a laptop computer, a desktop computer, or a server in a bank of servers.
However, in accordance with at least some embodiments of the invention, the computing system 100 may be a mobile (e.g., wireless) computing system such as a cellular telephone, personal digital assistant (PDA), text messaging system, and/or a computing device that combines the functionality of a messaging system, personal digital assistant and a cellular telephone. Thus, some embodiments may comprise a modem chipset 28 coupled to an external antenna 30 and/or a global positioning system (GPS) circuit 32 likewise coupled to an external antenna 34.Because the computing system 100 in accordance with at least some embodiments is a mobile communication device, computing system 100 may also comprise a battery 36 which provides power to the various processing elements. The battery 36 may be under the control of a power management unit 38. A user may input data and/or messages into the computing system 100 by way of the keypad 40. Because many cellular telephones also comprise the capability of taking digital still and video pictures, in some embodiments the computing system 100 may comprise a camera interface 42 which may enable camera functionality, possibly by coupling the computing system 100 to a charge-coupled device (CCD) array (not shown) for capturing digital images.Inasmuch as the systems and methods described herein were developed in the context of a mobile computing system 100, the remaining discussion is based on a mobile computing environment. However, the discussion of the various systems and methods in relation to a mobile computing environment should not be construed as a limitation as to the applicability of the systems and methods described herein to just mobile computing environments.In accordance with at least some embodiments of the invention, many of the components illustrated in FIG. 1, while possibly available as individual integrated circuits, are preferably integrated or constructed onto a single semiconductor die. Thus, the MPU 10, digital signal processor 16, memory management unit 22 and RAM 24, along with some or all of the remaining components, are preferably integrated onto a single die, and thus may be integrated into a computing device 100 as a single packaged component. Having multiple devices integrated onto a single die, especially devices comprising a multiprocessor unit 10 and RAM 24, may be referred to as a system-on-a-chip (SoC) or a megacell 44. While using a system-on-a-chip may be preferred, obtaining the benefits of the systems and methods as described herein does not require the use of a system-on-a-chip.FIG. 2 shows a portion of the megacell 44 in greater detail. The megacell 44 comprises CPU 46 which couples to security state machine (SSM) 56 by way of a security monitoring (SECMON) bus 73, also described below. The CPU 46 couples to memories 400 comprising the RAM 24 and ROM 48 by way of an instruction bus 50, a data read bus 52 and a data write bus 54. The buses 50, 52 and 54 are collectively referred to as "bus 401." The instruction bus 50 may be used by the CPU 46 to fetch instructions for execution from one or both of the RAM 24 and ROM 48. Data read bus 52 may be the bus across which data reads from RAM 24 propagate. Likewise, data writes from the CPU 46 may propagate along data write bus 54 to the RAM 24. Buses 50, 52 and 54 couple to the SSM 56 by way of a group of connections collectively referred to as "bus 403."The ROM 48 and the RAM 24 are partitioned into public and secure domains.
Specifically, the ROM 48 comprises a public ROM 68, accessible in non-secure mode, and a secure ROM 62, accessible in secure mode. Likewise, the RAM 24 comprises a public RAM 64, accessible in non-secure mode, and a secure RAM 60, accessible in secure mode. In at least some embodiments, the public and secure domain partitions in the ROM 48 and the RAM 24 are virtual (i.e., non-physical) partitions generated and enforced by a memory management unit (not specifically shown) in the CPU 46.Secure ROM 62 and secure RAM 60 preferably are accessible only in secure mode. In accordance with embodiments of the invention, the SSM 56 monitors the entry into, execution during and exiting from the secure mode. The SSM 56 preferably is a hardware-based state machine that monitors various signals within the computing system 100 (e.g., instructions on the instruction bus 50, data writes on the data write bus 54 and data reads on the data read bus 52) and activity in the CPU 46 through SECMON bus 73. Each of the secure and non-secure modes may be partitioned into "user" and "privileged" modes. Programs that interact directly with an end-user, such as a web browser, are executed in the user mode. Programs that do not interact directly with an end-user, such as the operating system (OS), are executed in the privileged mode. By partitioning the secure and non-secure modes in this fashion, a total of four modes are made available. As shown in FIG. 3, in order of ascending security level, these four modes include the non-secure user mode 300, the non-secure privileged mode 302, the secure user mode 306, and the secure privileged mode 304. There is an intermediate monitor mode 308, described further below, between the modes 302 and 304. The computer system 100 may operate in any one of these five modes at a time. The computer system 100 may switch from one mode to another. FIG. 3 illustrates a preferred mode-switching sequence 298. The sequence 298 is preferred because it is more secure than other possible switching sequences. For example, to switch from the non-secure user mode 300 to the secure privileged mode 304, the system 100 should first pass through non-secure privileged mode 302 and the monitor mode 308. Likewise, to pass from the secure user mode 306 to the non-secure user mode 300, the system 100 should switch from the secure user mode 306 to the secure privileged mode 304, from the secure privileged mode 304 to the monitor mode 308, from the monitor mode 308 to the non-secure privileged mode 302, and from the non-secure privileged mode 302 to the non-secure user mode 300.Each mode switch is enacted by the adjustment of bits in the CPSR 82 and the SCR 84. The CPSR 82 comprises a plurality of mode bits. The status of the mode bits determines which mode the computer system 100 is in. Each mode corresponds to a particular combination of mode bits. The mode bits may be manipulated to switch modes. For example, the bits may be manipulated to switch from mode 300 to mode 302.The SCR 84 comprises a non-secure (NS) bit. The status of the NS bit determines whether the computer system 100 is in secure mode or non-secure mode. In at least some embodiments, an asserted NS bit indicates that the system 100 is in non-secure mode. In other embodiments, an asserted NS bit indicates that the system 100 is in secure mode. Adjusting the NS bit switches the system 100 between secure and non-secure modes.
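For illustration only, the following minimal C sketch shows one way monitoring logic might decode the five modes of FIG. 3 from the CPSR 82 mode bits and the SCR 84 NS bit. The masks, bit positions, and names below are hypothetical placeholders, not encodings taken from this disclosure or from any particular processor architecture.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical bit layout; actual CPSR/SCR encodings are
 * architecture-specific and are not specified in this disclosure. */
#define CPSR_MODE_MASK 0x1Fu   /* mode bits in the CPSR 82 */
#define CPSR_MODE_USER 0x10u
#define CPSR_MODE_PRIV 0x13u
#define CPSR_MODE_MON  0x16u
#define SCR_NS_BIT     0x01u   /* NS bit in the SCR 84 */

typedef enum {
    NON_SECURE_USER,        /* mode 300 */
    NON_SECURE_PRIVILEGED,  /* mode 302 */
    MONITOR,                /* mode 308 */
    SECURE_PRIVILEGED,      /* mode 304 */
    SECURE_USER             /* mode 306 */
} security_mode_t;

security_mode_t decode_mode(uint32_t cpsr, uint32_t scr)
{
    uint32_t mode = cpsr & CPSR_MODE_MASK;
    /* In this sketch an asserted NS bit means non-secure; as noted
     * above, other embodiments may use the opposite polarity. */
    bool ns = (scr & SCR_NS_BIT) != 0;

    if (mode == CPSR_MODE_MON)
        return MONITOR;     /* the NS bit is adjusted only in this mode */
    if (mode == CPSR_MODE_USER)
        return ns ? NON_SECURE_USER : SECURE_USER;
    return ns ? NON_SECURE_PRIVILEGED : SECURE_PRIVILEGED;
}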
Because the status of the NS bit is relevant to the security of the system 100, the NS bit preferably is adjusted only in the monitor mode 308, since the monitor mode 308 is, in at least some embodiments, the most secure mode. More specifically, when the system 100 is in the monitor mode 308, the core 12 executes monitor mode software (not specifically shown) on the secure ROM 62, which provides a secure transition from the non-secure mode to the secure mode and from the secure mode to the non-secure mode. In particular, the monitor mode software performs various security tasks to prepare the system 100 for a switch between the secure and non-secure modes. The monitor mode software may be programmed to perform security tasks as desired. If the core 12 determines that these security tasks have been properly performed, the monitor mode software adjusts the NS bit in the SCR register 84, thereby switching the system 100 from non-secure mode to secure mode, or from secure mode to non-secure mode. The mode of the system 100 is indicated by the signal on SECMON 73, shown in FIG. 2. FIG. 4A shows a detailed view of the megacell 44 of FIG. 2. As shown in FIG. 4A, the memories 400 couple to CPU 46 via instruction bus 401. The memories 400 also couple to SSM 56 via instruction buses 401 and 403. The CPU 46 comprises core 12 and the register bank 80 having CPSR register 82 and SCR register 84. The core 12 comprises an execution pipeline 404 which couples to an embedded trace macro cell (ETM)/SECMON interface 406 via bus 413. The interface 406 couples to the SSM 56 via ETM bus 405 and SECMON bus 73, which the interface 406 receives from the register bank 80. The SSM 56 comprises a physical address check logic (PACL) 408 and a virtual address check logic (VACL) 410. Both the PACL 408 and the VACL 410 couple to a storage 412. The storage 412 may comprise any suitable storage, e.g., registers, ROM, etc. The contents of the storage 412 may be modified by the core 12 via peripheral port 398 and bus 399 while the system 100 is in monitor mode. Both the PACL 408 and the VACL 410 are capable of generating security violation signals via buses 407 and 409, respectively.FIG. 4B shows a detailed view of the storage 412. Specifically, the storage 412 comprises a plurality of storage units (e.g., registers). The PACL 408 and the VACL 410 use the contents of these registers to verify the integrity of the monitor mode, as described further below. The storage 412 includes a PHYS_MON_CODE_START register 450 and a PHYS_MON_CODE_END register 452. These registers specify the physical start and end memory addresses, respectively, associated with the monitor code stored in the memories 400. The storage 412 further includes a PHYS_MON_STACK_START register 454 and a PHYS_MON_STACK_END register 456. These registers specify the physical start and end memory addresses, respectively, associated with a dedicated monitor mode stack stored in the memories 400. The storage 412 further includes a VIRT_MON_CODE_START register 458 and a VIRT_MON_CODE_END register 460. These registers specify the start and end virtual addresses, respectively, associated with the virtual memory space that is associated with the monitor mode code stored in the memories 400. The storage 412 still further comprises a VIRT_MON_STACK_START register 462 and a VIRT_MON_STACK_END register 464.
These registers specify the start and end virtual addresses, respectively, associated with the virtual memory space that is associated with the dedicated monitor-mode stack stored in the memories 400. The storage 412 also comprises a VIRT_PERI_START register 466 and a VIRT_PERI_END register 468. These registers specify the start and end virtual addresses, respectively, associated with the virtual memory space associated with the peripheral port 398. In accordance with embodiments of the invention, the PACL 408 uses the bus 403 to obtain data associated with each instruction (or other type of data) the core 12 fetches from the memories 400. The PACL 408 ensures that any instruction fetch or data transfer occurring in monitor mode (i.e., as determined using the SECMON bus 73) is associated with a memory address that falls within an expected range of memory addresses. The expected range of memory addresses is programmed into the storage 412, e.g., into registers 450, 452, 454 and 456.As the core 12 fetches an instruction from the memories 400 via instruction bus 401, the PACL 408 obtains an address associated with the instruction using bus 403. The PACL 408 compares the address associated with the instruction to the expected range of physical memory addresses stored in the registers 450 and 452. If a match occurs, the PACL 408 does not take any action. However, if the address associated with the instruction does not fall within the expected range of addresses, and if the PACL 408 determines (i.e., using the SECMON bus 73) that the system 100 is in monitor mode, the PACL 408 generates a security violation signal on bus 407 that is transferred to the power reset control manager 66. In response to the security violation signal, the power reset control manager 66 may reset the system 100. The SSM 56 also may take any of a variety of alternative actions to protect the computer system 100. Examples of such protective actions are provided in the commonly owned patent application entitled, "System and Method of Identifying and Preventing Security Violations Within a Computing System," U.S. Patent Application No. 10/961,748, incorporated herein by reference. In some embodiments, the PACL 408 monitors the physical memory addresses associated with any suitable data obtained from any of the memories 400 for use by the core 12.In addition to monitoring instructions fetched while the system 100 is in monitor mode, the PACL 408 also may monitor write accesses present on the bus 401 whereby the core 12 writes data to one of the memories 400. Specifically, the PACL 408 ensures that the core 12 does not write data to a monitor mode memory stack in the memories 400 if the core 12 is not in monitor mode. Using bus 403, the PACL 408 obtains the destination memory address associated with a write access on the bus 401. If the system 100 is not in monitor mode and if the destination memory address falls within a range of addresses in the memories 400 reserved for use as a dedicated monitor mode stack (i.e., as specified by the registers 454 and 456), the PACL 408 may generate a security violation signal via bus 407. The security violation signal may be handled as described above.
If the PACL 408 determines that the system is in monitor mode, then no security violation signal is generated.As described, the PACL 408 ensures that while the system 100 is in monitor mode, instructions fetched from memories 400 are secure and safe to use in the monitor mode.However, it is possible that the instructions that are fetched from the memories 400 are not the instructions that are actually executed by the core 12. Accordingly, the VACL 410 ensures not only that instructions executed by the core 12 are safe to execute in monitor mode, but also that the instructions are properly executed. To this end, the megacell 44 may comprise one or more virtual memories (not represented in FIG. 4A) usable by the core 12 while executing software code. While executing an instruction, any virtual address associated with that instruction is transferred from the execution pipeline 404 to the interface 406. In turn, the interface 406 transfers the virtual address to the VACL 410 via ETM bus 405 for security clearance. The VACL 410 ensures that the instruction, if executed in monitor mode (e.g., as determined by the SECMON bus 73), has a virtual address that falls within an expected range of virtual memory addresses. The expected range of virtual memory addresses is programmed into the storage 412 (i.e., registers 458 and 460). Thus, the VACL 410 receives the virtual address from the interface 406 via ETM bus 405 and compares the virtual address with the expected range of virtual memory addresses stored in the registers 458 and 460. If a match is found, the VACL 410 does not take any action. However, if the received virtual address does not fall within the range of expected addresses, and if the VACL 410 determines (using the SECMON bus 73) that the system 100 is in monitor mode, the VACL 410 issues a security violation signal via bus 409. The security violation signal is sent to the power reset control manager 66. In response to the security violation signal, the power reset control manager 66 may reset the system 100. The SSM 56 also may take any of a variety of alternative actions to protect the computer system 100. Examples of such protective actions are provided in the commonly owned patent application referenced above (Patent Application No. 10/961,748).As previously mentioned, the VACL 410 ensures not only that an instruction being executed by the core 12 is safe to execute in monitor mode, but also that the instruction is properly executed. Accordingly, the ETM bus 405 generated by the interface 406 indicates the execution status and any error flags associated with each instruction executed in the execution pipeline 404 while in monitor mode. The specific data used to verify execution status and execution errors may vary from implementation to implementation. Such verification may include determining whether a monitor mode instruction was valid, whether data associated with the instruction was valid, etc.In addition to the functions described above, the VACL 410 also ensures that when the system 100 is in monitor mode, data transfers (e.g., read/write operations) occur only to or from monitor mode code in the memories 400, to or from the dedicated monitor mode stack area in the memories 400, or to or from dedicated registers (e.g., the registers in storage 412) on the peripheral port 398. As described above, the execution pipeline 404 transfers the virtual address associated with each data transfer, if any, to the interface 406 via bus 413. 
The virtual address is transferred to the VACL 410 via the ETM bus 405. In turn, the VACL 410 determines whether the virtual address associated with the data transfer falls within one of the virtual address ranges specified by the registers 458, 460, 462, 464, 466 or 468. If the virtual address falls within one of these virtual address ranges, the VACL 410 does not take action. However, if the virtual address does not fall within one of these virtual address ranges, and further if the VACL 410 determines (using the SECMON bus 73) that the system 100 is in monitor mode, the VACL 410 issues a security violation signal via bus 409, as previously described. The VACL 410 also ensures that data transfers are properly executed while the system 100 is in monitor mode. Specifically, in addition to the information described above, the ETM bus 405 also transfers to the VACL 410 execution information associated with each data transfer performed by the core 12. Such execution information may include execution status, error flags, etc. The particular execution information provided to the VACL 410 regarding the execution of a data transfer may vary from implementation to implementation.FIG. 5 shows a flow diagram of a method 500 in accordance with embodiments of the invention. The method 500 is applicable to operations of both the PACL 408 and the VACL 410. The method 500 begins by obtaining an instruction address or data transfer address (block 502). The instruction or data address may comprise a physical memory address or a virtual memory address. The method 500 also comprises comparing the obtained address to an expected address range (block 504). The expected address range is stored in one of the registers of the storage 412, as previously described. The method 500 further comprises comparing a current security level of the system with the security level associated with the address range (block 506). For example, the method 500 may determine whether the system is in monitor mode, since at least some of the registers stored in the storage 412 comprise address ranges associated with the monitor mode.If the address falls within the range of addresses (block 508), and if the current security level of the system does not match the security level associated with the range (block 512), the method 500 comprises generating an alert signal (block 514). Similarly, if the address does not match the range of addresses (block 508), and if the current security level of the system matches the security level associated with the range of addresses (block 510), the method 500 comprises generating the alert signal (block 514).Those skilled in the art to which the invention relates will appreciate that the foregoing are just some illustrative embodiments and that there are other ways and variations of ways to implement the claimed invention.
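To make the checks of method 500 concrete, the following minimal C sketch shows the comparison an address-check logic such as the PACL 408 might perform. It is an illustration only: the struct layout, function names, and types are hypothetical, with the register pairs mirroring the PHYS_MON_CODE_* and PHYS_MON_STACK_* registers of storage 412 and the returned flag standing in for the security violation signal on bus 407.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative mirror of storage 412; this struct is a sketch, not a
 * hardware register definition from the disclosure. */
struct mon_ranges {
    uint32_t phys_mon_code_start;   /* register 450 */
    uint32_t phys_mon_code_end;     /* register 452 */
    uint32_t phys_mon_stack_start;  /* register 454 */
    uint32_t phys_mon_stack_end;    /* register 456 */
};

static bool in_range(uint32_t addr, uint32_t start, uint32_t end)
{
    return addr >= start && addr <= end;
}

/* Blocks 502-514 of method 500: an alert is generated whenever the
 * current security level and the address range disagree. The
 * in_monitor_mode flag stands in for the mode indication on SECMON 73. */
bool security_violation(uint32_t addr, bool in_monitor_mode,
                        const struct mon_ranges *r)
{
    bool in_mon_region =
        in_range(addr, r->phys_mon_code_start, r->phys_mon_code_end) ||
        in_range(addr, r->phys_mon_stack_start, r->phys_mon_stack_end);

    /* Monitor mode touching an address outside the monitor region, or
     * non-monitor code touching the monitor region, raises the alert. */
    return in_monitor_mode ? !in_mon_region : in_mon_region;
}

In a hardware realization this comparison would be combinational logic rather than C, but the decision table is the same one recited in claims 1 and 2.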
An embodiment of the present invention is a technique to provide a real-time threading service to an application in a multi-core environment. An executive is launched, within a most privileged level of an operating system (OS), on a real-time core in the multi-core environment. The real-time core is sequestered from the OS. A real-time thread is created in a least privileged level on the real-time core for an application using a library. The library is loaded by the application. The real-time thread shares a virtual address space with the application.
1. A method, comprising: launching an executive, within a most privileged level (MPL) of an operating system (OS), on a real-time (RT) core in a multi-core environment, the RT core being sequestered from the OS; and creating an RT thread in a least privileged level (LPL) on the RT core for an application using a library, the library being loaded by the application, the RT thread sharing a virtual address space with the application. 2. The method of claim 1 wherein creating the RT thread comprises: receiving a create request from the library, the library passing the create request from the application; verifying that the RT core is available; and sending a launch request to the executive, the executive launching the RT thread on the RT core. 3. The method of claim 1 further comprising: changing a page directory base register (PDBR) of the RT thread to point to a page directory of a parent process within the OS, or to a copy of the page directory holding a subset of the parent process's virtual address space. 4. The method of claim 3, further comprising: communicating with the application; managing pinning of memory regions used by the RT thread; and communicating with the executive via a shared memory buffer. 5. The method of claim 4 wherein communicating with the application comprises: receiving a wait request from the application to wait for the RT thread to stop; receiving a signal from the executive indicating that the RT thread has stopped; and unblocking the application to allow the application to receive an exit status from the RT thread. 6. The method of claim 4 wherein managing pinning of the memory regions comprises: pinning a memory region for the RT thread; tracking the memory region; receiving, from the library, a notification that the RT thread has terminated; and unpinning the memory region. 7. The method of claim 1 further comprising: managing resources in the multi-core environment. 8. An article of manufacture comprising: a machine-accessible medium including data that, when accessed by a machine, cause the machine to perform operations comprising: launching an executive, within a most privileged level (MPL) of an operating system (OS), on a real-time (RT) core in a multi-core environment, the RT core being sequestered from the OS; and creating an RT thread in a least privileged level (LPL) on the RT core for an application using a library, the library being loaded by the application, the RT thread sharing a virtual address space with the application. 9. The article of claim 8 wherein the data causing the machine to create the RT thread comprise data that, when accessed by the machine, cause the machine to perform operations comprising: receiving a create request from the library, the library passing the create request from the application; verifying that the RT core is available; and sending a launch request to the executive, the executive launching the RT thread on the RT core. 10. The article of claim 8 wherein the data further comprise data that, when accessed by the machine, cause the machine to perform operations comprising: changing a page directory base register (PDBR) of the RT thread to point to a page directory of a parent process within the OS, or to a copy of the page directory holding a subset of the parent process's virtual address space. 11. The article of claim 10 wherein the data further comprise data that, when accessed by the machine, cause the machine to perform operations comprising: communicating with the application; managing pinning of memory regions used by the RT
thread; and communicating with the executive via a shared memory buffer. 12. The article of claim 11 wherein the data causing the machine to communicate with the application comprise data that, when accessed by the machine, cause the machine to perform operations comprising: receiving a wait request from the application to wait for the RT thread to stop; receiving a signal from the executive indicating that the RT thread has stopped; and unblocking the application to allow the application to receive an exit status from the RT thread. 13. The article of claim 11 wherein the data causing the machine to manage pinning of the memory regions comprise data that, when accessed by the machine, cause the machine to perform operations comprising: pinning a memory region for the RT thread; tracking the memory region; receiving, from the library, a notification that the RT thread has terminated; and unpinning the memory region. 14. The article of claim 8 further comprising data that, when accessed by the machine, cause the machine to perform operations comprising: managing resources in the multi-core environment. 15. A system, comprising: a main core having an operating system (OS) that supports a highest privilege level and a lowest privilege level; a plurality of cores sequestered from the OS, the sequestered cores supporting the highest and lowest privilege levels; an application running at the lowest privilege level; and a real-time (RT) threading service to allow the application to create an RT thread on a sequestered core, the RT threading service including: a driver running at the highest privilege level and launched by the OS, the driver controlling the RT thread, an executive, launched on the sequestered core by the driver and running at the highest privilege level, to launch the RT thread, the RT thread sharing a virtual address space with the application, and a library loaded by the application and running at the lowest privilege level. 16. The system of claim 15 wherein said driver verifies that said sequestered core is available and, upon receipt of a create request from said library, sends a launch request to the executive on said available sequestered core. 17. The system of claim 15 wherein said executive changes a page directory base register (PDBR) of said RT thread to point to a page directory of a parent process within said OS, or to a copy of the page directory holding a subset of the parent process's virtual address space. 18. The system of claim 15 wherein said driver, after receiving a signal from said executive indicating that said RT thread has stopped, unblocks said application to allow said application to receive an exit status from the RT thread. 19. The system of claim 15 wherein said executive pins a memory region for said RT thread, tracks said memory region, and unpins the memory region after receiving notification from said library that said RT thread has terminated. 20. The system of claim 15 wherein said driver manages resources in the multi-core environment.
REAL-TIME THREADING SERVICE FOR PARTITIONED MULTIPROCESSOR SYSTEMSBACKGROUNDField of the InventionEmbodiments of the invention relate to the field of operating systems, and more particularly to real-time threads.Description of Related ArtA real-time operating system (RTOS) is an operating system (OS) developed for real-time applications. Typically, real-time applications require deterministic response times when interacting with real-world environments.Applications developed under existing OSs do not have a fully dedicated and predictable environment free of interference from the underlying OS. An application may instead have to interact with a particular hardware and software platform running within an isolated environment. An isolated environment may be an independent process within its own virtual address space on the same or another processor, usually with a completely separate RTOS environment. Applications have to interact with such an isolated environment through explicit message and data buffer exchanges. This leads to inefficient use of resources and may lead to non-deterministic response times.BRIEF DESCRIPTION OF THE DRAWINGSEmbodiments of the invention may best be understood by referring to the following description of embodiments of the invention. In the drawings:FIG. 1 is a diagram showing a system in which one embodiment of the present invention may be implemented.FIG. 2 is a diagram showing a real-time threading service, in accordance with one embodiment of the present invention.FIG. 3 is a diagram showing component interactions of a real-time threading service, in accordance with one embodiment of the present invention.FIG. 4 is a diagram showing a shared virtual address space in accordance with one embodiment of the present invention.FIG. 5 is a diagram showing a virtual address space map in accordance with one embodiment of the present invention.FIG. 6 is a diagram showing a sequence of real-time thread events in accordance with one embodiment of the present invention.FIG. 7 is a diagram showing modules that support a real-time threading service in a multi-core environment, in accordance with one embodiment of the present invention.DESCRIPTIONOne embodiment of the present invention is a technique for providing a real-time threading service to applications in a multi-core environment. The real-time cores are isolated from the OS. A driver is launched in the kernel of the operating system (OS). An executive is initiated by the driver to bootstrap and control the isolated cores. Real-time threads are created for an application using a library on the OS. The library exposes a user-level application programming interface (API) to communicate with the driver and the executive. A real-time thread shares the virtual address space with the application and is controlled by the executive and the driver.In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in order to avoid obscuring the understanding of the invention.One embodiment of the invention may be described as a process, which is generally depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may depict the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process terminates when its operations are completed.
A process may correspond to a method, a procedure, a method of manufacturing or fabrication, and the like. One embodiment of the present invention is a technique for implementing real-time threading services for a multi-core or multi-processor system. The term "real time" as used herein refers to a deterministic time in response to a real-world event or transaction. Each thread is exposed at the user level. It may therefore be called a lightweight thread because the amount of context information to be saved is small. This threading service is provided to support isolated symmetric multiprocessor (SMP) systems or chip multiprocessor (CMP) systems. An isolated SMP/CMP platform is a multi-processor/multi-core system in which the host OS is booted knowing only a subset of the cores or processors. The remaining processors are not visible to the OS. An invisible processor may also be referred to as an isolated processor. The driver works with the executive to allow a program, via the API provided by the user-level library, to execute threads on the cores/processors of the partitioned SMP platform that are isolated from the main OS. The threading service also allows programmers to extend today's non-real-time OSs with real-time subsystems using off-the-shelf and future multiprocessor (MP) platforms, a software model in which the real-time subsystem runs on the isolated processors while sharing the virtual address space with applications in OS space. This makes it easy to port existing code and to leverage the work of multiple cores to quickly develop such an OS extension without the limitations of an existing OS. Elements of embodiments of the invention may be implemented by hardware, firmware, software, or any combination thereof. The term hardware generally refers to an element having a physical structure such as an electronic, electromagnetic, optical, electro-optic, mechanical, or electro-mechanical part, component, or device. The term software generally refers to logical structures, methods, procedures, programs, routines, processes, algorithms, formulas, functions, expressions, and the like. The term firmware generally refers to logical structures, methods, procedures, programs, routines, processes, algorithms, formulas, functions, expressions, etc. that are implemented or embodied in a hardware structure (e.g., flash memory). Examples of firmware may include microcode, writable control storage, and microprogrammed structures. When implemented in software or firmware, the elements of an embodiment of the invention are essentially the code segments that perform the necessary tasks. The software/firmware may include the actual code for carrying out the operations described in one embodiment of the invention or code that emulates or simulates the operations. The program or code segments can be stored in a processor- or machine-accessible medium or transmitted over a transmission medium by a computer data signal embodied in a carrier wave or a signal modulated by a carrier. A "processor readable or accessible medium" or "machine readable or accessible medium" may include any medium that can store, transmit, or convey information. Examples of processor readable or machine accessible media include electronic circuitry, semiconductor memory devices, read only memory (ROM), flash memory, erasable ROM (EROM), erasable programmable ROM (EPROM), floppy disks, compact disk (CD) ROM, optical disks, hard disks, optical media, radio frequency (RF) links, etc. 
Computer data signals may include any signal capable of propagating over a transmission medium such as an electronic network channel, optical fiber, air, electromagnetic medium, RF link, or the like. The code segments can be downloaded via a computer network such as the Internet, an intranet, and the like. A machine accessible medium can be embodied in an article of manufacture. The machine-accessible medium can include data that, when accessed by a machine, causes the machine to perform the operations described below. The machine-accessible medium can also include program code embedded therein. The program code includes machine readable code that performs the operations described below. The term "data" as used herein refers to any type of information encoded for machine-readable purposes. It can therefore include programs, code, data, files, and the like. All or part of the embodiments of the invention may be implemented by hardware, software, firmware, or any combination thereof. A hardware, software, or firmware element can have multiple modules coupled to one another. A hardware module is coupled to another module by mechanical, electrical, optical, or other physical connections. A software module is coupled to another module by function, procedure, method, subprogram, or subroutine calls, jumps, links, parameter, variable, and argument passing, function call returns, and the like. A software module is coupled to another module to receive variables, parameters, arguments, pointers, etc. and/or to generate or pass results, updated variables, pointers, and the like. A firmware module is coupled to another module by any combination of the above hardware and software coupling methods. A hardware, software, or firmware module can be coupled to any other hardware, software, or firmware module. A module can also be a software driver or interface that interacts with an operating system running on the platform. A module can also be a hardware driver configured to configure, set up, initialize, or send data to or receive data from a hardware device. An apparatus can include any combination of hardware, software, and firmware modules. One embodiment of the invention may be described as a process, which is generally depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may depict the operations as a sequential process, many of the operations can be performed in parallel or concurrently. Loops or iterations in a flowchart may be described by a single iteration; it should be understood that a loop index or counter is maintained and the associated counters or pointers are updated. In addition, the order of the operations may be rearranged. A process terminates when its operations are completed. A process may correspond to a method, a procedure, and the like. A block diagram can contain blocks or modules that describe an element, item, component, device, unit, subunit, structure, method, process, function, operation, functionality, task, and so forth. A functionality or operation can be performed automatically or manually. FIG. 1 is a diagram showing a system 100 in which one embodiment of the present invention may be implemented. 
System 100 includes a processor unit 110, a memory controller hub (MCH) 120, a main memory 130, an input/output controller hub (ICH) 140, an interconnect 145, a mass storage interface 150, and input/output (I/O) devices 1801 to 180K. Processor unit 110 represents a central processing unit of any type of architecture, such as a processor utilizing hyper-threading, security, networking, or digital media technology, a multi-core processor, an embedded processor, a mobile processor, a microcontroller, a digital signal processor, a superscalar computer, a vector processor, a single instruction multiple data (SIMD) computer, a complex instruction set computer (CISC), a reduced instruction set computer (RISC), a very long instruction word (VLIW) computer, or a hybrid architecture. More specifically, processor unit 110 may have a multi-core or multi-processor architecture in which multiple cores or processors operate in parallel. MCH 120 provides control and configuration of memory and input/output devices such as the main memory 130 and the ICH 140. The MCH 120 can be integrated into a chipset that integrates functions such as graphics, media, isolated execution mode, host-peripheral bus interfacing, memory control, power management, and more. The memory controller function of the MCH 120, or the MCH 120 itself, may be integrated in the processor unit 110. In some embodiments, a memory controller internal or external to processor unit 110 can operate for all cores or processors in processor unit 110. In other embodiments, it may include different portions that work separately for different cores or processors in processor unit 110. Main memory 130 stores system code and data. Main memory 130 is typically implemented with dynamic random access memory (DRAM), static random access memory (SRAM), or any other type of memory, including memory that does not need to be refreshed. Main memory 130 may include multiple channels of double data rate (DDR2) DRAM devices. More specifically, memory 130 includes a multi-core real-time (RT) thread service 135. The multi-core RT thread service 135 provides services to applications to create and manage RT threads in a multi-core environment. The ICH 140 has a number of functions designed to support I/O capabilities. The ICH 140 may be integrated with the MCH 120 into a chipset or be separate from the MCH 120 to perform I/O functions. The ICH 140 may include a number of interfaces and I/O functions such as a Peripheral Component Interconnect (PCI) bus interface, a processor interface, an interrupt controller, a direct memory access (DMA) controller, power management logic, timers, a system management bus (SMBus), a Universal Serial Bus (USB) interface, a mass storage interface, a low pin count (LPC) interface, etc. Interconnect 145 provides an interface to peripheral devices. Interconnect 145 can be point-to-point or connected to multiple devices. For the sake of clarity, not all interconnects are shown. It is envisioned that interconnect 145 can include any interconnect or bus, such as Peripheral Component Interconnect (PCI), PCI Express, Universal Serial Bus (USB), and Direct Media Interface (DMI). The mass storage interface 150 provides an interface to mass storage devices that store information such as code, programs, files, data, and applications. The mass storage devices may include a compact disk (CD) read only memory (ROM) 152, a digital video/versatile disc (DVD) 154, a floppy disk drive 156, a hard disk drive 158, and any other magnetic or optical storage devices. 
The mass storage interface 150 provides a mechanism for reading machine accessible media. I/O devices 1801 through 180K may include any I/O devices for performing I/O functions. Examples of I/O devices 1801 through 180K include controllers for input devices (e.g., keyboard, mouse, trackball, pointing device), media cards (e.g., audio, video, graphics), network cards, and any other peripheral controllers. FIG. 2 is a diagram showing the multi-core real-time (RT) thread service 135 shown in FIG. 1 in accordance with one embodiment of the present invention. The multi-core RT thread service 135 includes a main core 210, N RT cores 2201 to 220N, an OS 230, a highest privilege level (MPL) 240, and a lowest privilege level (LPL) 250. The main core 210 is a core on which the OS 230 is loaded and operated. There may be more than one primary core on which the OS 230 can run. The N RT cores 2201 through 220N are cores or processors that are isolated by the basic input output system (BIOS) during boot, or they may be isolated by the OS 230. The N RT cores 2201 through 220N are not visible to the OS 230. They may be referred to as lightweight cores, corresponding to the lightweight threads that run at the user level as described later. Once the system is booted by the user 205, the OS 230 can be loaded and run on the primary core 210. The OS 230 supports partitioned symmetric multiprocessing (SMP) systems. In one embodiment, OS 230 is a Microsoft Windows Server 2003 OS. It is contemplated that other OSs that support partitioned MP systems can also be used. OS 230 supports a hierarchy of privilege levels. The MPL 240 is the highest privilege level, at which the OS 230 kernel runs. The LPL 250 is the lowest privilege level, at which user applications or programs run. In one embodiment, MPL 240 and LPL 250 correspond to kernel mode and user mode, respectively, in the Microsoft Windows Server 2003 OS. The OS 230 has a driver 260, a library 285, and an application or OS thread referred to as application 280. Driver 260 is initiated by OS 230 when OS 230 is started. Subsequently, the driver 260 initiates N execution programs 2701 to 270N, one for each of the N RT cores 2201 to 220N. Each of the N RT cores 2201 through 220N has its own instance of the execution program. A single executive may also be initiated for all RT cores. Driver 260 and the N execution programs 2701 through 270N operate at the MPL 240. Application 280 is initiated by the user 205 at the LPL 250. It loads library 285. Application 280 then requests creation of an RT thread, such as RT thread 290k, on RT core 220k. Each of the N RT cores 2201 through 220N can execute an RT thread on behalf of an OS application, if desired. An RT core that does not execute an RT thread is considered idle. In other embodiments of the invention, a single RT core may execute several RT threads on behalf of a single OS application or several RT threads on behalf of several OS applications. The OS 230 does not operate on the cores 2201 to 220N. Thus, the execution programs 2701 through 270N and the RT threads 2901 through 290N operate at the MPL 240 and the LPL 250, respectively, as supported by the core processors. FIG. 3 is a diagram showing component interactions of an RT thread service, in accordance with one embodiment of the present invention. As described above, the driver 260 and the execution programs 270 operate at the MPL 240. Application 280, library 285, and RT thread 290k run at the LPL 250. 
Together, these components form a multi-core RT threading service for isolated SMP systems. Driver 260 initiates an execution program 270k on each available core 220k upon startup. It can initiate, join, and delete RT threads. It can pin the memory areas allocated to an RT thread or application, or unpin those areas. It also maintains communication with all execution programs. Execution program 270k switches to and from RT thread 290k. In other words, it performs the transitions of tasks between the MPL 240 and the LPL 250. It also performs exception handling and other tasks such as preemption and signaling. Library 285 is a dynamic link library containing a number of useful functions to perform a variety of tasks related to providing threading service support. It exposes the services of the driver 260 on the primary core 210, including initiating and joining the RT thread 290k and memory pinning. It also acts as a proxy for the execution program 270k, including managing thread exits. In addition, it can perform runtime tasks such as heap management, debug printing, and synchronization. Application 280 creates threads, using the thread service, that run on the RT cores or processors 2201 through 220N shown in FIG. 2. For example, it uses the thread service to create RT thread 290k. RT thread creation begins with a call from the application 280 within the OS 230 to the library 285 to request creation of an RT thread. This call provides an entry point (for example, a function name) and an argument. Library 285 then requests driver 260 to allocate an RT core from the N RT cores 2201 through 220N. Driver 260 determines or finds a core among the N RT cores 2201 through 220N that can be used to create the thread. Suppose core k is available. Driver 260 is then requested to pin the memory pages required for the RT thread to operate properly. A message is then sent to execution program 270k to request it to initiate an RT thread on core 220k. Execution program 270k then creates RT thread 290k. Execution program 270k creates a page directory and page tables mapped one-to-one to the virtual address space of application 280. It then switches to the LPL 250 and jumps to the user's entry point. When RT thread 290k exits or an external event (e.g., an interrupt) occurs, control returns to execution program 270k. Execution program 270k then either services the event and returns control, or cleans up and signals driver 260 and library 285 that RT thread 290k has completed execution. Most recoverable exceptions that occur during execution of the RT thread 290k may be processed by the execution program 270k, for example by completing a call to a user-defined handler within the OS 230. FIG. 4 is a diagram showing a virtual address space shared by an RT thread in accordance with one embodiment of the present invention. The sharing of the virtual address space by the RT thread is achieved by the OS application and its real-time threads sharing the same page directory. The main core 210 has a page directory base register (PDBR) 415 that points to the page directory 430. The cores 2201 to 220N have PDBRs 4251 to 425N, respectively. Each application on the primary core 210 has its own page directory. The page directory is part of each application's context and is therefore saved and restored on context switches. When an RT thread is created and executed, the PDBR of the associated core is changed to match the PDBR of the originating application. 
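To make the creation and join flow just described concrete, the following is a minimal sketch of the user-level API such a library might expose. All names (rt_thread_t, rt_thread_create, rt_thread_join) and the run_example usage are hypothetical illustrations, not identifiers from this disclosure; the sketch assumes the library forwards each request to the kernel-mode driver, which pins memory and asks an executive to spawn the thread.

    /* Hypothetical user-level API for the RT thread service (names are
     * illustrative only). */
    #include <stddef.h>

    typedef struct rt_thread rt_thread_t;    /* opaque thread handle */
    typedef void (*rt_entry_fn)(void *arg);  /* user entry point     */

    /* Ask the driver for a free isolated core, pin the pages the thread
     * needs, and have that core's executive spawn the thread at entry.
     * Returns 0 on success, or a negative error if no core is available. */
    int rt_thread_create(rt_thread_t **out, rt_entry_fn entry, void *arg);

    /* Signal the thread to stop (e.g., via a shared variable), then block
     * in the driver until the executive reports that the thread exited.  */
    int rt_thread_join(rt_thread_t *t, int *exit_status);

    /* Example usage: run a worker on an isolated core and wait for it. */
    static void worker(void *arg) { (void)arg; /* real-time work here */ }

    int run_example(void)
    {
        rt_thread_t *t;
        int status = 0;
        if (rt_thread_create(&t, worker, NULL) != 0)
            return -1;               /* no isolated core available */
        /* ... application continues on the main core ... */
        rt_thread_join(t, &status);  /* unblocks once the thread has stopped */
        return status;
    }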
Alternatively, rather than pointing at the parent application's page directory itself, the PDBRs of cores 2201 through 220N may point to a copy of the parent process's page directory that holds a subset of the parent process's virtual address space. Thus, the RT thread shares the same virtual address space as the application that invoked it. The page directory 430 contains pointers to the K page tables 4401 to 440K according to the physical memory requirements of the application at the moment. Page tables 4401 through 440K point to corresponding pages 4601 through 460M located within physical memory 450. In addition, memory pinning can be performed to ensure that application pages used by the RT thread are not evicted by the OS memory manager. This can be done using an API of the OS kernel. Library 285 automatically pins the code and data segments needed to execute RT thread 290k. On-demand locking and paging may be used for efficiency. FIG. 5 is a diagram showing a virtual address space map 500 used by an execution program in accordance with one embodiment of the present invention. Map 500 shows the page directory of execution program 270k. All execution programs use the same page directory. The page directory of the execution program shown in FIG. 5 is distinct from the page directory of the RT thread shown in FIG. 4. The RT thread page directory is put in place when the associated core is assigned to an application; the execution program page directory, on the other hand, is used while the core has not yet been allocated. The map 500 includes an OS virtual address space 510, a physical memory address space 520, and an execution program virtual address space 530. The executive code is compiled as part of the driver 260 and is therefore loaded at a linear address above 2 gigabytes (2G). All dynamic allocations are performed from the OS system heap, which ensures that all executive memory is protected from user code and can only be accessed by kernel-mode code. The execution program page directory is a small, one-to-one-mapped subset of the OS system (>2G) linear memory. It is used to map the structures required for correct operation of the execution program. Examples of these structures are: the execution program code and data, the Global Descriptor Table (GDT), the Interrupt Descriptor Table (IDT), Advanced Programmable Interrupt Controller (APIC) information, reasonably sized heaps or buffers (e.g., 64K), execution program management structures, message buffers (for example, memory pipes), and stacks. The OS virtual address space 510 occupies the entire virtual address space provided by the primary core 210. It includes the memory areas occupied by driver 260, execution program 270k, and the execution program memory structures 540. The execution program memory structures 540 can include an execution program heap, a GDT, and an IDT. Physical memory address space 520 contains the memory regions that are mapped by software components within the OS. All memory allocations are done by the primary core 210 (FIG. 2). Mapping of the execution program page directory/page tables is done by pinning the pages and then mapping a one-to-one physical/linear copy from the OS page directory to the execution program page directory. For example, driver 260 can be mapped to memory regions 550 and 552, execution program 270k can be mapped to memory regions 560 and 562, and the execution program memory structures 540 can be mapped to memory regions 570 and 572. 
The execution program virtual address space 530 corresponds one-to-one to the OS virtual address space 510. FIG. 6 is a diagram showing a sequence of RT thread events in accordance with one embodiment of the present invention. The sequence involves the user 205, the OS 230, the driver 260, the execution program 270k, the application 280, and the RT thread 290k. The order of the events is indicated by times A through L. Initially, at time A, the user 205 boots the system and the OS 230 is loaded. After initialization, at time B, OS 230 initiates driver 260 in kernel mode. At time C, driver 260 initiates an execution program 270k on each of the isolated cores. At this point, the multi-core RT thread service 135 is up and running. At time D, after the execution programs are initiated, the user 205 starts application 280, which can use the multi-core RT thread service 135. At time E, application 280 requests creation of an RT thread via library 285; the appropriate structures are created and all relevant linear segments are pinned. The request is sent to the driver 260 via the library 285. Driver 260 then verifies that an available core exists. At time F, the driver 260 sends a request to the execution program 270k on the available core k, requesting the execution program 270k to spawn the RT thread. At time G, execution program 270k spawns RT thread 290k on the available core 220k. RT thread 290k then runs at the lowest privilege level 250. At time H, application 280 terminates RT thread 290k by using a shared variable to signal the RT thread 290k to stop. At time I, application 280 joins RT thread 290k through driver 260, requesting driver 260 to wait until RT thread 290k has actually stopped. At time J, RT thread 290k terminates and exits via a library function call. Control then passes to execution program 270k. At time K, the execution program 270k notifies the driver 260 that the RT thread 290k has terminated. At time L, driver 260 signals application 280 to indicate that RT thread 290k has been joined. Driver 260 unblocks application 280 to allow it to receive the RT thread exit status and continue to run. At this point, application 280 has completed its use of RT thread 290k. FIG. 7 is a diagram showing modules 700 that support RT thread services in a multi-core environment, in accordance with one embodiment of the present invention. The modules 700 include a resource management function 710, a driver-execution program communication function 720, a memory pinning function 730, and a memory pinning tracking function 740. Resource management function 710 is based on a mechanism for keeping track of RT core activity. In one embodiment, the driver maintains two lists. The first list includes all unallocated, or free, execution programs. The second list includes the execution programs assigned to each user application that uses the RT thread service. Each time an RT thread is initiated, the driver finds an available execution program in the first list and moves that execution program to the second list. If all the execution programs have been allocated, the driver returns an error to the calling application. The lists are linked through pointers in the execution programs' headers. At any given time, an execution program belongs to exactly one of the lists. Additional or more complex resource management strategies can be implemented with this list structure. The driver-execution program communication function 720 provides a communication mechanism between the driver and the execution programs. In one embodiment, the communication mechanism uses a memory pipe. This pipe can be implemented with a circular memory buffer, as in the sketch below. 
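The following is a minimal sketch of such a circular message pipe, under assumptions not stated in this disclosure: fixed-size messages, a single receiver per pipe, and a spin lock serializing senders; the names and layout are illustrative only. Signaling the receiver (e.g., with an inter-processor interrupt) is platform-specific and is omitted.

    #include <stdatomic.h>

    #define PIPE_SLOTS 64u       /* ring capacity (illustrative)       */
    #define MSG_BYTES  64u       /* fixed message size (illustrative)  */

    struct msg { unsigned char payload[MSG_BYTES]; };

    struct mem_pipe {
        atomic_uint head;        /* next slot the receiver reads       */
        atomic_uint tail;        /* next slot a sender writes          */
        atomic_flag lock;        /* serializes concurrent senders      */
        struct msg  ring[PIPE_SLOTS];
    };

    /* Sender side: take the pipe's lock, append the message to the queue,
     * release the lock; the caller then signals the receiver to read.
     * Returns 0 on success or -1 if the ring is full. */
    static int pipe_send(struct mem_pipe *p, const struct msg *m)
    {
        int ok = -1;
        while (atomic_flag_test_and_set(&p->lock))
            ;                    /* spin: messages are short           */
        unsigned head = atomic_load(&p->head);
        unsigned tail = atomic_load(&p->tail);
        if (tail - head < PIPE_SLOTS) {
            p->ring[tail % PIPE_SLOTS] = *m;
            atomic_store(&p->tail, tail + 1);
            ok = 0;
        }
        atomic_flag_clear(&p->lock);
        return ok;
    }

    /* Receiver side: drain one message from the incoming queue, if any. */
    static int pipe_recv(struct mem_pipe *p, struct msg *out)
    {
        unsigned head = atomic_load(&p->head);
        if (head == atomic_load(&p->tail))
            return -1;           /* queue empty */
        *out = p->ring[head % PIPE_SLOTS];
        atomic_store(&p->head, head + 1);
        return 0;
    }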
Each execution program and the driver have their own pipe serving as an incoming message queue. The sender finds the appropriate message queue, writes the message to the queue, and signals the receiver to read its queue. Each memory pipe has a lock to prevent multiple messages from being written to the queue simultaneously. The memory pinning function 730 allocates and pins memory areas for the RT thread. This function ensures that the OS virtual memory manager does not page out pages that are in use by the RT thread. The pages can be pinned by the driver using the OS kernel's memory manager services. In one embodiment, the library automatically pins the code and data segments needed to execute the RT thread. In the simplest method, all code and data segments, including heap buffers, are locked when the library is loaded. This simple method can result in a large amount of memory being pinned for each application. A more efficient method can use on-demand locking and paging. The memory pinning tracking function 740 is performed to keep track of the memory areas that have been pinned for RT threads. The driver does not rely on the user application to release all of its pinned areas before exiting. By tracking the pinned areas, the driver can perform any cleanup that is needed. Tracking can be performed by using a singly-linked list structure. In one embodiment, a Memory Descriptor List (MDL) in the Windows 2003 OS is used. All pinned memory regions from all applications using the RT thread service are recorded on this single list. Access to the list can be protected by a mutual exclusion mechanism. When the driver 260 receives a notification from the library that an RT thread has terminated, the function 740 unpins the corresponding memory areas. This can be done by traversing the list of pinned buffers and unpinning any buffers allocated to the terminated RT thread. Embodiments of the present invention provide efficient RT thread services for partitioned multiprocessor systems. This threading service provides the user with the ability to run heavy computations that require real-time performance (e.g., media encoding) on a dedicated and predictable subsystem, without being affected by the nondeterministic nature of the OS scheduler. Applications and RT threads collaborate through a multi-threaded cooperation model and communicate through the same virtual address space, making it easy to develop new applications and to port existing ones. More specifically, it is no longer necessary to predetermine which part of the code executes on the OS and which part executes on the RT core or processor. Computation can be done on both sides, allowing an OS thread to provide additional computing power in its spare cycles. In addition, the same code can run on both the OS and the RT cores without recompiling. Finally, complicating factors such as separate packaging are eliminated: it is no longer necessary to compile the same program for two OSs, or to compile two programs, one for each OS. The RT threading service only requires a standard OS (e.g., Windows) development environment. While the invention has been described in terms of several embodiments, the invention is not limited to the embodiments described. The specification is therefore to be regarded as illustrative rather than restrictive. |
Semiconductor devices with improved transistor performance are fabricated by forming a composite oxide/nitride liner (24, 25) under a gate electrode sidewall spacer (40). Embodiments include depositing a conformal oxide layer (24) by decoupled plasma deposition, depositing a conformal nitride layer (25) by decoupled plasma deposition, depositing a spacer layer (30), and then etching. |
WHAT IS CLAIMED IS: 1. A semiconductor device comprising: a gate electrode (21), having side surfaces, over an upper surface of a substrate (20) with a gate dielectric layer (24) therebetween; an oxide liner (24) on the side surfaces of the gate electrode (21) and the upper surface of the substrate (20); a nitride liner (25) on the oxide liner (24); and a sidewall spacer (40) on the nitride liner (25). 2. The semiconductor device according to claim 1, wherein: the oxide liner (24) comprises silicon oxide; the nitride liner (25) comprises silicon nitride; and the sidewall spacer (40) comprises a silicon oxide, silicon nitride or silicon oxynitride. 3. The semiconductor device according to claim 2, wherein the sidewall spacer (40) comprises a silicon oxide. 4. The semiconductor device according to claim 3, wherein the silicon oxide sidewall spacer (40) has a dielectric constant (k) no greater than about 3.9. 5. The semiconductor device according to claim 2, comprising shallow source/drain extensions (23) in the upper surface of the substrate (20) on each side of the gate electrode (21) under the sidewall spacer (40), wherein the source/drain extensions (23) contain a P-type impurity. 6. A method of manufacturing a semiconductor device, the method comprising: forming a gate electrode (21), having side surfaces, over an upper surface of a substrate (20) with a gate dielectric layer (24) therebetween; forming a composite liner comprising: an oxide liner (24) on the side surfaces of the gate electrode (21) and the upper surface of the substrate (20); and a nitride liner (25) on the oxide liner (24); and forming a sidewall spacer (40) on the composite liner. 7. The method according to claim 6, wherein: the oxide liner (24) comprises a silicon oxide; the nitride liner (25) comprises a silicon nitride; and the sidewall spacer (40) comprises a silicon oxide, silicon nitride or silicon oxynitride. 8. The method according to claim 7, comprising forming the sidewall spacer (40) of a silicon oxide having a dielectric constant (k) no greater than about 3.9. 9. The method according to claim 7, comprising depositing the silicon nitride liner (25) by decoupled plasma deposition at a temperature no greater than about 400 C and depositing the silicon oxide liner (24) by decoupled plasma deposition at a temperature no greater than about 400 C. 10. The method according to claim 7, comprising ion implanting a P-type impurity to form shallow source/drain extensions (23) in the upper surface of the substrate (20), using the gate electrode (21) as a mask, before forming the composite liner. |
COMPOSITE SPACER LINER FOR IMPROVED TRANSISTOR PERFORMANCE TECHNICAL FIELD The present invention relates to a semiconductor device having improved transistor performance and enabling methodology. The present invention has particular applicability in fabricating high density semiconductor devices with high speed integrated circuits having submicron design features and shallow junction depths. BACKGROUND ART The escalating demand for high density and performance imposes severe requirements on semiconductor fabrication technology, particularly for enhanced transistor performance and high operating speed. Transistor performance depends upon various factors and can easily be degraded by various processing operations during fabrication, such as plasma deposition techniques wherein the substrate is exposed to high temperatures and plasmas, as during plasma enhanced chemical vapor deposition. The need for high operating speed also requires the use of dielectric materials having a relatively low dielectric constant, such as about 3.9 or less. The value of a dielectric constant (k) expressed herein is based upon the value of 1 for a vacuum. In implementing conventional fabrication techniques, as illustrated in Fig. 1, a gate electrode 11 is typically formed over a semiconductor substrate 10 with a gate dielectric layer 12, e.g., a gate oxide layer, therebetween. Ion implantation is then conducted to implant shallow source/drain extensions 13. An oxide liner 15 is then formed on side surfaces of gate electrode 11 and the upper surface of substrate 10, as at a thickness of about 50 A to about 200 A, to protect the substrate surface during subsequent etching to form sidewall spacers 16, typically formed of silicon nitride. Reference character 14 denotes a moderately or heavily doped source/drain region typically implanted subsequent to forming sidewall spacers 16. Difficulties are encountered in implementing conventional semiconductor fabrication techniques, such as those used to form the structure illustrated in Fig. 1. For example, during high temperature processing, as during deposition of the silicon oxide liner 15 by low pressure chemical vapor deposition, typically at a temperature of about 700 C or higher, dopant impurities implanted into the source/drain extensions 13, such as P-type impurities, particularly boron (B) impurities, diffuse and segregate in the oxide liner 15. Such diffusion loss from the source/drain extensions is manifestly disadvantageous, as by increasing the resistance of the source/drain extensions. A prior attempt to resolve this problem comprises ion implanting the dopant impurity, e.g., B or BF2, at an increased implantation dosage in order to compensate for the diffusion loss. However, this approach disadvantageously results in a deeper junction depth (Xj), which is inconsistent with the continuous drive toward miniaturization. Another disadvantage attendant upon conventional practices is that the use of oxide liner 15 to prevent substrate surface damage requires the use of a material for the sidewall spacer which can be selectively etched with respect to oxide liner 15, such as a silicon nitride or a silicon oxynitride, which have a high dielectric constant (k), e.g., in excess of about 7. There exists a continuing need for semiconductor devices having transistors with improved performance, shallow junction depths (Xj) and enhanced operating speed, and for enabling methodology. 
There exists a particular need for high density semiconductor devices having a design rule of about 0.12 micron and under with highly reliable transistors and high operating speed. DISCLOSURE OF THE INVENTION An advantage of the present invention is a high density semiconductor device having transistors with improved performance. Another advantage of the present invention is a method of fabricating a high density semiconductor device having transistors with improved performance. Additional advantages and other features of the present invention will be set forth in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from the practice of the present invention. The advantages of the present invention may be realized and obtained as particularly pointed out in the appended claims. According to an aspect of the present invention, the foregoing and other advantages are achieved in part by a semiconductor device comprising: a gate electrode, having side surfaces, over an upper surface of a substrate with a gate dielectric layer therebetween; an oxide liner on the side surfaces of the gate electrode and the upper surface of the substrate; a nitride liner on the oxide liner; and a sidewall spacer on the nitride liner. Another aspect of the present invention is a method of manufacturing a semiconductor device, the method comprising: forming a gate electrode, having side surfaces, over an upper surface of a substrate with a gate dielectric layer therebetween; forming a composite liner comprising: an oxide liner on the side surfaces of the gate electrode and the upper surface of the substrate; and a nitride liner on the oxide liner; and forming a sidewall spacer on the composite liner. Embodiments of the present invention include depositing an initial silicon oxide liner directly on the side surfaces of the gate electrode and the upper surface of the substrate by decoupled plasma deposition, depositing a silicon nitride liner directly on the silicon oxide liner by decoupled plasma deposition, and then forming a layer of spacer material on the silicon nitride liner. Decoupled plasma deposition of the silicon oxide liner layer and silicon nitride liner layer is implemented at a temperature not greater than about 400 C, thereby minimizing exposure of the substrate to an elevated temperature in order to reduce diffusion of impurities out of the shallow source/drain extensions. Anisotropic etching is then conducted to form the sidewall spacers. Etching is then implemented to selectively remove the portions of the silicon nitride layer and silicon oxide layer from the upper surface of the gate electrode. Embodiments of the present invention further include forming the sidewall spacer from silicon dioxide, thereby enabling a reduction in the capacitance of the resulting structure vis-à-vis a structure comprising silicon nitride or silicon oxynitride sidewall spacers, thereby enhancing operating speed. Additional advantages and aspects of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein only the preferred embodiment of the present invention is shown and described, simply by way of illustration of the best mode contemplated for carrying out the present invention. 
As will be realized, the present invention is capable of other and different embodiments, and its several details are capable of modification in various obvious respects, all without departing from the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive. BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 schematically illustrates a conventional transistor structure. Figs. 2 through 4 schematically illustrate sequential steps of a method in accordance with an embodiment of the present invention. Fig. 5 schematically illustrates another inventive aspect. In Figs. 2 through 4, similar features or elements are denoted by similar reference characters. DESCRIPTION OF THE INVENTION The present invention addresses the continuing demand for miniaturization and highly reliable semiconductor devices. The present invention provides semiconductor devices with enhanced transistor performance, and enabling methodology, by strategically forming composite oxide/nitride liners on the side surfaces of the gate electrode and upper surface of the substrate vis-à-vis a conventional oxide liner, thereby enabling the use of oxide sidewall spacers having a lower dielectric constant (k) than conventional silicon nitride or silicon oxynitride sidewall spacers, with an attendant improvement in operating speed. Embodiments of the present invention further include depositing the oxide and nitride liner layers by decoupled plasma deposition techniques employing a relatively low temperature, such as about 400 C or less, thereby significantly reducing diffusion of impurities, such as P-type impurities, e.g., B and BF2, while maintaining a relatively small junction depth (Xj) of about 200 A to about 300 A. In addition, the oxide liner can be made arbitrarily thin to minimize impurity loss from segregation, while the decoupled plasma nitride layer can be made thick enough to act as a sufficient etch stop for the spacer etch. Decoupled plasma deposition basically comprises techniques wherein a plasma is generated in a region or chamber remote from the region or chamber wherein actual deposition occurs, as in a separate chamber. The plasma generated vapors are then transported to the deposition region or chamber. Hence, deposition may be implemented at a lower temperature than the temperature at which the plasma is generated. The use of such lower temperatures prevents diffusion of impurities out of the shallow source/drain extensions, thereby enabling a small junction depth to be maintained. Moreover, by conducting decoupled plasma deposition, the substrate is not exposed to plasma conditions, thereby minimizing substrate damage with an attendant improvement in transistor performance/reliability. Thus, by depositing the oxide and nitride liners by decoupled plasma deposition, the substrate is not exposed to the elevated temperature and plasma conditions that arise when depositing the liners in the same chamber in which the plasma is generated, as when the substrate is positioned under the generated plasma. In addition, the oxide liner portion can be made very thin to minimize dopant segregation into the oxide liner. 
Embodiments of the present invention comprise depositing an initial silicon oxide liner on the upper surface and side surfaces of a gate electrode and the upper surface of the substrate, subsequent to ion implantation to form shallow source/drain extensions, by decoupled plasma deposition at a temperature less than about 400 C, at a minimal thickness, such as about 10 A to about 50 A. A silicon nitride liner is then deposited on the silicon oxide liner by decoupled plasma deposition at a temperature less than about 400 C, at an appropriate thickness, such as about 50 A to about 200 A. The silicon oxide liner and silicon nitride liner are substantially conformal. A substantially conformal spacer layer, such as silicon dioxide, is then deposited. Advantageously, the silicon nitride portion of the composite liner functions as an etch stop layer during anisotropic etching to form the sidewall spacers. Subsequent processing may be implemented in a conventional manner by forming moderately or heavily doped source/drain implants followed by activation annealing. Selective etching is then conducted to remove the silicon nitride liner and silicon oxide liner portions from the upper surfaces of the gate electrode and silicon substrate prior to conventional silicide formation. It should be recognized that the initial silicon oxide liner and the silicon nitride liner formed thereon can be deposited by any conventional deposition techniques, with an attendant advantage in flexibility in selecting the sidewall spacer material, e.g., a lower dielectric constant (k) material, such as silicon dioxide. However, by implementing decoupled plasma deposition of the silicon oxide and silicon nitride liners of the composite liner, the substrate is not exposed to plasma conditions, with an attendant improvement in transistor performance. Moreover, the use of a low temperature during decoupled plasma deposition (and a thin oxide liner portion) avoids unnecessary diffusion and segregation of dopant impurities, such as B, from the shallow source/drain extensions. An embodiment of the present invention is schematically illustrated in Figs. 2 through 4. Adverting to Fig. 2, a gate electrode 21, typically of doped polycrystalline silicon, is formed over a substrate 20, typically doped monocrystalline silicon, an epitaxial layer formed on a semiconductor substrate, or a well region. Using the gate electrode 21 as a mask, impurities, such as B, are ion implanted into substrate 20 to form shallow source/drain extensions 23. Subsequently, an initial silicon oxide layer 24 is deposited by decoupled plasma deposition at a temperature less than about 400 C, at a thickness of about 10 A to about 50 A. A silicon nitride layer 25 is then deposited on silicon oxide layer 24 by decoupled plasma deposition at a temperature less than about 400 C, as at a thickness of about 50 A to about 200 A. Advantageously, during such low temperature decoupled plasma deposition, substrate 20 is not exposed to plasma conditions, with an attendant improvement in transistor performance. Moreover, the use of a low temperature during decoupled plasma deposition and a thin oxide liner avoids diffusion of B from the shallow source/drain extensions 23 into the composite liner 24, 25, enabling the formation and maintenance of a shallow junction depth (Xj) of about 200 A to about 300 A. Subsequently, as schematically illustrated in Fig. 3, a layer of spacer material 30, such as silicon dioxide, is deposited. Adverting to Fig. 
4, anisotropic etching is then conducted to form sidewall spacers 40, typically at a thickness at the substrate surface of about 600 A to about 1,200 A. Advantageously, silicon nitride layer 25 serves as an etch stop layer during etching to form sidewall spacers 40, thereby avoiding damage to substrate 20. Subsequent processing includes selective removal of portions of the silicon nitride layer 25, as with hot phosphoric acid, and then removal of silicon oxide layer 24, as by etching with HF or a buffered oxide etch, from the upper surface of gate electrode 21 and substrate 20. Ion implantation is conducted to form moderately or heavily doped source/drain regions 41, resulting in the structure illustrated in Fig. 4, prior or subsequent to removing the portions of silicon nitride layer 25 and silicon oxide layer 24 from the upper surface of gate electrode 21. Another inventive aspect comprises the formation of CMOS devices with an N/P drive current ratio in an acceptable range, e.g., about 1.8 to about 2.5. This objective is achieved by embodiments wherein the amounts of Si, Ge and C in the layer between a strained Si cap layer and the substrate are adjusted to balance the electron and hole mobilities. The amount of strain can be engineered by specific concentration adjustments to keep the transistors matched. For example, adverting to Fig. 5, a CMOS structure is schematically illustrated comprising a p-channel transistor and an n-channel transistor formed on substrate 50, typically Si. A layer 51 of Si-Ge-C is formed on Si substrate 50 and a strained Si layer 52 is formed on layer 51. Layer 51 may be formed at an appropriate thickness, such as about 100 A to about 200 A, while layer 52 may be formed at an appropriate thickness, such as about 100 A to about 300 A. The p-channel transistor comprises a gate electrode 54A formed on gate dielectric layer 53A, with shallow source/drain extensions 56A and moderately or heavily doped source/drain regions 57A, typically formed after forming sidewall spacers 55A. The n-channel transistor comprises gate electrode 54B formed on gate dielectric layer 53B, shallow source/drain extensions 56B and moderately or heavily doped source/drain regions 57B, typically formed after forming sidewall spacers 55B. Alternatively, ion implantation may be implemented prior to etching to form the sidewall spacers. Layer 51 comprises Si at a concentration of about 60 to about 90 atomic percent, Ge at a concentration of about 10 to about 40 atomic percent, and C at a concentration of about 1 to about 10 atomic percent. By adjusting the amounts of Si, Ge and C within these compositional ranges, the strain in the Si layer 52 can be adjusted to balance electron and hole mobility, thereby maintaining the N/P drive current ratio within an acceptable range of about 1.8 to about 2.5. The present invention enables the fabrication of semiconductor devices exhibiting improved transistor performance and shallow junction depths (Xj), e.g., of about 200 A to about 300 A, with reduced capacitance and, hence, increased operating speed, by employing silicon oxide sidewall spacers. Embodiments of the present invention avoid exposing the substrate to elevated temperatures and plasma conditions during liner deposition, with an attendant improvement in transistor performance consistent with the continuous drive for miniaturization. The present invention enjoys industrial utility in fabricating various types of semiconductor devices. 
The present invention enjoys particular industrial utility in fabricating high density semiconductor devices with a design rule of about 0.12 micron and under having increased operating speed. In the previous description, numerous specific details are set forth, such as specific materials, structures, reactants, processes, etc., in order to provide a better understanding of the present invention. However, the present invention can be practiced without resorting to the details specifically set forth. In other instances, well-known processing materials and techniques have not been described in order not to unnecessarily obscure the present invention. Only the preferred embodiment of the invention and but a few examples of its versatility are shown and described in the present disclosure. It is to be understood that the present invention is capable of use in various other combinations and environments, and is capable of changes or modifications within the scope of the inventive concept as expressed herein. |
Systems, apparatuses and methods may provide for technology that collects state data from a plurality of input/output (IO) drivers, wherein each of the plurality of IO drivers is to tunnel traffic through a shared physical interface in accordance with a different protocol. The technology also determines, based on the state data, a bandwidth allocation of the shared physical interface among the plurality of IO drivers, and automatically initiates, based on the bandwidth allocation, a state change of a processor coupled to the shared physical interface. |
1. A semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to: collect state data from a plurality of input/output (IO) drivers, wherein each of the plurality of IO drivers is to tunnel traffic through a shared physical interface in accordance with a different protocol; determine, based on the state data, a bandwidth allocation of the shared physical interface among the plurality of IO drivers; and initiate, based on the bandwidth allocation, a state change of a processor coupled to the shared physical interface. 2. The semiconductor apparatus of claim 1, wherein the state change is to prevent one or more of a starvation condition or a failure in at least one of the plurality of IO drivers. 3. The semiconductor apparatus of claim 1, wherein the state change is to include one or more of a clock frequency change, an operating voltage change, a power state change or a performance state change. 4. The semiconductor apparatus of any one of claims 1 to 3, wherein the state data is to be collected from a first IO driver, a second IO driver, and a third IO driver, wherein the first IO driver is to tunnel traffic in accordance with a display protocol, wherein the second IO driver is to tunnel traffic in accordance with a storage protocol, and wherein the third IO driver is to tunnel traffic in accordance with a network protocol. 5. The semiconductor apparatus of claim 4, wherein the bandwidth allocation is to prioritize the display protocol over the storage protocol, and wherein the bandwidth allocation is to further prioritize the storage protocol over the network protocol. 6. The semiconductor apparatus of any one of claims 1 to 3, wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates. 7. At least one computer readable storage medium comprising a set of executable program instructions, which when executed by a computing system, cause the computing system to: collect state data from a plurality of input/output (IO) drivers, wherein each of the plurality of IO drivers is to tunnel traffic through a shared physical interface in accordance with a different protocol; determine, based on the state data, a bandwidth allocation of the shared physical interface among the plurality of IO drivers; and initiate, based on the bandwidth allocation, a state change of a processor coupled to the shared physical interface. 8. The at least one computer readable storage medium of claim 7, wherein the state change is to prevent one or more of a starvation condition or a failure in at least one of the plurality of IO drivers. 9. The at least one computer readable storage medium of claim 7, wherein the state change is to include one or more of a clock frequency change, an operating voltage change, a power state change or a performance state change. 10. The at least one computer readable storage medium of any one of claims 7 to 9, wherein the state data is to be collected from a first IO driver, a second IO driver, and a third IO driver, wherein the first IO driver is to tunnel traffic in accordance with a display protocol, wherein the second IO driver is to tunnel traffic in accordance with a storage protocol, and wherein the third IO driver is to tunnel traffic in accordance with a network protocol. 11. The at least one computer readable storage medium of 
claim 10, wherein the bandwidth allocation is to prioritize the display protocol over the storage protocol. 12. The at least one computer readable storage medium of claim 11, wherein the bandwidth allocation is to further prioritize the storage protocol over the network protocol. 13. A method of operating a performance-enhanced computing system, comprising: collecting state data from a plurality of input/output (IO) drivers, wherein each of the plurality of IO drivers tunnels traffic through a shared physical interface in accordance with a different protocol; determining, based on the state data, a bandwidth allocation of the shared physical interface among the plurality of IO drivers; and initiating, based on the bandwidth allocation, a state change of a processor coupled to the shared physical interface. 14. The method of claim 13, wherein the state change prevents one or more of a starvation condition or a failure in at least one of the plurality of IO drivers. 15. The method of claim 13, wherein the state change includes one or more of a clock frequency change, an operating voltage change, a power state change or a performance state change. |
TECHNICAL FIELD Embodiments generally relate to converged input/output (IO) connection management. More particularly, embodiments relate to influencing processor governance based on serial bus converged IO connection management. BACKGROUND Recent developments in USB (Universal Serial Bus, e.g., USB4 Specification, Version 1.0, August 2019, USB 3.0 Promoter Group) technology may support the use of different high speed transport protocols to tunnel IO traffic through a shared USB hub (e.g., in a converged IO/CIO architecture). In such a case, a connection manager may manage the allocation of bandwidth across the various transport protocols in use. Conventional connection manager solutions, however, may result in ineffective power and performance management of tunneled IO transactions. For example, the connection manager may select a bandwidth allocation that results in a relatively low priority protocol experiencing starvation conditions (e.g., insufficient lane assignments) and/or failures. BRIEF DESCRIPTION OF THE DRAWINGS The various advantages of the embodiments will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which: FIG. 1 is a block diagram of an example of a multi-level scheduling architecture for tunneled paths according to an embodiment; FIG. 2 is an illustration of an example of a converged IO architecture according to an embodiment; FIG. 3 is a block diagram of an example of a feedback loop according to an embodiment; FIG. 4 is a flowchart of an example of a method of operating a performance-enhanced computing system according to an embodiment; FIG. 5 is a block diagram of an example of a more detailed converged IO architecture according to an embodiment; FIG. 6 is a block diagram of an example of a performance-enhanced computing system according to an embodiment; FIG. 7 is an illustration of an example of a semiconductor apparatus according to an embodiment; FIG. 8 is a block diagram of an example of a processor according to an embodiment; and FIG. 9 is a block diagram of an example of a multi-processor based computing system according to an embodiment. DESCRIPTION OF EMBODIMENTS Turning now to FIG. 1, a multi-level scheduling architecture 10 is shown in which tunneling paths (e.g., "Path m," "Path k," "Path n," "Path q," with the exception of a high priority "Path 0") are organized into priority groups 12 (12a-12M). In general, a connection manager (not shown) may set up the tunneling paths for different protocols such as, for example, display protocols (e.g., DisplayPort/DP Standard, Version 2.0, June 26, 2019, Video Electronics Standards Association), storage protocols (e.g., USB Specification 3.1, Rev. 1.0, July 26, 2013, USB Implementers Forum), network protocols (e.g., Peripheral Components Interconnect Express/PCIe, PCI EXPRESS Base Specification 5.0, Version 1.0, May 28, 2019, PCI Special Interest Group), and so forth. In an embodiment, the tunneling paths pass through a shared physical interface such as, for example, a USB hub (not shown). In one example, each group 12 is assigned a priority, with the highest priority group 12 being scheduled first by a priority group scheduler 16. Additionally, within each group 12, every path may be provided a weight that is used by path schedulers 14 to perform round robin scheduling. Thus, a path with weight X might have X packets scheduled for the path in a given round. 
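As a concrete illustration of this two-level scheme, the following is a minimal sketch of one scheduling round: groups are visited in priority order, and within a group each path receives up to as many packet slots as its weight. The data layout and function names are illustrative assumptions, not details taken from this disclosure.

    #include <stddef.h>

    struct path  { int weight; int (*send_packet)(struct path *); };
    struct group { struct path *paths; size_t n_paths; };

    /* One scheduling round: groups[0] is the highest-priority group and is
     * served first; within a group, a path with weight X gets up to X packet
     * transmissions (send_packet returns 0 when the path has nothing queued). */
    static void schedule_round(struct group *groups, size_t n_groups)
    {
        for (size_t g = 0; g < n_groups; ++g) {              /* priority order */
            for (size_t p = 0; p < groups[g].n_paths; ++p) { /* round robin    */
                struct path *pa = &groups[g].paths[p];
                for (int credit = pa->weight; credit > 0; --credit)
                    if (!pa->send_packet(pa))
                        break;   /* path idle: yield its remaining credit */
            }
        }
    }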
In the illustrated example, the path scheduling and group scheduling information is used to initiate/trigger a state change 18 (e.g., a clock frequency, operating voltage, power state and/or performance state change) in a processor (e.g., host processor, central processing unit/CPU, not shown) coupled to the shared physical interface. As will be discussed in greater detail, the state change 18 may prevent starvation conditions (e.g., insufficient lane assignments) and/or failures (e.g., user-visible failures) with respect to the different protocols. The illustrated solution therefore enables enhanced performance to be achieved in a converged IO architecture.
FIG. 2 shows a converged IO architecture 20 in which a computing system 22 uses a shared physical interface to tunnel traffic (e.g., data, instructions) to a high resolution display 24 in accordance with a display protocol (e.g., DP), a high bandwidth storage device 26 (e.g., a storage device that complies with the Non-Volatile Memory/NVM EXPRESS Base Specification, Revision 1.4, June 10, 2019) in accordance with a storage protocol (e.g., PCIe), a cloud computing infrastructure 28 in accordance with a network protocol (e.g., Ethernet over USB 3.0/USB3), and so forth. In an embodiment, the display protocol tunnels carry periodic information such as, for example, isochronous information and interrupts. The storage protocol and network protocol tunnels, by contrast, may carry aperiodic information (e.g., control information, bulk information) in addition to periodic information. In one example, the periodic transfers are provided with a definite service opportunity (e.g., Path 0 in FIG. 1), whereas the aperiodic transfers are scheduled using round robin scheduling within a priority group 12 (FIG. 1, e.g., the fastest transfer on an otherwise idle bus). Therefore, the connection manager may prioritize the display protocol over the storage protocol and prioritize the storage protocol over the network protocol in terms of bandwidth.
If the computing system 22 is conducting an active user activity such as, for example, playing a game served by the cloud computing infrastructure 28 and presenting the game on the display 24 while syncing data into the storage device 26, such a scenario may involve all three tunnel protocols being in peak usage (e.g., leading to a high bandwidth requirement). In such a case, the priority given to isochronous traffic (e.g., display and/or audio activity) might lead to starvation of the storage or other USB activities. To prevent starvation conditions and other failures, knowledge of the USB4 tunneled protocol bandwidth requirements (e.g., and potential performance bottlenecks) is communicated from the connection manager to processor governors, which may change the operating state of the processor. The result is more effective management of processor capabilities for better performance and power conservation.
Turning now to FIG. 3, a feedback loop 30 is shown between a connection manager daemon 32 and a USB4 governor 34 (e.g., in a Linux architecture). In the illustrated example, the connection manager daemon 32 uses a connection manager 36 (e.g., a device driver) to communicate with a PCIe bus driver 38. The PCIe bus driver 38 may communicate with a software stack 40 that includes a protocol adapter 42 (e.g., implemented at a protocol adapter layer and a transport layer), a control adapter 44 (e.g., implemented at a configuration layer and the transport layer), and a lane adapter 46 (e.g., implemented at the transport layer, a logical layer, and an electrical layer).
In an embodiment, the USB4 governor 34 is coupled to user space governors 48 such as, for example, a dynamic voltage and frequency scaling (DVFS) governor and/or an active state power management (ASPM) governor. The user space governors 48 may change the operating state of a CPU 54 via CPU governor drivers 50 and a CPU driver 52. In the illustrated example, the feedback loop 30 includes connection manager feedback such as, for example, the bandwidth/power needs of the IO protocols in use. Accordingly, the state changes conducted by the user space governors 48 may be initiated and/or triggered by the connection manager 36 through the feedback loop 30 to prevent starvation conditions, failures, and so forth.
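By way of illustration only, a user space daemon participating in such a feedback loop might request a governor switch through the standard Linux cpufreq sysfs interface, as in the following sketch; the 30 Gbps threshold and the demand figure are hypothetical, and writing the sysfs nodes requires appropriate privileges:

    import glob

    GOVERNOR_NODES = glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor")
    HIGH_DEMAND_GBPS = 30.0  # assumed threshold; not taken from any specification

    def set_governor(governor):
        # Write the requested governor to every CPU's cpufreq policy node.
        for node in GOVERNOR_NODES:
            try:
                with open(node, "w") as f:
                    f.write(governor)
            except OSError:
                pass  # requires root privileges; ignored in this sketch

    def on_bandwidth_update(total_demand_gbps):
        # Move toward "performance" when tunneled protocols are bandwidth
        # hungry, and back toward "powersave" when activity subsides.
        set_governor("performance" if total_demand_gbps >= HIGH_DEMAND_GBPS
                     else "powersave")

    on_bandwidth_update(34.5)  # e.g., combined DP + USB3 + PCIe demand at peak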
FIG. 4 shows a method 60 of operating a performance-enhanced computing system. The method 60 may generally be implemented in a connection manager such as, for example, the connection manager 36 (FIG. 3), already discussed. More particularly, the method 60 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), and complex programmable logic devices (CPLDs), in fixed-functionality logic hardware using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof.
For example, computer program code to carry out operations shown in the method 60 may be written in any combination of one or more programming languages, including an object oriented programming language such as JAVA, SMALLTALK, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. Additionally, logic instructions might include assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, state-setting data, configuration data for integrated circuitry, state information that personalizes electronic circuitry, and/or other structural components that are native to hardware (e.g., host processor, central processing unit/CPU, microcontroller, etc.).
Illustrated processing block 62 provides for collecting state data from a plurality of IO drivers, wherein each of the plurality of IO drivers is to tunnel traffic through a shared physical interface in accordance with a different protocol. The state data may generally indicate the level of activity and/or bandwidth demand from the different protocols and/or drivers. For example, block 62 might include collecting state data from a first IO driver, wherein the first IO driver tunnels traffic in accordance with a display protocol (e.g., DP handling periodic information).
Block 62 may also collect state data from a second IO driver, wherein the second IO driver tunnels traffic in accordance with a storage protocol (e.g., USB handling periodic and/or aperiodic information). In another example, block 62 includes collecting state data from a third IO driver, wherein the third IO driver tunnels traffic in accordance with a network protocol (e.g., PCIe handling aperiodic information).
Block 64 provides for determining, based on the collected state data, a bandwidth allocation of the shared physical interface among the plurality of IO drivers. In an embodiment, block 64 includes prioritizing the display protocol over the storage protocol. Additionally, block 64 may prioritize the storage protocol over the network protocol. In one example, the bandwidth allocation specifies (e.g., in terms of bits per second/bps, lanes, etc.) the amount of bandwidth allocated to the respective protocols, drivers and/or tunnels. Illustrated block 66 provides for automatically initiating, based on the bandwidth allocation, a state change of a processor coupled to the shared physical interface. The state change may include an increase or decrease in, for example, a clock frequency, an operating voltage, a power state (e.g., Advanced Configuration and Power Interface/ACPI power state), a performance state (e.g., ACPI performance state), etc., or any combination thereof. In an embodiment, the state change prevents one or more of a starvation condition or a failure in at least one of the plurality of IO drivers.
Thus, when block 66 detects high bandwidth usage, the processor governors may be influenced and/or instructed to improve the processor performance and ensure that the bandwidth allocation is effective in improving the performance of the tunneled protocols. Subsequently, when block 66 detects that the tunneled protocols are less bandwidth intensive, the processor governors may be influenced/instructed to enter a power conserving mode. Alternatively, block 66 may adopt a "race to halt" mode when data (e.g., storage rather than display) intensive operation is detected in a protocol. Such an approach increases the processor clock to complete the data transfers quickly, followed by a halt to save power.
In another example, if a PCIe-based network card is used in a system along with a USB3 high-resolution webcam, block 64 may treat the PCIe traffic as a non-isochronous transfer, which will not receive guaranteed bandwidth in the shared physical interface. By contrast, block 64 may treat the USB3 traffic as an isochronous transfer that receives guaranteed bandwidth in the shared physical interface. In such a case, block 66 might instruct the processor frequency scaling governor to switch to a higher frequency. Experimental results show an unexpected 41% increase (from 57 Gbps to 79 Gbps) in TCP (Transmission Control Protocol) throughput on the PCIe network card when switching from a "powersave" governor to a "performance" governor in such a case. The illustrated method 60 may also be useful in improving block read and write throughput when using, for example, an NVM EXPRESS solid state drive (SSD) as secondary storage. The method 60 therefore enhances performance at least in terms of fewer starvation conditions and/or failures in a converged IO architecture.
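By way of illustration only, blocks 62, 64 and 66 may be condensed into the following Python sketch; the protocol names, demand figures, and 40 Gbps budget are hypothetical stand-ins for the state data, bandwidth allocation, and state change described above:

    TOTAL_GBPS = 40.0
    PRIORITY = ["display", "storage", "network"]  # DP > storage > network

    def determine_allocation(state_data):
        # Block 64: grant bandwidth in priority order until the budget is spent.
        remaining = TOTAL_GBPS
        allocation = {}
        for protocol in PRIORITY:
            grant = min(state_data.get(protocol, 0.0), remaining)
            allocation[protocol] = grant
            remaining -= grant
        return allocation

    def initiate_state_change(allocation, state_data):
        # Block 66: request a processor state change when demand goes unmet.
        starved = [p for p in PRIORITY if allocation[p] < state_data.get(p, 0.0)]
        return "performance" if starved else "powersave"

    state = {"display": 25.92, "storage": 20.0, "network": 5.0}  # block 62 output
    allocation = determine_allocation(state)
    print(allocation, initiate_state_change(allocation, state))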
FIG. 5 shows a more detailed converged IO architecture 70. In the illustrated example, a USB4 connection manager 72 (e.g., including logic instructions, configurable logic, fixed-functionality hardware logic, etc., or any combination thereof) sets up a USB driver 74, a PCIe bus driver 76, and a display kernel mode driver 78 to tunnel USB4 packets into a host router 80, which is coupled to a USB4 port and physical layer (Phy) 82. The connection manager 72 may collect periodic/aperiodic usage state data 102 from the USB driver 74, high bandwidth device state data 104 from the PCIe bus driver 76, and high resolution state data 106 from the display kernel mode driver 78. The connection manager 72 may then compute 84 overall bandwidth usage for the USB driver 74, the PCIe bus driver 76, and the display kernel mode driver 78.
In an embodiment, the connection manager 72 triggers 86 a switch between performance and power save governors. Upon detecting the switch, a CPU frequency (e.g., DVFS) core 88 obtains 90 P-state information for a scaling governor 92. In one example, the scaling governor 92 issues 94 a scale up or scale down signal to a CPU scaling driver 96, which in turn sends 98 available P-states for a CPU group back to the scaling governor 92. The CPU scaling driver 96 may communicate directly with one or more CPU cores 100.
Thus, the connection manager 72 leverages knowledge of the various protocol drivers to manage the bandwidth allocation and other IO functionalities. As already noted, between the IO groups, DP may be part of the highest priority group, followed by USB3 and PCIe sharing the next lower priority group. Within the USB3 and PCIe priority group, USB3 may receive a higher round-robin weighting and have bandwidth reserved for isochronous transfers.
For example, considering a bandwidth allocation out of 40 Gbps (e.g., in decreasing order), DP, which is all isochronous, may receive 80% of the bandwidth allocation, depending on the number of DP links (e.g., up to two) and the number of lanes per link (e.g., up to four, with one DP link): Bandwidth = Link Rate × Lane Count × (8/10). For example, for high bit rate 3 (HBR3) with four lanes: Bandwidth = 8.1 Gbps × 4 × 0.8 = 25.92 Gbps.
Of the remaining bandwidth, the periodic/aperiodic usage from the USB driver 74 may consume a maximum of 20 Gbps, and the high bandwidth device usage from the PCIe bus driver 76 may span up to sixteen lanes (e.g., an NVME storage device). Accordingly, the connection manager 72 may determine the bandwidth remaining and the class level activities of the USB driver 74 and the PCIe bus driver 76 to determine the functionalities configured on top of these IO protocols. The connection manager 72 also compares this information with bandwidth health and failure information. Based on these inputs, when the activity is high, the illustrated connection manager 72 instructs the processor governor to switch to performance mode (e.g., "turbo" mode). The connection manager 72 may also directly skew the clock to a higher value to improve the bandwidth health. Finally, when the connection manager 72 determines that the activity on the tunneled protocols is reduced or disconnected, the connection manager 72 may instruct the processor governors to switch to "powersave" or instruct frameworks managing the IO framework (e.g., ASPM) to switch to low power modes.
As already noted, the connection manager 72 may alternatively conserve energy during operation by adopting a race to halt mode on high data transfer operations. Such a mode may be achieved by skewing the clock to higher performance, completing data intensive transfers and then entering a power saving mode.
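By way of illustration only, the formula may be checked numerically; the helper below simply restates the computation, with the 8/10 factor reflecting 8b/10b line encoding:

    def dp_bandwidth_gbps(link_rate_gbps, lane_count):
        # The 8/10 factor accounts for 8b/10b line encoding overhead.
        return link_rate_gbps * lane_count * 8 / 10

    print(round(dp_bandwidth_gbps(8.1, 4), 2))  # 25.92, the HBR3 x 4-lane example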
Turning now to FIG. 6, a performance-enhanced computing system 151 is shown. The system 151 may generally be part of an electronic device/platform having computing functionality (e.g., personal digital assistant/PDA, notebook computer, tablet computer, convertible tablet, server), communications functionality (e.g., smart phone), imaging functionality (e.g., camera, camcorder), media playing functionality (e.g., smart television/TV), wearable functionality (e.g., watch, eyewear, headwear, footwear, jewelry), vehicular functionality (e.g., car, truck, motorcycle), robotic functionality (e.g., autonomous robot), etc., or any combination thereof. In the illustrated example, the system 151 includes a host processor 153 (e.g., CPU) having a governor 154 and an integrated memory controller (IMC) 155 that is coupled to a system memory 157.
The illustrated system 151 also includes an input/output (IO) module 159 implemented together with the host processor 153 and a graphics processor 161 (e.g., graphics processing unit/GPU) on a semiconductor die 163 as a system on chip (SoC). The illustrated IO module 159 communicates with, for example, a display 165 (e.g., touch screen, liquid crystal display/LCD, light emitting diode/LED display), a network controller 167 (e.g., wired and/or wireless), and mass storage 169 (e.g., hard disk drive/HDD, optical disk, solid state drive/SSD, flash memory). The IO module 159 may also include a shared physical interface 168 (e.g., USB hub).
In an embodiment, the host processor 153, the graphics processor 161 and/or the IO module 159 execute connection manager program instructions 171 retrieved from the system memory 157 and/or the mass storage 169 to perform one or more aspects of the method 60 (FIG. 4), already discussed. Thus, execution of the illustrated instructions 171 may cause the computing system 151 to collect state data from a plurality of IO drivers, wherein each of the IO drivers is to tunnel traffic through the shared physical interface 168 in accordance with a different protocol. For example, a first IO driver may tunnel traffic to the display 165 in accordance with a display protocol, a second IO driver may tunnel traffic to the mass storage 169 in accordance with a storage protocol, a third IO driver may tunnel traffic to the network controller 167 in accordance with a network protocol, and so forth. The computing system 151 may also support THUNDERBOLT interfaces and the daisy-chaining of devices (e.g., in a host-to-host configuration).
Execution of the instructions 171 may also cause the computing system 151 to determine, based on the state data, a bandwidth allocation of the shared physical interface 168 among the plurality of IO drivers. In an embodiment, the bandwidth allocation prioritizes the display protocol over the storage protocol. The bandwidth allocation may also prioritize the storage protocol over the network protocol. In one example, execution of the instructions 171 causes the computing system 151 to initiate a state change (e.g., clock frequency change, operating voltage change, power state change, performance state change, etc.) of the host processor 153 based on the bandwidth allocation. The state change, which may be triggered via one or more instructions to the governor 154, prevents a starvation condition and/or a failure in at least one of the plurality of IO drivers.
The computing system 151 is therefore considered performance-enhanced at least to the extent that it encounters fewer starvation conditions and/or failures in a converged IO architecture.
FIG. 7 shows a semiconductor package apparatus 173. The illustrated apparatus 173 includes one or more substrates 175 (e.g., silicon, sapphire, gallium arsenide) and logic 177 (e.g., transistor array and other integrated circuit/IC components) coupled to the substrate(s) 175. The logic 177 may be implemented at least partly in configurable logic or fixed-functionality logic hardware. In one example, the logic 177 implements one or more aspects of the method 60 (FIG. 4), already discussed. Thus, the logic 177 may collect state data from a plurality of IO drivers, wherein each of the IO drivers is to tunnel traffic through a shared physical interface in accordance with a different protocol. For example, a first IO driver may tunnel traffic to a display in accordance with a display protocol, a second IO driver may tunnel traffic to a mass storage in accordance with a storage protocol, a third IO driver may tunnel traffic to a network controller in accordance with a network protocol, and so forth.
The logic 177 may also determine, based on the state data, a bandwidth allocation of the shared physical interface among the plurality of IO drivers. In an embodiment, the bandwidth allocation prioritizes the display protocol over the storage protocol. The bandwidth allocation may also prioritize the storage protocol over the network protocol. In one example, the logic 177 initiates a state change (e.g., clock frequency change, operating voltage change, power state change, performance state change, etc.) of a processor (e.g., host processor, graphics processor) based on the bandwidth allocation. The state change, which may be triggered via one or more instructions to a governor, prevents a starvation condition and/or a failure in at least one of the plurality of IO drivers. The apparatus 173 is therefore considered performance-enhanced at least to the extent that it encounters fewer starvation conditions and/or failures in a converged IO architecture.
In one example, the logic 177 includes transistor channel regions that are positioned (e.g., embedded) within the substrate(s) 175. Thus, the interface between the logic 177 and the substrate(s) 175 may not be an abrupt junction. The logic 177 may also be considered to include an epitaxial layer that is grown on an initial wafer of the substrate(s) 175.
FIG. 8 illustrates a processor core 200 according to one embodiment. The processor core 200 may be the core for any type of processor, such as a micro-processor, an embedded processor, a digital signal processor (DSP), a network processor, or other device to execute code. Although only one processor core 200 is illustrated in FIG. 8, a processing element may alternatively include more than one of the processor core 200 illustrated in FIG. 8. The processor core 200 may be a single-threaded core or, for at least one embodiment, the processor core 200 may be multithreaded in that it may include more than one hardware thread context (or "logical processor") per core.
FIG. 8 also illustrates a memory 270 coupled to the processor core 200. The memory 270 may be any of a wide variety of memories (including various layers of memory hierarchy) as are known or otherwise available to those of skill in the art.
The memory 270 may include one or more code 213 instruction(s) to be executed by the processor core 200, wherein the code 213 may implement one or more aspects of the method 60 (FIG. 4), already discussed. The processor core 200 follows a program sequence of instructions indicated by the code 213. Each instruction may enter a front end portion 210 and be processed by one or more decoders 220. The decoder 220 may generate as its output a micro operation such as a fixed width micro operation in a predefined format, or may generate other instructions, microinstructions, or control signals which reflect the original code instruction. The illustrated front end portion 210 also includes register renaming logic 225 and scheduling logic 230, which generally allocate resources and queue operations corresponding to the instructions for execution.
The processor core 200 is shown including execution logic 250 having a set of execution units 255-1 through 255-N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. The illustrated execution logic 250 performs the operations specified by code instructions.
After completion of execution of the operations specified by the code instructions, back end logic 260 retires the instructions of the code 213. In one embodiment, the processor core 200 allows out of order execution but requires in order retirement of instructions. Retirement logic 265 may take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like). In this manner, the processor core 200 is transformed during execution of the code 213, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 225, and any registers (not shown) modified by the execution logic 250.
Although not illustrated in FIG. 8, a processing element may include other elements on chip with the processor core 200. For example, a processing element may include memory control logic along with the processor core 200. The processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches.
Referring now to FIG. 9, shown is a block diagram of a computing system 1000 in accordance with an embodiment. Shown in FIG. 9 is a multiprocessor system 1000 that includes a first processing element 1070 and a second processing element 1080. While two processing elements 1070 and 1080 are shown, it is to be understood that an embodiment of the system 1000 may also include only one such processing element.
The system 1000 is illustrated as a point-to-point interconnect system, wherein the first processing element 1070 and the second processing element 1080 are coupled via a point-to-point interconnect 1050. It should be understood that any or all of the interconnects illustrated in FIG. 9 may be implemented as a multi-drop bus rather than a point-to-point interconnect.
As shown in FIG. 9, each of the processing elements 1070 and 1080 may be a multicore processor, including first and second processor cores (i.e., processor cores 1074a and 1074b and processor cores 1084a and 1084b). Such cores 1074a, 1074b, 1084a, 1084b may be configured to execute instruction code in a manner similar to that discussed above in connection with FIG. 8.
Each processing element 1070, 1080 may include at least one shared cache 1896a, 1896b. The shared cache 1896a, 1896b may store data (e.g., instructions) that are utilized by one or more components of the processor, such as the cores 1074a, 1074b and 1084a, 1084b, respectively. For example, the shared cache 1896a, 1896b may locally cache data stored in a memory 1032, 1034 for faster access by components of the processor. In one or more embodiments, the shared cache 1896a, 1896b may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.
While shown with only two processing elements 1070, 1080, it is to be understood that the scope of the embodiments is not so limited. In other embodiments, one or more additional processing elements may be present in a given processor. Alternatively, one or more of the processing elements 1070, 1080 may be an element other than a processor, such as an accelerator or a field programmable gate array. For example, additional processing element(s) may include additional processor(s) that are the same as the first processor 1070, additional processor(s) that are heterogeneous or asymmetric to the first processor 1070, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processing element. There can be a variety of differences between the processing elements 1070, 1080 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processing elements 1070, 1080. For at least one embodiment, the various processing elements 1070, 1080 may reside in the same die package.
The first processing element 1070 may further include memory controller logic (MC) 1072 and point-to-point (P-P) interfaces 1076 and 1078. Similarly, the second processing element 1080 may include a MC 1082 and P-P interfaces 1086 and 1088. As shown in FIG. 9, the MCs 1072 and 1082 couple the processors to respective memories, namely a memory 1032 and a memory 1034, which may be portions of main memory locally attached to the respective processors. While the MCs 1072 and 1082 are illustrated as integrated into the processing elements 1070, 1080, for alternative embodiments the MC logic may be discrete logic outside the processing elements 1070, 1080 rather than integrated therein.
The first processing element 1070 and the second processing element 1080 may be coupled to an I/O subsystem 1090 via P-P interconnects 1076, 1086, respectively. As shown in FIG. 9, the I/O subsystem 1090 includes P-P interfaces 1094 and 1098. Furthermore, the I/O subsystem 1090 includes an interface 1092 to couple the I/O subsystem 1090 with a high performance graphics engine 1038. In one embodiment, a bus 1049 may be used to couple the graphics engine 1038 to the I/O subsystem 1090. Alternately, a point-to-point interconnect may couple these components.
In turn, the I/O subsystem 1090 may be coupled to a first bus 1016 via an interface 1096. In one embodiment, the first bus 1016 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the embodiments is not so limited. As shown in FIG. 9,
various I/O devices 1014 (e.g., biometric scanners, speakers, cameras, sensors) may be coupled to the first bus 1016, along with a bus bridge 1018 which may couple the first bus 1016 to a second bus 1020. In one embodiment, the second bus 1020 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1020 including, for example, a keyboard/mouse 1012, communication device(s) 1026, and a data storage unit 1019 such as a disk drive or other mass storage device which may include code 1030, in one embodiment. The illustrated code 1030 may implement one or more aspects of the method 60 (FIG. 4), already discussed. Further, an audio I/O 1024 may be coupled to the second bus 1020 and a battery 1010 may supply power to the computing system 1000.
Note that other embodiments are contemplated. For example, instead of the point-to-point architecture of FIG. 9, a system may implement a multi-drop bus or another such communication topology. Also, the elements of FIG. 9 may alternatively be partitioned using more or fewer integrated chips than shown in FIG. 9.
Additional Notes and Examples:
Example 1 includes a performance-enhanced computing system comprising an input/output (IO) module including a shared physical interface, a processor coupled to the IO module, and a memory coupled to the processor and the IO module, the memory comprising a set of executable program instructions, which when executed by the IO module, cause the computing system to collect state data from a plurality of IO drivers, wherein each of the plurality of IO drivers is to tunnel traffic through the shared physical interface in accordance with a different protocol, determine, based on the state data, a bandwidth allocation of the shared physical interface among the plurality of IO drivers, and initiate a state change of the processor based on the bandwidth allocation.
Example 2 includes the computing system of Example 1, wherein the state change is to prevent one or more of a starvation condition or a failure in at least one of the plurality of IO drivers.
Example 3 includes the computing system of Example 1, wherein the state change is to include one or more of a clock frequency change, an operating voltage change, a power state change or a performance state change.
Example 4 includes the computing system of any one of Examples 1 to 3, wherein the state data is to be collected from a first IO driver, a second IO driver, and a third IO driver, wherein the first IO driver is to tunnel traffic in accordance with a display protocol, wherein the second IO driver is to tunnel traffic in accordance with a storage protocol, and wherein the third IO driver is to tunnel traffic in accordance with a network protocol.
Example 5 includes the computing system of Example 4, wherein the bandwidth allocation is to prioritize the display protocol over the storage protocol.
Example 6 includes the computing system of Example 5, wherein the bandwidth allocation is to further prioritize the storage protocol over the network protocol.
Example 7 includes a semiconductor apparatus comprising one or more substrates, and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to collect state data from a plurality of input/output (IO) drivers, wherein each of the plurality of IO drivers is to tunnel traffic through a shared physical interface in accordance with a different
protocol, determine, based on the state data, a bandwidth allocation of the shared physical interface among the plurality of IO drivers, and initiate, based on the bandwidth allocation, a state change of a processor coupled to the shared physical interface.
Example 8 includes the semiconductor apparatus of Example 7, wherein the state change is to prevent one or more of a starvation condition or a failure in at least one of the plurality of IO drivers.
Example 9 includes the semiconductor apparatus of Example 7, wherein the state change is to include one or more of a clock frequency change, an operating voltage change, a power state change or a performance state change.
Example 10 includes the semiconductor apparatus of any one of Examples 7 to 9, wherein the state data is to be collected from a first IO driver, a second IO driver, and a third IO driver, wherein the first IO driver is to tunnel traffic in accordance with a display protocol, wherein the second IO driver is to tunnel traffic in accordance with a storage protocol, and wherein the third IO driver is to tunnel traffic in accordance with a network protocol.
Example 11 includes the semiconductor apparatus of Example 10, wherein the bandwidth allocation is to prioritize the display protocol over the storage protocol, and wherein the bandwidth allocation is to further prioritize the storage protocol over the network protocol.
Example 12 includes the semiconductor apparatus of Example 7, wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.
Example 13 includes at least one computer readable storage medium comprising a set of executable program instructions, which when executed by a computing system, cause the computing system to collect state data from a plurality of input/output (IO) drivers, wherein each of the plurality of IO drivers is to tunnel traffic through a shared physical interface in accordance with a different protocol, determine, based on the state data, a bandwidth allocation of the shared physical interface among the plurality of IO drivers, and initiate, based on the bandwidth allocation, a state change of a processor coupled to the shared physical interface.
Example 14 includes the at least one computer readable storage medium of Example 13, wherein the state change is to prevent one or more of a starvation condition or a failure in at least one of the plurality of IO drivers.
Example 15 includes the at least one computer readable storage medium of Example 13, wherein the state change is to include one or more of a clock frequency change, an operating voltage change, a power state change or a performance state change.
Example 16 includes the at least one computer readable storage medium of any one of Examples 13 to 15, wherein the state data is to be collected from a first IO driver, a second IO driver, and a third IO driver, wherein the first IO driver is to tunnel traffic in accordance with a display protocol, wherein the second IO driver is to tunnel traffic in accordance with a storage protocol, and wherein the third IO driver is to tunnel traffic in accordance with a network protocol.
Example 17 includes the at least one computer readable storage medium of Example 16, wherein the bandwidth allocation is to prioritize the display protocol over the storage protocol.
Example 18 includes the at least one computer readable storage medium of Example 17, wherein the bandwidth allocation is to further prioritize the storage protocol over
the network protocol.
Example 19 includes a method of operating a performance-enhanced computing system, the method comprising collecting state data from a plurality of input/output (IO) drivers, wherein each of the plurality of IO drivers tunnels traffic through a shared physical interface in accordance with a different protocol, determining, based on the state data, a bandwidth allocation of the shared physical interface among the plurality of IO drivers, and initiating, based on the bandwidth allocation, a state change of a processor coupled to the shared physical interface.
Example 20 includes the method of Example 19, wherein the state change prevents one or more of a starvation condition or a failure in at least one of the plurality of IO drivers.
Example 21 includes the method of Example 19, wherein the state change includes one or more of a clock frequency change, an operating voltage change, a power state change or a performance state change.
Example 22 includes the method of any one of Examples 19 to 21, wherein the state data is collected from a first IO driver, a second IO driver, and a third IO driver, wherein the first IO driver tunnels traffic in accordance with a display protocol, wherein the second IO driver tunnels traffic in accordance with a storage protocol, and wherein the third IO driver tunnels traffic in accordance with a network protocol.
Example 23 includes the method of Example 22, wherein the bandwidth allocation prioritizes the display protocol over the storage protocol.
Example 24 includes the method of Example 23, wherein the bandwidth allocation further prioritizes the storage protocol over the network protocol.
Example 25 includes an apparatus comprising means for performing the method of any one of Examples 19 to 24.
Thus, technology described herein may influence PCIe active-state power management (ASPM) for performance and power. The technology may also influence CPU governors with respect to clock (e.g., performance) settings to improve USB class performance. As a result, the technology compensates for the bandwidth needs of tunneled protocols to ensure seamless tunneling without starvation or failures. Thus, a better user experience is achieved with better performance.
Embodiments are applicable for use with all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chip (SoCs), SSD/NAND controller ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.
Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same.
As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the computing system within which the embodiment is to be implemented, i.e., such specifics should be well within the purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.
The term "coupled" may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms "first", "second", etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.
As used in this application and in the claims, a list of items joined by the term "one or more of" may mean any combination of the listed terms. For example, the phrase "one or more of A, B or C" may mean A; B; C; A and B; A and C; B and C; or A, B and C.
Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims. |
The present disclosure relates to operations performed based on integrated memory region description data. Various embodiments enable a memory subsystem to perform a read operation based on integrated memory region description data, which may be generated based on memory region description data (e.g., an SGL) provided by a host system for the read operation. |
1. A memory system comprising: a memory device; and a processing device, operatively coupled to the memory device, configured to perform operations including:
receiving, from a host system, a first request to read requested first data stored on the memory system, the first request specifying first memory region description data associated with the first request, the first memory region description data describing a first set of individual memory regions of the host system to which the requested first data is to be sent; and
in response to the first request:
generating, based on the first memory region description data, first integrated memory region description data by identifying a first set of contiguous memory regions, the first set of contiguous memory regions each including two or more sequentially adjacent memory regions of the first set of individual memory regions, the first integrated memory region description data including:
a single descriptor for each contiguous memory region in the first set of contiguous memory regions; and
a single descriptor for each individual memory region in the first set of individual memory regions that is excluded from the first set of contiguous memory regions; and
performing a first read operation on the memory device based on the first integrated memory region description data generated on the memory system.
2. The memory system of claim 1, wherein the generating of the first integrated memory region description data comprises:
generating the single descriptor for each contiguous memory region in the first set of contiguous memory regions.
3. The memory system of claim 1, wherein the first memory region description data is stored on the host system, the operations comprising:
accessing the first memory region description data from the host system.
4. The memory system of claim 1, wherein the first memory region description data comprises a linked list of memory region descriptors, each memory region descriptor comprising a memory address corresponding to a memory address space on a local memory of the host system.
5. The memory system of claim 1, wherein the first memory region description data comprises a scatter gather list (SGL) according to the Non-Volatile Memory Express (NVMe) protocol.
6. The memory system of claim 1, wherein each individual memory region in the first set of individual memory regions is defined by:
a memory address that corresponds to an individual memory address space on the local memory of the host system; and
a memory size of the individual memory address space.
7. The memory system of claim 1, wherein each memory region in the first set of individual memory regions includes an individual memory address space on a local memory of the host system.
8. The memory system of claim 1, wherein the performing of the first read operation on the memory device based on the first integrated memory region description data stored on the memory system comprises:
sending, to the memory device, a set of read commands for a set of logical block addresses at which the requested first data is stored on the memory system;
receiving selected data from the memory device in response to a selected read command of the set of read commands for a selected logical block address of the set of logical block addresses; and
in response to receiving the selected data:
determining, based on the selected logical block address and the first integrated memory region description data, a set of selected memory regions on the host system to receive the selected data; and
sending the selected data to the set of selected memory regions.
9. The memory system of claim 8, wherein the selected data is sent to a location on the local memory using a single transaction layer packet (TLP) including the selected data, the single transaction layer packet being in accordance with the Peripheral Component Interconnect Express (PCIe) standard.
10. The memory system of claim 9, wherein the single transaction layer packet includes additional data received from the memory device in response to another read command sent to the memory device, the other read command being associated with a second read operation performed on the memory device, the second read operation being performed in response to a second request received by the memory system from the host system.
11. The memory system of claim 1, wherein the operations comprise:
receiving a second request to read requested second data stored on the memory system, the second request specifying second memory region description data associated with the second request, the second memory region description data describing a second set of individual memory regions of the host system to which the requested second data is to be sent; and
in response to the second request:
generating second integrated memory region description data by identifying a second set of contiguous memory regions, the second set of contiguous memory regions each including two or more sequentially adjacent memory regions of the second set of individual memory regions, the second integrated memory region description data including:
a single descriptor for each contiguous memory region in the second set of contiguous memory regions; and
a single descriptor for each individual memory region in the second set of individual memory regions that is excluded from the second set of contiguous memory regions; and
performing a second read operation on the memory device based on the second integrated memory region description data stored on the memory system.
12. The memory system of claim 1, wherein the first memory region description data is generated by the host system for the first request.
13. The memory system of claim 1, wherein the identifying of the first set of contiguous memory regions comprises identifying individual contiguous memory regions in the first set of individual memory regions while accessing the first memory region description data from the host system.
14. The memory system of claim 1, comprising:
a buffer for storing a first set of memory region descriptors generated by the memory system.
15. The memory system of claim 14, comprising:
a memory controller including the processing device and the buffer.
16. A method comprising:
receiving, at a memory system from a host system, a request to read requested data stored on the memory system, the request specifying host memory region description data associated with the request, the host memory region description data describing a set of individual memory regions of the host system to which the requested data is to be sent; and
in response to the request:
accessing the host memory region description data from the host system;
generating integrated memory region description data by identifying a set of contiguous memory regions, the set of contiguous memory regions each including two or more sequentially adjacent memory regions of the set of individual memory regions, the integrated memory region description data including:
a single descriptor for each contiguous memory region in the set of contiguous memory regions; and
a single descriptor for each individual memory region in the set of individual memory regions that is excluded from the set of contiguous memory regions; and
storing the integrated memory region description data on a buffer of the memory system.
17. The method of claim 16, comprising, in response to the request:
performing a read operation on a memory device of the memory system based on the integrated memory region description data stored on the buffer.
18. The method of claim 16, wherein the host memory region description data comprises a linked list of memory region descriptors, each memory region descriptor including a memory address corresponding to a memory address space on a local memory of the host system.
19. The method of claim 16, wherein the host memory region description data comprises a scatter gather list (SGL) according to the Non-Volatile Memory Express (NVMe) protocol.
20. At least one non-transitory machine-readable storage medium comprising instructions that, when executed by a processing device of a memory system, cause the processing device to perform operations comprising:
receiving, from a host system, a request to read requested data stored on the memory system, the request specifying host memory region description data associated with the request, the host memory region description data describing a set of individual memory regions of the host system to which the requested data is to be sent; and
in response to the request:
generating, based on the host memory region description data, integrated memory region description data by identifying a set of contiguous memory regions, the set of contiguous memory regions each including two or more sequentially adjacent memory regions of the set of individual memory regions, the integrated memory region description data including:
a single descriptor for each contiguous memory region in the set of contiguous memory regions; and
a single descriptor for each individual memory region in the set of individual memory regions that is excluded from the set of contiguous memory regions; and
storing the integrated memory region description data on a buffer of the memory system. |
Operations based on integrated memory region description data
TECHNICAL FIELD
Embodiments of the present disclosure relate generally to memory devices, and more particularly, to memory operations, such as read operations, performed based on integrated memory region description data.
BACKGROUND
A memory subsystem may include one or more memory devices that store data. The memory devices may be, for example, non-volatile memory devices and volatile memory devices. In general, a host system can utilize a memory subsystem to store data at and retrieve data from a memory device.
Non-Volatile Memory Express (NVMe) is an example of a memory protocol that supports interaction between a memory subsystem and a host system. The current version of the NVMe protocol supports scatter gather lists (SGLs), which are mechanisms for transferring commands and data between the host system and the memory subsystem. An SGL can help the memory subsystem process a read or write request by describing a list of memory regions on the host system: in conjunction with a read request, the memory subsystem uses the listed memory regions to send data back to the host system (e.g., via a Peripheral Component Interconnect Express (PCIe) interface), and in conjunction with a write request, the memory subsystem uses the listed memory regions to obtain (e.g., read) data from the host system. Each of the memory regions on the host system can act as a buffer (e.g., an SGL buffer) that the memory subsystem uses to send data back to the host system. The SGL typically includes a linked list of connected buffers, and each buffer can be of a different size (e.g., as small as 32 bytes). In some cases, the host system builds a large number of buffers on the host system, which can distribute the buffers around the host system's local memory (e.g., based on space availability). This can result in the memory subsystem having to traverse (e.g., walk through) the SGL frequently and repeatedly in conjunction with a single SGL-based operation.
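By way of illustration only, an SGL may be modeled as segments of (address, length) data-block descriptors that are walked in order, as in the following sketch; the addresses and sizes are hypothetical, and the NVMe specification additionally defines segment and last-segment descriptors that chain the segments together:

    from dataclasses import dataclass

    @dataclass
    class SglDescriptor:
        address: int  # host memory address of the buffer
        length: int   # buffer size in bytes

    def walk_sgl(segments):
        # Yield every data-block descriptor, segment by segment, in list order;
        # each segment fetch would cost the memory subsystem a read from host
        # memory, which is why long scattered lists are expensive to process.
        for segment in segments:
            for descriptor in segment:
                yield descriptor

    sgl = [
        [SglDescriptor(0x1000, 512), SglDescriptor(0x9000, 128)],
        [SglDescriptor(0x4000, 4096), SglDescriptor(0x4000 + 4096, 4096)],
    ]
    total = sum(d.length for d in walk_sgl(sgl))
    print(sum(map(len, sgl)), "descriptors describing", total, "bytes")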
SUMMARY OF THE INVENTION
In one aspect, the present disclosure provides a memory system comprising: a memory device; and a processing device, operatively coupled to the memory device, configured to perform operations comprising: receiving, from a host system, a first request to read requested first data stored on the memory system, the first request specifying first memory region description data that describes a first set of individual memory regions of the host system to which the requested first data is to be sent; and, in response to the first request: generating, based on the first memory region description data, first integrated memory region description data by identifying a first set of contiguous memory regions, the first set of contiguous memory regions each including two or more sequentially adjacent memory regions of the first set of individual memory regions, the first integrated memory region description data including: a single descriptor for each contiguous memory region in the first set of contiguous memory regions; and a single descriptor for each individual memory region in the first set of individual memory regions that is excluded from the first set of contiguous memory regions; and performing a first read operation on the memory device based on the first integrated memory region description data.
In another aspect, the present disclosure provides a method comprising: receiving, at a memory system, a request from a host system to read requested data stored on the memory system, the request specifying host memory region description data that describes a set of individual memory regions of the host system to which the requested data is to be sent; and, in response to the request: accessing the host memory region description data from the host system; generating integrated memory region description data by identifying a set of contiguous memory regions, the set of contiguous memory regions each including two or more sequentially adjacent memory regions of the set of individual memory regions, the integrated memory region description data including: a single descriptor for each contiguous memory region in the set of contiguous memory regions; and a single descriptor for each individual memory region in the set of individual memory regions that is excluded from the set of contiguous memory regions; and storing the integrated memory region description data on a buffer of the memory system.
In yet another aspect, the present disclosure provides at least one non-transitory machine-readable storage medium comprising instructions that, when executed by a processing device of a memory system, cause the processing device to perform operations comprising: receiving, from a host system, a request to read requested data stored on the memory system, the request specifying host memory region description data that describes a set of individual memory regions of the host system to which the requested data is to be sent; and, in response to the request: generating, based on the host memory region description data, integrated memory region description data by identifying a set of contiguous memory regions, the set of contiguous memory regions each including two or more sequentially adjacent memory regions of the set of individual memory regions, the integrated memory region description data including: a single descriptor for each contiguous memory region in the set of contiguous memory regions; and a single descriptor for each individual memory region in the set of individual memory regions that is excluded from the set of contiguous memory regions; and storing the integrated memory region description data on a buffer of the memory system.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure will be more fully understood from the embodiments given below and from the accompanying drawings of various embodiments of the present disclosure. However, the drawings should not be construed as limiting the disclosure to the particular embodiments, but are for illustration and understanding only.
FIG. 1 is a block diagram illustrating an example computing system including a memory subsystem, according to some embodiments of the present disclosure.
FIGS. 2 and 3 are flowcharts of example methods for performing memory operations based on consolidated data describing one or more memory regions on a host system, according to some embodiments of the present disclosure.
FIGS. 4 and 5 are diagrams illustrating examples of generating integrated memory region description data, according to some embodiments of the present disclosure.
FIG. 6 provides an interaction diagram illustrating interactions between components of a computing environment, in the context of some embodiments, in which a method for performing a memory read operation based on consolidated data describing one or more memory regions on a host system, as described herein, is performed.
FIG. 7 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.
DETAILED DESCRIPTION
Aspects of the present disclosure relate to memory operations performed based on consolidated memory region data. Specifically, various embodiments enable a memory subsystem to perform memory read operations based on consolidated data describing one or more memory regions (e.g., data including a consolidated list of memory regions) on a host system, where the one or more memory regions are used by the memory subsystem as one or more buffers for sending read data to the host system. The memory subsystem may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of memory devices and memory modules are described below in conjunction with FIG. 1. In general, a host system may utilize a memory subsystem that includes one or more components, such as a memory device that stores data. The host system can send access requests to the memory subsystem to store data at and read data from the memory subsystem.
The host system may send access requests (e.g., write commands, read commands) to the memory subsystem to store data on memory devices at the memory subsystem, to read data from memory devices on the memory subsystem, or to write/read constructs (e.g., submission and completion queues) with respect to memory devices on the memory subsystem. The data to be read or written as specified by a host request is hereinafter referred to as "host data". The host request may contain logical address information (e.g., logical block address (LBA), namespace) for the host data, which is the location of the host system with which the host data is associated. The logical address information (e.g., LBA, namespace) may be part of the metadata of the host data.
Metadata may also include error handling data (e.g., error-correcting code (ECC) codewords, parity-check codes), data versions (e.g., used to distinguish the age of data written), a valid bitmap (indicating which LBAs or logical transfer units contain valid data), and so forth. As used herein, a memory device may be a non-volatile memory device.

Currently, a memory subsystem may facilitate data transfers by pushing data to, or pulling data from, one or more memory areas of the host system (e.g., on its local memory) using conventional techniques. As described herein, Non-Volatile Memory Express (NVMe) is an example of a memory protocol that supports interaction between a memory subsystem and a host system. The current version of the NVMe protocol supports scatter gather lists (SGLs), which are mechanisms for transferring commands and data between the host system and the memory subsystem. The host system may use an SGL to facilitate the memory subsystem's performance of a read request, where the SGL may describe a list of memory areas on the host system that the memory subsystem uses to send (e.g., transfer) the requested data back to the host system (e.g., via a Peripheral Component Interconnect Express (PCIe) interface). Each of the memory regions on the host system can act as a buffer (e.g., an SGL buffer) that the memory subsystem uses to send data back to the host system. An SGL typically comprises a linked list of connected buffers, and each buffer can be of a different size (e.g., as small as 1 byte according to the NVMe standard). In some cases, the host system establishes a large number of buffers on the host system, which can be distributed around the host system's local memory (e.g., based on space availability). This can result in the memory subsystem having to traverse (e.g., walk through) the SGL frequently and repeatedly in conjunction with a single SGL-based operation.

In general, each individual memory request (e.g., read or write request) generated by the host system may have a corresponding SGL stored on the host system, which the memory system may access and use in responding to the individual request. Additionally, each of the memory regions described by an SGL may be differently (e.g., variably) sized and located at different locations (e.g., logical or physical locations) on the local memory of the host system. A given SGL typically comprises a list of descriptors that each describe a different memory region on the host system, and the size of a given SGL may vary based on its associated request. For example, the host system may generate or establish a large number of small memory areas on the host system's local memory in conjunction with a given read request (e.g., an SGL-based read request) sent by the host system to the memory system. The size and/or number of SGLs handled by a conventional memory system at a given time may be such that it is not feasible for the memory system to store all parts of every SGL on the memory system simultaneously (e.g., without at least increasing the memory space used to store SGLs on the memory system). Thus, when performing memory operations in conjunction with a given SGL, conventional memory systems typically access (e.g., read and walk through) the relevant SGL from the host system multiple times (e.g., as buffer space permits).
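For illustration only, the kind of host-side buffer list at issue might be modeled as a linked list of address/length descriptors, as in the minimal C sketch below. The field layout is an assumption made for readability; it is not the packed descriptor format defined by the NVMe specification.

```c
#include <stdint.h>

/* Simplified sketch of one entry in an SGL-style buffer list (hypothetical
 * layout, not the actual NVMe SGL descriptor format). Each entry names one
 * host buffer by its starting address and length; buffers may be as small
 * as a single byte and may sit anywhere in host local memory. */
struct sgl_entry {
    uint64_t addr;            /* host memory address of the buffer */
    uint32_t len;             /* buffer length in bytes */
    struct sgl_entry *next;   /* next buffer in the linked list, or NULL */
};
```

A long chain of such small entries is what forces a controller with limited buffer space to re-fetch and re-walk the list; this is the inefficiency that the consolidation described below is meant to reduce.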
Repeated access by a conventional memory system to an associated SGL (and to other SGLs for other memory operations) can generate significant overhead for the conventional memory system when performing the associated read operations, which in turn can reduce the operational efficiency of the conventional memory system.

Aspects of the present disclosure address the above and other deficiencies by having a memory subsystem perform read or write operations based on integrated memory area description data, which may be generated based on memory area description data (e.g., an SGL) provided by a host system for the read or write operation. For example, when the host system sends a request (e.g., a command) to read data from the memory subsystem, the host system may: create or establish a set of memory regions (e.g., buffers) on the host system to facilitate sending the read data from the memory subsystem to the host system in response to the request; and generate host memory region description data (e.g., an SGL for the request, where the SGL is stored on the host system) describing the set of memory regions created/established on the host system. In response to the request, various embodiments access the host memory area description data (e.g., the SGL) from the host system. The host memory area description data may be stored on the host system separately from the request, may be buffered on the memory subsystem when accessed from the host system, and may be accessed by the memory subsystem via a data bus (e.g., a PCIe bus) between the host system and the memory subsystem. For various embodiments, the data size of the host memory region description data is larger than the buffer on the memory subsystem used to store such data in conjunction with read requests. Based on the host memory area description data, various embodiments identify one or more contiguous memory areas in the set of memory areas, where each contiguous memory area comprises two or more memory areas that are located sequentially adjacent on the host system (e.g., on its local memory). Various embodiments generate (and store on the memory subsystem) integrated memory area description data that includes a single memory area descriptor for each contiguous memory area identified in the list of memory areas described by the host memory area description data, and a single memory area descriptor for each memory area in the set of memory areas that is not part of one of the identified contiguous memory areas. In this manner, the integrated memory area description data may represent a simplified version of the host memory area description data provided by the host system.

Although various embodiments are described herein with respect to read requests from a host system or device, various embodiments support write requests in a similar manner. Typically, for write requests, the memory subsystem may retrieve data sequentially from the host system, so the memory subsystem usually does not have to traverse (e.g., walk through) the host memory region description data (e.g., the SGL) as frequently as for read requests.
Nonetheless, consolidating memory area description data as described herein may be beneficial to a memory subsystem in performing both read and write requests, because consolidating memory area description data may reduce the storage used on the memory subsystem and may help with data bus optimization (e.g., better use of Transaction Layer Packets (TLPs) sent over the PCIe bus).

The resulting integrated memory area description data may be smaller in data size than the host memory area description data provided by the host system, which may enable faster traversal of the integrated memory area description data than of the host memory area description data. In addition, where the host memory region description data cannot be stored completely on the memory subsystem, the smaller data size may permit the integrated memory region description data to be stored completely on the memory subsystem (e.g., on a designated buffer thereof), thereby avoiding the need for the memory subsystem to repeatedly access (e.g., read and traverse) the host memory region description data (e.g., via the PCIe data bus between the host system and the memory subsystem). Additionally, by identifying contiguous memory regions, embodiments may enable larger data transfers, which improve the efficiency of the data bus between the host system and the memory subsystem.

As used herein, a memory area may comprise memory space on a memory device (e.g., local memory) of a host system. A memory area may be used as buffer space on the host system for receiving requested read data from the memory subsystem, or as buffer space on the host system for providing data to be written to the memory subsystem. As used herein, memory region description data may describe (e.g., as a list) one or more memory regions on the host system in conjunction with a request from the host system to the memory subsystem. For example, the memory area description data may include an entry or descriptor for each memory area. Each entry/descriptor may define an individual memory area by a memory address corresponding to a memory address space on the host system's memory (e.g., local memory) and a memory size (e.g., a size value) of that memory address space. The sizes of the individual memory regions described by the memory region description data may vary. The memory region description data may be implemented as a linked list of memory region descriptors. Examples of memory region description data may include, without limitation, an SGL associated with a request sent by the host system to the memory subsystem.

Disclosed herein are some examples of performing memory operations based on integrated data describing one or more memory regions on a host system, as described herein.

FIG. 1 illustrates an example computing environment 100 that includes a memory subsystem 110, in accordance with some embodiments of the present disclosure. Memory subsystem 110 may include media such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination thereof. Memory subsystem 110 may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of storage devices include solid-state drives (SSDs), flash drives, universal serial bus (USB) flash drives, secure digital (SD) cards, embedded multimedia controller (eMMC) drives, universal flash storage (UFS) drives, and hard disk drives (HDDs).
Examples of memory modules include dual in-line memory modules (DIMMs), small outline DIMMs (SO-DIMMs), and various types of non-volatile dual in-line memory modules (NVDIMMs).

Computing system 100 may be a computing device such as a desktop computer, a laptop computer, a web server, a mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), an Internet of Things (IoT)-enabled device, an embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or any such computing device that includes memory and a processing device.

Computing system 100 may include a host system 120 coupled to one or more memory subsystems 110. In some embodiments, host system 120 is coupled to different types of memory subsystems 110. FIG. 1 shows an example of a host system 120 coupled to one memory subsystem 110. As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which may be an indirect communication connection or a direct communication connection (e.g., without intervening components), whether wired or wireless, including, for example, electrical, optical, magnetic, and similar connections.

Host system 120 may include a processor chipset and a software stack executed by the processor chipset. The processor chipset may include one or more cores, one or more caches, a memory controller (e.g., an NVDIMM controller), and a storage protocol controller (e.g., a Peripheral Component Interconnect Express (PCIe) controller, a Serial Advanced Technology Attachment (SATA) controller). Host system 120 uses memory subsystem 110, for example, to write data to memory subsystem 110 and to read data from memory subsystem 110.

Host system 120 may be coupled to memory subsystem 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a SATA interface, a Peripheral Component Interconnect Express (PCIe) interface, a universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a Small Computer System Interface (SCSI), a double data rate (DDR) memory bus, a dual in-line memory module (DIMM) interface (e.g., a DIMM socket that supports double data rate (DDR)), Open NAND Flash Interface (ONFI), double data rate (DDR), low power double data rate (LPDDR), or any other interface. The physical host interface may be used to transfer data between host system 120 and memory subsystem 110. When memory subsystem 110 is coupled with host system 120 through a PCIe interface, host system 120 may further utilize an NVM Express (NVMe) interface to access components (e.g., memory device 130). The physical host interface may provide an interface for passing control, address, data, and other signals between memory subsystem 110 and host system 120. FIG. 1 shows a memory subsystem 110 as an example. In general, host system 120 may access multiple memory subsystems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections.

Memory devices 130, 140 may comprise any combination of different types of non-volatile memory devices and/or volatile memory devices.
Volatile memory devices (e.g., memory device 140) may be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).

Some examples of non-volatile memory devices (e.g., memory device 130) include NAND-type flash memory and write-in-place memory, such as three-dimensional cross-point ("3D cross-point") memory devices, which are cross-point arrays of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change in bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND-type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND).

Each of memory devices 130 may include one or more arrays of memory cells. One type of memory cell, such as a single-level cell (SLC), can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple-level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs), can store multiple bits per cell. In some embodiments, each of memory devices 130 may include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, or any combination thereof. In some embodiments, a particular memory device may include an SLC portion of memory cells, as well as an MLC portion, a TLC portion, or a QLC portion. The memory cells of memory device 130 may be grouped as pages, which may refer to logical units of the memory device used to store data. With some types of memory (e.g., NAND), pages may be grouped to form blocks.

Although non-volatile memory components such as NAND-type flash memory (e.g., 2D NAND, 3D NAND) and 3D cross-point arrays of non-volatile memory cells are described, memory device 130 may be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric transistor random access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), spin transfer torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide-based RRAM (OxRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM).

Memory subsystem controller 115 (or controller 115, for simplicity) may communicate with memory device 130 to perform operations such as reading data, writing data, or erasing data at memory device 130, and other such operations. Memory subsystem controller 115 may include hardware such as one or more integrated circuits and/or discrete components, buffer memory, or a combination thereof. The hardware may include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. Memory subsystem controller 115 may be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), etc.), or another suitable processor.

Memory subsystem controller 115 may include a processor (processing device) 117 configured to execute instructions stored in local memory 119.
In the illustrated example, the local memory 119 of the memory subsystem controller 115 includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the host system 120.

In some embodiments, local memory 119 may include memory registers storing memory pointers, fetched data, and so forth. Local memory 119 may also include read-only memory (ROM) for storing microcode. Although the example memory subsystem 110 in FIG. 1 is shown as including the memory subsystem controller 115, in another embodiment of the present disclosure the memory subsystem 110 does not include the memory subsystem controller 115 and may instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem).

In general, memory subsystem controller 115 may receive commands or operations from host system 120 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access to memory device 130 and/or memory device 140. Memory subsystem controller 115 may be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translation between logical addresses (e.g., logical block addresses (LBAs), namespaces) and physical memory addresses (e.g., physical block addresses) associated with memory device 130. Memory subsystem controller 115 may further include host interface circuitry to communicate with host system 120 via the physical host interface. The host interface circuitry may convert commands received from host system 120 into command instructions to access memory device 130 and/or memory device 140, and convert responses associated with memory device 130 and/or memory device 140 into information for host system 120.

Memory subsystem 110 may also include additional circuitry or components that are not illustrated. In some embodiments, memory subsystem 110 may include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that may receive an address from memory subsystem controller 115 and decode the address to access memory device 130.

In some embodiments, memory device 130 includes a local media controller 135 that operates in conjunction with memory subsystem controller 115 to execute operations on one or more memory cells of memory device 130. An external controller (e.g., memory subsystem controller 115) may externally manage memory device 130 (e.g., perform media management operations on memory device 130). In some embodiments, memory device 130 is a managed memory device, which is a raw memory device combined with a local controller (e.g., local media controller 135) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device.

The memory subsystem controller 115 includes a memory area description data integrator 113 that implements or facilitates the various methodologies described herein with respect to the memory subsystem 110.
For example, the memory region description data integrator 113 may cause the memory subsystem controller 115 to generate integrated memory region description data on the memory subsystem 110 based on host memory region description data provided by the host system 120 in conjunction with a request to read data from, or write data to, the memory subsystem 110. Additionally, the memory area description data integrator 113 may cause the memory subsystem controller 115 to perform a read operation or a write operation in response to the request based on the generated integrated memory area description data.

For some embodiments, host system 120 sends a request (e.g., a command) to memory subsystem 110 to read requested data from a memory location on memory subsystem 110 that corresponds to a memory address (e.g., a logical block address), or to write data to a memory location on memory subsystem 110 that corresponds to a memory address. In conjunction with the request, the host system 120 may create or establish a set of memory regions (e.g., buffers) on the host system 120, such as on local memory of the host system 120, to facilitate sending (e.g., transferring) the requested data from memory subsystem 110 to host system 120 in response to the request. Additionally, in conjunction with the request, host system 120 may generate host memory area description data (e.g., an SGL for the request) that describes the set of memory areas created/established on host system 120. For various embodiments, the host memory region description data is generated and stored on memory (e.g., local memory) of host system 120. After receiving the request, memory subsystem 110 may access the host memory region description data directly from host system 120 (e.g., via a data bus, such as a PCIe bus) as needed by memory subsystem 110 to execute the request. In general, the data size of the host memory region description data may make it impractical for the memory subsystem 110 to store all of the host memory region description data locally on buffers of the memory subsystem 110 at one time, especially when the memory subsystem 110 is handling multiple requests that each have respective associated host memory region description data.

In response to the request, the memory region description data integrator 113 may cause the memory subsystem controller 115 to access the host memory region description data (e.g., the SGL) from the host system 120. When accessed by memory subsystem 110 from host system 120, the host memory region description data may be stored in host-side memory (e.g., local memory) of host system 120 separately from the request, and may be buffered on the memory subsystem 110 as it is received by the memory subsystem 110. The host memory region description data may be accessed by memory subsystem 110 via a data bus (e.g., a PCIe bus) between host system 120 and memory subsystem 110.
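For illustration, a read request of this kind might carry little more than the target block range and a pointer to the host-resident description data. The C sketch below is a hypothetical, simplified command layout; the field names are assumptions for this example and do not reflect the actual NVMe command format.

```c
#include <stdint.h>

/* Hypothetical, simplified read command as received from the host. The
 * sgl_ptr field holds the host memory address of the first descriptor of
 * the request's SGL, which the controller fetches over the data bus
 * (e.g., PCIe) on demand. */
struct read_request {
    uint64_t start_lba;   /* first logical block to read */
    uint32_t num_blocks;  /* number of logical blocks requested */
    uint64_t sgl_ptr;     /* host address of the request's buffer list */
};
```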
The memory area description data integrator 113 may cause the memory subsystem controller 115 to generate integrated memory area description data based on the host memory area description data accessed by the memory subsystem 110 from the host system 120. As described herein, the data size of the host memory region description data may be larger than the buffer on the memory subsystem 110 used to store such data on the memory subsystem 110. Specifically, the memory area description data integrator 113 may cause the memory subsystem controller 115 to identify one or more contiguous memory areas in the set of memory areas described by the host memory area description data, where each contiguous memory area comprises two or more memory areas, in the set of memory areas, that are located sequentially adjacent on the host system 120 (e.g., on its local memory). For some embodiments, the integrated memory area description data includes a single descriptor (e.g., memory area descriptor) for each contiguous memory area identified in the list of memory areas described by the host memory area description data, and includes a single memory descriptor for each memory area in the set of memory areas that is not part of one of the identified contiguous memory areas. In this manner, the integrated memory area description data may represent a simplified version of the host memory area description data provided by the host system.

According to various embodiments, the data size of the generated integrated memory region description data is smaller than that of the host memory region description data, and may be a data size that permits the integrated memory region description data to be stored entirely on buffers of the memory subsystem 110. By having the integrated memory area description data stored entirely on the memory subsystem 110, the memory subsystem 110 can access the integrated memory area description data locally at the times when the memory subsystem 110 sends portions of the requested data to the memory areas on the host system 120 described by the integrated memory area description data. This may enable the memory subsystem controller 115 to traverse the integrated memory area description data faster than it could the host memory area description data. With local access to the integrated memory region description data, the memory subsystem 110 can avoid the overhead of repeatedly accessing (and buffering) the host memory region description data from the host system 120 while the memory subsystem 110 sends portions of the requested data to the memory regions on the host system 120. Additionally, by identifying contiguous memory regions, consolidating the memory region description data may enable the memory subsystem controller 115 to perform larger data transfers, which improves the efficiency of the data bus between host system 120 and memory subsystem 110 (e.g., better use of Transaction Layer Packets (TLPs) sent over the PCIe bus).

For some embodiments, the memory region description data integrator 113 causes the memory subsystem controller 115 to perform a read operation or a write operation, in response to the request, based on the generated integrated memory region description data. Specifically, the memory region description data integrator 113 causes the memory subsystem controller 115 to send to one or more of the memory devices 130, 140 a set of read commands for a set of logical block addresses at which the requested data is stored on memory subsystem 110, or a set of write commands for a set of logical block addresses at which data (e.g., provided by host system 120) will be stored on memory subsystem 110. With regard to read requests, the memory subsystem controller 115 may receive selected data from one of the memory devices 130, 140 in response to one of the read commands (of the set of read commands) that is associated with a selected logical block address of the set of logical block addresses.
In response to receiving the selected data, the memory subsystem controller 115 may determine that one or more selected memory regions of host system 120, described in the integrated memory region description data, are to receive the selected data from memory subsystem 110. Regarding write requests, the memory subsystem controller 115 may determine that one or more selected memory regions of the host system 120, described in the integrated memory region description data, are to provide the selected data to be written to one or more of the memory devices 130, 140 via the set of write commands.

Determining the one or more selected memory regions of the host system 120 may include determining (e.g., calculating), based on the integrated memory region description data, one or more ranges of host memory addresses corresponding to the one or more selected memory regions. The integrated memory region description data may be stored, for example, on a buffer of memory subsystem 110 or on local memory, such as local memory 119 of memory subsystem controller 115. The integrated memory region description data may include a set of memory region descriptors that describe the starting address and memory size of each memory region, which the memory subsystem controller 115 may use to compute one or more host memory addresses based on the selected logical block address associated with the selected data received from the memory devices 130, 140. Specifically, the integrated memory region description data can be indexed by logical block address, which can facilitate determining host memory addresses. Eventually, the memory subsystem controller 115 may send the selected data to the location, on the host system's local memory, that corresponds to the host memory address; the location on local memory may correspond to a location within a memory area described by the integrated memory area description data. The memory subsystem controller 115 may receive selected data (each representing a portion of the requested data) in response to each read command sent to the one or more memory devices 130, 140, and, for each piece of selected data received, the memory subsystem controller 115 may determine the corresponding host memory address and send that selected data to the location (on the local memory of the host system 120) corresponding to the determined host memory address. The selected data may be sent to the location on local memory, for example, using a single transaction layer packet (TLP) comprising the selected data, where the single TLP may be in accordance with the Peripheral Component Interconnect Express (PCIe) standard.

FIGS. 2 and 3 are flowcharts of example methods for performing memory operations based on consolidated data describing one or more memory regions on a host system, according to some embodiments of the present disclosure. The methods 200, 300 may be performed by processing logic that may include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuits, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, at least one of the methods 200, 300 is performed by the memory subsystem controller 115 of FIG. 1 based on the memory region description data integrator 113. Additionally or alternatively, for some embodiments, at least one of the methods 200, 300 is performed, at least in part, by the local media controller 135 of the memory device 130 of FIG. 1.
Although the operations are shown in a particular order or sequence, the order of the processes may be modified unless otherwise specified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. Additionally, one or more processes may be omitted in various embodiments; thus, not all processes are required in every embodiment. Other process flows are possible.

Referring now to the method 200 of FIG. 2, at operation 202, a processing device (e.g., processor 117 of memory subsystem controller 115) receives, at a memory system (e.g., 110), a request from a host system (e.g., 120) to read requested data stored on the memory system, or a request to write data to the memory system. The request may be a particular type of read or write request that uses memory areas, such as an SGL read command/request. For some embodiments, the request from the host system specifies host memory area description data associated with the request. For example, the request may specify (e.g., by way of a pointer to a memory location) where on the host system the host memory region description data is stored or from where the host memory region description data can be accessed. For some embodiments, each request to the memory system (e.g., 110) can be associated with its own host memory region description data. According to various embodiments, the host memory region description data describes a set of individual memory regions, on local memory of the host system, to be used (e.g., as buffers) by the memory system for sending the requested data to the host system. In this way, each of the memory regions may be used as a host-side buffer, on the host system (e.g., 120), for receiving requested data from the memory system (e.g., 110). The host memory region description data may comprise a linked list of memory region descriptors, where each memory region descriptor comprises a memory address (e.g., a pointer to a memory address) corresponding to a memory address space on the local memory of the host system. As described herein, the host memory region description data may comprise an SGL associated with the request, where the SGL is in accordance with the Non-Volatile Memory Express (NVMe) protocol. The host memory region description data may be generated by the host system (e.g., 120) in conjunction with the request, and the generated host memory region description data may be stored on memory local to the host system (e.g., local memory of the host system 120), where the memory is accessible by the memory system (e.g., 110).

In response to the request, at operation 204, the processing device (e.g., 117) generates integrated memory region description data based on the host memory region description data provided by the host system (e.g., 120) to the memory system (e.g., 110). For some embodiments, the integrated memory area description data is generated by identifying a set of contiguous memory areas (in the set of individual memory areas), where each contiguous memory area comprises two or more sequentially adjacent memory areas in the set of individual memory areas. For some embodiments, the contiguous memory areas in the set of individual memory areas are identified as the host memory area description data is accessed from the host system. Each contiguous memory area may represent a largest sequence of adjacent memory areas: two or more sequentially adjacent memory regions may be regarded as a contiguous memory region, as illustrated by the sketch following this paragraph.
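A minimal sketch of this identification step in C is given below, assuming a flat address/length descriptor form: a region is folded into the running contiguous area whenever its start address equals the previous area's end address. This is an illustrative single-pass implementation under those assumptions, not the patent's required algorithm.

```c
#include <stddef.h>
#include <stdint.h>

/* Flat address/length descriptor for one host memory region. */
struct region_desc {
    uint64_t addr;   /* starting host memory address */
    uint64_t len;    /* region size in bytes */
};

/* Single-pass consolidation sketch: merge each run of sequentially
 * adjacent regions (prev.addr + prev.len == next.addr) into one
 * descriptor. `out` must hold up to `n` entries (worst case, no region
 * is adjacent to its predecessor). Returns the consolidated count. */
size_t consolidate_regions(const struct region_desc *in, size_t n,
                           struct region_desc *out)
{
    if (n == 0)
        return 0;

    size_t m = 0;
    out[0] = in[0];
    for (size_t i = 1; i < n; i++) {
        if (out[m].addr + out[m].len == in[i].addr)
            out[m].len += in[i].len;   /* extend the contiguous run */
        else
            out[++m] = in[i];          /* start a new descriptor */
    }
    return m + 1;
}
```

Because merging only ever shrinks the list, the output is never larger than the input, which is what permits the consolidated form to fit in an on-controller buffer when the original list would not.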
For various embodiments, the integrated memory region description data includes a single descriptor for each contiguous memory region in the set of contiguous memory regions, and a single descriptor for each individual memory region, in the set of individual memory regions, that is excluded from the set of contiguous memory regions (e.g., is not part of any of the contiguous memory regions). For some embodiments, a new descriptor is generated in the integrated memory area description data for each contiguous memory area in the identified set of contiguous memory areas, while each of the other individual descriptors in the integrated memory area description data (for the other memory areas that are not part of any contiguous memory area) can be copied from the host memory area description data. For some embodiments, the integrated memory region description data is indexed by logical block address, which can facilitate determining host memory addresses. Each memory area described in the integrated memory area description data may be defined by a starting memory address corresponding to an individual memory address space (on the local memory of the host system) and a memory size of that individual memory address space. In the integrated memory area description data, each starting memory address can effectively indicate a break between the previous memory area and the memory area beginning at that starting memory address. After the integrated memory area description data has been generated on the memory system (e.g., 110), the processing device can rely on the integrated memory area description data (in place of the host memory area description data) in processing the request from the host system, thereby avoiding the need for the processing device to repeatedly access the host memory area description data while portions of the requested data are sent back to the host system (e.g., 120).

At operation 206, the processing device (e.g., 117) performs a memory operation, such as a read operation or a write operation, on one or more memory devices (e.g., 130, 140) of the memory system (e.g., 110) based on the integrated memory region description data. For some embodiments, performing a read operation on the one or more memory devices (e.g., 130, 140) based on the integrated memory region description data comprises: sending (e.g., issuing) to the memory device a set of read commands (e.g., ten read commands) for a set of logical block addresses (e.g., ten LBAs) at which the requested data is stored on the memory system (e.g., 110); and receiving selected data (e.g., the data of a selected LBA) from the memory device in response to a selected read command (e.g., in the set of read commands) for a selected logical block address of the set of logical block addresses.

The set of read commands is generated based on the request, and the associated memory address, received from the host system (e.g., 120) through operation 202. The set of read commands can include one or more read commands to two or more memory devices of the memory system, and the set of read commands can be sent to the one or more memory devices over one or more memory channels. The one or more memory devices (e.g., 130, 140) may provide a response or result for each of the read commands sent; the responses/results may be received from the one or more memory devices randomly and out of order. In response to the set of read commands, the processing device may receive a corresponding set of responses from the one or more memory devices, where each response comprises a selected data portion of the requested data.
Each of the responses may indicate the logical block address to which the selected portion of the requested data corresponds. Each logical block corresponding to a logical block address may, for example, be 512 bytes (512B) or 528 bytes (528B) in size (e.g., with extended protection information or metadata).

In response to receiving the selected data from one of the memory devices (e.g., 130, 140), the processing device may determine the one or more selected memory regions of the host system 120, described in the integrated memory region description data, that are to receive the selected data from the memory subsystem 110. For instance, the processing device may determine that selected data (e.g., an LBA) received from one of the memory devices is to be sent and saved (e.g., transferred) across two or more memory regions, where each of those memory regions may receive a portion of the selected data (e.g., a beginning portion, a middle portion, or an ending portion). Accordingly, based on the integrated memory region description data, the memory system can determine (e.g., calculate) one or more ranges of host memory addresses, corresponding to the one or more selected memory regions, that are to receive the selected data. A host memory address may correspond to a memory location (on the host system's local memory) that belongs to one of the set of individual memory regions created/established on the host system and originally described by the host memory region description data. Eventually, the processing device may send the selected data to the one or more memory locations on local memory corresponding to the range(s) of host memory addresses. The selected data may be sent to a location on local memory using a single TLP comprising the selected data, where the single TLP is in accordance with the PCIe standard. To enable more efficient use of the data bus between the memory system and the host system, for some embodiments a single TLP comprises both the selected data and additional data received from the one or more memory devices in response to another read command sent to the one or more memory devices, where the other read command is associated with a second read operation performed on the one or more memory devices in response to a second request received by the memory system from the host system, and where the selected data and the additional data are destined for sequentially contiguous memory areas on the host system. In this manner, various embodiments can optimize PCIe transfers from the memory system to the host system and maximize the use of TLPs whenever possible.

For some embodiments, performing a write operation on the one or more memory devices (e.g., 130, 140) based on the integrated memory area description data comprises: determining one or more selected memory regions of the host system (e.g., 120), described in the integrated memory area description data, that are storing the data to be written to the one or more memory devices (e.g., 130, 140) of the memory system (e.g., 110); retrieving the data from the one or more selected memory regions; and writing the retrieved data to the one or more memory devices as requested by the host system (e.g., 120). For example, the processing device may write the retrieved data to the one or more memory devices by sending (e.g., issuing) to the one or more memory devices a set of write commands (e.g., two write commands) for a set of logical block addresses (e.g., two LBAs) corresponding to the physical locations at which portions of the retrieved data are to be written.
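To make the read-side address calculation described above concrete, the sketch below (reusing the region_desc type from the earlier consolidation sketch) translates one returned logical block into the host address range(s) it spans, then issues one transfer per range. dma_write_to_host() is a hypothetical stand-in for a platform's PCIe DMA primitive, and the 0-based block indexing is an assumption for this illustration.

```c
#include <stddef.h>
#include <stdint.h>

/* One destination span on the host: where to put part of a block. */
struct host_range {
    uint64_t addr;
    uint64_t len;
};

/* Map logical block `blk` (0-based, `blk_size` bytes) onto the chain of
 * consolidated descriptors. The block's byte offset into the chain is
 * blk * blk_size; if the block straddles a descriptor boundary, more
 * than one range is produced. Returns the number of ranges written. */
size_t block_to_host_ranges(const struct region_desc *desc, size_t m,
                            uint64_t blk, uint64_t blk_size,
                            struct host_range *ranges, size_t max_ranges)
{
    uint64_t offset = blk * blk_size;  /* offset into the buffer chain */
    uint64_t remaining = blk_size;
    size_t r = 0;

    for (size_t i = 0; i < m && remaining > 0 && r < max_ranges; i++) {
        if (offset >= desc[i].len) {   /* block starts past this region */
            offset -= desc[i].len;
            continue;
        }
        uint64_t avail = desc[i].len - offset;
        uint64_t take = remaining < avail ? remaining : avail;
        ranges[r].addr = desc[i].addr + offset;
        ranges[r].len = take;
        r++;
        remaining -= take;
        offset = 0;  /* later pieces start at the next region's base */
    }
    return r;
}

/* Hypothetical platform primitive: DMA `len` bytes to a host address. */
extern void dma_write_to_host(uint64_t host_addr, const void *src,
                              uint64_t len);

/* Deliver one out-of-order block result: one transfer per range, so a
 * merged (contiguous) destination becomes a single large transfer. */
void deliver_block(const struct region_desc *desc, size_t m,
                   uint64_t blk, uint64_t blk_size, const uint8_t *data)
{
    struct host_range ranges[8];
    size_t r = block_to_host_ranges(desc, m, blk, blk_size, ranges, 8);

    for (size_t i = 0, consumed = 0; i < r; i++) {
        dma_write_to_host(ranges[i].addr, data + consumed, ranges[i].len);
        consumed += ranges[i].len;
    }
}
```

With unconsolidated description data, a block destined for many tiny adjacent buffers would produce one small transfer per buffer; after consolidation the same block usually maps to one or two ranges, which is the bus-efficiency gain the disclosure describes.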
Referring now to the method 300 of FIG. 3, at operation 302, a processing device (e.g., processor 117 of memory subsystem controller 115) receives, at a memory system (e.g., 110), a request from a host system (e.g., 120) to read requested data stored on the memory system, or a request to write data to the memory system. For some embodiments, operation 302 is similar to operation 202 of the method 200 described with respect to FIG. 2.

In response to receiving the request at operation 302, at operation 304 the processing device (e.g., 117) accesses the host memory area description data from the host system (e.g., 120). For various embodiments, the processing device (e.g., 117) accesses the host memory area description data from the host system (e.g., 120) via a data bus (e.g., a PCIe bus), which enables the processing device to pull, read, and traverse the host memory area description data as needed. According to some embodiments, the processing device (e.g., 117) accesses the host memory area description data only once to facilitate operation 306.

At operation 306, the processing device (e.g., 117) generates integrated memory area description data based on the host memory area description data provided by the host system (e.g., 120) to the memory system (e.g., 110). For some embodiments, operation 306 is similar to operation 204 of the method 200 described with respect to FIG. 2. Once the integrated memory area description data has been generated by operation 306, the processing device may, at operation 308, store the integrated memory area description data on a buffer (e.g., 119) of the memory system (e.g., 110).

At operation 310, the processing device (e.g., 117) performs a memory operation, such as a read operation or a write operation, on one or more memory devices (e.g., 130, 140) of the memory system (e.g., 110) based on the integrated memory region description data. For some embodiments, operation 310 is similar to operation 206 of the method 200 described with respect to FIG. 2.

FIGS. 4 and 5 are diagrams illustrating examples of generating integrated memory region description data, according to some embodiments of the present disclosure. Although FIGS. 4 and 5 are described with respect to performing read operations, various embodiments support performing write operations in a similar manner using integrated memory area description data. FIG. 4 illustrates example host memory area description data 402 generated by a host system in conjunction with a request to read data from a memory system, where the request to read data can involve multiple LBAs of the memory system (eight LBAs, 1 through 8), each logical block having a size of 512 bytes (512B). Host memory area description data 402 may represent an SGL generated by the host system, where each memory area (e.g., SGL buffer) has a separate entry or descriptor in the SGL. As shown, host memory area description data 402 describes eight memory areas, each individual memory area being described by its starting address (A) and its memory size (LEN). Specifically, host memory area description data 402 describes the following list of individual memory areas: a first memory area (A=0, LEN=32B); a second memory area (A=32, LEN=128B); a third memory area (A=100, LEN=96B); a fourth memory area (A=196, LEN=256B); a fifth memory area (A=500, LEN=512B); a sixth memory area (A=1012, LEN=2048B); a seventh memory area (A=2036, LEN=1000B); and an eighth memory area (A=3036, LEN=24B).

Host memory region description data 404 illustrates host memory region description data 402 after one or more contiguous memory regions have been identified, according to some embodiments.
Specifically, the following contiguous memory areas are identified in the list of memory areas described by the host memory area description data 402: a first contiguous memory area, which includes the first memory area (A=0, LEN=32B) and the second memory area (A=32, LEN=128B); a second contiguous memory area, which includes the third memory area (A=100, LEN=96B) and the fourth memory area (A=196, LEN=256B); and a third contiguous memory area, which includes the fifth memory area (A=500, LEN=512B), the sixth memory area (A=1012, LEN=2048B), and the seventh memory area (A=2036, LEN=1000B).

Based on the host memory area description data 404 and the one or more identified contiguous memory areas, the memory system can generate integrated memory area description data 406. Specifically, the memory system can generate the integrated memory area description data 406 such that a single entry/descriptor exists for each of the first, second, and third contiguous memory areas, and a single entry/descriptor exists for the remaining memory area (the eighth memory area) that is not part of any of the identified contiguous memory areas. As shown, the integrated memory area description data 406 includes a single entry/descriptor for each of the following: the first contiguous memory area, with a starting address of 0 (A=0) and a memory size of 160B (LEN=160B); the second contiguous memory area, with a starting address of 100 (A=100) and a memory size of 352B (LEN=352B); the third contiguous memory area, with a starting address of 500 (A=500) and a memory size of 4072B (LEN=4072B); and the eighth memory area, whose starting address remains 3036 (A=3036) and whose memory size remains 24B (LEN=24B), as described by the host memory area description data 402.

As described herein, based on the integrated memory region description data 406, the memory system may determine (e.g., calculate) the one or more selected memory regions on the host system to which an LBA (received/returned from one of the memory devices in response to the request to read data) is to be sent (e.g., transferred). For example, in response to the request, eight read commands for eight different LBAs may be sent to one or more memory devices of the memory system to retrieve the read data requested by the host system, where each logical block may have a size of 512B. As described herein, the responses/results may be received from the one or more memory devices out of order (e.g., randomly). With the one or more memory devices returning the eighth LBA (LBA 8), the memory system can determine the one or more selected memory regions of the host system that are to receive LBA 8. As described herein, the integrated memory area description data 406 may be indexed by the ordinal of the LBA returned by the one or more memory devices. For example, based on the integrated memory area description data 406, the memory system may determine the starting host memory address for LBA 8 as follows: 500+(8-1)×512B=4084, where 8 represents the eighth LBA (LBA 8).
Additionally, based on the integrated memory region description data 406, the memory system can determine that the host memory address 4084 belongs to the third contiguous memory region, which has a starting memory address of 500 and an ending memory address of 4572, and that LBA 8 will be sent and saved (e.g., transferred) across the third contiguous memory region (starting at host memory address 4084 and ending at host memory address 4572) and the eighth memory region (starting at host memory address 4572 and ending at host memory address 4596). As a result, the memory system may send (e.g., transfer or move) 488B of LBA 8 from the memory system to memory space on the host system, starting at the memory location corresponding to host memory address 4084 and ending at the memory location corresponding to host memory address 4572. Additionally, the memory system will send (e.g., transfer or move) the remaining 24B of LBA 8 from the memory system to memory space on the host system, starting at the memory location corresponding to host memory address 4572 and ending at the memory location corresponding to host memory address 4596.

Continuing with this example, with the one or more memory devices returning the first LBA (LBA 1), the memory system may determine that the host system's first contiguous memory area (starting at host memory address 0 and ending at host memory address 512) will receive LBA 1. Specifically, based on the integrated memory area description data 406, the memory system can determine the starting host memory address for LBA 1 as follows: 0+(1-1)×512B=0, where 1 represents the first LBA (LBA 1). Accordingly, the memory system may send (e.g., transfer or move) the 512B of LBA 1 from the memory system to memory space on the host system, starting at the memory location corresponding to host memory address 0 and ending at the memory location corresponding to host memory address 512. As shown, the transfer of LBA 1 does not cross into another memory area (i.e., LBA 1 falls within the first contiguous memory area).

FIG. 5 illustrates example host memory area description data 502 generated by a host system in conjunction with a request to read data from a memory system, where the request to read data can involve multiple LBAs of the memory system (three LBAs, 1 through 3), each logical block having a size of 528 bytes (528B). Like host memory area description data 402 of FIG. 4, host memory area description data 502 may represent an SGL generated by the host system, where each memory area (e.g., SGL buffer) has a separate entry or descriptor in the SGL.
As shown, host memory area description data 502 describes the following list of individual memory areas: a first memory area (A=5000, LEN=512B); a second memory area (A=5512, LEN=16B); a third memory area (A=6000, LEN=512B); a fourth memory area (A=6512, LEN=16B); a fifth memory area (A=7000, LEN=512B); and a sixth memory area (A=7512, LEN=16B).

Host memory area description data 504 (illustrating host memory area description data 502 after one or more contiguous memory areas have been identified, in accordance with some embodiments) identifies the following contiguous memory areas: a first contiguous memory area, which includes the first memory area (A=5000, LEN=512B) and the second memory area (A=5512, LEN=16B); a second contiguous memory area, which includes the third memory area (A=6000, LEN=512B) and the fourth memory area (A=6512, LEN=16B); and a third contiguous memory area, which includes the fifth memory area (A=7000, LEN=512B) and the sixth memory area (A=7512, LEN=16B).

Based on the host memory area description data 504 and the one or more identified contiguous memory areas, the memory system can generate integrated memory area description data 506. Specifically, the memory system can generate the integrated memory area description data 506 such that a single entry/descriptor exists for each of the first, second, and third contiguous memory areas. As shown, the integrated memory region description data 506 includes a single entry/descriptor for each of the following: the first contiguous memory region, with a starting address of 5000 (A=5000) and a memory size of 528B (LEN=528B); the second contiguous memory region, with a starting address of 6000 (A=6000) and a memory size of 528B (LEN=528B); and the third contiguous memory region, with a starting address of 7000 (A=7000) and a memory size of 528B (LEN=528B).
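As a usage sketch, feeding the six FIG. 5 descriptors through the consolidate_regions() helper sketched earlier reproduces the three 528B descriptors of the integrated description data 506. The 16B length of each second buffer in a pair is consistent with each 528B logical block being split as 512B of data plus 16B of metadata, which is the assumption this example makes.

```c
#include <stdio.h>
#include <stddef.h>

/* Reuses struct region_desc and consolidate_regions() from the earlier
 * consolidation sketch. */
int main(void)
{
    /* The FIG. 5 host buffer list: three 512B data buffers, each followed
     * by a sequentially adjacent 16B metadata buffer. */
    struct region_desc host_sgl[6] = {
        {5000, 512}, {5512, 16},
        {6000, 512}, {6512, 16},
        {7000, 512}, {7512, 16},
    };
    struct region_desc merged[6];
    size_t m = consolidate_regions(host_sgl, 6, merged);

    for (size_t i = 0; i < m; i++)
        printf("A=%llu LEN=%lluB\n",
               (unsigned long long)merged[i].addr,
               (unsigned long long)merged[i].len);
    /* Prints: A=5000 LEN=528B, A=6000 LEN=528B, A=7000 LEN=528B,
     * i.e., one descriptor per 528-byte logical block. */
    return 0;
}
```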
FIG. 6 provides an interaction diagram illustrating interactions between components of a computing environment, in the context of some embodiments in which a method is performed for executing a memory read operation based on integrated data describing one or more memory regions on a host system as described herein. Although FIG. 6 illustrates performing a memory read according to various embodiments, some embodiments support performing memory writes, based on consolidated data describing one or more memory regions on a host system, in a similar manner. The operations of the method can be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method is performed by a host system (e.g., 120), a memory subsystem controller (e.g., 115), a memory device (e.g., 130 or 140), or some combination thereof. Although the operations are shown in a particular order or sequence, the order of the processes can be modified unless otherwise specified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments; thus, not all processes are required in every embodiment. In the context of the example shown in FIG. 6, the host system can include the host system 120, the memory subsystem controller can include the memory subsystem controller 115, and the memory device can include the memory device 140.

As shown in FIG. 6, at operation 602, the host system 120 sends a request to read requested data from the memory subsystem 110, where the request specifies host memory region description data associated with the request, and where the host memory region description data describes a set of individual memory regions, on local memory of the host system 120, to be used for sending (e.g., transferring) the requested data to the host system 120. At operation 610, the memory subsystem controller 115 receives the request from the host system and, in response, at operation 612 the memory subsystem controller 115 accesses the host memory region description data from the host system 120 (e.g., via the PCIe bus). At operation 604, the host system 120 provides the memory subsystem 110 with access to the host memory region description data stored on the host system 120.

Based on the accessed host memory region description data, at operation 614 the memory subsystem controller 115 generates integrated memory region description data by identifying a set of contiguous memory regions, each of which includes two or more sequentially adjacent memory regions in the set of individual memory regions (described by the accessed host memory region description data). At operation 616, the memory subsystem controller 115 performs a read operation on the memory device 140 based on the integrated memory region description data generated by the memory subsystem controller 115. At operation 630, the memory device 140 facilitates the memory subsystem controller 115 in performing the read operation, where the memory device 140 can execute one or more read commands issued to the memory device 140 by the memory subsystem controller 115 in conjunction with the read operation. Additionally, at operation 606, the host system 120 provides access to the one or more memory regions on the host system 120 to facilitate receipt of the requested data from the memory subsystem 110 (through the memory subsystem controller 115) at the host system 120.

FIG. 7 illustrates an example machine in the form of a computer system 700 within which a set of instructions can be executed for causing the machine to perform any one or more of the methodologies discussed herein. In some embodiments, computer system 700 can correspond to a host system (e.g., host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., memory subsystem 110 of FIG. 1), or can be used to perform the operations described herein. In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a network appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
Furthermore, although a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 700 includes a processing device 702, a main memory 704 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 706 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 718, which communicate with each other via a bus 730.

Processing device 702 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device 702 can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. Processing device 702 can also be one or more special-purpose processing devices such as an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. Processing device 702 is configured to execute instructions 726 for performing the operations and steps discussed herein. Computer system 700 can further include a network interface device 708 to communicate over a network 720.

Data storage device 718 can include a machine-readable storage medium 724 (also referred to as a computer-readable medium) on which is stored one or more sets of instructions 726, or software, embodying any one or more of the methodologies or functions described herein. Instructions 726 can also reside, completely or at least partially, within main memory 704 and/or within processing device 702 during execution thereof by computer system 700, main memory 704 and processing device 702 thereby also constituting machine-readable storage media. Machine-readable storage medium 724, data storage device 718, and/or main memory 704 can correspond to memory subsystem 110 of FIG. 1.

In one embodiment, instructions 726 include instructions to implement functionality corresponding to performing a memory read operation based on consolidated data describing one or more memory regions on a host system as described herein (e.g., the memory region description data integrator 113 of FIG. 1). While machine-readable storage medium 724 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art.
An algorithm is here, and generally, conceived to be a self-consistent sequence of operations that produce a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure may relate to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the registers and memory of the computer system into other data similarly represented as physical quantities within the computer system memory or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk (including floppy disks, optical disks, CD-ROMs, and magneto-optical disks), read-only memory (ROM), random access memory (RAM), EPROM, EEPROM, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other device. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the methods. The structure for a variety of these systems will appear as set forth in the description below. Additionally, embodiments of the present disclosure are not described with reference to any particular programming language. It should be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon machine-readable instructions that may be used to program a computer system (or other electronic devices) to perform processes according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, machine-readable (e.g., computer-readable) media include machine-readable (e.g., computer-readable) storage media such as read-only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, etc. In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments of the present disclosure.
It will be apparent that various modifications may be made to the present disclosure without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. |
Methods and apparatus enable a mobile device to suggest available applications or features in which a user may be interested based upon the user's past and current mobile device usage patterns. The mobile device may monitor the specific applications/features used and their frequency of use. The mobile device may determine other available applications/features that the user may be interested in using based upon the frequency of use of applications or features and information which indicates a likelihood of user interest in one application or feature based upon usage of another application or feature. Applications or features determined to be potentially of interest to the user may be presented to the user in the form of suggestions to be added to the user interface menu so that the user can elect to accept or reject the suggestion to modify the menu. |
CLAIMS What is claimed is: 1. A method of customizing a user interface menu to display beneficial applications available on a mobile device, comprising: generating an activity record including a measure of use for each application used by a user of the mobile device; mapping the activity record to applications available on the mobile device to generate a priority order of applications; determining whether any high priority applications are not included in a current user interface menu displaying a number of applications less than the applications available on the mobile device; and modifying the user interface menu to display a high priority application if it is determined that the high priority application is not included in the current user interface menu. 2. The method of claim 1, wherein the application may be a software application or a hardware feature. 3. The method of claim 1, wherein the measure of use for each application comprises a frequency of use. 4. The method of claim 3, wherein mapping the activity record to applications available on the mobile device to generate a priority order of applications comprises: multiplying the frequency of use for each application by affinity weighting factors to generate affinity values for each other application available on the mobile device; repeating the multiplying step for all applications available on the mobile device; summing the affinity values for each application to generate a total affinity value; and prioritizing the applications available on the mobile device based upon their total affinity values, wherein high priority applications are the applications listed in the user interface menu which have a greatest total affinity value. 5. The method of claim 4, further comprising: displaying a suggestion to add a high priority application to the current user interface menu if it is determined that any of the top priority applications are not included in the current user interface menu. 6. The method of claim 5, further comprising: receiving a user response to the displayed suggestion to add a high priority application to the user interface menu, wherein modifying the user interface menu to display the high priority application is accomplished in response to receiving a user acceptance of the displayed suggestion. 7. The method of claim 5, further comprising: receiving a user response to the displayed suggestion to add a high priority application to the user interface menu; and modifying the affinity weighting factors if the user response indicates a rejection of the displayed suggestion. 8. The method of claim 5, further comprising: receiving a user's response to the displayed suggestions of top priority applications to include in the modified user interface menu, wherein the user's response indicates a modification to the displayed suggestions; modifying the user interface menu in accordance with the modification to the displayed suggestions; and modifying the affinity weighting factor to reflect the modification to the displayed suggestion. 9.
A method of customizing a user interface menu in a second mobile device to match a customized user interface menu implemented in a first mobile device, comprising: receiving a first mobile device activity record from the first mobile device; modifying a stored activity record to reflect identified applications and measures of use values contained in the received first mobile device activity record; mapping the modified activity record to applications available on the second mobile device to generate a priority order of applications; determining whether any high priority applications are not included in a current user interface menu displaying a number of applications less than the applications available on the second mobile device; and modifying the user interface menu to display a high priority application if it is determined that the high priority application is not included in the current user interface menu. 10. The method of claim 9, wherein modifying the stored activity record comprises: replacing the stored activity record with the received first mobile device activity record. 11. The method of claim 9, wherein modifying the stored activity record comprises: augmenting the stored activity record with values from the received first mobile device activity record. 12. The method of claim 9, further comprising: receiving a first affinity table containing modified affinity weighting factors from the first mobile device; and modifying a second affinity table stored in the second mobile device to reflect the modified affinity weighting factors contained in the received first affinity table. 13. The method of claim 12, wherein modifying the stored affinity table comprises: replacing the second affinity table with the received first affinity table. |
TITLE Method and Apparatus for Customizing a User Interface Menu FIELD OF THE INVENTION [0001] The present invention relates generally to providing a method and apparatus for customizing a user interface menu based upon the past and current user activity. BACKGROUND [0002] Wireless communication technologies have seen explosive growth over the past few years. This growth has been fueled by wireless services providing freedom of movement to the mobile public, and cutting the tether to hardwired communication systems. As mobile communication devices have become ubiquitous, an overwhelming number of applications and features have been developed for use on mobile devices. Many of these applications and/or features are pre-loaded on the mobile device and offered to the user through the user interface menu. The user interface menu, being a concise display of available applications/features, is often truncated to display only a sampling of the available applications/features. However, as the number and complexity of the applications and/or features increase, their setup, capabilities and general usage can become quite baffling to many users. Given the limited space available in mobile device displays, there is simply not enough room to display all of the available applications/features to users in the user interface menu. Many times a series of complex navigation maneuvers must be performed to access a particular application/feature. As a result, many potentially desirable applications/features are either undiscovered or rarely used. Consequently, a user may not receive the full experience offered by the mobile device. SUMMARY The various embodiments provide methods and a mobile device which monitors a user's activity on the mobile device, maps the user's activity record to other applications/features available to the user on the mobile device, and provides suggestions to the user regarding the other applications/features (e.g., undiscovered or rarely used applications/features) already available to the user on the mobile device based upon the user's activity record. Upon receiving the suggestions, the user may elect to modify the user interface menu to highlight or include the suggested applications/features for future use. In an embodiment, the identity of an application/feature and the number of times a user activates the application/feature may be recorded. The frequency of use of a particular application/feature may be calculated. An affinity table may be used to estimate a user's potential interest in an application/feature based upon the user's record of use of another application/feature. The affinity table may be used to calculate relative values of the user's potential interest in applications/features not currently listed or displayed on the user interface menu. The applications/features having the highest relative values may be displayed to the user as a suggestion to add to or modify the user interface menu. In another embodiment, the affinity table may be modified to reflect the user's actions subsequent to the displayed suggestion. In another embodiment, a user's previously created activity record may be transferred from a first mobile device to a second mobile device in order to present suggestions to modify the second mobile device's user interface menu so that it may be similar to that of the first mobile device.
In yet another embodiment, the modified affinity table may also be transferred from the first mobile device to the second mobile device in order to present suggestions to modify the second mobile device's user interface menu so that it may be similar to that of the first mobile device. BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary embodiments of the invention, and together with the general description given above and the detailed description given below, serve to explain the features of the invention. [0006] FIG. 1 is a process flow diagram of an embodiment user interface menu customization method that may be implemented by a mobile device. [0007] FIG. 2 is a process flow diagram of an embodiment method to generate a user activity record on a mobile device. [0008] FIG. 3 is an example of a user activity record. [0009] FIG. 4 is a process flow diagram of an embodiment method to map a user activity record to other available applications/features on a mobile device. [0010] FIG. 5 is an example of an affinity table that may be used by an embodiment method to map a user activity record to other available applications/features on a mobile device. [0011] FIG. 6A illustrates an example of affinity values that may be calculated by an embodiment method to map a user activity record to other available applications/features on a mobile device. [0012] FIG. 6B illustrates an example of affinity values that may be calculated by an embodiment method to map a user activity record to other available applications/features on a mobile device. [0013] FIG. 6C illustrates an example of affinity values that may be calculated by an embodiment method to map a user activity record to other available applications/features on a mobile device. [0014] FIG. 7A is a process flow diagram illustrating an embodiment method to customize a user interface menu based upon a user activity record. [0015] FIG. 7B is a process flow diagram illustrating an alternative embodiment method to customize a user interface menu based upon a user activity record. [0016] FIG. 8A is a process flow diagram illustrating an embodiment method to transfer a user activity record to a mobile device. [0017] FIG. 8B is a process flow diagram illustrating an embodiment method to transfer a user activity record and dynamically modified affinity table to a mobile device. [0018] FIG. 8C is a process flow diagram illustrating an embodiment method to transfer affinity values to a mobile device. [0019] FIG. 9 is a component block diagram of a mobile device suitable for use in an embodiment. [0020] FIG. 10 is a component block diagram of a server device suitable for use in an embodiment. DETAILED DESCRIPTION [0021] The various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the invention or the claims. [0022] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other implementations.
[0023] As used herein, the term "mobile device" refers to any one or all of cellular telephones, personal data assistants (PDAs), palm-top computers, lap-top computers, wireless electronic mail receivers (e.g., the Blackberry® and Treo® devices), multimedia Internet enabled cellular telephones (e.g., the Blackberry Storm®), Global Positioning System (GPS) receivers, wireless gaming controllers, and similar personal electronic devices which include a programmable processor and memory. [0024] As the popularity of mobile communication devices has increased, so has the complexity of these devices. Today's mobile devices do not simply serve to support cellular telephone communications. Rather, today's mobile devices have become sophisticated computing platforms from which a dizzying array of applications may be executed and features utilized. Most current mobile devices come pre-programmed with a wide variety of applications while providing the user with the capability to customize the applications executable on the mobile device by loading additional "after-market" applications onto the mobile device. Moreover, most current mobile devices contain a number of features that allow users to perform a broad scope of actions. For example, many mobile devices come equipped with cameras that allow the user to take still photos as well as moving video. In addition, many mobile devices come equipped with the ability to record sound. Still further, integrated GPS receivers are becoming commonplace in mobile devices which allow the mobile device to determine its position in the world with precision and accuracy. [0025] In order to present the various application and feature options to the user of the mobile device, most mobile devices implement a user interface menu. The user interface menu may be a convenient display of the applications and features available on the mobile device. Typically, by selecting an icon or listing of one of the applications or features, the selected application or feature is launched. However, to enable their portability, many mobile devices employ displays of limited size. Consequently, in many instances the user interface menu (or at least a top page of a user interface menu) lists a limited number of the available functions/applications. In most cases, only the most frequently used applications are displayed. [0026] Despite the wide variety of applications/features available on mobile devices, many users fall into a routine of using a relatively small set of familiar available applications and/or features on their mobile devices. In some cases, the applications/features utilized by the user are the only features that the user has become familiar with through repeated use. As users upgrade and replace older mobile devices with newer ones loaded with advanced applications and/or features, the user may not take advantage of these new applications and/or features because the user is unaware of or unfamiliar with them. As a result, many users fail to receive the full benefits offered by their mobile devices.
In many instances, users need simply to be made aware or reminded of the availability of unused applications/features to enable them to take fuller advantage of their mobile devices. By identifying or highlighting unused applications/features in a main user interface menu, users may be prompted to utilize a new or rarely used application/feature. [0027] Additionally, while users frequently upgrade and replace older mobile devices with newer ones, there is a limited amount of data that is transferred to new mobile devices. Typically, users transfer their contact lists. In many instances, this transfer of contact lists occurs via a physical transfer of a smart card (e.g., SIM card). Some wireless communication network providers will transfer some of a user's personal data from an old mobile device to the new mobile device for a nominal fee. However, the capabilities of such services are often limited. On the whole, a user's settings and preferences are often non-transferable between mobile devices. Consequently, any user interface menu modifications or customizations made by the user on an older device are often not transferred to the new mobile device. [0028] The various embodiments draw a user's attention to additional applications/features readily available on the user's mobile device that the user may be interested in using based upon the user's past and/or current activity on the mobile device. By monitoring a user's activity on the mobile device (i.e., the applications and features utilized), the mobile device may be able to suggest other applications/features that the user may not be aware of but might be interested in using based upon the user's established activity record. The suggested applications/features may be identified or highlighted in the user interface main menu to make it easier for the user to discover and use the software application or hardware feature. Existing user interface menu customization methods typically place shortcuts on the menu to the most frequently used applications/features. While listing commonly used applications/features on the main menu may assist users in finding and launching the applications/features they use most often, such customization methods do not allow the user to discover other applications/features that are likely to be of benefit to the user. Various embodiments described herein monitor a user's application and feature usage habits and use such information to customize a user interface menu with targeted applications/features that may best suit and possibly enhance the user's utilization of the mobile device and consequently improve their user experience. In various embodiments, the mobile device may be loaded with a matrix of links, affinity tables, fuzzy logic, or learning algorithms which enable a device processor to identify and prioritize relationships between the various applications/features available on the mobile device. Using observed user behavior patterns and the embodiment methods, the mobile device may suggest to users previously unused or unknown applications/features available on their mobile devices which the user may be interested in using, without the need for any external resource to generate the suggestion. Other embodiments allow the user to transfer records of user behavior patterns from one mobile device to another.
In this manner, the second mobile device may immediately make suggestions for customizing its user interface menu so that it may appear similar to that of the first mobile device without the need to generate a new user activity record. In an alternative embodiment, modifications to the matrix of links, affinity tables, fuzzy logic, or learning algorithms may be transferred from one mobile device to another in order to present suggestions to modify the second mobile device's user interface menu so that it may appear similar to that of the first mobile device. FIG. 1 is a process flow diagram illustrating an overview embodiment method for customizing a user interface menu to identify/highlight applications and/or features available to the user on a mobile device based upon the user's activity on the mobile device. A mobile device may monitor the user's activity on a mobile device, step 101. As an example, a mobile device may identify each request to launch or utilize a software application and/or a hardware feature on the mobile device. The mobile device may store a record of these launches or utilization requests in a user activity record table 301. An example of a user activity record table 301 is shown in FIG. 3 and described in more detail below. In addition to storing the identity of various applications and features used by the user in the activity record table 301, the mobile device 10 may store a measure of use of applications, such as the number of times the user launches or utilizes an identified application and/or feature, or a frequency of use or launch of applications or features (e.g., number of uses divided by a unit of time). [0032] Once the user's activity has been observed and recorded, the mobile device 10 may map the user's activity record to other available applications and/or features available on the mobile device 10, step 105. Any of a number of algorithms and methods may be used to map a user's past and current activity to other available applications and/or features available on the mobile device. For example, a matrix of links may be established which links other available applications/features to the applications/features that are already being utilized by the user. As another example, an affinity table may be stored in memory which includes weighting factors that can be used to estimate the likelihood that the user would be interested in using other available applications/features based on the applications/features that are already being used. Other methods such as the use of fuzzy logic or learning algorithms may be implemented to map the user's activity record to other available applications and/or features available on the mobile device 10 in step 105. [0033] Once the user's activity record has been mapped to other applications/features available on the mobile device, the mobile device 10 may display the other applications/features that are most likely to be of benefit to the user based upon the user's activity record, step 110. This display of other applications/features available on the mobile device 10 may be made as a suggestion to modify the user interface menu to include other recommended applications/features. If the user elects to accept the suggested modification, the user interface menu may be modified to reflect the accepted suggestions, step 115. [0034] FIG. 2 is a process flow diagram of an embodiment method for generating a user activity record on a mobile device.
The processor of a mobile device may oversee a number of processes and operations being executed on the mobile device in a main loop routine, step 201. The main loop may comprise a number of subroutines, loops and processes that are executed at various times in the normal operation of the mobile device. When a mobile device user manipulates the user interface to launch an application or utilize a hardware feature on the mobile device, an interrupt may be received by the processor to initiate a subroutine to record the user activity, step 202. Upon receipt, in addition to launching the application or initiating operation of the feature, the processor may record the identity of the software application or hardware feature being launched by the user, step 203. In addition, the processor of the mobile device may increment a counter for the launched application or feature which indicates the number of times the application or feature has been launched or used, step 204. Such a usage counter may tally all uses, or alternatively all uses within a unit of time, such as a day, week, month or year, to provide a frequency of use metric. [0035] FIG. 3 provides an example of a data table containing a sample of a user activity record that may be stored in the memory of the mobile device. As shown in FIG. 3, an activity record data table 301 may identify a number of possible software applications and hardware features available to the user on the mobile device along with a measure of the use made of such applications or features. In the example activity record data table 301 the hardware features include a camera, a voice recorder, and a video recorder, while the available software applications include email, SMS, MMS, and a social networking website application. This list of hardware features and software applications is provided merely as an illustration and is not intended to be comprehensive. In addition, an activity record data table 301 will include a measure of the use of applications and features by the user. The measure of use may be in the form of counter values indicating the number of times the identified application/feature has been launched/used. Alternatively, the measure of use may be in the form of a frequency value (as shown) indicating the number of times the identified application/feature has been launched/used within a unit of time (e.g., day, week, month, year, etc.). In some embodiments, the activity record may periodically decrement the counters in order to reflect the frequency of use (i.e., the total number of times the identified application/feature has been launched/used during the refresh period). In other embodiments, the activity record may contain a value indicating the total number of times the identified application/feature has been launched/used for the life of the mobile device. In such embodiments, the processor may calculate the frequency by dividing the counter value by the amount of time that has elapsed since the mobile device was activated. Once the user's activity record has been generated, the mobile device 10 may map the user's activity record to other available applications/features on the mobile device in order to provide the user with suggestions of other available applications/features on the mobile device that the user is likely to be interested in based upon the activity record.
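The counter and frequency logic of steps 202-204 and the FIG. 3 table lends itself to a very small data structure. The following is a minimal illustrative sketch in Python; the names (ActivityRecord, record_launch, frequency) and the per-week unit of time are assumptions made for illustration, not elements of the disclosed embodiments.

import time
from collections import Counter

class ActivityRecord:
    """Hypothetical sketch of the activity record data table 301 of FIG. 3."""

    def __init__(self):
        self.counts = Counter()   # launches/uses per application or feature
        self.start = time.time()  # activation time, used to compute frequency

    def record_launch(self, app_id):
        # Steps 203-204: record the identity and increment its usage counter.
        self.counts[app_id] += 1

    def frequency(self, app_id, unit_seconds=7 * 24 * 3600):
        # Uses per unit of time (here, per week), as described for FIG. 3.
        elapsed = max(time.time() - self.start, unit_seconds)
        return self.counts[app_id] / (elapsed / unit_seconds)

Frequency values like these feed the mapping step that generates the suggestions described above.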
If accepted, the suggested applications/features may be added to the user interface menu so that these previously unknown or little used applications/features are presented on a priority menu for easy access by a user. FIG. 4 is a process flow diagram illustrating an embodiment method for mapping the user's activity record to other available applications/features on the mobile device. The process flow illustrated in FIG. 4 utilizes an affinity table which may be stored in local memory of the mobile device or accessed in a remote server memory. An example of an affinity table is shown in FIG. 5 and described in more detail below. The process shown in FIG. 4 is described with reference to FIGs. 5 and 6A. [0037] Referring to FIG. 4, while in the main loop routine, step 201, the mobile device processor may periodically check to see if the time to perform a mapping function has elapsed, determination 205. The periodicity of the mapping function may be arbitrarily set by the user, service provider, mobile device manufacturer, or some third party. The length of the time between mapping functions may be chosen to most accurately reflect the user's current behavioral usage patterns. For example, the length of time between mapping functions may be selected to be long enough to collect a minimum threshold of "data points" in the activity record. Additionally, the length of time may be selected such that only the user's more recent activity is reflected in the activity record. If an extraordinarily long period of time is selected, the activity record may not accurately reflect the user's current behavior patterns. For example, if a user launched a particular application repeatedly in the first week following acquisition of the mobile device 10 but never thereafter, and the length of time between mapping function operations is selected to encompass the first week of activity, the stored activity record may not accurately reflect the user's current behavior patterns. If the period of time between mapping functions has not yet elapsed (i.e., determination 205 = No), the processor returns to the main loop 201. However, if the period of time between mapping functions has elapsed (determination 205 = Yes), the processor may multiply the frequency of use for each application/feature recorded in the activity record data table by an affinity weighting factor listed in an affinity table stored in memory to determine an affinity value for each available application/feature, step 206. [0039] An example of an affinity table 302 that may be used with an embodiment is shown in FIG. 5. For each application/feature listed in the first column of the affinity table 302, an affinity weighting factor is provided indicating a relative affinity that a user may have to use the particular application/feature given the user's usage of the application/feature listed in the first column. [0040] The use of the affinity table illustrated in FIG. 5 may be explained by way of an example, in which the hardware features and software applications include: a camera, a voice recorder, a video recorder, email, SMS, MMS, and a social networking website application (e.g., Facebook®, Twitter®, MySpace®). As illustrated in FIG. 5, an affinity table for such a mobile device may include a data record (row) for each of the used applications and features, with each data record including a data field (column) for each of the possible applications and features.
In this example, the use of a particular application or feature is given full weight, so applications/features have an affinity of "1" for themselves. Thus, if the user uses the camera hardware feature, this usage indicates a likelihood that the user will want to use the camera feature again in the future. Thus, the affinity weighting factor for a camera usage stored in the camera data record (i.e., first row and first column) is "1." Other affinity weighting factors may be set by software or service providers as estimates of how likely a user utilizing a first application/feature will be interested in using a second application/feature. In the example shown in FIG. 5, the affinity weighting factor assigned to the voice recorder feature (third column) given uses of the camera feature is 0.2. The affinity weighting factor may be in arbitrary units, but assigning a weighting factor of 0.2 to the voice recorder may be viewed as similar to estimating that one who uses the camera feature has a 20% likelihood of being interested in using the voice recorder feature. Further, FIG. 5 shows an affinity weighting factor for the video recorder feature (fourth column) given use of the camera function of 0.6. This affinity weighting factor implies that one who uses the camera feature is three times more likely to use the video recorder feature than the voice recorder feature. The example in FIG. 5 further includes an affinity weighting factor of 0.3 for the email application (fifth column) given use of the camera function, an affinity weighting factor of 0.2 for the SMS application (sixth column) given use of the camera function, an affinity weighting factor of 0.6 for the MMS application (seventh column) given use of the camera function, and an affinity weighting factor of 0.7 for the social networking application (eighth column) given use of the camera function. It should be noted that the affinity weighting factors shown in FIG. 5 are completely arbitrary and for illustrative purposes only. The affinity weighting factors may be provided by any of a number of service providers and based upon a number of information sources. For example, polling services may conduct surveys which ask mobile device users several questions about their mobile device usage, such as "do you use the camera function on your mobile device?" By obtaining information about the different types of applications and features used by the general population, patterns may be detected regarding common application/feature usages. For example, surveys may reveal that users who use camera features are likely to also use a video recorder feature and an MMS software application. Similarly, surveys may reveal that users who send text messages are also likely to use a social networking application. Based on the survey data, affinity weighting factors may be generated and stored in a table such as affinity table 302 reflecting such cross-application usage patterns. The affinity weighting factor thus can provide a relative measure of the likelihood that a user will enjoy, benefit from, or use application/feature X given that the user uses application/feature Y. [0042] It is noted that the illustrative affinity table 302 is symmetrical in that the affinity weighting factor of application/function X given application/function Y is the same as the affinity weighting factor of application/function Y given application/function X. However, this is for illustrative purposes only and affinity tables may also be asymmetrical.
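For illustration only, an affinity table of this kind might be held in memory as a nested mapping. The sketch below (Python; the name AFFINITY and the key spellings are hypothetical) encodes the illustrative camera row of FIG. 5, and the representation naturally accommodates the asymmetrical tables discussed next, since the entry for X given Y need not equal the entry for Y given X.

# Nested-mapping sketch of affinity table 302 (FIG. 5): for each used
# application/feature (outer key), a weighting factor for every other
# application/feature (inner keys). Values echo the arbitrary camera row.
AFFINITY = {
    "camera": {"camera": 1.0, "voice_recorder": 0.2, "video_recorder": 0.6,
               "email": 0.3, "sms": 0.2, "mms": 0.6, "social": 0.7},
    # ... one row per application/feature available on the mobile device
}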
An asymmetrical affinity table would be appropriate if survey data indicates that the likelihood of a user using application/feature X given use of application/feature Y is not the same as the likelihood of a user using application/feature Y given use of application/feature X. For example, survey results may reveal that users who send MMS messages may not be particularly likely to use a camera feature, but those users who do use the camera feature are quite likely to also use an MMS software application. [0043] In addition, as shown in the example affinity table 302 in FIG. 5, there may be no affinity among some application/feature pairings. For example, in row 2, column 5, the affinity weighting factor reflecting whether a user who uses the email application is likely to use the camera feature is zero (0). This example affinity weighting factor reflects a conclusion that email usage has little, if any, relationship to camera usage. Referring back to FIG. 4, by multiplying the frequency that a user uses a particular application/function by the affinity weighting factors listed in an affinity table 302 for that particular application/function, affinity values can be obtained for each application/function addressed in the table, step 206. Such affinity values will reflect a relative likelihood that the user may be interested in the other applications/functions based upon the use of the particular application/function. Using all of the usage frequency values listed in activity record data table 301 and the affinity weighting factors listed in an affinity table 302, total affinity values may be calculated. For example, using the data illustrated in FIGs. 3 and 5, an affinity value for the camera feature given use of the camera is 20 (i.e., 20 X 1 = 20), an affinity value for the voice recorder feature given use of the camera is 4 (i.e., 20 X 0.2 = 4), and an affinity value for the video recorder feature given use of the camera is 12 (i.e., 20 X 0.6 = 12). Other affinity values based on the data illustrated in FIGs. 3 and 5 are listed in the affinity value table 303 shown in FIG. 6A. [0045] Referring back to FIG. 4, when the affinity values for every application/feature available on the mobile device are calculated based upon the activity record table and the affinity table, the affinity values for each application/feature may be summed to provide a total or overall affinity value, step 207. An example of such calculations is shown in the affinity value table 303 in FIG. 6A. By adding each affinity value for a particular application/feature (i.e., adding each affinity value in a column of affinity value summing table 303), an overall relative affinity value can be obtained that reflects the relative likelihood that the user will be interested in a particular application or feature given the user's overall mobile device usages. Summing also takes into account multi-variant information, such as the likelihood of use of social networking applications given use of the camera feature and SMS and MMS applications. In other words, the summed affinity values provide an estimate of the relative likelihood that a user may use or benefit from a particular application or feature based upon the user's on-going usage of all other applications/features. FIG. 6A further illustrates that by completing all of the affinity value calculations, an overall ranking of applications/features can be obtained as shown in the bottom row of table 303.
For example, using the example activity record table and affinity table shown in FIGs. 3 and 5, the camera feature exhibits a total affinity value of 43, the voice recorder feature exhibits a total affinity value of 8.5, the video recorder feature exhibits a total affinity value of 29, the email application feature exhibits a total affinity value of 41.1, the SMS application feature exhibits a total affinity value of 52, the MMS application feature exhibits a total affinity value of 51.5 and the social networking application feature exhibits a total affinity value of 60. FIG. 6A illustrates an important benefit provided by the various embodiments, which is identifying unused applications or features that the user might find interesting. Referring back to FIG. 3, it can be seen that the user in this example never sends MMS messages or launches a social networking application. However, the total affinity values for these applications shown in FIG. 6A indicate that, based upon the user's overall activities, which include use of the camera, video recorder, email and SMS, the user is likely to benefit from MMS and social networking applications. [0048] It should be noted that the data table of FIG. 6A is for illustrative purposes only and does not reflect a particular user, implementation or required aspect. The data table 303 illustrates calculated affinity values and the total affinity values, but these values need not be stored in memory. [0049] Referring back to FIG. 4, the mobile device processor may reorder the priority of each application/feature available on the mobile device according to the summed affinity values, step 208. Thus, the application/feature having the highest summed affinity value may be deemed to be the application/feature most likely to be of benefit to the user based upon the user's activity record. Using the illustrative summed affinity values appearing in the affinity value table 303 shown in FIG. 6A, the mobile device processor may determine that the user will most likely use (benefit from use of) the social networking application followed by the SMS application, the MMS application, the camera function, the email application, the video recorder function, and finally, if there are sufficient lines in the user interface menu, the voice recorder function. As noted above, the social networking application and MMS application are included in the user interface menu based solely on the affinity calculations, and not based upon prior usage. Thus, these applications are listed high in the user menu order since they appear to be of interest to the user based on the user's overall mobile device usage pattern, even though the user has not utilized those applications in the past. Some or all of the reordered priority list may be displayed to the user on the mobile device as suggestions to add the highest priority applications/features to the user interface menu (see FIG. 1, step 110). Once the priority of each of the available applications/features has been generated, in embodiments where the activity record counter values are reset/refreshed, the mobile device processor may reset or refresh the activity record counter values, step 209. The mobile device 10 may then return to the main loop 201 until the next period for performing the mapping function has elapsed. As noted earlier, the total affinity value table 303 need not be generated for storage in memory.
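Steps 206-208 can be made concrete with a short sketch. The Python function below (a hypothetical name; the argument shapes are assumptions) multiplies each recorded usage frequency by the corresponding affinity weighting factors, accumulates a total affinity value per application/feature, and returns the applications/features sorted into priority order; with the example data of FIGs. 3 and 5 it would reproduce the totals of FIG. 6A. Because it accumulates the totals directly, it also illustrates the single-pass alternative noted next, in which no intermediate table of affinity values is stored.

def prioritize(frequencies, affinity):
    # frequencies: {app_id: frequency of use} from the activity record (FIG. 3)
    # affinity:    nested mapping like affinity table 302 (FIG. 5)
    totals = {}
    for used_app, freq in frequencies.items():                        # step 206
        for target, weight in affinity.get(used_app, {}).items():
            totals[target] = totals.get(target, 0.0) + freq * weight  # step 207
    # Step 208: reorder so the highest total affinity value comes first.
    return sorted(totals, key=totals.get, reverse=True)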
In alternative embodiments, the mobile device processor may calculate the affinity values and sum them in a single algorithm calculation without storing a table of affinity values. FIGs. 6B and 6C illustrate alternative data tables which may be generated, containing only the total affinity values in a data table 304 shown in FIG. 6B, or values indicative of the priority of each application/feature in a data table 305 shown in FIG. 6C based upon the overall affinity values. [0051] While the embodiment method illustrated in FIG. 4 utilizes an affinity table to determine a relative ranking of applications and features based upon a user activity record, other methods for mapping the user activity record to applications and features may be utilized. For example, instead of multiplying activity values times an affinity weighting factor, application links may be determined using a matrix of links. The total number of links for each application or feature may then be used to develop a relative ranking of applications and features. As another example, instead of reflecting survey results in affinity weighting factors, inferences and patterns obtained from user surveys may be translated into fuzzy logic factors that can be applied to the user activity record in order to develop the relative ranking of applications and features. As a further example, inferences and patterns obtained from user surveys may be translated into learning algorithms which may be applied to the user activity record in order to develop the relative ranking of applications and features. [0052] FIG. 7A is a process flow diagram illustrating an embodiment method for customizing a user interface menu based upon a user activity record. The embodiment method illustrated in FIG. 7A makes use of the activity record generation process described above with reference to FIG. 2 and the mapping process described above with reference to FIG. 4. Thus, for example, the mobile device processor may perform the process flow shown in FIG. 2 to generate a user activity record data table 301 similar to that illustrated in FIG. 3. Concurrently with or subsequent to generating the activity record data table 301, the mobile device 10 processor may perform the example method illustrated in FIG. 7A. As discussed in more detail above with reference to FIG. 4, the mobile device 10 processor may determine whether it is time to perform a mapping process, determination 205. If the time to perform the mapping process has elapsed (i.e., determination 205 = Yes), the mobile device processor may perform steps 206-209 in a manner similar to that described above with reference to FIGs. 4, 5, and 6A-6C. Once the priority of each available application/feature has been determined based upon the summed affinity values, step 209, the mobile device processor may determine whether the top priority applications/features are already displayed in the user interface menu, determination 210. [0053] As noted above, due to display size constraints and other factors, the user interface menu may not display or list all of the software applications or hardware features (at least on a top page of the user interface menu). In many instances, links or shortcuts to only a fraction of available applications/features will be included in the user interface menu.
Accordingly, a mobile device processor may compare the top priority applications/features determined during the re-ordering step with those currently displayed in the user interface menu to determine if suggestions should be made to the user to modify the user interface menu. [0054] If all of the top priority applications are already displayed in the user interface menu (i.e., determination 210 = Yes), there is no need to present suggestions to the user to modify the user interface menu, so the mobile device processor may return to the main loop, step 201. However, if any, all or some of the top priority applications/features are not currently displayed in the user interface menu (i.e., determination 210 = No), the mobile device processor may determine which of the top priority applications/features is (are) not currently displayed in the user interface menu, step 211. The mobile device 10 processor may then display one or more suggestions or requests to add each of the top priority application(s)/feature(s) that are not currently included in the user interface menu to the user interface menu, step 212. The mobile device 10 processor may wait for and receive the user's response to the suggestion to modify the user interface menu, step 213. If the received user response indicates acceptance of at least one of the suggestions to modify the user interface menu (i.e., determination 214 = Yes), the mobile device processor may modify the user interface menu to display each of the top priority applications/features accepted by the user, and remove one or more non-top priority applications/features from the user interface menu, step 220. Once the user interface menu has been modified, the mobile device processor may return to the main loop routine 201 until it is again time to perform the mapping process. [0055] If, however, the user's response to the suggestion indicates a rejection of the displayed suggestion to modify the user interface menu (i.e., determination 214 = No), the mobile device processor may alter the affinity weighting factors stored in the affinity table 302, step 225. The user's rejection of a suggestion to modify the user interface menu may indicate that the user is not interested in and therefore will not use the suggested application or feature. Since the initial affinity weighting factors may be arbitrary or based upon a general population survey that is not applicable to the particular user, a user's response to a suggestion to modify the user interface menu may provide further insight into the user's usage patterns which can be used to refine the affinity weighting factors for the specific user. For example, upon implementation of the embodiment method illustrated in FIG. 7A, the mobile device processor may present a suggestion to the user to replace the email application with the social networking application on the user interface menu. In response, the user may reject this suggestion. This rejection may be an indication that, unlike many other users, this particular user has no interest in social networking. Accordingly, for this particular user the affinity weighting factors for the social networking application may be decreased or set to zero. Modifying the affinity weighting factors in this manner reduces the likelihood that the mobile device processor will issue a suggestion to add the social networking application in the future.
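A minimal sketch of determinations 210-214 and steps 211, 220 and 225 might look as follows. Everything here is an assumption made for illustration: ask_user stands in for whatever suggestion dialog the device provides, and the 0.5 damping factor is one arbitrary way to decrease a rejected suggestion's weighting factors, which the method permits to be decreased or zeroed.

def rank(app, priorities):
    # Lower index means higher priority; apps absent from the list rank last.
    return priorities.index(app) if app in priorities else len(priorities)

def update_menu(priorities, menu, affinity, ask_user):
    top = priorities[:len(menu)]                       # determination 210
    for app in [a for a in top if a not in menu]:      # step 211
        if ask_user("Add " + app + " to your menu?"):  # steps 212-214
            # Step 220: replace the displayed item with the lowest priority.
            lowest = max(menu, key=lambda m: rank(m, priorities))
            menu[menu.index(lowest)] = app
        else:
            # Step 225: dampen weighting factors for the rejected suggestion.
            for row in affinity.values():
                if app in row:
                    row[app] *= 0.5
    return menu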
In alternative embodiments, the affinity weighting factors may be manually adjusted by the user, such as in the form of a preference setting, to increase or decrease their value, thereby increasing or decreasing the likelihood that the application/feature will be suggested by the mobile device processor to be added to the user interface menu in the future. Once the modified affinity weighting factors are stored in the affinity table 302, the mobile device processor may return to the main loop routine, step 201, until it is again time to perform the mapping process. FIG. 7B is a process flow diagram illustrating an alternative embodiment method for customizing a user interface menu based upon a user activity record. The embodiment method illustrated in FIG. 7B is similar to that shown in FIG. 7A and thus, the descriptions of steps 201-214 above apply as well to FIG. 7B. In contrast to the embodiment method illustrated in FIG. 7A, if the user accepts the suggestion to modify the user interface menu (i.e., determination 214 = Yes), the mobile device processor may prompt the user to indicate whether the proposed modifications to the user interface menu are satisfactory, decision 215. For example, the mobile device processor may reorder all available applications/features on the mobile device in accordance with step 209 and the summed affinity values. If the top priority applications/features include at least one application/feature not previously displayed in the user interface menu, at least one application/feature previously displayed in the user interface menu may be replaced by a higher priority application/feature. Logically, the application/feature with the lowest summed affinity value may be chosen for replacement by the mobile device processor. However, the user may wish to accept the mobile device processor's suggestion to add an application/feature to the user interface menu, but disagree with the suggested application/feature to be removed from the user interface menu. In other instances, the mobile device processor may suggest replacing multiple applications/features with multiple applications/features. It may be the case that the user only agrees to accept one or some of the suggested application/feature replacements. Still further, the user may wish to modify the user interface menu, but rather than replace applications/features with those suggested by the mobile device 10 processor, the user may prefer to add and delete a different set of applications/features. If the suggested user interface menu (e.g., replacing a lowest summed affinity value application/feature with a suggested application/feature) is accepted by the user (i.e., decision 215 = Yes), the mobile device processor may modify the user interface menu similar to step 220 described above. However, if the proposed user interface menu is rejected by the user (i.e., decision 215 = No), the mobile device 10 processor may receive the user's input of applications or features to add to and delete from the user interface menu, step 216, and make the appropriate changes to the user interface menu, step 217. In addition, the user's selection of applications/features to add to and delete from the user interface menu may be used to modify the affinity values in the affinity table 302, step 225. As discussed above, if the user rejects a suggestion to add an application or feature, this user input may indicate that the affinity values are not properly calibrated to the particular user's desires.
To correct the situation, the mobile device processor may modify the affinity weighting factors in the affinity table by decreasing or zeroing the affinity weighting factors for the suggested but not accepted application/feature. In contrast, if the user elects to add an application/feature that was not suggested by the mobile device processor, this addition may indicate a need to increase the affinity weighting factors for the elected application/feature. In this manner, the implementation of the mapping functions shown in FIGs. 4, 7A, and 7B may result in a higher likelihood that the elected application/feature will be suggested by the mobile device processor in the future. [0057] Users often replace their mobile devices. When doing so, the user may wish to transfer the user's customized user interface menu from the old mobile device to the new one. If the user interface menu appearing on a user's previous mobile device was generated and modified based on the user's behavioral patterns according to the various embodiments, to obtain the same customized user interface, the user may be required to replicate his behavior on the new mobile device. This may be time consuming if the user had used the earlier mobile device for a long time. By transferring the user's activity record from the mobile device to the replacement device, the user may be able to ensure a seamless transition of the user interface menu from one mobile device to another. Alternatively, if a second user appreciates the customized user interface menu appearing on the first user's mobile device and desires to obtain the same customized user interface menu on the second user's mobile device, the second user may receive and load the first user's activity record into the second user's mobile device. In this manner, a subsequent implementation of an embodiment method, such as the embodiment illustrated in FIG. 7A, will result in a similar customized user interface menu. [0058] The activity record may be transferred from one device to another in any of a variety of ways. For example, the activity record may be saved to a portable SIM card that is physically transferred from one device to another. Alternatively, the activity record data may be downloaded via a wired or wireless connection using any of a variety of communication protocols and technologies (e.g., 802.11, Bluetooth, ZigBee, near field communications, infrared transmission, etc.). In addition, a user's activity record appearing on a first mobile device may be uploaded to a remote storage site, such as the memory of a remote server, and accessed by a second mobile device via direct link or network link (e.g., private intranet or public Internet) to download the activity record into the second mobile device. [0059] FIG. 8A illustrates an embodiment method by which a second (replacement) mobile device can obtain a customized user interface menu similar to the user interface menu implemented on a first mobile device. While performing a main loop routine, step 201, a second mobile device processor may receive an activity record table from an external source, step 230. The external source may be the first mobile device or an external storage device storing the activity record table, such as a remote server memory.
The second mobile device processor may then replace or augment the activity record data table currently stored in the second mobile device memory, step 231. For example, the activity record in the second mobile device may have additional applications/features not previously identified in the first mobile device activity record. Thus, the activity record table received from the first mobile device may be used to augment the currently stored activity record by adding counter values or usage frequency information for each identified application/feature to the values for the corresponding application/feature stored in the second mobile device activity record. In addition, if any application/feature is identified in the first mobile device activity record that does not appear in the second device activity record, the identified application/feature may be added to the second mobile device activity record. This may occur when a particular application/feature has been modified or replaced by a newer application/feature or may simply be obsolete (e.g., infrared data transfer). Once the activity record table in the second mobile device has been replaced or augmented, the second mobile device processor may optionally implement a process to modify the user interface menu (such as by performing step 206 in FIGs. 4, 7A, and 7B), step 240. Alternatively, the second mobile device processor may simply return to the main loop 201 and wait until the time to perform the mapping function process elapses. [0060] FIG. 8B illustrates an alternative embodiment method for customizing a second mobile device's user interface menu to be similar to that of a first mobile device. Similar to the process flow illustrated in FIG. 8A, the second mobile device processor may perform a main loop routine 201 and receive an activity record table from a first mobile device (either directly or indirectly), step 230. The second mobile device processor may replace or augment the activity record data table stored in the second mobile device, step 231. Since the customized user interface menu implemented in the first mobile device may be the result of both the user's activity record and modifications made to the affinity weighting factors stored in an affinity table 302, the second mobile device processor may also receive modified affinity weighting factors from the first mobile device, step 232. Once modified affinity weighting factors are received, the second mobile device processor may replace or augment the affinity table 302 stored in memory of the second mobile device with the affinity weighting factors received from the first mobile device, step 233. Once the activity record table in the second mobile device has been replaced or augmented, the second mobile device processor may optionally immediately implement a process to modify the user interface menu (such as by implementing step 206 described above with reference to FIGs. 4, 7A, and 7B), step 240. Alternatively, the second mobile device processor may simply return to the main loop, step 201, and wait until it is again time to perform the mapping function process. [0061] FIG. 8C illustrates yet another embodiment method for customizing a second mobile device's user interface menu to be similar to that of a first mobile device. In this alternative embodiment, rather than migrating the activity record and/or affinity table, the summed affinity value table may be migrated from the first mobile device to the second mobile device.
Thus, while the second mobile device processor is performing the main loop routine, step 201, it may receive the summed affinity value table (e.g., tables 303, 304 or 305) from a first mobile device, step 250. By migrating the summed affinity value table (303, 304, 305) to the second mobile device, the second mobile device processor may simply compare the top priority applications/features identified in the received summed affinity value table with the applications/features currently displayed in the second mobile device user interface menu, step 208 in FIG. 4. The second mobile device processor may then replace the user interface menu displayed on the second mobile device such that the top priority applications/features identified in the received summed affinity value table are now displayed in the user interface menu, step 209 in FIG. 4. Alternatively, the second mobile device processor may compare the top priority applications/features identified in the received summed affinity value table to determine whether the second mobile device processor should suggest replacing applications/features currently displayed in the user interface menu with top priority applications/features identified in the received summed affinity value table, steps 211-220 in FIGs. 7A and 7B.

Typical mobile devices suitable for use with the various embodiments will have in common the components illustrated in FIG. 9. For example, an exemplary mobile device 10 may include a processor 191 coupled to internal memory 192, a display 193, and a speaker 199. Additionally, the mobile device 10 may have an antenna 194 for sending and receiving electromagnetic radiation that is connected to a wireless data link and/or cellular telephone transceiver 195 coupled to the processor 191. In some implementations, the transceiver 195 and portions of the processor 191 and memory 192 used for cellular telephone communications are collectively referred to as the air interface, since it provides a data interface via a wireless data link. Mobile devices typically also include a key pad 196 or miniature keyboard and menu selection buttons or rocker switches 197 for receiving user inputs.

The processor 191 may be any programmable microprocessor, microcomputer or multiple processor chip or chips that can be configured by software instructions (applications) to perform a variety of functions, including the functions of the various embodiments described herein. In some mobile devices, multiple processors 191 may be provided, such as one processor dedicated to wireless communication functions and one processor dedicated to running other applications. Typically, software applications may be stored in the internal memory 192 before they are accessed and loaded into the processor 191. In some mobile devices, the processor 191 may include internal memory sufficient to store the application software instructions. The mobile device 10 may also include a separate memory chip 190, such as a smart card, for storing data such as the user activity record and a table of affinity weighting factors. In some mobile devices, the secure memory may be in a separate memory chip coupled to the processor 191. In many mobile devices 10, the internal memory 192 may be a volatile or nonvolatile memory, such as flash memory, or a mixture of both. For the purposes of this description, a general reference to memory refers to all memory accessible by the processor 191,
including internal memory 192, the memory chip 190, removable memory plugged into the mobile device, and memory within the processor 191 itself.

A number of the embodiments described above may also be implemented with any of a variety of remote server devices, such as the server 2400 illustrated in FIG. 10. Such a server 2400 typically includes a processor 2401 coupled to volatile memory 2402 and a large capacity nonvolatile memory, such as a disk drive 2403. The server 2400 may also include a floppy disc drive and/or a compact disc (CD) drive 2406 coupled to the processor 2401. The server 2400 may also include network access ports 2404 coupled to the processor 2401 for establishing data connections with a network 2405, such as the Internet.

[0065] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the order of steps in the foregoing embodiments may be performed in any order. Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an" or "the" is not to be construed as limiting the element to the singular.

[0066] The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some steps or methods may be performed by circuitry that is specific to a given function.

[0068] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof.
If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. Storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a machine-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.

[0069] The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. |
Metal insulator metal capacitors having epitaxial oxides are described. A metal-insulator-metal (MIM) capacitor includes a first electrode plate. A capacitor dielectric is on the first electrode plate. The capacitor dielectric includes a single crystalline oxide material. A second electrode plate is on the capacitor dielectric, the second electrode plate having a portion over and parallel with the first electrode plate. |
1. A metal-insulator-metal, MIM, capacitor, comprising: a first electrode plate; a capacitor dielectric on the first electrode plate, wherein the capacitor dielectric comprises a single crystalline oxide material; and a second electrode plate on the capacitor dielectric, the second electrode plate having a portion over and parallel with the first electrode plate.
2. The MIM capacitor of claim 1, wherein the single crystalline oxide material is a perovskite oxide.
3. The MIM capacitor of claim 1 or 2, wherein the single crystalline oxide material comprises a material selected from the group consisting of SrTiO3, BaTiO3, and SrxBa1-xTiO3.
4. The MIM capacitor of any of claims 1 to 3, further comprising: a second capacitor dielectric on the second electrode plate; and a third electrode plate on the second capacitor dielectric, the third electrode plate having a portion over and parallel with the second electrode plate.
5. The MIM capacitor of any of claims 1 to 4, wherein the MIM capacitor is included in a back end of line, BEOL, metallization structure.
6. A method of fabricating a metal-insulator-metal, MIM, capacitor, the method comprising: forming a first electrode plate; forming a capacitor dielectric on the first electrode plate, wherein the capacitor dielectric comprises a single crystalline oxide material; and forming a second electrode plate on the capacitor dielectric, the second electrode plate having a portion over and parallel with the first electrode plate.
7. The method of claim 6, wherein the single crystalline oxide material is a perovskite oxide.
8. The method of claim 6 or 7, wherein the single crystalline oxide material comprises a material selected from the group consisting of SrTiO3, BaTiO3, and SrxBa1-xTiO3.
9. The method of any of claims 6 to 8, further comprising: forming a second capacitor dielectric on the second electrode plate; and forming a third electrode plate on the second capacitor dielectric, the third electrode plate having a portion over and parallel with the second electrode plate.
10. The method of any of claims 6 to 9, wherein the MIM capacitor is included in a back end of line, BEOL, metallization structure. |
TECHNICAL FIELD

Embodiments of the disclosure are in the field of advanced integrated circuit structure fabrication and, in particular, metal insulator metal (MIM) capacitors or backend transistors having epitaxial oxides.

BACKGROUND

For the past several decades, the scaling of features in integrated circuits has been a driving force behind an ever-growing semiconductor industry. Scaling to smaller and smaller features enables increased densities of functional units on the limited real estate of semiconductor chips. For example, shrinking transistor size allows for the incorporation of an increased number of memory or logic devices on a chip, lending to the fabrication of products with increased capacity. The drive for ever-more capacity, however, is not without issue. The necessity to optimize the performance of each device becomes increasingly significant.

Variability in conventional and currently known fabrication processes may limit the possibility to further extend them into smaller and smaller nodes. Consequently, fabrication of the functional components needed for future technology nodes may require the introduction of new methodologies or the integration of new technologies in current fabrication processes or in place of current fabrication processes.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 illustrates a cross-sectional view of a transferring structure including a single-crystalline oxide film remote-epitaxially grown on a 2D material coated single crystal substrate, in accordance with an embodiment of the present disclosure.

Figure 2 illustrates a cross-sectional view of a transferring structure including a single-crystalline oxide film grown on a layer material substrate via Van der Waals heteroepitaxy, in accordance with an embodiment of the present disclosure.

Figure 3 illustrates cross-sectional views representing various operations in methods of fabricating a structure including an epitaxial oxide layer, in accordance with an embodiment of the present disclosure.

Figure 4 illustrates cross-sectional views representing various operations in a method of fabricating a backend metal-insulator-metal (MIM) capacitor, in accordance with an embodiment of the present disclosure.

Figure 5 illustrates another structure for a backend MIM including an epitaxial oxide layer, in accordance with an embodiment of the present disclosure.

Figure 6 illustrates cross-sectional views representing various operations in a method of fabricating a backend transistor, in accordance with an embodiment of the present disclosure.

Figure 7 illustrates a cross-sectional view of an integrated circuit structure having four metallization layers with a metal line composition and pitch above two metallization layers with a differing metal line composition and smaller pitch, in accordance with an embodiment of the present disclosure.

Figure 8 illustrates a computing device in accordance with one implementation of the disclosure.

Figure 9 illustrates an interposer that includes one or more embodiments of the disclosure.

Figure 10 is an isometric view of a mobile computing platform employing an IC fabricated according to one or more processes described herein or including one or more features described herein, in accordance with an embodiment of the present disclosure.

Figure 11 illustrates a cross-sectional view of a flip-chip mounted die, in accordance with an embodiment of the present disclosure.

DESCRIPTION OF THE EMBODIMENTS

Metal insulator metal (MIM) capacitors or backend transistors having epitaxial oxides are described.
In the following description, numerous specific details are set forth, such as specific integration and material regimes, in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known features, such as integrated circuit design layouts, are not described in detail in order to not unnecessarily obscure embodiments of the present disclosure. Furthermore, it is to be appreciated that the various embodiments shown in the Figures are illustrative representations and are not necessarily drawn to scale.

The following detailed description is merely illustrative in nature and is not intended to limit the embodiments of the subject matter or the application and uses of such embodiments. As used herein, the word "exemplary" means "serving as an example, instance, or illustration." Any implementation described herein as exemplary is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.

This specification includes references to "one embodiment" or "an embodiment." The appearances of the phrases "in one embodiment" or "in an embodiment" do not necessarily refer to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure.

Terminology. The following paragraphs provide definitions or context for terms found in this disclosure (including the appended claims):

"Comprising." This term is open-ended. As used in the appended claims, this term does not foreclose additional structure or operations.

"Configured To." Various units or components may be described or claimed as "configured to" perform a task or tasks. In such contexts, "configured to" is used to connote structure by indicating that the units or components include structure that performs the task or tasks during operation. As such, the unit or component can be said to be configured to perform the task even when the specified unit or component is not currently operational (e.g., is not on or active). Reciting that a unit or circuit or component is "configured to" perform one or more tasks is expressly intended not to invoke 35 U.S.C. §112, sixth paragraph, for that unit or component.

"First," "Second," etc. As used herein, these terms are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.).

"Coupled" - The following description refers to elements or nodes or features being "coupled" together. As used herein, unless expressly stated otherwise, "coupled" means that one element or node or feature is directly or indirectly joined to (or directly or indirectly communicates with) another element or node or feature, and not necessarily mechanically.

In addition, certain terminology may also be used in the following description for the purpose of reference only, and thus is not intended to be limiting. For example, terms such as "upper", "lower", "above", and "below" refer to directions in the drawings to which reference is made.
Terms such as "front", "back", "rear", "side", "outboard", and "inboard" describe the orientation or location or both of portions of the component within a consistent but arbitrary frame of reference which is made clear by reference to the text and the associated drawings describing the component under discussion. Such terminology may include the words specifically mentioned above, derivatives thereof, and words of similar import.

"Inhibit" - As used herein, inhibit is used to describe a reducing or minimizing effect. When a component or feature is described as inhibiting an action, motion, or condition, it may prevent the result or outcome or future state completely. Additionally, "inhibit" can also refer to a reduction or lessening of the outcome, performance, or effect which might otherwise occur. Accordingly, when a component, element, or feature is referred to as inhibiting a result or state, it need not completely prevent or eliminate the result or state.

Embodiments described herein may be directed to front-end-of-line (FEOL) semiconductor processing and structures. FEOL is the first portion of integrated circuit (IC) fabrication where the individual devices (e.g., transistors, capacitors, resistors, etc.) are patterned in the semiconductor substrate or layer. FEOL generally covers everything up to (but not including) the deposition of metal interconnect layers. Following the last FEOL operation, the result is typically a wafer with isolated transistors (e.g., without any wires).

Embodiments described herein may be directed to back end of line (BEOL) semiconductor processing and structures. BEOL is the second portion of IC fabrication where the individual devices (e.g., transistors, capacitors, resistors, etc.) get interconnected with wiring on the wafer, e.g., the metallization layer or layers. BEOL includes contacts, insulating layers (dielectrics), metal levels, and bonding sites for chip-to-package connections. In the BEOL part of the fabrication stage, contacts (pads), interconnect wires, vias and dielectric structures are formed. For modern IC processes, more than 10 metal layers may be added in the BEOL.

Embodiments described below may be applicable to FEOL processing and structures, BEOL processing and structures, or both FEOL and BEOL processing and structures. In particular, although an exemplary processing scheme may be illustrated using a FEOL processing scenario, such approaches may also be applicable to BEOL processing. Likewise, although an exemplary processing scheme may be illustrated using a BEOL processing scenario, such approaches may also be applicable to FEOL processing.

In accordance with one or more embodiments of the present disclosure, epitaxial oxides for semiconductor applications are described. One or more embodiments are directed to a metal insulator metal (MIM) capacitor. One or more embodiments are directed to a backend transistor device.

To provide context, conventional single-crystalline perovskite oxides require high-temperature (e.g., greater than 400°C) deposition, which is not backend-compatible.
Addressing such issues, in accordance with an embodiment of the present disclosure, heterogeneous integration of single-crystalline oxides with layer transfer by using two-dimensional (2D) materials or layer material substrates can provide backend (BE) compatible processes for forming single crystalline oxides on or in BE devices.

One or more embodiments are directed to BE-compatible processes to form single-crystalline high-k perovskite oxides for a BE transistor high-k gate dielectric or a high capacitance MIM capacitor stack. Heterogeneous integration of single-crystalline oxide (e.g., perovskite oxides such as SrTiO3, BaTiO3, SrxBa1-xTiO3) films remote-epitaxially grown on 2D material coated single crystal substrates or grown on layer material substrates via Van der Waals heteroepitaxy is used to form high-performance single-crystalline oxide in BE passive or active devices.

As exemplary delivery vehicles, Figure 1 illustrates a cross-sectional view of a transferring structure including a single-crystalline oxide film remote-epitaxially grown on a 2D material coated single crystal substrate, in accordance with an embodiment of the present disclosure. Figure 2 illustrates a cross-sectional view of a transferring structure including a single-crystalline oxide film grown on a layer material substrate via Van der Waals heteroepitaxy, in accordance with an embodiment of the present disclosure.

Referring to Figure 1, a transferring structure 100 includes an epitaxial oxide 104 on a 2D material layer 102 on a single crystal substrate 101. In one embodiment, the epitaxial oxide 104 is a single crystalline oxide. In one embodiment, the epitaxial oxide 104 is a perovskite oxide, such as SrTiO3, BaTiO3, or SrxBa1-xTiO3. In another embodiment, the epitaxial oxide 104 is a binary oxide or a complex oxide such as a spinel oxide. In one embodiment, the 2D material 102 is graphene or MoO3. In another embodiment, the 2D material 102 is h-BN or an MXene. In one embodiment, the single crystal substrate 101 is a single crystal silicon substrate.

Referring to Figure 2, a transferring structure 200 includes an epitaxial oxide 204 on a layer material substrate 202. In one embodiment, the epitaxial oxide 204 is a single crystalline oxide. In one embodiment, the epitaxial oxide 204 is a perovskite oxide, such as SrTiO3, BaTiO3, or SrxBa1-xTiO3. In one embodiment, the layer material substrate 202 is or includes mica, a sheet silicate (phyllosilicate) mineral.

In accordance with an embodiment of the present disclosure, a high-performance single-crystalline oxide (e.g., a perovskite oxide, such as SrTiO3, BaTiO3, SrxBa1-xTiO3) can be formed in a BE device (passive or active) by layer transferring (1) single-crystalline oxide films remote-epitaxially grown on 2D material coated single crystal substrates or (2) relaxed single-crystalline oxide films grown on layer material substrates via Van der Waals heteroepitaxy. For example, heterogeneous integration of high-performance single-crystalline oxide films up to hundreds of nanometers in thickness can be incorporated into BE devices by using (1) remote epitaxy or (2) Van der Waals heteroepitaxy.

In a first particular example, single-crystalline oxides can be remote-epitaxially grown on 2D material (e.g., graphene, MoO3) coated single crystal substrates, followed by deposition of a capping metal stressor for layer transfer.
In such heterostructures, due to the weak interaction between the 2D material and the single-crystalline film, the epitaxial oxide films can be instantly separated from the weakened epitaxial interfaces during layer transfer.

In a second particular example, relaxed single-crystalline oxide films are grown on layer material substrates (e.g., mica, such as sheet silicate (phyllosilicate) minerals) via Van der Waals heteroepitaxy, and then a capping metal layer can be deposited for layer transfer. In such heterostructures, due to the weak interaction between substrate and film, the lattice of the films remains close to that of the bulk, and the single-crystalline films are immediately separated from the weakened epitaxial interfaces during layer transfer.

As exemplary processing scheme options, Figure 3 illustrates cross-sectional views representing various operations in methods of fabricating a structure including an epitaxial oxide layer, in accordance with an embodiment of the present disclosure.

Referring to Figure 3, in a first embodiment, a transferring structure 300 is a stack 308 including a cap metal 306 on an epitaxial oxide 304 on a 2D material layer 302 on a single crystal substrate 301. The stack 308 is flipped and bonded to a receiving structure 310 to form a structure 320. The receiving structure 310 includes a substrate 312 and a metallization layer 314 on or above the substrate 312. The metallization layer 314 can include conductive features 316 in a dielectric layer 318. The 2D material layer 302 and the single crystal substrate 301 are then removed from the structure 320, e.g., by facile cleaving, to form a backend starting structure 350. The backend starting structure 350 is a stack including a remaining transferred stack 352 on the receiving structure 310.

Referring again to Figure 3, in a second embodiment, a transferring structure 330 is a stack 338 including a cap metal 336 on an epitaxial oxide 334 on a layer material substrate 332. The stack 338 is flipped and bonded to a receiving structure 310 to form a structure 340. The receiving structure 310 includes a substrate 312 and a metallization layer 314 on or above the substrate 312. The metallization layer 314 can include conductive features 316 in a dielectric layer 318. The layer material substrate 332 is then removed from the structure 340, e.g., by facile cleaving, to form a backend starting structure 350. The backend starting structure 350 is a stack including a remaining transferred stack 352 on the receiving structure 310.

In a first aspect, one or more embodiments are directed to the use of a scalable and configurable parallel plate capacitor layering scheme in order to provide industry leading MIM capacitive densities without compromising the reliability of the final device. Such a scaling method can be used to increase capacitance density without an area impact and can enhance existing designed layouts without extra design overhead. Increasing MIM capacitance provides a significant performance improvement.

Advanced transistor scaling requires an advanced and stable power delivery method. Decoupling capacitors are employed to minimize impedance and power supply noise. This can be leveraged in part by incorporating a metal-insulator-metal (MIM) capacitor in the interconnect stack.
Higher overall total capacitance in such MIM capacitors can more effectively mitigate voltage droop and current ripples to the transistor and thereby enhance the overall performance of the final device.

As an exemplary processing scheme, Figure 4 illustrates cross-sectional views representing various operations in a method of fabricating a backend metal-insulator-metal (MIM) capacitor, in accordance with an embodiment of the present disclosure.

Referring to Figure 4, part (a) shows the backend starting structure 350 of Figure 3. Referring to part (b), a metal layer 400 is formed on the backend starting structure 350. Referring to part (c), one or more contacts 402 are formed on the structure of part (b) to form a backend structure 450 including a MIM capacitor. The MIM capacitor includes the cap metal 306/336 as a lower plate, the epitaxial oxide 304/334 as the dielectric, and the metal layer 400 as the upper plate.

With reference again to part (c) of Figure 4, in accordance with an embodiment of the present disclosure, a metal-insulator-metal (MIM) capacitor includes a first electrode plate 306/336. A capacitor dielectric 304/334 is on the first electrode plate 306/336. The capacitor dielectric 304/334 includes a single crystalline oxide material. A second electrode plate 400 is on the capacitor dielectric 304/334, the second electrode plate 400 having a portion over and parallel with the first electrode plate 306/336. In one embodiment, the MIM capacitor is included in a back end of line (BEOL) metallization structure.

In one embodiment, the single crystalline oxide material is a perovskite oxide. In one embodiment, the single crystalline oxide material includes a material selected from the group consisting of SrTiO3, BaTiO3, and SrxBa1-xTiO3. In another embodiment, the single crystalline oxide material is a binary oxide or a complex oxide such as a spinel oxide. In one embodiment (not depicted), a second capacitor dielectric is on the second electrode plate. A third electrode plate is on the second capacitor dielectric, the third electrode plate having a portion over and parallel with the second electrode plate.

In an embodiment, an electrode plate described herein is or includes Ru, Ir, RuO2 or IrO2.

Figure 5 illustrates another structure for a backend MIM including an epitaxial oxide layer, in accordance with an embodiment of the present disclosure. Referring to Figure 5, an integrated circuit structure 500 includes a MIM capacitor having a bottom plate 502, an epitaxial oxide layer 504, and a top plate 506. The MIM capacitor is integrated within a passivation material 508. Metal layers 510 are below the MIM capacitor. MIM contact vias 512 contact the MIM capacitor and the metal layers 510. In particular, the MIM contact via 512 on the right contacts the bottom plate 502, and the MIM contact via 512 on the left contacts the top plate 506.

It is to be appreciated that the above structure is a 3-plate MIM capacitor structure. In other embodiments, total MIM cap density is increased by use of a scalable and configurable parallel plate capacitor layering scheme where the total number of electrode plates/capacitors in parallel increases from 3 to 4 or 5, or even more, in total.

In a second aspect, one or more embodiments are directed to the fabrication of a backend transistor.
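Before turning to the details of that second aspect, a first-order parallel-plate estimate may help illustrate why a high-k single-crystalline perovskite dielectric and additional parallel plates raise MIM capacitance density; the permittivity and thickness values below are illustrative assumptions only, not values from this disclosure:

\[ \frac{C}{A} = \frac{\varepsilon_0 \varepsilon_r}{d}, \qquad C_{\mathrm{total}} \approx (N-1)\,\frac{\varepsilon_0 \varepsilon_r}{d}\,A \quad (N \text{ plates, alternate plates tied together}). \]

For instance, taking \( \varepsilon_r \approx 300 \) (bulk SrTiO3 permittivities of a few hundred are commonly reported at room temperature) and \( d = 10\ \mathrm{nm} \) gives \( C/A = (8.854 \times 10^{-12}\ \mathrm{F/m})(300)/(10^{-8}\ \mathrm{m}) \approx 0.27\ \mathrm{F/m^2} \), i.e., roughly \( 270\ \mathrm{fF/\mu m^2} \), and moving from a 3-plate stack (two dielectric gaps) to a 5-plate stack (four gaps) approximately doubles the total capacitance in the same footprint.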
As an exemplary processing scheme, Figure 6 illustrates cross-sectional views representing various operations in a method of fabricating a backend transistor, in accordance with an embodiment of the present disclosure.

Referring to Figure 6, part (a) shows the backend starting structure 350 of Figure 3. Referring to part (b), a channel material layer 600 is formed on the backend starting structure 350. Referring to part (c), an upper gate structure 602 (gate electrode and gate oxide) and source or drain contact structures 604 are formed on the structure of part (b) to form a backend structure 650 including a transistor. The transistor includes the cap metal 306/336 as a bottom gate electrode and the epitaxial oxide 304/334 as the lower gate dielectric. It is to be appreciated that in other embodiments, the upper gate structure 602 may be omitted to provide a bottom-only gated device.

With reference again to part (c) of Figure 6, in accordance with an embodiment of the present disclosure, a transistor includes a gate electrode 306/336 above a substrate. A gate dielectric 304/334 is above and on the gate electrode 306/336. The gate dielectric 304/334 includes a single crystalline oxide material. A channel material layer 600 is on the single crystalline oxide material. Source or drain contacts 604 are on the channel material layer 600. In one embodiment, the transistor is included in a back end of line (BEOL) metallization structure.

In one embodiment, the single crystalline oxide material is a perovskite oxide. In one embodiment, the single crystalline oxide material includes a material selected from the group consisting of SrTiO3, BaTiO3, and SrxBa1-xTiO3. In another embodiment, the single crystalline oxide material is a binary oxide or a complex oxide such as a spinel oxide. In one embodiment, a top gate structure 602 is on the channel material layer 600, as is depicted.

In another aspect, back end of line (BEOL) layers of integrated circuits commonly include electrically conductive microelectronic structures, which are known in the art as vias, to electrically connect metal lines or other interconnects above the vias to metal lines or other interconnects below the vias. In accordance with one or more embodiments of the present disclosure, a metal insulator metal (MIM) capacitor or backend transistor having an epitaxial oxide, such as described above, can be included in a BEOL structure of an integrated circuit.

As an exemplary but non-limiting BEOL structure, Figure 7 illustrates a cross-sectional view of an integrated circuit structure having four metallization layers with a metal line composition and pitch above two metallization layers with a differing metal line composition and smaller pitch, in accordance with an embodiment of the present disclosure. It is to be appreciated that a metal insulator metal (MIM) capacitor or backend transistor having an epitaxial oxide according to embodiments described above may be integrated into one or more layers of the integrated circuit structure described below in association with Figure 7.

Referring to Figure 7, an integrated circuit structure 700 includes a first plurality of conductive interconnect lines 704 in and spaced apart by a first inter-layer dielectric (ILD) layer 702 above a substrate 701. Individual ones of the first plurality of conductive interconnect lines 704 include a first conductive barrier material 706 along sidewalls and a bottom of a first conductive fill material 708.
Individual ones of the first plurality of conductive interconnect lines 704 are along a first direction 798 (e.g., into and out of the page).

A second plurality of conductive interconnect lines 714 is in and spaced apart by a second ILD layer 712 above the first ILD layer 702. Individual ones of the second plurality of conductive interconnect lines 714 include the first conductive barrier material 706 along sidewalls and a bottom of the first conductive fill material 708. Individual ones of the second plurality of conductive interconnect lines 714 are along a second direction 799 orthogonal to the first direction 798.

A third plurality of conductive interconnect lines 724 is in and spaced apart by a third ILD layer 722 above the second ILD layer 712. Individual ones of the third plurality of conductive interconnect lines 724 include a second conductive barrier material 726 along sidewalls and a bottom of a second conductive fill material 728. The second conductive fill material 728 is different in composition from the first conductive fill material 708. Individual ones of the third plurality of conductive interconnect lines 724 are along the first direction 798.

A fourth plurality of conductive interconnect lines 734 is in and spaced apart by a fourth ILD layer 732 above the third ILD layer 722. Individual ones of the fourth plurality of conductive interconnect lines 734 include the second conductive barrier material 726 along sidewalls and a bottom of the second conductive fill material 728. Individual ones of the fourth plurality of conductive interconnect lines 734 are along the second direction 799.

A fifth plurality of conductive interconnect lines 744 is in and spaced apart by a fifth ILD layer 742 above the fourth ILD layer 732. Individual ones of the fifth plurality of conductive interconnect lines 744 include the second conductive barrier material 726 along sidewalls and a bottom of the second conductive fill material 728. Individual ones of the fifth plurality of conductive interconnect lines 744 are along the first direction 798.

A sixth plurality of conductive interconnect lines 754 is in and spaced apart by a sixth ILD layer 752 above the fifth ILD layer 742. Individual ones of the sixth plurality of conductive interconnect lines 754 include the second conductive barrier material 726 along sidewalls and a bottom of the second conductive fill material 728. Individual ones of the sixth plurality of conductive interconnect lines 754 are along the second direction 799.

In an embodiment, the second conductive fill material 728 consists essentially of copper, and the first conductive fill material 708 consists essentially of cobalt. In an embodiment, the first conductive fill material 708 includes copper having a first concentration of a dopant impurity atom, and the second conductive fill material 728 includes copper having a second concentration of the dopant impurity atom, the second concentration of the dopant impurity atom less than the first concentration of the dopant impurity atom.

In an embodiment, the first conductive barrier material 706 is different in composition from the second conductive barrier material 726. In another embodiment, the first conductive barrier material 706 and the second conductive barrier material 726 have the same composition.

In an embodiment, a first conductive via 719 is on and electrically coupled to an individual one 704A of the first plurality of conductive interconnect lines 704.
An individual one 714A of the second plurality of conductive interconnect lines 714 is on and electrically coupled to the first conductive via 719.

A second conductive via 729 is on and electrically coupled to an individual one 714B of the second plurality of conductive interconnect lines 714. An individual one 724A of the third plurality of conductive interconnect lines 724 is on and electrically coupled to the second conductive via 729.

A third conductive via 739 is on and electrically coupled to an individual one 724B of the third plurality of conductive interconnect lines 724. An individual one 734A of the fourth plurality of conductive interconnect lines 734 is on and electrically coupled to the third conductive via 739.

A fourth conductive via 749 is on and electrically coupled to an individual one 734B of the fourth plurality of conductive interconnect lines 734. An individual one 744A of the fifth plurality of conductive interconnect lines 744 is on and electrically coupled to the fourth conductive via 749.

A fifth conductive via 759 is on and electrically coupled to an individual one 744B of the fifth plurality of conductive interconnect lines 744. An individual one 754A of the sixth plurality of conductive interconnect lines 754 is on and electrically coupled to the fifth conductive via 759.

In one embodiment, the first conductive via 719 includes the first conductive barrier material 706 along sidewalls and a bottom of the first conductive fill material 708. The second 729, third 739, fourth 749 and fifth 759 conductive vias include the second conductive barrier material 726 along sidewalls and a bottom of the second conductive fill material 728.

In an embodiment, the first 702, second 712, third 722, fourth 732, fifth 742 and sixth 752 ILD layers are separated from one another by a corresponding etch-stop layer 790 between adjacent ILD layers. In an embodiment, the first 702, second 712, third 722, fourth 732, fifth 742 and sixth 752 ILD layers include silicon, carbon and oxygen.

In an embodiment, individual ones of the first 704 and second 714 pluralities of conductive interconnect lines have a first width (W1). Individual ones of the third 724, fourth 734, fifth 744 and sixth 754 pluralities of conductive interconnect lines have a second width (W2) greater than the first width (W1).

It is to be appreciated that the layers and materials described above in association with back end of line (BEOL) structures and processing may be formed on or above an underlying semiconductor substrate or structure, such as underlying device layer(s) of an integrated circuit. In an embodiment, an underlying semiconductor substrate represents a general workpiece object used to manufacture integrated circuits. The semiconductor substrate often includes a wafer or other piece of silicon or another semiconductor material. Suitable semiconductor substrates include, but are not limited to, single crystal silicon, polycrystalline silicon and silicon on insulator (SOI), as well as similar substrates formed of other semiconductor materials, such as substrates including germanium, silicon carbide, carbon, or group III-V materials. The semiconductor substrate, depending on the stage of manufacture, often includes transistors, integrated circuitry, and the like. The substrate may also include semiconductor materials, metals, dielectrics, dopants, and other materials commonly found in semiconductor substrates.
Furthermore, the structures depicted may be fabricated on underlying lower level interconnect layers.

Although the preceding methods of fabricating a metallization layer, or portions of a metallization layer, of a BEOL metallization layer are described in detail with respect to select operations, it is to be appreciated that additional or intermediate operations for fabrication may include standard microelectronic fabrication processes such as lithography, etch, thin film deposition, planarization (such as chemical mechanical polishing (CMP)), diffusion, metrology, the use of sacrificial layers, the use of etch stop layers, the use of planarization stop layers, or any other action associated with microelectronic component fabrication. Also, it is to be appreciated that the process operations described for the preceding process flows may be practiced in alternative sequences, not every operation need be performed, or additional process operations may be performed, or both.

In an embodiment, as used throughout the present description, interlayer dielectric (ILD) material is composed of or includes a layer of a dielectric or insulating material. Examples of suitable dielectric materials include, but are not limited to, oxides of silicon (e.g., silicon dioxide (SiO2)), doped oxides of silicon, fluorinated oxides of silicon, carbon doped oxides of silicon, various low-k dielectric materials known in the arts, and combinations thereof. The interlayer dielectric material may be formed by techniques such as, for example, chemical vapor deposition (CVD), physical vapor deposition (PVD), or by other deposition methods.

In an embodiment, as is also used throughout the present description, metal lines or interconnect line material (and via material) is composed of one or more metal or other conductive structures. A common example is the use of copper lines and structures that may or may not include barrier layers between the copper and surrounding ILD material. As used herein, the term metal includes alloys, stacks, and other combinations of multiple metals. For example, the metal interconnect lines may include barrier layers (e.g., layers including one or more of Ta, TaN, Ti or TiN), stacks of different metals or alloys, etc. Thus, the interconnect lines may be a single material layer, or may be formed from several layers, including conductive liner layers and fill layers. Any suitable deposition process, such as electroplating, chemical vapor deposition or physical vapor deposition, may be used to form interconnect lines. In an embodiment, the interconnect lines are composed of a conductive material such as, but not limited to, Cu, Al, Ti, Zr, Hf, V, Ru, Co, Ni, Pd, Pt, W, Ag, Au or alloys thereof. The interconnect lines are also sometimes referred to in the art as traces, wires, lines, metal, or simply interconnect.

In an embodiment, as is also used throughout the present description, hardmask materials are composed of dielectric materials different from the interlayer dielectric material. In one embodiment, different hardmask materials may be used in different regions so as to provide different growth or etch selectivity to each other and to the underlying dielectric and metal layers. In some embodiments, a hardmask layer includes a layer of a nitride of silicon (e.g., silicon nitride) or a layer of an oxide of silicon, or both, or a combination thereof. Other suitable materials may include carbon-based materials. In another embodiment, a hardmask material includes a metal species.
For example, a hardmask or other overlying material may include a layer of a nitride of titanium or another metal (e.g., titanium nitride). Potentially lesser amounts of other materials, such as oxygen, may be included in one or more of these layers. Alternatively, other hardmask layers known in the arts may be used depending upon the particular implementation. The hardmask layers may be formed by CVD, PVD, or by other deposition methods.

In an embodiment, as is also used throughout the present description, lithographic operations are performed using 193nm immersion lithography (i193), extreme ultra-violet (EUV) lithography or electron beam direct write (EBDW) lithography, or the like. A positive tone or a negative tone resist may be used. In one embodiment, a lithographic mask is a trilayer mask composed of a topographic masking portion, an anti-reflective coating (ARC) layer, and a photoresist layer. In a particular such embodiment, the topographic masking portion is a carbon hardmask (CHM) layer and the anti-reflective coating layer is a silicon ARC layer.

Embodiments disclosed herein may be used to manufacture a wide variety of different types of integrated circuits or microelectronic devices. Examples of such integrated circuits include, but are not limited to, processors, chipset components, graphics processors, digital signal processors, micro-controllers, and the like. In other embodiments, semiconductor memory may be manufactured. Moreover, the integrated circuits or other microelectronic devices may be used in a wide variety of electronic devices known in the arts, for example, in computer systems (e.g., desktop, laptop, server), cellular phones, personal electronics, etc. The integrated circuits may be coupled with a bus and other components in the systems. For example, a processor may be coupled by one or more buses to a memory, a chipset, etc. Each of the processor, the memory, and the chipset may potentially be manufactured using the approaches disclosed herein.

Figure 8 illustrates a computing device 800 in accordance with one implementation of the disclosure. The computing device 800 houses a board 802. The board 802 may include a number of components, including but not limited to a processor 804 and at least one communication chip 806. The processor 804 is physically and electrically coupled to the board 802. In some implementations, the at least one communication chip 806 is also physically and electrically coupled to the board 802. In further implementations, the communication chip 806 is part of the processor 804.

Depending on its applications, computing device 800 may include other components that may or may not be physically and electrically coupled to the board 802. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).

The communication chip 806 enables wireless communications for the transfer of data to and from the computing device 800.
The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 806 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 800 may include a plurality of communication chips 806. For instance, a first communication chip 806 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 806 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 804 of the computing device 800 includes an integrated circuit die packaged within the processor 804. In some implementations of embodiments of the disclosure, the integrated circuit die of the processor includes one or more structures, such as a metal insulator metal (MIM) capacitor or backend transistor having an epitaxial oxide built in accordance with implementations of the disclosure. The term "processor" may refer to any device or portion of a device that processes electronic data from registers or memory, or both, to transform that electronic data into other electronic data that may be stored in registers or memory, or both.

The communication chip 806 also includes an integrated circuit die packaged within the communication chip 806. In accordance with another implementation of the disclosure, the integrated circuit die of the communication chip has a metal insulator metal (MIM) capacitor or backend transistor having an epitaxial oxide built in accordance with implementations of the disclosure.

In further implementations, another component housed within the computing device 800 may contain an integrated circuit die having a metal insulator metal (MIM) capacitor or backend transistor having an epitaxial oxide built in accordance with implementations of embodiments of the disclosure.

In various embodiments, the computing device 800 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultramobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 800 may be any other electronic device that processes data.

Figure 9 illustrates an interposer 900 that includes one or more embodiments of the disclosure. The interposer 900 is an intervening substrate used to bridge a first substrate 902 to a second substrate 904. The first substrate 902 may be, for instance, an integrated circuit die. The second substrate 904 may be, for instance, a memory module, a computer motherboard, or another integrated circuit die. Generally, the purpose of an interposer 900 is to spread a connection to a wider pitch or to reroute a connection to a different connection.
For example, an interposer 900 may couple an integrated circuit die to a ball grid array (BGA) 906 that can subsequently be coupled to the second substrate 904. In some embodiments, the first and second substrates 902/904 are attached to opposing sides of the interposer 900. In other embodiments, the first and second substrates 902/904 are attached to the same side of the interposer 900. And, in further embodiments, three or more substrates are interconnected by way of the interposer 900.

The interposer 900 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In further implementations, the interposer 900 may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials.

The interposer 900 may include metal interconnects 908 and vias 910, including but not limited to through-silicon vias (TSVs) 912. The interposer 900 may further include embedded devices 914, including both passive and active devices. Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, and electrostatic discharge (ESD) devices. More complex devices such as radio-frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices may also be formed on the interposer 900. In accordance with embodiments of the disclosure, apparatuses or processes disclosed herein may be used in the fabrication of interposer 900 or in the fabrication of components included in the interposer 900.

Figure 10 is an isometric view of a mobile computing platform 1000 employing an integrated circuit (IC) fabricated according to one or more processes described herein or including one or more features described herein, in accordance with an embodiment of the present disclosure.

The mobile computing platform 1000 may be any portable device configured for each of electronic data display, electronic data processing, and wireless electronic data transmission. For example, mobile computing platform 1000 may be any of a tablet, a smart phone, laptop computer, etc. and includes a display screen 1005, which in the exemplary embodiment is a touchscreen (capacitive, inductive, resistive, etc.), a chip-level (SoC) or package-level integrated system 1010, and a battery 1013. As illustrated, the greater the level of integration in the system 1010 enabled by higher transistor packing density, the greater the portion of the mobile computing platform 1000 that may be occupied by the battery 1013 or non-volatile storage, such as a solid state drive, or the greater the transistor gate count for improved platform functionality. Similarly, the greater the carrier mobility of each transistor in the system 1010, the greater the functionality. As such, techniques described herein may enable performance and form factor improvements in the mobile computing platform 1000.

The integrated system 1010 is further illustrated in the expanded view 1020. In the exemplary embodiment, packaged device 1077 includes at least one memory chip (e.g., RAM), or at least one processor chip (e.g., a multi-core microprocessor and/or graphics processor) fabricated according to one or more processes described herein or including one or more features described herein.
The packaged device 1077 is further coupled to the board 1060 along with one or more of a power management integrated circuit (PMIC) 1015, an RF (wireless) integrated circuit (RFIC) 1025 including a wideband RF (wireless) transmitter and/or receiver (e.g., including a digital baseband and an analog front end module that further includes a power amplifier on a transmit path and a low noise amplifier on a receive path), and a controller 1011 thereof. Functionally, the PMIC 1015 performs battery power regulation, DC-to-DC conversion, etc., and so has an input coupled to the battery 1013 and an output providing a current supply to all the other functional modules. As further illustrated, in the exemplary embodiment, the RFIC 1025 has an output coupled to an antenna to implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. In alternative implementations, each of these board-level modules may be integrated onto separate ICs coupled to the package substrate of the packaged device 1077 or within a single IC (SoC) coupled to the package substrate of the packaged device 1077.

In another aspect, semiconductor packages are used for protecting an integrated circuit (IC) chip or die, and also to provide the die with an electrical interface to external circuitry. With the increasing demand for smaller electronic devices, semiconductor packages are designed to be even more compact and must support larger circuit density. Furthermore, the demand for higher performance devices results in a need for an improved semiconductor package that enables a thin packaging profile and low overall warpage compatible with subsequent assembly processing.

In an embodiment, wire bonding to a ceramic or organic package substrate is used. In another embodiment, a C4 process is used to mount a die to a ceramic or organic package substrate. In particular, C4 solder ball connections can be implemented to provide flip chip interconnections between semiconductor devices and substrates. A flip chip or Controlled Collapse Chip Connection (C4) is a type of mounting used for semiconductor devices, such as integrated circuit (IC) chips or MEMS components, which utilizes solder bumps instead of wire bonds. The solder bumps are deposited on the C4 pads, located on the top side of the substrate package. In order to mount the semiconductor device to the substrate, it is flipped over with the active side facing down on the mounting area. The solder bumps are used to connect the semiconductor device directly to the substrate.

Figure 11 illustrates a cross-sectional view of a flip-chip mounted die, in accordance with an embodiment of the present disclosure.

Referring to Figure 11, an apparatus 1100 includes a die 1102, such as an integrated circuit (IC) fabricated according to one or more processes described herein or including one or more features described herein, in accordance with an embodiment of the present disclosure. The die 1102 includes metallized pads 1104 thereon. A package substrate 1106, such as a ceramic or organic substrate, includes connections 1108 thereon. The die 1102 and package substrate 1106 are electrically connected by solder balls 1110 coupled to the metallized pads 1104 and the connections 1108.
An underfill material 1112 surrounds the solder balls 1110.

Processing a flip chip may be similar to conventional IC fabrication, with a few additional operations. Near the end of the manufacturing process, the attachment pads are metallized to make them more receptive to solder. This typically consists of several treatments. A small dot of solder is then deposited on each metallized pad. The chips are then cut out of the wafer as normal. To attach the flip chip into a circuit, the chip is inverted to bring the solder dots down onto connectors on the underlying electronics or circuit board. The solder is then re-melted to produce an electrical connection, typically using an ultrasonic or, alternatively, a reflow solder process. This also leaves a small space between the chip's circuitry and the underlying mounting. In most cases an electrically insulating adhesive is then "underfilled" to provide a stronger mechanical connection, to provide a heat bridge, and to ensure the solder joints are not stressed due to differential heating of the chip and the rest of the system.

In other embodiments, newer packaging and die-to-die interconnect approaches, such as through-silicon via (TSV) and silicon interposer, are implemented to fabricate high performance Multi-Chip Module (MCM) and System in Package (SiP) structures incorporating an integrated circuit (IC) fabricated according to one or more processes described herein or including one or more features described herein, in accordance with an embodiment of the present disclosure.

Thus, embodiments of the present disclosure include metal-insulator-metal (MIM) capacitors or backend transistors having epitaxial oxides.

Although specific embodiments have been described above, these embodiments are not intended to limit the scope of the present disclosure, even where only a single embodiment is described with respect to a particular feature. Examples of features provided in the disclosure are intended to be illustrative rather than restrictive unless stated otherwise. The above description is intended to cover such alternatives, modifications, and equivalents as would be apparent to a person skilled in the art having the benefit of the present disclosure.

The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of the present application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims, and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.

The following examples pertain to further embodiments. The various features of the different embodiments may be variously combined, with some features included and others excluded, to suit a variety of different applications.

Example embodiment 1: A metal-insulator-metal (MIM) capacitor includes a first electrode plate. A capacitor dielectric is on the first electrode plate. The capacitor dielectric includes a single crystalline oxide material.
A second electrode plate is on the capacitor dielectric, the second electrode plate having a portion over and parallel with the first electrode plate.

Example embodiment 2: The MIM capacitor of example embodiment 1, wherein the single crystalline oxide material is a perovskite oxide.

Example embodiment 3: The MIM capacitor of example embodiment 1 or 2, wherein the single crystalline oxide material includes a material selected from the group consisting of SrTiO3, BaTiO3, and SrxBa1-xTiO3.

Example embodiment 4: The MIM capacitor of example embodiment 1, 2 or 3, further including a second capacitor dielectric on the second electrode plate. A third electrode plate is on the second capacitor dielectric, the third electrode plate having a portion over and parallel with the second electrode plate.

Example embodiment 5: The MIM capacitor of example embodiment 1, 2, 3 or 4, wherein the MIM capacitor is included in a back end of line (BEOL) metallization structure.

Example embodiment 6: A transistor includes a gate electrode above a substrate. A gate dielectric is above and on the gate electrode. The gate dielectric includes a single crystalline oxide material. A channel material layer is on the single crystalline oxide material. Source or drain contacts are on the channel material layer.

Example embodiment 7: The transistor of example embodiment 6, wherein the single crystalline oxide material is a perovskite oxide.

Example embodiment 8: The transistor of example embodiment 6 or 7, wherein the single crystalline oxide material includes a material selected from the group consisting of SrTiO3, BaTiO3, and SrxBa1-xTiO3.

Example embodiment 9: The transistor of example embodiment 6, 7 or 8, further including a top gate structure on the channel material layer.

Example embodiment 10: The transistor of example embodiment 6, 7, 8 or 9, wherein the transistor is included in a back end of line (BEOL) metallization structure.

Example embodiment 11: A computing device includes a board, and a component coupled to the board. The component includes a metal-insulator-metal (MIM) capacitor including a first electrode plate. A capacitor dielectric is on the first electrode plate. The capacitor dielectric includes a single crystalline oxide material. A second electrode plate is on the capacitor dielectric, the second electrode plate having a portion over and parallel with the first electrode plate.

Example embodiment 12: The computing device of example embodiment 11, further including a memory coupled to the board.

Example embodiment 13: The computing device of example embodiment 11 or 12, further including a communication chip coupled to the board.

Example embodiment 14: The computing device of example embodiment 11, 12 or 13, further including a camera coupled to the board.

Example embodiment 15: The computing device of example embodiment 11, 12, 13 or 14, wherein the component is a packaged integrated circuit die.

Example embodiment 16: A computing device includes a board, and a component coupled to the board. The component includes a transistor including a gate electrode above a substrate. A gate dielectric is above and on the gate electrode. The gate dielectric includes a single crystalline oxide material. A channel material layer is on the single crystalline oxide material.
Source or drain contacts are on the channel material layer.

Example embodiment 17: The computing device of example embodiment 16, further including a memory coupled to the board.

Example embodiment 18: The computing device of example embodiment 16 or 17, further including a communication chip coupled to the board.

Example embodiment 19: The computing device of example embodiment 16, 17 or 18, further including a camera coupled to the board.

Example embodiment 20: The computing device of example embodiment 16, 17, 18 or 19, wherein the component is a packaged integrated circuit die.
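As a closing illustrative note, not part of the example embodiments above: to first order, the MIM capacitor of example embodiment 1 behaves as a parallel-plate capacitor, so the achievable capacitance is set by the permittivity and thickness of the single crystalline oxide dielectric:

$$ C = \frac{\varepsilon_{0}\,\varepsilon_{r}\,A}{d} $$

where $\varepsilon_{0}$ is the vacuum permittivity, $\varepsilon_{r}$ the relative permittivity of the capacitor dielectric, $A$ the overlap area of the electrode plates, and $d$ the dielectric thickness. Perovskite oxides such as SrTiO3 can exhibit large $\varepsilon_{r}$ (often cited in the hundreds at room temperature), which is one motivation for using a single crystalline perovskite dielectric in a high-density MIM capacitor.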