
  • Akaz1976 - Wednesday, May 4, 2011 - link

    If we won't see Atoms based on this tech till 2013, does this mean the ARM camp would be able to come up with a similar solution by then, and thus retain its advantage?
  • nitrousoxide - Wednesday, May 4, 2011 - link

    I'm not sure, but is this the same thing as FinFET? TSMC has that technology too, so ARM should be using it in the near future. Whether at 20nm or 14nm is unknown.

    Also, are they using them in the entire circuit or just the cache?
  • DanNeely - Wednesday, May 4, 2011 - link

    Both. Processes are typically demoed first with SRAM because it's much easier to make than more complex circuits.
  • tygrus - Wednesday, May 4, 2011 - link

    An SRAM array is used because it allows them to test and validate the silicon without having to validate complex logic. It's like making a small logic circuit and testing a sample of a million, except they don't have to make thousands of CPUs.

    They include debug logic to check results and timings. They run through the whole array, reading and writing, to check frequency limits and estimate leakage and current-carrying capability, all at different voltages and frequencies.

    With SRAM, the design is much the same as in previous generations, so you are testing the silicon, not the logic design. It's much easier to determine how far through the array you have got, and therefore where an error occurs. It allows them to test the logic gates in isolation while still being part of a large chip.

    Trying to debug a CPU is much harder because it can have many different problems (critical paths, interference, mistakes in the logic design, errors in implementation) happening simultaneously and unpredictably. That makes it very hard to locate the problem silicon and pin down when the error occurs. The SRAM tests create the design rules by which the CPU can be designed and implemented, so you can avoid the limitations and focus on the logic design.
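
    A minimal sketch of the kind of sweep described above, in the style of a classic "march" memory test. This is hypothetical Python, not Intel's actual validation flow:

    ```python
    # Simplified march-style SRAM test: sweep the array writing and reading
    # alternating values so a faulty cell shows up at a known address.
    # Hypothetical sketch, not Intel's actual test flow.
    def march_test(size):
        mem = [0] * size                      # stand-in for the physical array

        for addr in range(size):              # ascending: write 0 everywhere
            mem[addr] = 0
        for addr in range(size):              # ascending: read 0, then write 1
            if mem[addr] != 0:
                return f"fault at address {addr}"   # failure is localized
            mem[addr] = 1
        for addr in reversed(range(size)):    # descending: read 1, then write 0
            if mem[addr] != 1:
                return f"fault at address {addr}"
            mem[addr] = 0
        return "pass"

    print(march_test(1 << 20))  # e.g. a 1 Mbit test array
    ```

    Run the same sweep across a grid of voltages and clock frequencies, and the first failing address and condition maps straight back to a physical spot on the die.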
  • RaynorWolfcastle - Wednesday, May 4, 2011 - link

    http://www.eetimes.com/electronics-news/4213622/TS...

    This article seems to say that TSMC is going FinFET at 14 nm and everything above will be planar! Considering their 28nm process isn't out yet and that there is a 20nm node before the 14nm, it will be a while before TSMC starts using a non-planar geometry.
  • JGabriel - Wednesday, May 4, 2011 - link

    Didn't TSMC skip 32nm to get to 28nm quicker?

    I can see them skipping 20nm and going straight to 14nm, if that'll get them to FinFET sooner. They can't spend 5 years being that far behind Intel technologically. They'll go bankrupt, as would AMD.

    I'm wondering if the tech was included in AMD's patent cross-licensing agreement as part of their last settlement with Intel.

  • Ryan Smith - Thursday, May 5, 2011 - link

    32nm was a half-node for TSMC. 20nm(ish) would be a full node from 28nm. To jump from 28nm to 14nm would effectively be skipping a whole node.
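
    (For reference, a full node has historically meant a ~0.7x linear shrink, i.e. half the area. A quick sanity check of how the named nodes line up, as a sketch:)

    ```python
    # One full node is roughly a 0.7x linear shrink (~0.5x area).
    node = 28.0
    for _ in range(2):
        node *= 0.7
        print(round(node))   # prints 20, then 14: two full nodes from 28nm
    ```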
  • blanarahul - Wednesday, March 28, 2012 - link

    http://images.anandtech.com/reviews/cpu/intel/22nm...

    Did anyone notice the gray curved line in this image, between the black and blue ones?

    Could it be representing the 22nm planar transistor? Because if it does, the entire idea of moving to 3D transistors was useless for Extreme Edition desktop CPUs, because at full speed they operate at greater than 1.3 volts.

    http://dl.dropbox.com/u/1329758/power2.jpg
    This is an image I made after modifying the original image by Intel.
  • blanarahul - Wednesday, March 28, 2012 - link

    I personally think FinFET is useless until 14 nm.
  • fwip - Friday, January 8, 2016 - link

    It looks like AMD is going to be launching GPUs on TSMC's process in the middle of 2016, 5 years after your comment: http://anandtech.com/show/9886/amd-reveals-polaris...

    "A while" was a good estimate. :)
  • Smooth2o - Wednesday, May 4, 2011 - link

    Yes, it is; Intel calls it Tri-Gate or 3D. But TSMC won't have 22nm technology until 2013, and that's a planar process, not FinFET. They have announced FinFET together with 450mm wafer technology for 2015-2016. Umm, that's 4-5 years from now, IF it happens....

    Huh? The entire die is going to be 22nm Tri-Gate (FinFET). They haven't even talked about the increase in speed you get when you use it for memory...
  • blanarahul - Wednesday, March 28, 2012 - link

    http://images.anandtech.com/reviews/cpu/intel/22nm...

    Did anyone notice the gray curved line in this image, between the black and blue ones?

    Could it be representing the 22nm planar transistor? Because if it does, the entire idea of moving to 3D transistors was useless for desktop CPUs, because they operate at greater than 1.3 volts.
  • blanarahul - Wednesday, March 28, 2012 - link

    http://dl.dropbox.com/u/1329758/power2.jpg
    This is an image I made after modifying the original image by Intel!
  • siromega9 - Wednesday, May 4, 2011 - link

    One of the interesting things from one of the promo/detail videos Intel also put out this morning is that this is like a two node jump. So instead of just going from 32nm to 22nm, the actual performance they'll get out of the chip is like jumping from 32nm to 14nm. In historical terms, like jumping from a 2006 65nm Conroe Core 2 CPU to a 2011 32nm Sandy Bridge CPU.

    I don't think this will help them in the ARM battle. The difficulty is that while a 22nm Atom might be suitable for tablets, it's still probably not integrated enough for smartphones. And from an architecture standpoint, that's one ecosystem (Android and iPhone promise that phone apps will run on the tablet). So an x86 tablet would need to have enough horsepower to emulate a multicore ARM chip plus the GPUs on the ARM chip. Given the possibility of 28nm quad-core ARM chips with incredible GPUs, I don't see the Atom being able to emulate that successfully.
  • megakilo - Wednesday, May 4, 2011 - link

    Why emulation? Android runs fine on x86. The apps are in Java, except the NDK parts, which need to be recompiled; system-level optimization is still very important to achieve the best performance.
  • siromega9 - Wednesday, May 4, 2011 - link

    Correct, so not Android but iOS apps (which are compiled to ARM code) would need an emulator.
  • nonzenze - Wednesday, May 4, 2011 - link

    Because Objective-C cannot be trivially compiled to x86? The APIs hide all the platform-specific junk anyway.

    Heck, Apple already has all the source code from the original App Store submission process; they could quite easily recompile the whole store without any further intervention from developers.
  • Thinkyhead - Wednesday, May 4, 2011 - link

    As an App Store developer I can assure you, Apple doesn't have any of my source code, and they would require developers to personally rebuild their apps as ARM/x86 universal binaries for new hardware. The iOS simulator that comes with Xcode actually runs an x86 version of your app, though, so the switch could be handled quickly.
  • tristangrimaux - Wednesday, May 4, 2011 - link

    Nope. No emulation needed for iOS either.
  • Smooth2o - Wednesday, May 4, 2011 - link

    Intel Atom is not going to emulate anything. It will run Android apps directly. The apps are written in high-level code and can easily be moved to a different architecture.
  • nimsaw - Thursday, May 5, 2011 - link

    I am unable to follow the concept of a 'node jump'. Could you please clarify it? Is it just a jump from older manufacturing processes, or is there more to it?
  • JasonInofuentes - Wednesday, May 4, 2011 - link

    "The inversion layer (blue line above) is where the current flow actually happens."

    It's sentences like that which make me dream of full color vision. Sigh.

    Otherwise, very exciting news!!
  • qwertymac93 - Wednesday, May 4, 2011 - link

    http://img709.imageshack.us/img709/4571/screenshot...

    I've replaced the blue line with an alternating black/white line.
  • qwertymac93 - Wednesday, May 4, 2011 - link

    http://img811.imageshack.us/img811/4571/screenshot...

    Did the same with the "trigate" transistor.
  • JasonInofuentes - Wednesday, May 4, 2011 - link

    I am humbly thankful. Now, could you go back and do the same thing with all my teachers' PowerPoint slides?
  • tristangrimaux - Wednesday, May 4, 2011 - link

    Right below the Gate Oxide is the inversion layer, a thick line where the current flows.
  • justaviking - Wednesday, May 4, 2011 - link

    Can anyone please explain the term "drive strength"?

    It was used several times, but I don't know what it means. Reliability? Longevity? What?
  • cotak - Wednesday, May 4, 2011 - link

    Simple. There's an upper limit to how much current a transistor of a specific size can pass. Increasing drive strength means more current.
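
    To put a rough number on it: in the textbook long-channel model, saturation drive current scales with the gate's width-to-length ratio, which is why a gate wrapped around a fin (more effective width in the same footprint) buys drive strength. A toy calculation, with made-up numbers rather than Intel's actual 22nm parameters:

    ```python
    # Toy square-law MOSFET model: I_dsat = 0.5 * k * (W/L) * (Vgs - Vt)^2.
    # k, dimensions, and voltages below are illustrative only.
    def drive_current(w_nm, l_nm, vgs, vt, k=5e-4):   # k in A/V^2
        return 0.5 * k * (w_nm / l_nm) * (vgs - vt) ** 2

    planar = drive_current(w_nm=60, l_nm=30, vgs=1.0, vt=0.35)
    # Tri-gate: the gate covers both sidewalls plus the top of the fin, so
    # effective width is 2*fin_height + fin_width in the same footprint.
    trigate = drive_current(w_nm=2 * 34 + 8, l_nm=30, vgs=1.0, vt=0.35)
    print(f"planar: {planar * 1e6:.0f} uA, tri-gate: {trigate * 1e6:.0f} uA")
    ```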
  • justaviking - Wednesday, May 4, 2011 - link

    Thank you. :-)
  • Pratheek - Wednesday, May 4, 2011 - link

    I thought reducing feature sizes, which in turn increases chip density, would be the only way of increasing performance. But they have also enlarged the inversion layer, which is truly mind-bending for performance...
  • GeorgeH - Wednesday, May 4, 2011 - link

    Isn't Intel the only one with the ability to mass produce anything at 22nm right now? 3D sure sounds cool, and the theoretical benefits are easy to see, but how do we know that this isn't a "forced" innovation with PR-spin?

    By that I mean has 22nm simply gotten to the point where it's just getting too small to manufacture with reasonable reliability without effectively increasing size by expanding vertically? I know Intel is saying other manufacturers will be staying planar at 22nm, but I wonder if this news means that we can expect more than the usual number of manufacturing issues with those processes.
  • davepermen - Wednesday, May 4, 2011 - link

    Of course it's a forced innovation, forced in the sense that they needed to solve a problem.
    But still a great one, as it allows them to continue going smaller while staying efficient (leakage increased for years, harming efficiency; this one shifted that barrier quite a bit again).

    It's like the Hitachi "Get Perpendicular" song about storing bits vertically: http://www.youtube.com/watch?v=-xPvD0Z9kz8
  • GeorgeH - Wednesday, May 4, 2011 - link

    But solve what problem? The problem of "make better transistors", or the more general problem of "manufacture 22nm products without making GF100's engineers blush"? If it's the former, fantastic. If it's the latter, it could have interesting implications for the rest of the industry as they follow Intel in making the 22nm transition.
  • Azethoth - Thursday, May 5, 2011 - link

    This solves the problem of making better transistors. Nobody cares about blushing engineers at nVidia.

    More generally, while Moore's Law has held, certain patterns in it have not. Years ago, heat became a problem to the point where the steady frequency increases just stopped; multi-core became popular because it was a way to still make use of the extra transistors (which had stopped getting faster) and so maintain a kind of speed increase.

    This is an exciting new way to bump up a bunch of the performance characteristics that have languished for years now without much cost. Also it seems that it may scale a bit into the future with the use of more fins.
  • KarlWa - Monday, May 9, 2011 - link

    There's no reason to think Intel won't open its fabs to producing non-x86 chips. There are rumours that they want to make A5 chips for Apple, and they may see some gain in getting back into ARM design if they can use their experience and scale to make the best ARM chips around. They could build a fully custom ARM SoC platform similar to what Nvidia do.

    From a business perspective, it looks like a great way to add growth. It depends on how stubborn Intel is willing to be in holding on to the x86 architecture. It's been a big competitive advantage for them until now, but it doesn't look like Atom is going to make significant headway. Apple are not going to give up the vertical structure of designing their own SoCs, and the big Android licensees are under such pressure to differentiate from each other that they are better suited to the ARM pick-'n'-mix model, with SoCs from a variety of manufacturers. That could eventually filter down to the PC business, with Windows licensees replacing those of Android and the rumours of Apple moving Macs to its own ARM chips also coming true.

    The ARM SoC market is big business and growing as fast as (if not faster than) the already zippy smartphone market. Intel needs to get involved before it becomes obsolete, and opening its fabs and/or entering the ARM SoC market are the two best ideas I can think of for them to do it.
  • krumme - Wednesday, May 4, 2011 - link

    Thanks for a very good explanation.

    Of course the devil is in the details, but the flexibility for the product portfolio looks very interesting so far. Agree that Atom in 2013 is the one to look for, as it's where things could change more fundamentally.
  • Shadowmaster625 - Wednesday, May 4, 2011 - link

    The 5-watt Ontario C-50 at 22nm would scale down to well under 50 mm^2 and probably 2 watts on Intel's superior process. Anything less from Intel would be a failure. Of course we know Intel will make it 1 watt and the GPU will be crippled. And no one will buy it, but they won't care, because they will be able to sell more $900 Ivy Bridge tablets.
  • Rockworthy - Wednesday, May 4, 2011 - link

    Is this 3D transistor technology going to be owned by Intel or will future ARM chips be able to use this too?
  • Smooth2o - Wednesday, May 4, 2011 - link

    It's not owned by anyone; it's shared technology, kinda. That is, if you can figure out how to do it. Intel won't tell you. TSMC (ARM) will have it in 2015 or 2016; however, they are already one node late on that chart, so maybe longer....
  • DanNeely - Thursday, May 5, 2011 - link

    Intel might be able to patent some implementation details, but the basic finFET idea has been in academia for over 20 years and is unpatentable as a result.
  • phazerz - Friday, May 6, 2011 - link

    It would be those "implementation details" that are the valuable IP, seeing as how FinFETs have been known for 20 years yet nobody to date apparently has found a way to mass-produce them economically.
  • lordmetroid - Wednesday, May 4, 2011 - link

    How are these transistors in the image supposed to do anything? I do not see any circuitry, no NAND gates or anything, just transistors sitting on a piece of silicon substrate. Do the images actually show meaningful circuits, or simply transistors?

    In case they do show real circuitry, how are these different sets of transistors connected to each other?
  • BrightCandle - Wednesday, May 4, 2011 - link

    The images in this article only show single transistors. NAND gates are made from a small clump (trying to remember my CS....I'm going with 3 transistors IIRC) and then more advanced logic can be built up from there.
  • nirmv - Thursday, May 5, 2011 - link

    NAND is made from 4 transistors.
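
    (For the curious: a CMOS 2-input NAND is two PMOS transistors in parallel pulling the output up, plus two NMOS in series pulling it down. A switch-level sketch in Python:)

    ```python
    # Switch-level model of a CMOS 2-input NAND: 2 parallel PMOS (pull-up)
    # plus 2 series NMOS (pull-down) = 4 transistors.
    def nand(a, b):
        pull_up = (not a) or (not b)   # a PMOS conducts when its gate is low
        pull_down = bool(a and b)      # NMOS stack conducts only when both are high
        assert pull_up != pull_down    # exactly one network drives the output
        return 1 if pull_up else 0

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", nand(a, b))   # the familiar NAND truth table
    ```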
  • ironargonaut - Wednesday, May 4, 2011 - link

    These just show transistors and they are connected with very small wires.
  • 3tire - Thursday, May 5, 2011 - link

    You build them from transistors.
  • Akaz1976 - Wednesday, May 4, 2011 - link

    So based on the comments here, with this innovation coming to Atom next year (or maybe 2013), Intel would have a 2-4 year lead on ARM manufacturers?

    Would this mean that Atom will become competitive in tablets and smartphones by 2013?
  • ViRGE - Thursday, May 5, 2011 - link

    From the press conference notes I've read, Intel expects to have a 3 year lead on any other fab implementing this (or any similar) technology. That's consistent with everyone else implementing it on the node past 22nm (normally 14nm).
  • Lucian Armasu - Thursday, May 5, 2011 - link

    No. Intel is already one node ahead of ARM, but that hasn't helped them catch up to ARM yet. Even this technology won't reduce the gap in energy efficiency that much, but it will get them closer. ARM chips will be made at 28nm next year. Atom will still be made at 32nm (it's at 45nm now). Intel only uses their latest node technology on their high-end chips.
  • marc1000 - Wednesday, May 4, 2011 - link

    How long will it take to come to market?

    What do I do now? Wait another year or so, or buy Sandy Bridge this year?

    Changing CPU+mobo for every new processor is tiresome...
  • cmptrdude79 - Wednesday, May 4, 2011 - link

    Quoting Wikipedia, "Ivy Bridge processors will be backwards compatible with the Sandy Bridge platform." No socket change. It's just the 22nm refresh, although it definitely looks like it'll hit higher clock speeds and be far more energy efficient.
  • marc1000 - Wednesday, May 4, 2011 - link

    Let's hope Intel does not kill socket 1155 in one year like they did with 1156, then....
  • marc1000 - Wednesday, May 4, 2011 - link

    BTW, this news went live on local television in Brazil... on the same day as the announcement... Intel is working HARD on marketing!
  • Azethoth - Thursday, May 5, 2011 - link

    Depends on what you want. Sandy Bridge is not their high-performance architecture; Ivy Bridge is. So if you want to write email, you keep your current system. If you want to game, or get high scores in Folding@home, and need that extra bit of CPU oomph, then, like me, you wait.
  • iwod - Wednesday, May 4, 2011 - link

    I don't think Atom was ever designed with smartphones in mind. It was more for tablet PCs, nettops and netbooks, with attempts to scale it down to the so-called MID segment.

    The problem is that the smartphone is itself a totally different thing, and by volume or revenue it is larger than all those segments mentioned above combined.

    Tablet PCs, nettops/netbooks and MIDs are like PCs in a different form factor. You try to squeeze PC hardware and software, like Windows 7, into them.

    Smartphones run their own set of software. That is why I think Intel should take the chance to truly design a CPU without the legacy crap (MMX, SSE, 286/386/486 compatibility) and just focus on x86-64 + AVX + FMA as the baseline.

    And maybe call it the AE86.

    That, combined with 22nm and Tri-Gate, should close the gap between Atom and the ARM A15.
  • joelypolly - Thursday, May 5, 2011 - link

    This announcement, combined with the recent rumour that Apple wants to use Intel for future processor manufacturing, would potentially make the A6 processor very interesting. It should be a significant step up in both performance and power consumption. Perhaps a phone that lasts more than 2 days with regular use is near.
  • synaesthetic - Thursday, May 5, 2011 - link

    I'd take a phone that lasts more than ONE day with regular (admittedly somewhat heavy) use... :)
  • Lucian Armasu - Thursday, May 5, 2011 - link

    The rumor said Intel wants to make chips for Apple, not the other way around. Very different thing.
  • AnotherGuy - Thursday, May 5, 2011 - link

    What the heck ever happened to AMD... Bulldozer, or did it get totally scrapped?
    The only good things I hear about them are in the graphics department... did they fire all the CPU engineers?
  • silverblue - Thursday, May 5, 2011 - link

    Bulldozer and Llano are due to go on sale within the next two months, if reports are to be believed. Both should at least be fully unveiled in June.
  • chukked - Thursday, May 5, 2011 - link

    Multitasking here means a lot of data switching or swapping continuously in the CPU.

    I am a big multitasker: three VMs running for Java and DB servers, plus a lot of Opera + Firefox with 400+ tabs open, and many PDFs open (8 GB RAM).

    I look for at least 2 MB of cache per core in a processor (without an integrated graphics core), running at full processor speed.
    (I will prefer a processor with more cache rather than more cores; cache size matters more to me than core count. Beyond four cores, I am looking for more cache rather than more cores.)

    For processors that also have an integrated graphics core, an additional 4 MB of cache should be there to supplement the graphics core.

    Right now the multitasking performance of my Core 2 Duo with 6 MB cache is on par with an i7 with 6 MB cache (it has not increased by any percentage considerable enough to appreciate).
    So I am very frustrated with the 32nm i7 series: rather than adding more cache, they spent all the technology advances on the graphics core, and even reduced the cache, making it slower at multitasking.

    When talking about integrated graphics, I want to consider four cores + at least 10 MB of cache first, then the graphics core. (I am OK with mediocre integrated graphics performance, enough to run regular multimedia and 2D games; if I need more graphics power I will add a graphics card, but if I do not have a lot of cache to swap quickly I am doomed and performance is back to the Pentium era.)

    Experienced multitaskers, kindly add your comments.

    Thanks,
    chukked

    P.S. Dear Anand, thanks for your informative and influential technology-specs website. Could you please include separate single-task and multitasking test cases in your performance review articles, since multitasking is what multicore processors claim to be good at?
  • ssj4Gogeta - Thursday, May 5, 2011 - link

    Can you even relate cache size to speed across multiple architectures?

    Get an SSD and use it as the system drive.
  • Azethoth - Thursday, May 5, 2011 - link

    400 tabs, eh. When you start up your browser, how many hours does it take to fetch fresh pages from the internets? After they are ready for display, how many days does it take you to cycle through all the tabs and see what's new? Do you lose track of where you are and have to restart? Inquiring minds want to know.
  • smut - Thursday, May 5, 2011 - link

    400 tabs? WTF, elaborate please!
  • chukked - Friday, May 6, 2011 - link

    Multiple portable Opera browser sessions, each for a different subject and holding lots of tabs; see each one as serving as a research and collection workspace.

    Right now this AnandTech session alone has 57 tabs from 3-Jan-2011 covering the Intel i7.

    No, it does not take an eternity to load; they all open simultaneously. I have tweaked the cache settings of both Opera and Firefox, but Firefox only runs one session, unlike Opera, so I mostly use Opera.

    Anyway, I use the BarTab plugin in Firefox, which prevents it from loading any tab except the one that has focus, so loading is not an issue. And yes, it has more than 400 tabs even after I split it into separate folders, one for tech and another for science and energy (cold fusion is exotic).

    A big cache alone is capable of improving system response time by a big factor where even your SSD cannot do anything; moreover, it prevents system hangs and Windows crashes altogether when you are switching from program to program very quickly.
  • banvetor - Thursday, May 5, 2011 - link

    One point that I didn't see anyone talking about is the apparent change in the CMOS design paradigm in which smaller technology nodes mean smaller transistor length (L).

    From the slides available on Intel's website (some of which are shown here), it seems that in this new 22nm 3D transistor, the technology resolution corresponds to the gate width (W), not the length...

    It's interesting to think how this change will affect the chip design...
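
    One way to make that concrete: in a planar device the drawn gate width is the electrical width, whereas in a tri-gate device the electrical width comes mostly from the fin height, and you scale width by adding fins. A back-of-envelope sketch (fin dimensions are illustrative, not Intel's actual 22nm geometry):

    ```python
    # Tri-gate: the gate wraps three sides of each fin, so electrical width
    # = n_fins * (2*H_fin + W_fin), while the layout footprint grows only
    # with the fin pitch. All numbers below are illustrative.
    H_FIN, W_FIN, PITCH = 34, 8, 60   # nm

    for n_fins in (1, 2, 3):
        w_eff = n_fins * (2 * H_FIN + W_FIN)
        print(f"{n_fins} fin(s): footprint {n_fins * PITCH} nm, "
              f"electrical width {w_eff} nm")
    ```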
  • Shadowmaster625 - Thursday, May 5, 2011 - link

    This is probably why Intel's stock price jumped 20% a couple of weeks ago. So now we know how long the banksters have had the inside info. Anyway, it is just a pump-and-dump scheme. If you look at the "Transistor Gate Delay" chart carefully, you will notice that each new process node yields a similar improvement. If you compare regular planar gates at 32nm vs regular planar gates at 45nm, you see the same ~25% improvement. In reality this particular innovation (the waffling) is only good for a few percent. So are we really to believe that a bit of waffling is the best Intel can do this year? If so, they are in trouble. I think their marketing department is doing even more waffling than their engineers.
  • marc1000 - Thursday, May 5, 2011 - link

    second!
  • fic2 - Thursday, May 5, 2011 - link

    Intel's CEO made an announcement a couple of weeks ago that their 22nm tech would be revolutionary, but wouldn't say anything more than that.

    If this is Intel "waffling", WTF are other companies doing? Intel is way ahead of everybody else, and this puts them further ahead. If Intel is in trouble, then all the other companies are just walking dead.
  • albundy12345 - Thursday, May 5, 2011 - link

    Or, maybe their stock price jumped because they pocketed 1 billion dollars a month for the last half a year.
  • ironargonaut - Friday, May 6, 2011 - link

    You have totally disregarded the power savings mentioned in the article and elsewhere. The more power you save, the higher you can push the frequency, which increases performance. So in effect, power savings equal a performance boost.

    Remember when Intel reduced the leakage current significantly? Remember the raging success of that part?
  • iwod - Thursday, May 5, 2011 - link

    I think this is bad news, because it gives one more strong reason to wait for Ivy Bridge.
    But does anyone know if Ivy will have FMA yet?
  • silverblue - Thursday, May 5, 2011 - link

    Not as far as I've heard, but Haswell will definitely implement FMA3. Bulldozer supports FMA4 and AMD is playing a wait-and-see game to figure out which is the more successful (and will probably eventually support both, I expect).

    I read into it a bit and found that AMD originally planned FMA3 as part of SSE5. Intel hit back with FMA4, THEN changed their minds and went for FMA3, whilst AMD changed to FMA4. All this playing silly buggers makes it hard to keep track of anything.
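
    (For anyone wondering what the numbers mean: both forms compute a fused a*b + c, and the 3 vs 4 is the operand count. FMA3 reuses one source register as the destination; FMA4 names a separate destination. A rough register-level illustration in Python; a hypothetical sketch, not real ISA code:)

    ```python
    # Both forms compute dst = a*b + c in one fused step; they differ in
    # how many registers the instruction names. Hypothetical sketch.
    def fma3(regs, d, s1, s2):
        regs[d] = regs[d] * regs[s1] + regs[s2]   # destination is also a source

    def fma4(regs, d, s1, s2, s3):
        regs[d] = regs[s1] * regs[s2] + regs[s3]  # all three sources survive

    regs = {"xmm0": 2.0, "xmm1": 3.0, "xmm2": 4.0, "xmm3": 0.0}
    fma4(regs, "xmm3", "xmm0", "xmm1", "xmm2")  # xmm3 = 2*3+4 = 10; xmm0 kept
    fma3(regs, "xmm0", "xmm1", "xmm2")          # xmm0 = 2*3+4 = 10; old xmm0 lost
    print(regs)
    ```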
  • DanNeely - Thursday, May 5, 2011 - link

    The worst part is that it became a mess because they weren't talking to each other after making their initial proposals. Intel looked at FMA3 and decided it was better. AMD decided that even though FMA3 was better than FMA4, it wasn't enough better to justify supporting it when Intel's volume would make FMA4 the default implementation used by 99% of software in a few years. By the time AMD found out about Intel's change, it was too late to revert Bulldozer.
  • grodrigues - Thursday, May 5, 2011 - link

    I don't know much about processors, so maybe my question is stupid. But can someone explain to me why ARM processors are more efficient than Intel's Atom?
  • Lucian Armasu - Thursday, May 5, 2011 - link

    To put it simply, the x86 architecture is more bloated than ARM.
  • BoyBawang - Thursday, May 5, 2011 - link

    x86 is a bloated old pig from the '70s. No matter how much lipstick you put on it, it's still a PIG!
  • Libra4US - Thursday, May 5, 2011 - link

    The picture only shows the pattern of the isolated transistors themselves.

    After adding the needed contacts, let's see what they look like and how dense they are as working transistors.
  • etamin - Thursday, May 5, 2011 - link

    So is there any chance SNB-EX will be built on this new 22nm process? Shipping in 2H 2011 sounds possible...
  • fire400 - Thursday, May 5, 2011 - link

    2007 was supposed to be the debut.

    Good thing it wasn't built off the GHz war, though.
  • dealcorn - Friday, May 6, 2011 - link

    As I read this, voltage tunes transistor switching speed (and CPU speed potential). Intel may vary voltage and CPU speed based on demand, resulting in both better peak performance and substantially greater average efficiency. At peak performance, it sounds like efficiency may increase by something like 37%. Efficiency improvements at lower performance levels are greater. Efficiency is appreciated in the server room.

    Intel has spent years trying to position itself to gain access to the phone CPU market. Is a 40-50% improvement in power efficiency adequate to get Intel a seat at the table?
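
    The leverage here is that dynamic switching power scales with the square of the supply voltage (P ~ C * V^2 * f), so a modest voltage drop at the same clock buys a large power cut. A back-of-envelope check with illustrative numbers (the ~37% and 40-50% figures above are from the article and the comment, not derived here):

    ```python
    # Dynamic power, normalized: P = C * V^2 * f. Illustrative numbers only.
    def rel_power(v, f, v0=1.0, f0=1.0):
        return (v / v0) ** 2 * (f / f0)

    # Spend the headroom on voltage: same clock, ~0.8x supply
    print(f"0.8x voltage, same clock: {rel_power(0.8, 1.0):.0%} power")  # ~64%

    # Or spend it on clocks: same voltage, 1.2x frequency
    print(f"same voltage, 1.2x clock: {rel_power(1.0, 1.2):.0%} power")  # ~120%
    ```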
  • ProDigit - Friday, May 6, 2011 - link

    It'd be nice to see the first Atom processors built on this technology!
    They'd be between 2 and 2.2GHz, yet have a lower thermal footprint than current Atom processors!

    This is excellent material for netbooks and tablet PCs, as well as servers and gaming consoles!

    If Intel gets its head out of its ass, they would NOT bring this to entry-level laptops and office desktops first, but would first test this technology on netbooks, tablets, and gamer processors!

    Once it's proven there are not too many issues with the technology, and overclocking limits have been established by a multitude of overclockers, it might be interesting to tackle the server, business, and entry-level (budget) laptop market!

    But netbooks and tablets first, because they are in dire need of low-power solutions, and gamers too, because they would like to discover the potentially higher overclocking capabilities!

    The technology is not established and trusted well enough to enter the server and business markets yet!
