When I wrote my first article on Intel's Atom architecture, I called it The Journey Begins. I did so because, while Atom has made a nice home in netbooks over the years, it was Intel's smartphone aspirations that would make or break the product. And the version of Atom suitable for smartphone use was two years away.

Time sure does fly. Today Intel is finally unveiling its first Atom processors for smartphones and tablets. Welcome to Moorestown.

Craig & Paul’s Excellent Adventure

Six years ago Intel’s management canned a project called Tejas. It was destined to be another multi-GHz screamer, but concerns over power consumption kept it from coming to fruition. Intel instead focused on its new Core architecture that eventually led to the CPUs we know and love today (Nehalem, Lynnfield, Arrandale, Gulftown, etc.).

When a project gets canceled, it wreaks havoc on the design team. Engineers live and breathe that architecture for years of their lives; not seeing it through to fruition is depressing. But Intel’s teams are usually resilient, as evidenced by another team that worked on a canceled T-project.

The Tejas team in, er, Texas was quickly tasked with coming up with the exact opposite of the chip they had just worked on: an extremely low power core for use in some sort of mobile device (the effort actually started as a low power core for a many-core x86 CPU, but the many-core project was moved elsewhere before the end of 2004). A small group of engineers was first asked to find out whether Intel could reuse any existing architectures in the design of this ultra low power mobile CPU. The answer quickly came back as no, and work began on what was known as the Bonnell core.

No one knew what the Bonnell core would be used in, just that it was going to be portable. Remember, this was 2004, and back then the smartphone revolution had yet to take off. Intel’s management felt that people were going to carry around either some sort of mobile internet device or an evolution of the smartphone. Given the somewhat conflicting design goals of those two devices, the design team in Austin had to focus on only one for the first implementation of the Bonnell core.

In 2005, Intel directed the team to go after mobile internet devices first; the smartphone version would follow. Many would argue that it was the wrong choice; after all, when was the last time you bought a MID? But hindsight is 20/20, and back then the future wasn’t so clear. Not to mention that shooting for a mobile extension of the PC was a far safer bet for a PC microprocessor company than going after the smartphone space. Add in the fact that Intel already had a smartphone application processor division (XScale) at the time, and going the MID route made a lot of sense.

The team had to make an ultra low power chip for use in handheld PCs by 2008. The power target? Roughly 0.5W.

Climbing Bonnell

An existing design wouldn’t suffice, so the Austin team, led by Belli Kuttanna (a former Sun and Motorola chip designer), started with the most basic of architectures: a single-issue, in-order core. The team iterated from there, increasing performance and power consumption until its internal targets were met.

In-order architectures, as you may remember, have to execute instructions in the order they’re decoded. This works fine for low latency math operations, but instructions that need data from memory will stall the pipeline and severely reduce performance. It’s like not being able to drive around a stopped car. Out-of-order architectures let you schedule around memory-dependent operations, so you can mask some of the latency to memory and generally improve performance. Regardless of the order in which instructions execute, they all must complete in the program’s intended order. Dealing with this complexity costs additional die area and power, but it’s worth it in the long run, as we’ve seen. All Intel CPUs since the Pentium Pro have been wide (3-4 issue), out-of-order cores, but they have also had much higher power budgets.
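
A minimal C sketch makes the stopped-car analogy concrete. The function and values below are my own illustration (a compiler may reorder the source, but the dependency chain is what the hardware scheduler sees):

    /* Illustrative only: how a memory load stalls an in-order core. The
     * add below cannot execute until the load returns, which can take
     * hundreds of cycles on a cache miss. An in-order core stalls the
     * pipeline behind it; an out-of-order core can run the independent
     * multiply while the load is still in flight. */
    long load_then_work(const long *p, long x)
    {
        long loaded = *p;              /* may miss in cache */
        long dependent = loaded + 1;   /* must wait for the load */
        long independent = x * 3;      /* no dependency on the load */
        return dependent + independent;
    }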

As I mentioned in my original Atom article in 2008, Intel was committed to using in-order cores for this family for the next five years. It’s safe to assume that at some point, when transistor geometries get small enough, we’ll see Intel revisit this fundamental architectural decision. In fact, ARM has already gone out of order with its Cortex A9 CPU.

The Bonnell design was the first to implement Intel’s 2-for-1 rule: any feature included in the core had to increase performance by 2% for every 1% increase in power consumption. That design philosophy has since been embraced by the entire company; Nehalem was the first to implement the 2-for-1 rule on the desktop.
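
The arithmetic is simple enough to sketch in code. The helper and the example figures below are my own illustration, not Intel’s actual methodology:

    #include <stdbool.h>

    /* Hypothetical check of the 2-for-1 rule described above: a candidate
     * feature qualifies only if its performance gain is at least twice its
     * power cost. E.g. +3% performance for +1% power passes; +1.5% for +1%
     * does not. */
    static bool passes_two_for_one(double perf_gain_pct, double power_cost_pct)
    {
        return perf_gain_pct >= 2.0 * power_cost_pct;
    }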

What emerged was a dual-issue, in-order architecture, the first of its kind from Intel since the original Pentium microprocessor. Intel has learned a great deal since 1993, so reinventing the Pentium came with some obvious enhancements.

The easiest was SMT, or as most know it: Hyper Threading. Five years ago we were still arguing about the merits of single vs. dual core processors; today virtually all workloads are at least somewhat multithreaded. SMT vastly improves efficiency if you have multithreaded code, so Hyper Threading was a definite shoo-in.
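
A rough pthreads sketch of the kind of workload SMT helps with follows. It is my own illustration (no core pinning, no measurement), not anything Intel published:

    #include <pthread.h>
    #include <stdio.h>

    /* Two threads sharing one physical core via SMT: while the memory-bound
     * thread stalls on cache misses, the core's second hardware thread can
     * keep the execution units busy with the compute-bound loop. */
    #define N (1 << 20)
    static long data[N];

    static void *memory_bound(void *out)
    {
        long sum = 0;
        for (long i = 0; i < N; i += 16)   /* strided reads: frequent misses */
            sum += data[i];
        *(long *)out = sum;
        return NULL;
    }

    static void *compute_bound(void *out)
    {
        long acc = 1;
        for (long i = 1; i < N; i++)       /* pure ALU work, no memory traffic */
            acc = acc * 33 + i;
        *(long *)out = acc;
        return NULL;
    }

    int main(void)
    {
        long r1 = 0, r2 = 0;
        pthread_t t1, t2;
        pthread_create(&t1, NULL, memory_bound, &r1);
        pthread_create(&t2, NULL, compute_bound, &r2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("results: %ld %ld\n", r1, r2);   /* keep the work observable */
        return 0;
    }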

Other enhancements include Safe Instruction Recognition (SIR) and macro-op execution. SIR allows limited out-of-order execution when the right combination of instructions appears. Macro-op execution, on the other hand, fuses x86 instructions that perform related ops (e.g. load-op-store, load-op-execute) so they go down the pipeline together rather than independently. This increases the effective width of the machine and improves performance (as well as power efficiency).
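
To give a sense of the code shape involved, here is a load-op-store example of my own (which exact cases fuse is up to the hardware, not this sketch):

    /* On x86 the increment below can compile to a single memory-operand
     * instruction (e.g. add DWORD PTR [rdi], 1): a load, an add, and a
     * store expressed as one x86 instruction. A macro-op pipeline like
     * Bonnell's carries it down the pipe as one operation rather than
     * three independent ones. */
    void bump(int *counter)
    {
        *counter += 1;
    }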

Features like hardware prefetchers are present in Bonnell but absent from the original Pentium. And the caches are highly power optimized.

Bonnell refers to the core itself; paired with an L2 cache and FSB interface, it became Silverthorne, the CPU in the original Atom. For more detail on the Atom architecture, be sure to look at my original article.

The World Changes, MIDs Ahead of Their Time
Comments

  • DanNeely - Wednesday, May 5, 2010 - link

    I think you're misunderstanding the slide. It's not saying 1024x600 to 1366x768; it's saying up to 1366x768 on interface A, and up to 1024x600 on interface B.
  • Mike1111 - Wednesday, May 5, 2010 - link

    Thanks for the clarification. Looks like I really misunderstood this sentence:
    "Lincroft only supports two display interfaces: 1024 x 600 over MIPI (lower power display interface) or 1366 x 768 over LVDS (for tablets/smartbooks/netbooks)."
  • uibo - Wednesday, May 5, 2010 - link

    I wonder how many transistors there are in a Cortex A9 core? Just the core, nothing else.
    It seems to me that ARM could just double or quadruple their core count against the Intel solution while still maintaining a lower transistor count.
    They could also just increase the CPU clock speed; if there is a market for the more power-hungry Intel solution then there is one for the ARM solution too.
  • strikeback03 - Wednesday, May 5, 2010 - link

    I would imagine even less smartphone software is written for multi-core now than was written for the desktop when dual-core CPUs started appearing there, so going beyond 2 cores at this time is probably not a great move. Plus, the dual-core A9 isn't out yet, so we can't see its power consumption; even at 45nm I doubt it will be much below the current 65nm single-core chips, if at all. So if Intel is already competitive, ARM doesn't exactly have the power budget to add cores.
  • uibo - Thursday, May 6, 2010 - link

    That actually makes sense. Nobody is going to write multi-threaded apps for a single-thread CPU. I'd imagine that the number of apps whose experience is hindered by performance is not that great at the moment. Games, browsers, the UI, the database for the info stored on your device - I'm not expecting these to scale perfectly across many cores, but I do expect a x0% performance increase.
  • DanNeely - Thursday, May 6, 2010 - link

    The real benefit for the 2nd core is probably multi-tasking. Your streaming music app can run in the background on the second core while your browser still has a full core to render web pages.
  • Shadowmaster625 - Wednesday, May 5, 2010 - link

    Moorestown has to support a desktop OS. Intel is clearly moving towards wireless computing. They are bringing wireless video. With wireless video you can turn your phone into a desktop PC instantly by adding a wireless monitor and keyboard. What is the point of moving in that direction if you're moving towards a crippled OS? (Not that Windows isn't crippled, if you consider obesity a form of crippling.)

    If it needs a PCI bus, then emulate one!
  • Caddish - Wednesday, May 5, 2010 - link

    Just registered to say keep up the good work. Since the SSD Anthology I have read all of your articles like that one, and they are awesome.
  • legoman666 - Wednesday, May 5, 2010 - link

    Excellent article, very well written.
  • jasperjones - Wednesday, May 5, 2010 - link

    Anand,

    You mention twice in the article that Apple and Google dominate the smartphone market. This is utter nonsense. The numbers from IDC as well as the numbers from Canalys clearly show that Nokia is the worldwide leader in the smartphone market. RIM is number 2, Apple is in third place, and the highest-placed maker of Android devices, HTC, has the number 4 spot.

    I realize that Nokia's market share in the U.S. is smaller than its global market share. However, even if we restrict ourselves to the U.S. market, RIM smartphone sales are bigger than those of Apple. They are also bigger than the sales of all Android smartphones combined.
