Amongst the new iPad and Watch devices released today, Apple made news in releasing the new A14 SoC. Apple’s newest-generation silicon design is noteworthy in that it is the industry’s first commercial chip to be manufactured on a 5nm process node, making it the first of a new generation of designs that are expected to significantly push the envelope in the semiconductor space.

Apple’s event disclosures this year were a bit confusing, as the company was comparing the new A14’s metrics against the A12, given that’s what the previous-generation iPad Air had been using until now – we’ll need to add some proper context to the figures to work out what this means.

On the CPU side of things, Apple is using new-generation large performance cores as well as new small power-efficient cores, but the CPU remains in a 2+4 configuration. Apple claims a 40% performance boost for the CPU, although the company doesn’t specify exactly what this metric refers to – is it single-threaded performance? Is it multi-threaded performance? Is it for the large or the small cores?

What we do know is that it’s in reference to the A12 chipset, and Apple had already claimed a 20% boost for the A13 over that generation. Simple arithmetic thus dictates that the A14 would be roughly 17% faster than the A13, if Apple’s performance metric measurements are consistent between generations.

On the GPU side we see a similar calculation, as Apple claims a 30% performance boost compared to the A12 generation thanks to the A14’s new 4-core GPU. Normalising this against the A13, this would mean only an 8.3% performance boost, which is actually quite meagre.
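As a sanity check, the normalisation above can be reproduced in a few lines of Python. The uplift percentages are Apple’s A12-relative claims; the helper name is mine:

```python
def vs_previous_gen(a14_vs_a12: float, a13_vs_a12: float) -> float:
    """Convert two A12-relative uplift claims into an implied A14-over-A13 uplift."""
    return (1 + a14_vs_a12) / (1 + a13_vs_a12) - 1

cpu_gain = vs_previous_gen(0.40, 0.20)  # (1.40 / 1.20) - 1 ≈ 0.167
gpu_gain = vs_previous_gen(0.30, 0.20)  # (1.30 / 1.20) - 1 ≈ 0.083
print(f"CPU: {cpu_gain:.1%}, GPU: {gpu_gain:.1%}")  # CPU: 16.7%, GPU: 8.3%
```

Note that the relative gains multiply rather than subtract, which is why the implied CPU uplift works out to ~16.7% rather than a flat 20-point difference.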

In other areas, Apple is boasting more significant performance jumps, such as the new 16-core neural engine, which now sports up to 11 TOPS of inferencing throughput – over double the A12’s 5 TOPS, and 83% more than the A13 neural engine’s estimated 6 TOPS.

Apple does advertise a new image signal processor amongst the SoC’s new features, but otherwise the performance metrics (aside from the neural engine) seem rather conservative given that the new chip boasts 11.8 billion transistors, a 38% generational increase over the A13’s 8.5bn figure.
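The neural engine and transistor figures check out the same way; the numbers below are just the ones quoted above (the A13 TOPS value remains an estimate):

```python
# Neural engine throughput figures (TOPS): A14 claimed, A12 claimed, A13 estimated.
a14_tops, a12_tops, a13_tops_est = 11.0, 5.0, 6.0
print(f"A14 vs A12 NPU: {a14_tops / a12_tops:.1f}x")                # 2.2x – over double
print(f"A14 vs A13 NPU (est.): {a14_tops / a13_tops_est - 1:.0%}")  # 83%

# Transistor counts in billions: A14 vs A13.
a14_transistors_bn, a13_transistors_bn = 11.8, 8.5
print(f"Transistor increase: {a14_transistors_bn / a13_transistors_bn - 1:.1%}")  # 38.8%
```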

One theory I have is that Apple might have finally pulled back on the excessive peak power draw at the maximum performance states of the CPU and GPU; peak performance thus wouldn’t have seen such a large jump this generation, in favour of more sustainable thermal figures.

Apple’s A12 and A13 chips were large performance upgrades on both the CPU and GPU side; however, one criticism I had made of the company’s designs is that both increased power draw beyond what was usually sustainable within a mobile thermal envelope. This meant that while the designs had amazing peak performance figures, the chips were unable to sustain them for prolonged periods beyond 2-3 minutes. Even so, the devices throttled to performance levels that were still ahead of the competition, leaving Apple in a leadership position in terms of efficiency.

What speaks against such a theory is that Apple made no mention at all of concrete power or power-efficiency improvements this generation, which is rather unusual given that they’ve traditionally always remarked on this aspect of new A-series designs.

We’ll just have to wait and see whether this is indicative of the actual products not having improved in this regard, or whether it’s just an omission and side-effect of the event’s new, more streamlined presentation style.

Whatever the performance and efficiency figures turn out to be, what Apple can boast about is having the industry’s first-ever 5nm silicon design. The new TSMC-fabricated A14 thus represents the cutting edge of semiconductor technology today, and Apple made sure to mention this during the presentation.

Comments Locked

  • Oxford Guy - Tuesday, September 15, 2020 - link

    Perhaps I'm too cynical, but my first guess is always that the ever-expanding AI transistor budget is primarily driven by the desire for corporations/corporate government to have increasingly sophisticated spyware. (Giving people perks to go along with it — like saving some guy about to fall off a cliff — is the spoonful of sugar for the medicine, of course.)

    Agner Fog joked years ago that the main benefit of multiple processor cores is to make the spyware run faster. Well, since we already have plenty of extra cores these days a newer approach was needed, to expand upon the panopticon. There are already so many layers of spyware in current devices that the spies might need the AI to keep track of all of it.

    Getting people to surrender to chip-based TIA while paying for the pleasure is a neat trick.
  • watzupken - Tuesday, September 15, 2020 - link

    From the sound of it, this may be one of the smallest improvements in their SoC performance to date. The fact that they are using the A12 instead of the A13 as the comparison is a telltale sign. From my memory, they've always compared new vs. last generation to show the performance improvement. Seems like they lost their lead CPU designer and we are starting to see the impact. At this rate, Apple is at risk of losing whatever single-core advantage it has to the generic ARM chips.
  • Boland - Tuesday, September 15, 2020 - link

    They're comparing to the A12 because that's what was in the last iPad. When the phone keynote comes around, you'll get the 13 comparison there.
  • Zerrohero - Wednesday, September 16, 2020 - link

    ”Apple is at risk of losing whatever single-core advantage it has to the generic ARM chips.”

    If A14 is 40% faster in single core than A12, it means that the GB5 score for A14 is about 1550.

    For reference, the SD 865 gets about 900 in GB5. Yes, 900.

    If Apple stops chip development now, then the generic ARM chips will surpass them in four-ish years in single core performance, if they improve 15% every year.
  • jaj18 - Wednesday, September 16, 2020 - link

    lol no, ARM already announced the Cortex-X1, which will be 30% faster than the 865 at 3GHz. Then it's just a 25% difference with the A14. It will be less if Qualcomm goes above 3GHz.
  • Meteor2 - Wednesday, October 7, 2020 - link

    What phone can I buy that in?
  • Rego78 - Tuesday, September 15, 2020 - link

    As far as we know, the A12 was the first processor on 7nm. Is their chart wrong?
  • Rego78 - Tuesday, September 15, 2020 - link

    Even saying so in their presser:
  • Sychonut - Tuesday, September 15, 2020 - link

    This would have performed better on 14+++++. Just saying.
  • Oxford Guy - Wednesday, September 16, 2020 - link

    Looks like I'm not too cynical after all:

    "All of this is just to treat the symptoms, not the cause. Chaslot believes the real focus needs to be on long term solutions. At the moment, users are fighting against supercomputers to try to protect their free will, but it’s a losing battle with the current tools.

    That’s why Chaslot is convinced the only way forward is to provide proper transparency and give users real control: 'In the long term, we need people to be in control of the AI, instead of AI controlling the users.'"

    This is from an article called ‘YouTube recommendations are toxic,’ says dev who worked on the algorithm.

    I have been trying to figure out how to rid myself of the dreadful banality of the "12-year-old Humiliates Simon Cowell" video, among other monstrosities that incessantly show up in that list (because my mother visited and watched those awful reality show videos via our WiFi).

    Chaslot's statement about supercomputers and our free will seems spot on, if chilling. But, go go Apple. And Nvidia. And everyone else. Cram in as much AI goodness as possible.
