Dribble - Tuesday, March 13, 2012 - link
To distract from the weak CPU. E.g. Tegra 3 has four A9s at up to 1.5GHz, which means in most applications it'll be significantly faster. The exception being a few graphics-heavy games.

Death666Angel - Tuesday, March 13, 2012 - link
Are you reading this site? What kind of tablet app uses that many cores and/or that much horsepower from the CPU? Considering that Apple has to push more pixels in games than anyone else and considering that iOS doesn't need many CPU cores, I think they made the best decision. Now, I still won't buy a tablet for a year or more, but I cannot see how they made a mistake here.

Sahrin - Thursday, March 15, 2012 - link
>Are you reading this site? What kind of tablet app uses that many cores and/or that much horsepower from the CPU?

The OS? Anything that can be instanced (read: anything)? Anything that can be run in parallel with anything else (read: anything)? Also, note that the operating frequency of the CPU is 50% higher as well. So even in the world of a poorly written OS Executive vis-à-vis Apple iOS, all else being equal Tegra 3 is 50% faster.
Christ you Mac fanboys need to stop drinking the Kool-Aid.
>Considering that Apple has to push more pixels in games than anyone else and considering that iOS doesn't need many CPU cores, I think they made the best decision.
I use an A5 on a daily basis, and I can tell you that the biggest problem with iOS ain't the graphics - it's the OS and thread switching. It hangs like Windows XP on 64MB of RAM. iOS has to be the least responsive OS I've ever used, and as a lifelong Windows user that's saying something. Graphics won't make it more responsive, just more eye candy - which is perfect for the Mac crowd - as long as they have something pretty to look at with an Apple logo they're pacified. It's like intellectual kryptonite to them.
michael2k - Friday, March 16, 2012 - link
Haha, least responsive OS? I guess you haven't used Android, then :)

tipoo - Tuesday, March 13, 2012 - link
From most benchmarks I've seen though, that's not the case; T3 seems to be memory bottlenecked. It has a single-channel memory controller for four active cores and a GPU, while others have dual channel for just two cores and a GPU. Besides, Anandtech showed most mobile apps still aren't very well threaded, the majority still don't use two cores let alone four, and with Android and iOS multitasking is strictly controlled so you won't be doing four intensive things at once anyways.

zorxd - Tuesday, March 13, 2012 - link
Most benchmarks so far are either GPU benchmarks or browser benchmarks. None of them are good to show the CPU speed.
metafor - Tuesday, March 13, 2012 - link
So what use-cases are you thinking of that will scale to 4 cores and be bottlenecked by the CPU? I can think of some games that may rely on very heavy physics but those are few and far between. Not to mention they likely work just fine if they utilize NEON on 2xA9 anyway.

zorxd - Tuesday, March 13, 2012 - link
Web browsing is (or at least could be) multithreaded. In fact any app that can use 2 cores can probably also use 4. But anyway, Tegra 3 is the fastest mobile CPU so far even if you disable 2 cores, because it's the highest clocked Cortex A9.
ltcommanderdata - Tuesday, March 13, 2012 - link
Programs that can use 2 cores can probably benefit from 4 cores. The question is, are they already programmed to be highly multithreaded? If they aren't, how many developers are going to reprogram their apps to take advantage of 4 cores? Your point about clock speed is right, in that higher clock speed benefits existing apps without needing developer work. It seems most other new SoCs from TI, Samsung or Qualcomm are focusing on high-clock-speed dual cores rather than moving to quad core, so the Tegra 3 may be the odd man out, which makes it less likely most developers would take the time to enhance their multithreading. Quad core may be a good marketing feature, but the end user will see the performance benefits.

ltcommanderdata - Tuesday, March 13, 2012 - link
Due to the lack of an edit button, the correction for my last line above is: Quad core may be a good marketing feature, but will the end user see the performance benefits?
zorxd - Tuesday, March 13, 2012 - link
In that case they should have made 2 GHz+ single-core CPUs instead of dual and quad core.

There might not be a lot of multithreaded programs right now in the mobile space, but unless you change your tablet every 6 months you probably want it to be a little future proof. It's no different than desktop PCs. At first we had single, dual, and now quad or even more cores. It's just a matter of time. However it is true that if Nvidia is the only quad-core maker this year this could slow things down a little.
ltcommanderdata - Tuesday, March 13, 2012 - link
The issue is that even most desktop programs, including games, barely make use of 4 cores even after all this time. I doubt the situation will be better for tablets in the near term, i.e. the lifespan of a Tegra 3 tablet.

Just as for desktops, going from 1 to 2 cores made a lot of sense because even without multithreaded apps or multiple apps, 2 cores are useful because background tasks can be shuffled to the 2nd core, allowing the foreground program to run uninterrupted on the 1st core. Going from dual to quad core, however, sees diminishing returns without some work on the developers' part.
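[Editor's note: the diminishing-returns argument here is essentially Amdahl's law. A quick sketch — the 50% parallel fraction is an illustrative assumption, not a measurement of any real app:]

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Best-case speedup when only part of a workload can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Suppose half of an app's work parallelizes perfectly:
for cores in (1, 2, 4):
    print(cores, "cores ->", round(amdahl_speedup(0.5, cores), 2), "x")
```

With a 50% parallel fraction, going from one to two cores buys ~1.33x, but two to four adds only ~0.27x more — the serial portion dominates, which is the "diminishing returns" above.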
zorxd - Tuesday, March 13, 2012 - link
Well again, if I had to choose between a quad 1.3 GHz or a dual 2.6, the latter wins hands down.

It will be interesting to see more dual 1.5 GHz Krait vs quad 1.3 GHz Cortex A9 comparisons.
name99 - Tuesday, March 13, 2012 - link
Just stop it dude. You are embarrassing yourself.
Remember that kid with the lightsaber on YouTube? That's where you're headed...
metafor - Tuesday, March 13, 2012 - link
How so? Multiple tabs can be multithreaded easily, but that's more of a user behavior study rather than a computational study.

Many of the computationally intensive components of parsing a webpage aren't really thread-friendly. Namely, JavaScript and HTML parsing. You can argue that a single page contains so many different snippets that you can parse them individually in parallel, but the problem becomes that the overhead required to fill the cache and start parsing would eclipse the processing time of most scriptlets. This makes loading them onto multiple cores impractical and not really beneficial.
There may very well be webpages that do benefit tremendously from multiple cores. But thus far, we've not seen them.
I agree that Tegra 3's CPU, even in single-threaded cases such as webpage parsing and rendering, is faster than the A5X (albeit from page load times, I'd say Krait does far better). But Apple obviously thinks iPad 2-level webpage performance is already past the point of being perceived as "slow" by the end-user, so they don't think it needs to be faster, at least for now.
How true that is will depend entirely on the user and the typical webpage they go to, obviously.
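[Editor's note: the dispatch-overhead point about tiny scriptlets can be sketched directly. A toy illustration: `parse_snippet` is a hypothetical stand-in for parsing one small scriptlet, and Python's GIL means these threads add queueing cost without real CPU parallelism — which only strengthens the point for tasks this small:]

```python
from concurrent.futures import ThreadPoolExecutor

def parse_snippet(src: str) -> int:
    # Hypothetical stand-in for parsing one tiny scriptlet:
    # the "work" is trivial compared to the cost of dispatching it.
    return sum(ord(c) for c in src)

snippets = ["var x%d = %d;" % (i, i) for i in range(1000)]

# Serial: just loop.
serial = [parse_snippet(s) for s in snippets]

# "Parallel": every tiny task pays pool queueing/wakeup overhead.
with ThreadPoolExecutor(max_workers=4) as pool:
    pooled = list(pool.map(parse_snippet, snippets))

assert pooled == serial  # same answers; the pool only adds overhead here
```

Profiling both paths on tasks this small typically shows the pooled version no faster, and often slower — the per-task overhead eclipses the work, as described above.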
zorxd - Tuesday, March 13, 2012 - link
Many web sites seem to include animated GIFs (mainly for advertisements) now that Flash isn't really popular on mobile devices. I guess some cores can be dedicated to that while others render each web page frame.

metafor - Tuesday, March 13, 2012 - link
How much overhead do you think an animated GIF takes? Have you profiled one? You know that other than the initial decoding, it doesn't really take much if any of the CPU... Hell, even the decode of a GIF is a blip on the radar.
The redraw can be a function of the CPU, but unless the browser renders the page in frames with multiple redraw windows, that task is probably better left to the GPU and its backcache.
zorxd - Tuesday, March 13, 2012 - link
All I know is that browsing on a desktop PC is still much faster than on a tablet, no matter what GPU sits in your PC. So there is still room for much faster browsing on tablets, and a faster CPU is the key.
Now maybe not everything can be threaded so in that case we will need higher clock speeds and better architectures.
metafor - Tuesday, March 13, 2012 - link
I would look more towards the browser. An iPad does about as well as (better in many cases than) IE does on my laptop.

Chrome, of course, flies past them all.
But again, there's a point of diminishing returns. Apple obviously feels that battery life takes precedence over the benefit of instant vs. half a second when it comes to rendering a page.
zorxd - Tuesday, March 13, 2012 - link
I don't use IE often, but I am pretty sure that it is faster than an iPad by a big margin.

ThreeDee912 - Wednesday, March 14, 2012 - link
Interestingly, iOS has already had a port of Grand Central Dispatch from OS X since iOS 4. The thing is, I'm not sure how many people are taking advantage of it when coding iOS apps...

BrooksT - Tuesday, March 13, 2012 - link
Two huge caveats:

1) iOS uses a lot more hardware acceleration than Android; the Tegra 3 may do more *work*, but that doesn't mean the user experience will be significantly faster.
2) To the extent that horsepower is left over for userland, the tegra 3 will be faster on CPU-bound tasks. There aren't a whole lot of these in the tablet space -- most apps are limited by network, GPU, or memory speed.
zorxd - Tuesday, March 13, 2012 - link
1) ICS has full hardware acceleration.
2) GPU? Except for 3D games, I don't think there is anything that is bound by the GPU on a tablet. A faster CPU helps while browsing real (non-mobile) web sites. Just like you wouldn't want to browse on a Pentium 3 PC anymore, a faster CPU is always welcome on tablets.
ltcommanderdata - Tuesday, March 13, 2012 - link
Most multimedia productivity apps on iOS are GPU accelerated since they would be built on the GPU-accelerated Core Image, Core Video, and Core Graphics frameworks that Apple provides. Adoption of these frameworks is presumably very high for these types of apps, especially for smaller developers, due to the speedup compared to going CPU-only and the convenience of not having to write your own OpenGL ES solution. Apple uses them themselves for iMovie, GarageBand, and iPhoto. UI for all applications generally relies on the GPU-accelerated Core Animation, so the GPU can be more valuable than the CPU for UI responsiveness. Even text can be GPU accelerated using Core Text. These frameworks have always been GPU accelerated and have been around for a while, so adoption should be very high. Given everything from text to UI transitions to images and video is GPU accelerated on iOS, it's not surprising that Apple wanted to focus on the GPU over the CPU.

Now with expanded hardware acceleration in Android 3.x/4.x, does Android provide these types of GPU-accelerated frameworks for a variety of tasks? Is adoption by apps high? (These are actual questions since I don't know.)

r3loaded - Tuesday, March 13, 2012 - link
r3loaded - Tuesday, March 13, 2012 - link
Yep, Android 4.0 does provide GPU-accelerated rendering paths for all apps that choose to use it. In fact, there's a developer option to force all apps to use the GPU-accelerated path - it's disabled by default since some apps might crash with this set until they are updated, but I've not encountered any problems so far.

ltcommanderdata - Tuesday, March 13, 2012 - link
Is it just GPU accelerating the rendering of the program screen to the display or does it GPU accelerate the functional compute tasks of a program such as the application of an image filter to a picture as Core Image does for photo editing apps?zorxd - Tuesday, March 13, 2012 - link
Stuff like video editing is GPU accelerated on PC too. Yet, you will never find any professional or even amateur video guy telling you that it runs just as fast on a single core as on a quad core. Not everything can be offloaded to the GPU.

Also, UI transitions do not need a huge GPU. The SGX535 in the old iPhone is still good enough for that.
zorxd - Tuesday, March 13, 2012 - link
Tegra 3 is currently limited to 1.3 GHz, at least on the Transformer Prime.

BSMonitor - Wednesday, March 14, 2012 - link
iOS doesn't require tons of CPU cores and threads to have great response times. All 4 cores at 1.5GHz says to me "Battery = 0%".

scook9 - Wednesday, March 14, 2012 - link
So my Transformer Prime got 1721... just to put numbers to what was stated above me ;)

That is Tegra 3 overclocked to 1.6 GHz, by the way.
So, Tegra 3 CPU > A5X CPU, but Tegra 3 GPU < A5X GPU.
ltcommanderdata - Tuesday, March 13, 2012 - link
iPad 3:
http://www.glbenchmark.com/phonedetails.jsp?D=Appl...
iPad 2:
http://www.glbenchmark.com/phonedetails.jsp?D=Appl...
iPad 3 GLBenchmark results are in. The low-level benchmarks show pretty much perfect 2x scaling from the iPad 2 in the tests that don't already hit the 60 fps cap, so the clock speed appears the same going from SGX543MP2 to SGX543MP4. The MP architecture seems to scale very efficiently in these theoretical tests.

In the game benchmarks, the iPad 3 is ~70% faster than the iPad 2 in Pro 720p and ~55% faster in Egypt 720p, so there appears to be a bottleneck somewhere. Seeing as the Geekbench results show memory performance hasn't changed, Apple likely retained the existing 2x32-bit LPDDR2-800 setup, meaning the iPad 3 is bandwidth constrained. If Apple wanted to focus on the GPU over the CPU, you'd think they would have moved to LPDDR2-1066 to maximize the SGX543MP4, since they need as much GPU performance as possible for the Retina Display.
zorxd - Tuesday, March 13, 2012 - link
Interesting. So in the end the A5X is about 2x (Egypt 720p) to 3x (Pro 720p) faster than Tegra 3 for graphics.
ltcommanderdata - Tuesday, March 13, 2012 - link
I think that's pretty much what is expected in terms of the theoretical raw speedup manufacturers claim versus the real-world difference developers and users see once everything is put together. Hopefully the iPad 2 GPU had a lot of unused performance, so game developers won't be trading increased graphical effects to drive the extra pixels in the iPad 3.

archer75 - Tuesday, March 13, 2012 - link
Now let's see that at the native resolution of the iPad 3 display.

milli - Tuesday, March 13, 2012 - link
How do you conclude that the chip is 32nm? If you ask me it's still 45nm.
FATCamaro - Tuesday, March 13, 2012 - link
Agreed. It's not clear at all.

tipoo - Tuesday, March 13, 2012 - link
I was thinking the same: the CPU is clocked the same and has the same architecture, and the post above showed the GPU runs at the same speed, so the only difference on the SoC seems to be moving from MP2 to MP4 graphics. If they moved to 32nm, would that really need the new heat spreader? The old one already had an EMI shield over it, so I'm guessing that's what the metal bit is for.

Kristian Vättö - Tuesday, March 13, 2012 - link
I didn't state that it's 32nm for sure, but that has been our guess.

ananduser - Tuesday, March 13, 2012 - link
Dear Kristian,

Engadget had posted, through one of its readers, a similar Geekbench score for the Asus TP tablet: 1900. Would that allow Nvidia to make a presentation slide and state that its SoC is more than twice as fast as Apple's "new" iPad SoC?
Zink - Tuesday, March 13, 2012 - link
With the right bench, Tegra 3 could be shown to have 3 times the CPU performance.

zorxd - Tuesday, March 13, 2012 - link
At 1.5 GHz, absolutely.

Kristian Vättö - Tuesday, March 13, 2012 - link
Definitely, if tested under the right circumstances. The ASUS Transformer Pad Infinity will have Tegra 3 running at up to 1.6GHz, so it would smoke the A5X when it comes to raw CPU performance. Then again, when it comes to GPU performance, A5X is 2-3 times faster (see the links below and compare T3 with iPad 3):
http://www.anandtech.com/show/5663/analysis-of-the...
http://www.glbenchmark.com/phonedetails.jsp?D=Appl...
Steelbom - Wednesday, March 14, 2012 - link
Keep in mind that the difference may (and likely will, imo) grow to four times at higher resolutions.

tipoo - Wednesday, March 14, 2012 - link
Resolutions that Tegra 3 won't likely be paired with, I'm guessing. By the time Android tablets get resolutions like this iPad's, we'll be on to the next Tegra.

Steelbom - Saturday, March 17, 2012 - link
The TF700 is coming out in a few months with a 1920x1200 resolution. That's going to be a significant workload for the Tegra 3 in games, at current clock speeds. I'm not sure if they can go any higher though.

tipoo - Tuesday, March 13, 2012 - link
Yup, with twice as many cores and clocked higher, the T3 would smoke the A5X in raw compute performance. The thing is, most apps won't use all four cores, so the difference probably won't be that large. We'll see if Tegra Zone games really take off; that might be one of the few things that will use four cores.

Steelbom - Wednesday, March 14, 2012 - link
That would be a little deceptive if they did that, though; Apple did say it was only in graphics that it was faster.

Steelbom - Wednesday, March 14, 2012 - link
Is it possible for me to sort these comments so I see the new ones at the top of the article instead of the bottom on page four?