Intel’s Tiger Lake 11th Gen Core i7-1185G7 Review and Deep Dive: Baskin’ for the Exotic
by Dr. Ian Cutress & Andrei Frumusanu on September 17, 2020 9:35 AM EST
- Posted in
- Tiger Lake
- Willow Cove
- 11th Gen
- Tiger King
The big notebook launch for Intel this year is Tiger Lake, its upcoming 10nm platform designed to pair a new graphics architecture with a nice high frequency for the performance that customers in this space require. Over the past few weeks, we’ve covered the microarchitecture as presented by Intel at its latest Intel Architecture Day 2020, as well as the formal launch of the new platform in early September. The missing piece of the puzzle was actually testing it, to see if it can match the very progressive platform currently offered by AMD’s Ryzen Mobile. Today is that review, with one of Intel’s reference design laptops.
Like a Tiger Carving Through The Ice
The system we have to hand is one of Intel’s Reference Design systems, which is very similar to the Software Development System (SDS) we tested for Ice Lake last year. The notebook we were sent was built in conjunction with one of Intel’s OEM partners, and is meant to act as an example system to other OEMs. This is slightly different to the software development system, which was mainly for the big company software developers (think Adobe) for code optimization, but the principle is still the same: a high powered system overbuilt for thermals and strong fans. These systems aren’t retail, and so noise and battery life aren’t part of the equation of our testing, but it also means that the performance we test should be some of the best the platform has to offer.
Our reference design review sample implements Intel’s top tier Tiger Lake ‘Core 11th Gen’ processor, the Core i7-1185G7. This is a quad core processor with hyperthreading, offering eight threads total. This processor also has the full sized new Xe-LP graphics, with 96 execution units running up to 1350 MHz.
I haven’t mentioned the processor frequency or the power consumption, because for this generation Intel is deciding to offer its mobile processors with a range of supported speeds and feeds. To complicate the issue, Intel is only publicly quoting the min-max values, whereas those of us who are interested in the data would much rather see a sliding scale.
| Intel Core i7-1185G7 'Tiger Lake' | |
|:---|:---|
| Base Frequency at 12 W | 1200 MHz |
| Base Frequency at 15 W | 1800 MHz |
| Base Frequency at 28 W | 3000 MHz |
| 1C Turbo up to 50 W | 4800 MHz |
| All-core Turbo up to 50 W | 4300 MHz |
| L2 Cache | 1.25 MB per core |
| L3 Cache | 12 MB |
| Graphics | 96 Execution Units, 1350 MHz Turbo |
| Memory Support | 32 GB LPDDR4X-4266 / 64 GB DDR4-3200 |
In this case, the Core i7-1185G7 will be offered to OEMs with thermal design points (TDPs) from 12 W to 28 W. An OEM can choose the minimum, the maximum, or something in-between, and one of the annoying things about this is that as a user, without equipment measuring the CPU power, you will not be able to tell, as the OEMs do not give the resellers this information when promoting the notebooks.
This reference design has been built to cover the full range, so in effect it is more like a 28 W design, overbuilt for peak performance so as to avoid any thermal issues.
At 12 W, Intel lists a base frequency of 1.2 GHz, while at 28 W, Intel lists a base frequency of 3.0 GHz. Unfortunately Intel does not list the value that we think is most valuable – 15 W – which would enable fairer comparisons with the previous generation Intel hardware as well as the competition. After testing the laptop, we can confirm that the 15 W value as programmed into the silicon (so we’re baffled why Intel wouldn’t tell us) is 1.8 GHz.
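For readers who want to check what their own system is configured to, Linux exposes the package power limits through the intel-rapl powercap interface in sysfs. The sketch below is illustrative only (the sysfs path and availability vary by kernel and platform, and this is not part of our test methodology):

```python
# Sketch: read the configured long-duration power limit (PL1) on Linux
# via the intel-rapl powercap sysfs interface. Availability and the exact
# path vary by system; values are reported in microwatts.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl/intel-rapl:0")

def read_pl1_watts(rapl_dir: Path = RAPL) -> float:
    """Return the package long-term power limit (PL1) in watts."""
    microwatts = int((rapl_dir / "constraint_0_power_limit_uw").read_text())
    return microwatts / 1_000_000

if __name__ == "__main__":
    try:
        print(f"Configured PL1: {read_pl1_watts():.1f} W")
    except (FileNotFoundError, PermissionError):
        print("RAPL powercap interface not available on this system")
```

A reference design set to 28 W should report roughly that figure as PL1, which is one of the few ways an end user can tell which TDP an OEM chose.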
In both 12 W and 28 W scenarios, the processor can turbo up to 4.8 GHz on one core / two threads. This system was built so that thermals and power are not an issue, so the CPU can boost to 4.8 GHz in both modes. Not only that, but the power consumption while in the turbo modes is limited to 55 W, for any TDP setting. The turbo budget for the system increases with the thermal design point of the processor, and so when in 28 W mode, it will also turbo for longer. We observed this in our testing, and you can find the results in the power section of this review.
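To see why a higher TDP means a longer turbo window, consider a toy model: instantaneous power is capped at the turbo limit, and turbo is sustained until a moving average of package power reaches PL1 (the configured TDP). The numbers and the exponentially weighted average here are illustrative, not Intel's actual firmware algorithm:

```python
# Toy model (hypothetical, not Intel's real algorithm): the package draws
# the turbo power cap until an exponentially weighted moving average of
# power reaches PL1, after which the chip settles back to PL1.

def turbo_duration(pl1_w: float, turbo_w: float, tau_s: float = 28.0,
                   dt: float = 0.1) -> float:
    """Seconds the package can draw turbo_w before the average hits pl1_w."""
    avg, t = 0.0, 0.0
    alpha = dt / tau_s  # smoothing factor for the moving average
    while avg < pl1_w and t < 600:
        avg += alpha * (turbo_w - avg)  # average creeps up toward turbo_w
        t += dt
    return round(t, 1)

# With the same 55 W turbo cap, 28 W mode sustains turbo roughly twice
# as long as 15 W mode in this model:
print(turbo_duration(15, 55))
print(turbo_duration(28, 55))
```

The exact durations depend on the time constant and the real controller behaviour, but the qualitative point matches what we observed: a higher PL1 buys a longer boost.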
The Reference Design
Intel sampled its Reference Design to a number of the press for testing. We had approximately 4 days with the device before it had to be handed back, enough to cover some key areas such as best-case performance on CPU and GPU, microarchitectural changes to the core and cache structure, and some industry standard benchmarks.
There were some caveats and pre-conditions to this review, similar to our initial Ice Lake development system test, because this isn’t a retail device. The fans were fully on and the screen was on a fixed brightness. Intel also requested no battery life testing, because the system hasn't been optimized for power in the same way a retail device would be - however as we only had a 4 day review loan, that meant that battery life testing wasn’t possible anyway. Intel also requested no photography of the inside of the chassis, because again this wasn’t an optimized retail device. The silicon photographs you see in this review have been provided by Intel.
When Intel’s regional PR teams started teasing the reference design on Twitter (e.g. UK, FR), I initially thought this was an Honor-based system due to the blue chamfered bezel, like the MagicBook I reviewed earlier in the year. This isn’t an Honor machine, but rather one from one of the bigger OEMs known for its mix of business and gaming designs.
Large keypad, chiclet-style keys, and a 1080p display. For ports, this design has only two Type-C connectors, both of which can be used for power or DisplayPort-over-Type-C. The design uses the opening of the display to act as a stand for the main body of the machine.
On the back is a big vent for the airflow in. Under the conditions of the review sample we’re not able to take pictures of the insides, however it’s clear that this system was built with an extra dGPU in mind. Intel wasn’t able to comment on whether the OEM it partnered with will use this as a final design for any of its systems, given some of the extra elements added to the design to enable its use as a reference platform.
For the full system build, it was equipped with Intel’s AX201 Wi-Fi 6 module, as well as a PCIe 3.0 x4 Samsung SSD.
| Intel Reference Design: Tiger Lake | |
|:---|:---|
| CPU | Intel Core i7-1185G7, Four Cores / Eight Threads |
| Base Frequency | 1200 MHz at 12 W, 1800 MHz at 15 W, 3000 MHz at 28 W |
| Turbo Frequency | 4800 MHz 1C, 4300 MHz nT, up to 50 W |
| GPU | Integrated Xe-LP Graphics, 96 Execution Units, up to 1350 MHz |
| DRAM | 16 GB of LPDDR4X-4266 CL36 |
| Storage | Samsung 1 TB NVMe PCIe 3.0 x4 SSD |
| Display | 14-inch 1920x1080, Fixed Brightness |
| IO | Two Type-C ports, supporting charging and DP over Type-C |
| Wi-Fi | Intel AX201 Wi-Fi 6 CNVi RF Module |
| Power Modes | 15 W (no Adaptix), 28 W (no Adaptix), 28 W (with Adaptix) |
The first devices to market with the Core i7-1185G7 will have either LPDDR4X-4266 (32 GB) or DDR4-3200 (64 GB). Intel has advertised these chips as also supporting LPDDR5-5400, and we confirmed with the engineers that this initial silicon revision is built for LPDDR5, though it is still being validated. Coupled with the high cost of LPDDR5, Intel expects LP5 systems a bit later in the product life cycle, probably in Q1 2021.
On storage: Tiger Lake technically supports PCIe 4.0 x4 from the processor. This can be used for a GPU or SSD, but Intel sees it mostly for fast storage. Given the prevalence of PCIe 4.0 SSDs on the market already, it was curious to see the reference design ship without a corresponding PCIe 4.0 drive. Intel’s official reason for not equipping the system with such a drive was along the lines of ‘they’ve not been in the market for long and so we weren’t able to validate in time’. This is immediately and painfully laughable – PCIe 4.0 x4 drives, built on Phison’s E16 controller, have been in the market for six months; we reported on them last year at Computex. To be clear, Intel’s argument here isn’t simply that it didn’t have enough time to validate them; it is the combination of validation time plus the claim that the drives haven’t been in the market long enough for validation. This is wrong. If the drives had only been in the market for 6-8 weeks, perhaps I might agree, but to say it when the drives have been out for 24+ weeks amazes me.
The real reason why this system doesn’t have a PCIe 4.0 x4 drive is that the E16 drives are too power hungry. The E16 is based on Phison’s E12 PCIe 3.0 SSD controller, with the PCIe 3.0 interface swapped for PCIe 4.0 but without much adjustment to the compute side of the controller or the efficiency point of the silicon. As a result, E16-based drives can draw up to 8 W for a peak throughput of 5 GB/s. A properly designed, from-the-ground-up PCIe 4.0 x4 drive should be able to reach 8 GB/s at theoretical peak, preferably in that 2-4 W window.
Adding an 8 W PCIe 4.0 SSD to a notebook, as we’ve said since they were launched, is a bad idea. Most laptops don’t have the cooling requirements for such a power hungry SSD, causing hot spots and thermal overrun, but also the effect on battery life would be easily noticeable. If Intel had said that ‘current PCIe 4.0 x4 drives on the market aren’t suitable due to the high power consumption of current solutions, however future drives will be much more suitable’, I would have agreed with them as a valid reason for not using one in the reference design. It makes sense – it certainly makes more sense than the reason first given about not being in the market long enough for validation.
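The efficiency gap is easy to put in numbers. Taking the figures above (an E16-class drive at roughly 8 W for 5 GB/s, versus a hypothetical ground-up PCIe 4.0 design at 8 GB/s in the 2-4 W window), the back-of-the-envelope comparison looks like this:

```python
# Back-of-the-envelope efficiency comparison using the figures quoted
# above; the 'ground-up' design is a hypothetical target, not a shipping
# product.

def efficiency_gbps_per_watt(throughput_gbps: float, power_w: float) -> float:
    return throughput_gbps / power_w

e16 = efficiency_gbps_per_watt(5.0, 8.0)    # E16-class drive
ideal = efficiency_gbps_per_watt(8.0, 4.0)  # hypothetical ground-up design
print(f"E16-class: {e16:.2f} GB/s per watt")
print(f"Ground-up design: {ideal:.2f} GB/s per watt")
```

Roughly a 3x efficiency difference, which is the margin between an SSD a thin notebook can cool and one it cannot.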
Beyond all this, by the time Tiger Lake notebooks come to market, new drives built on Phison’s E18 and Samsung’s Elpis PCIe 4.0 controllers are likely to be available. Whether these will be available in sufficient numbers for notebook deployment would be an interesting question, and so we are likely to see a mix of PCIe 3.0 and PCIe 4.0 enabled NVMe SSDs. I’m hopeful the OEMs and resellers will identify which are being used at the point of sale, or offer different SKU variants between PCIe 3.0 and PCIe 4.0, but I wouldn’t put money on it.
Priority on Power
Normal operation on a notebook is for the processor to be offered at a specific thermal design point, and any changes to the power plan in the operating system will affect how long the system uses its turbo mode, or requirements to enter higher power states. This is because most notebooks are built to be optimized around that single thermal design point.
In our Ice Lake development system (and in a few select OEM designs, like the Razer Stealth), the power slider while in the ‘Balanced’ power mode allowed us to choose between a 15 W power mode and a 25 W power mode, adjusting the base frequency (and subsequently the turbo budget) of the processor. The chassis was built for the higher power modes, and it allowed anyone using the development system to see the effect of the performance between the two thermal design points.
For our Tiger Lake reference design, we have a similar adjustment at play. The power slider can choose either a 15 W mode or a 28 W mode (note that this differs from the 12 W to 28 W range that Intel’s Tiger Lake is meant to offer – odd that the 12 W point was left out, but good in the sense that we could do direct 15 W to 15 W comparisons). There is also a third option: 28 W with Intel’s Dynamic Tuning enabled, also known as Adaptix.
Intel’s Dynamic Tuning/Adaptix is a way for the system to more carefully manage turbo power and power limits based on the workload at hand. With Adaptix enabled, the idea is that the power can be more intelligently managed, giving a longer turbo profile, as well as a better all-core extended turbo where the chassis is capable. Intel has always stated that Adaptix is an OEM-level optimization, and it wasn’t enabled in our Ice Lake testing system due to that system not being optimized in the same way.
However, for our Tiger Lake system it has been enabled - at least in the 28 W mode. Technically, Adaptix could be enabled at any thermal design point, even at 12 W, and in all cases it should offer better performance in line with what the chassis can provide and what the OEM deems safe. It remains an OEM-enabled optimization tool, and Intel believes that the 28 W with Adaptix mode on the reference design should showcase Tiger Lake in its best light.
More info later in the review.
As a first look at Tiger Lake’s performance, our goal with this review is to confirm the claims Intel has made. The new platform has new features, and Intel has promoted its performance against the competition and previous generation. We’ll also go into microarchitectural details.
Page two will be a brief primer on the fundamental updates on Tiger Lake: the transition to 10nm ‘SuperFin’ technology, the enhanced frequency, and the graphics. We’ll also cover the core as compared to Ice Lake, as well as the SoC level changes such as cache and updated hardware blocks.
We’ll then move onto the new data. Page three will cover the minor changes in the core when it comes to instructions, as well as updates to security. We’ll also cover cache performance, latency, and a key part of modern computing in frequency ramping on page four.
For the power consumption part of the coverage, I’m going to split it into two brackets: how Intel compares to its own previous generation at 15 W, then the difference between a 15 W Tiger Lake and a 28 W Tiger Lake, which is going to be a running theme throughout this review.
In Intel’s own announcement for Tiger Lake, the company pitted the 28 W version of Tiger Lake against the best power and thermal setting on an AMD 15 W processor; we’re going to see if those performance comparisons actually hold water, or if it’s simply a diversionary tactic to show Intel has the upper hand by using almost 2x the power.
We’ll also cover our CPU gaming benchmark suite, tested at both 1080p maximum as well as 720p minimum. Intel made big claims about its new Xe-LP graphics architecture against AMD, so we will see how these measure up, both in 15 W Tiger Lake and 28 W Tiger Lake modes.
- Tiger Lake: Playing with Toe Beans
- 10nm Superfin, Willow Cove, Xe, and new SoC
- New Instructions and Updated Security
- Cache Performance, Core-to-Core Latency, and Frequency Ramping
- Power Consumption: Comparing 15 W TGL to 15 W ICL
- Power Consumption: Comparing 15 W TGL to 28 W TGL
- CPU Performance: SPEC 2006, SPEC 2017
- CPU Performance: Office and Web
- CPU Performance: Simulation and Science
- CPU Performance: Encoding and Rendering
- CPU Performance: Legacy and Synthetic
- Xe-LP GPU Performance: Borderlands 3, Gears Tactics
- Xe-LP GPU Performance: Final Fantasy XIV, Final Fantasy XV
- Xe-LP GPU Performance: Civilization 6, Deus Ex Mankind Divided
- Xe-LP GPU Performance: World of Tanks, Strange Brigade
- Conclusion: Is Intel Smothering AMD in Sardine Oil?
blppt - Friday, September 18, 2020
Yeah, we can extrapolate such things if power consumption and heat dissipation are of no relevance to AMD. You're leaving out other factors that go into building a top line GPU.
AnarchoPrimitiv - Saturday, September 26, 2020
Power? It will certainly be better than Ampere, which is awful at efficiency... Are you forgetting that RDNA2 will be on an improved 7nm node, meaning a better 7nm node than RDNA1?
Spunjji - Friday, September 18, 2020
Big Navi probably won't clock that high for TDP reasons, but the people who are buying that it's only going to have 2080Ti performance are in for a rude surprise. It should compete solidly with the 3080, and I'm betting at a lower TDP. We'll see.
blppt - Saturday, September 19, 2020
It's been AMD's modus operandi for a long time now. Introduce new card, and either because of inferior tech (occasionally) or drivers (mostly), it usually ends up matching Nvidia's last gen flagship. Although also at a lower price.
Considering the leaked benches we've already seen, Big Navi appears to be more of the same. Around 2080Ti performance, probably at a much lower price, though.
Spunjji - Saturday, September 19, 2020
@blppt - not sure if you're shilling or credulous, but there's no indication that those leaked benchmarks are "Big Navi". Based on the probable specs vs. the known performance of the 3080, it's extremely unlikely that it will significantly underperform the 3080. It's entirely possible that it will perform similarly at lower power levels. They're also specifically holding back the launch to work on software.
In other words: assuming AMD will keep doing the same thing over and over when they already stopped doing that (see: RDNA, Zen 2, Renoir) is not a solid bet.
But none of this is relevant here. It's amazing how far shills will go to poison the well in off-topic posts.
blppt - Sunday, September 20, 2020
Considering that the 2080ti itself doesn't "significantly underperform the 3080", Big Navi being in line with the 2080ti doesn't qualify it as getting pummeled by the 3080.
blppt - Sunday, September 20, 2020
Oh, and BTW, I am not a shill for Nvidia. I've owned many AMD cards and cpus over the years, and they have been this way for a while. I keep wishing they'll release a true high end card, but they always end up matching Nvidia's previous gen flagship.
Witness the disappointing 5700XT in my machine at the moment. Due to AMD's lesser driver team, it is often less consistent in games than my now-ancient 1080ti. Even in its ideal situation, with well-optimized drivers in a game that favors AMD cards, it just barely outperforms that old 1080ti. Most of the time it's around 1080 performance.
Actually, YOU are the shill for AMD if you keep denying this is the way they have been for a while.
"In other words: assuming AMD will keep doing the same thing over and over when they already stopped doing that (see: RDNA, Zen 2, Renoir) is not a solid bet."
Except---they STILL don't hit the top of the charts in games with their CPUs. Zen/Zen 2 is a massive improvement, and dominates Intel in anything highly multi-core optimized, but that almost never applies to games.
So, going to a Zen comparison for what you think Big Navi will do is not a particularly good analogy.
Spunjji - Sunday, September 20, 2020
@blppt - "I'm not the shill, you're the shill, I totally own this product, let me whine about how disappointing it is though, even though performance characteristics were clear from the leaks and it still outperformed them. I bought it to replace a far more expensive card that it doesn't outperform". Okay buddy, sure. Whatever you say. 🙄
I didn't say it would take the performance lead. Going for a Zen comparison is exactly what I meant and I stand by it. We will see, until benchmarks come out it's all just talk anyway - just some of it's more obvious nonsense than the rest...
blppt - Sunday, September 20, 2020
@Spunjji
That was the dumbest counter argument I've ever heard.
First off, I didn't buy it to 'replace' anything. The 1080ti is in one of my other boxes. Where did you get 'replace' from? The 5700XT was to complete an all-AMD rig consisting of a 3900X and an AMD video card.
Secondly, the 1080ti is now almost 4 freaking years old. You bet your rear end I'd expect it to outperform a top end card from almost 4 years ago, when it is currently STILL the best gpu AMD offers.
And finally, I have over 20 years experience with both AMD cpus and gpus in various builds of mine, so don't give me that "bought one AMD product and decided they stink" B.S.
I've been on both sides of the aisle. Don't try and tell me I'm a shill for Nvidia. I've spent way too much time and money around AMD systems for that to be true.
AnarchoPrimitiv - Saturday, September 26, 2020
You're a liar, I'm so sick of Nvidia fans lying about owning AMD cards