Concluding Remarks

The NUC10i7FNH is the latest in Intel's line-up of mainstream NUCs. The UCFF market has long been a niche that Intel has dominated both directly, with its own NUCs, and indirectly, via its low-TDP processors. Compared to where things stood a few years back, however, consumers these days have a number of alternatives to the mainstream NUCs. We are not referring only to the NUC clones using Intel's U-series processors, but also to the new crop of Ryzen-based UCFF PCs. The new competition means that Intel has to deliver a package that offers more value for money than previous NUC offerings.

While reviewing the Bean Canyon NUC, we indicated that it was a compact powerhouse ticking the right boxes for multiple use-cases. The tangible benefits delivered by the 'NUC8' over its predecessor were the upgrade of all external USB ports to USB 3.2 Gen 2 (10 Gbps) and the inclusion of the more capable Wireless-AC 9560 WLAN component. The Frost Canyon NUC10 carries over the important features and also provides some welcome upgrades:

  • The top-level configuration gets the 6C/12T Core i7-10710U, compared to the 4C/8T Core i7-8559U in Bean Canyon.
  • The NUC10 makes the move to Wi-Fi 6 with the AX201 WLAN component.
  • The NUC10 officially supports 64GB of DRAM (the first NUC to do so).
  • One of the front panel USB 3.2 Gen 2 (10 Gbps) ports is Type-C, which is very welcome given that Type-C peripherals are becoming more prevalent.
  • The Frost Canyon NUCs use the newer Titan Ridge Thunderbolt 3 controller (compared to Alpine Ridge in previous NUCs), though this is transparent to end users of host systems. (On the peripherals side, Titan Ridge enables devices such as docks to talk to both Thunderbolt 3 and USB hosts.)
  • The BIOS gains new features such as RAM disk creation support and pre-boot iSCSI volume mounting.

However, while the NUC8 was an upgrade over the NUC7 in every respect, the Frost Canyon NUC10 slips up a little. Intel's 10th generation U-series processors come in two different versions – the 10nm Ice Lake and the 14nm Comet Lake. Intel's high-end Iris Plus Graphics is available only on Ice Lake, and unfortunately, Frost Canyon is based on Comet Lake. This means that, for a variety of graphics-intensive workloads, the NUC10 actually performs worse than the Iris Plus-equipped NUC8.

The hexa-core CPU is a nice upgrade, but, as both BAPCo SYSmark 2018 and UL's PCMark 10 show, typical current workloads for office PCs and other generic SFF PC applications are not really capable of putting the extra cores to good use. That said, specific tasks that scale nicely with thread count (such as compression and cryptography operations) can take full advantage of the capabilities offered by the Core i7-10710U in the Frost Canyon NUC10i7FNH. The availability of six cores might also make the NUC an attractive option for home labs focusing on virtualization, but the requirements of the VM workloads need to be kept in mind given the 30W PL1 limit of the processor.
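As a rough illustration of why such chunked, embarrassingly parallel work scales with core count while office workloads do not, consider the following minimal Python sketch. This is a hypothetical micro-benchmark, not part of our test suite; the chunk size, task count, and worker counts are arbitrary assumptions.

# Hypothetical micro-benchmark (not from our testing): compressing independent
# chunks is embarrassingly parallel, so elapsed time should fall as workers
# are added, unlike the largely serial office workloads in SYSmark/PCMark.
import os
import time
import zlib
from concurrent.futures import ProcessPoolExecutor

CHUNK = os.urandom(8 * 1024 * 1024)   # 8 MiB of data per task (arbitrary size)
TASKS = 24                            # number of independent chunks (arbitrary)

def compress_chunk(_):
    # CPU-bound work: DEFLATE one chunk and return the compressed size.
    return len(zlib.compress(CHUNK, 6))

def run(workers):
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(compress_chunk, range(TASKS)))
    return time.perf_counter() - start

if __name__ == "__main__":
    for workers in (1, 2, 6, 12):     # 12 matches the i7-10710U's thread count
        print(f"{workers:2d} workers: {run(workers):6.2f} s")

On a 6C/12T processor like the Core i7-10710U, the run time of a test along these lines should drop close to linearly up to the physical core count, with smaller gains from Hyper-Threading beyond that, subject to the sustained clocks allowed by the 30W PL1 limit.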

Overall, the Frost Canyon NUC10i7FNH is a mixed bag. Given a choice between, say, the Kaby Lake-based Baby Canyon NUC7 and the Coffee Lake-based Bean Canyon NUC8, it would be a no-brainer to go for the Bean Canyon. Choosing between Bean Canyon and Frost Canyon, however, is not that straightforward. While Frost Canyon delivers upgrades in many respects, the regression on the GPU side may make the lower-priced Bean Canyon NUC an attractive alternative. In some respects, Intel has traded off GPU performance for more CPU performance, and I'm not sure that's what its NUCs really needed.

On the pricing front, the NUC10i7FNH barebones version is available for around $605, while the NUC8i7BEH is around $50 cheaper. While the two additional CPU cores and Wi-Fi 6 support can definitely justify the additional cost, it is up to the consumer to decide whether forsaking some GPU performance is also worth it.

 
Comments

  • The_Assimilator - Monday, March 2, 2020

    It's not, but the point is still valid: nobody buying these things is doing so because they expect them to be graphics powerhouses.
  • HStewart - Monday, March 2, 2020

    But some people are so naive and don't realize the point. I came up in the days when the card you purchased didn't even have a GPU on it. Not sure what level today's iGPUs are at, but they can surely run business graphics fine, and even games from a couple of years ago.
  • notb - Thursday, March 5, 2020

    Horrible?
    These iGPUs can drive 3 screens with maybe 1-2W power draw. Show me another GPU that can do this.

    This is an integrated GPU made for efficient 2D graphics. There's very little potential to make it any better.
  • PaulHoule - Monday, March 2, 2020

    Well, Intel's horrible iGPUs forced Microsoft to walk back the graphical complexity of Windows XP. They kept the GPU-dependent architecture, but had to downgrade to "worse than cell phone" visual quality because Intel kneecapped the graphics performance of the x86 platform. (Maybe you could get something better, but developers can't expect you to have it.)
  • HStewart - Monday, March 2, 2020

    I think we need actual proof for these biased statements. I think there is a big difference between running a screen at 27 or more inches and one at 6 to 8 inches, no matter what the resolution.
  • Korguz - Monday, March 2, 2020

    We need proof of your biased statements, yet you very rarely provide any. Point is?
  • Samus - Monday, March 2, 2020

    What does screen size have to do with anything? Intel can't make an iGPU that can drive a 4K panel fluidly, while mainstream Qualcomm SoCs have GPUs able to drive 4K panels using a watt of power.
  • HStewart - Tuesday, March 3, 2020

    Can Qualcomm actually drive, say, a 32-inch 4K screen efficiently? Also, what is being measured here: videos or actual games? That depends on how they are written.
  • erple2 - Saturday, March 14, 2020

    I'm not sure that I understand your statement here, as it doesn't seem to make any sense. I was not aware that the physical dimensions of the screen mattered at all to the GPU, apart from how many pixels it has to individually manage/draw. If your implication is that the complexity and quantity of information that can be made significant on a 32" screen is different from a 5.7" screen, then I suppose you can make that argument. However, I have to make guesses as to what you meant to come to that conclusion.

    Generally, the graphical load to display a 4K resolution is independent of whether the actual screen is 6" or 100". Unless I'm mistaken?
  • PeachNCream - Monday, March 2, 2020

    For once, I agree with HStewart (feels like I've been shot into the Twilight Zone to even type that). To the point, though: Windows XP was released in 2001. Phones in that time period were still using black and white LCD displays. Intel's graphics processors in that era were the Intel Extreme series built into the motherboard chipset (where they would remain until around 2010, after the release of Windows 7). Sure, those video processors are slow compared to modern cell phones, but nothing a phone could do when XP was in development was anything close to what a bottom-feeder graphics processor could handle. I mean crap, Doom ran (poorly) on a 386 with minimal video hardware, and that was in the early 1990s, whereas phones eight years later still didn't have color screens.
