jtd871 - Monday, January 4, 2021
Public employee (Ontario, Canada teachers in this case) pension funds are big institutional investors.
ksec - Monday, January 4, 2021
There is very limited information on Graphcore. How is it any different from other machine learning processors like Google's TPU or the NPU within an Apple SoC?
Will there be an in-depth article, or even a high-level overview, on Graphcore?
Yojimbo - Monday, January 4, 2021
It's very different from the NPU within an Apple SoC for a couple of reasons. Firstly, those neural coprocessors on SoCs are generally CNN inference ASICs. CNN inference is simpler than training, and something like Graphcore's chip is designed to accelerate more than just CNNs (in both inference and training). Secondly, Graphcore has built a scalable systems-level architecture, not just a chip that can be integrated into an SoC. It's (comparatively) much easier to build something like an NPU and integrate it into an SoC than it is to design a scale-out architecture.
As for the TPU, it's similar, but there are differences in the implementation. You can go to www.nextplatform.com if you want to see some information on the TPU, Graphcore, or other AI accelerators.
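To make the inference-vs-training gap concrete, here's a minimal sketch in generic PyTorch (a toy model for illustration only; Graphcore's actual stack is its Poplar SDK): inference is just a forward pass, while training adds the backward pass, gradient storage, and an optimizer step, which is exactly the machinery an inference-only NPU never has to carry.

    import torch
    import torch.nn as nn

    # Toy CNN: Conv2d(3, 16, 3) on a 32x32 input yields 16*30*30 = 14400 features.
    model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(),
                          nn.Flatten(), nn.Linear(14400, 10))
    x = torch.randn(8, 3, 32, 32)
    labels = torch.randint(0, 10, (8,))

    # Inference (what an SoC NPU accelerates): forward pass only, no gradient state.
    with torch.no_grad():
        predictions = model(x)

    # Training (what Graphcore also targets): forward + backward + weight update.
    # This needs activations kept for the backward pass, plus gradient and optimizer
    # state -- memory and compute an inference-only ASIC never provisions for.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss = nn.functional.cross_entropy(model(x), labels)
    loss.backward()
    optimizer.step()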
galeos - Monday, January 4, 2021
Citadel have an interesting paper on arXiv where they attempt to analyse the Graphcore architecture via microbenchmarking. Worth a look: https://arxiv.org/abs/1912.03413
JohnLeonard - Tuesday, January 5, 2021
Hi ksec, these resources may help give some insight into what Graphcore have done and how its approach differs from existing approaches to ML/AI workloads: https://www.graphcore.ai/resources/white-papers
Feel free to reach out if you need any further information.
Thanks,
John Leonard - Product Marketing Manager, Graphcore.
ksec - Wednesday, January 6, 2021
Thanks.
Yojimbo - Monday, January 4, 2021
Funding secured is often more an indication of how much hype there is around the people/product involved and how good the people are at selling. Take a look at Magic Leap and Theranos.
Yojimbo - Monday, January 4, 2021
Regarding the $440M cash on hand: I wonder if that's really enough to design and produce a 3 nm chip.
TomWomack - Monday, January 4, 2021
Remember, it's very much a step-and-repeat sort of chip: version 1 was 19*2^6 copies of a block comprising a core with a unit that completes 16 FP multiplies per cycle and 256 KB of RAM, and this version 2 is probably a larger number of copies of a very similar block. If you were asked to pick something that's not too hard to port to a new, smaller process, you'd pick this.
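As a quick sanity check, the totals implied by those figures (a back-of-the-envelope sketch using only the numbers in the comment above):

    # Mk1 layout as described: 19 * 2^6 replicated tiles, each with a
    # 16-FP-multiplies-per-cycle unit and 256 KB of local RAM.
    tiles = 19 * 2**6                   # = 1216 tiles
    mults_per_cycle = tiles * 16        # = 19456 FP multiplies per clock, chip-wide
    total_sram_mb = tiles * 256 / 1024  # = 304 MB of on-chip SRAM
    print(tiles, mults_per_cycle, total_sram_mb)

That ~300 MB of distributed on-chip SRAM matches the In-Processor Memory figure Graphcore quoted for the first-generation part, which fits the step-and-repeat description.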
Calin - Tuesday, January 5, 2021
Well, $440M is small change in the fab industry. And to produce a chip on 3 nm, Intel would pay ten times that (as it probably already has for its 10 nm technology, to little effect).
artifex - Monday, January 4, 2021
Colossus? Have they mentioned a "Forbin Project"?
lemurbutton - Monday, January 11, 2021
How do these compare to Nvidia's Ampere solutions? Is Nvidia in trouble? Is training moving towards these specialized chips?