
  • abufrejoval - Tuesday, September 17, 2019 - link

    Interesting and nice writeup!

    I have absolutely no problem imagining a need for several clusters on mobiles even today:

    You could be running ground-truth detection and 3D mapping from the camera, but need another cluster for sensor fusion using gyro/gravity/magnetic sensors; you may want to mix in object/subject detection and inventory, and you still may want to run speech models for recognition, synthesis, BERT, or another complex LSTM, etc.

    And these could require rather distinct ratios of compute vs. I/O, while nothing and no one can schedule them on an NN core today the way compute tasks are scheduled on a CPU: it's a GPU-scheduling-nightmare³...

    You'd want plenty of clusters and the ability to partition the power for sustaining a couple of distinct workloads.
  • p1esk - Tuesday, September 17, 2019 - link

    Products are 2 years away? So is this chip better than what Nvidia will release in 2 years?
  • name99 - Tuesday, September 17, 2019 - link

    As a comparison, the A13's NPU is at 6 TOPS (so half their peak, and what they are targeting at high-end smartphones), but available today.

    Apple is probably the most interesting example today (outside cars...) of using this stuff for vision. Obviously there is the computational photography stuff, but it's still the case that very few people have seen or experimented with the AR side.
    This is an interesting example because it's actually rather glitchy (very beta software demo'd at WWDC this year), and the glitches kind of show the reality (once a demo is too slick, it's hard to see the magic going on behind it):
    https://www.youtube.com/watch?v=g8cf2gMarqo

    The significant points to note (and the things that are the newest code, so they go wrong the worst!) are the way the AR occludes people BEHIND the bowling ball, but not people IN FRONT OF the bowling ball...

    Obviously this stuff on a phone or iPad is using a less than ideal form factor; it's definitely a hassle to keep holding your device at the correct angle. (Even so it IS useful, and I've used it, to measure distances, or to see how a piece of furniture would fit in a room, or whether it can pass through a door.)
    But I think it's significant to compare how much time Apple gives AR at developer-targeted events (lots!) vs. at public-facing events (basically nothing!). It seems to me so obvious that Apple's strategy is to get developers (including their own developers) familiar with AR, and testing the hardware and algorithms, while the real public debut and fanfare will occur with the release of the Apple Glasses.

    Is there any use for this stuff before then? Well, the next aTV will have an A12 (and so a 5 TOPS NPU) in it. I have visions of a gaming setup that lets one place an iPhone (for casual use) or a dedicated camera (maybe via USB or Ethernet, for hardcore use) on top of the TV, and use this AR/pose-recognition stuff for things like DDR, or Wii and Xbox games, or shared-space remote games.
    We'll see. (Apple's game team seem to be trying harder than usual this year to really kickstart games. BUT that same team also seem to have astonishingly limited imagination when it comes to actually allowing the aTV to open up to alternative input methods, like cameras. So...)
  • p1esk - Tuesday, September 17, 2019 - link

    A couple of things:

    1. Apple does not really compete with anyone in terms of NN accelerators. I mean it seems like they just do whatever they feel like every generation.

    2. This CEVA chip is strictly for cars. So it's gonna compete with Nvidia (mostly). As such, it's gotta be at least 2x better than whatever Nvidia releases after Ampere, so it's a pretty high bar. I'm not holding my breath. But, competition is always good.
  • p1esk - Tuesday, September 17, 2019 - link

    Sorry didn’t mean to reply to your comment
