25 Comments
ahmadamaj - Tuesday, March 19, 2013
10 times increase in performance in 2 years!!

phoenix_rizzen - Tuesday, March 19, 2013
Not too hard, considering how crappy their performance has been with Tegra 1-3.

Gopi - Tuesday, March 19, 2013
Heheh.. I agree. Their performance has always been abysmal on the power/performance grid. Trying to integrate two roadmaps is a good idea only if the multi-dimensional issues are resolved. I wonder how CUDA can fit into an Android-based platform? Don't the CUDA libs themselves add a lot of overhead?
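(For the curious, device-side CUDA code itself is tiny; the footprint question is really about the runtime and driver libraries a port would have to ship. A minimal, generic sketch of a CUDA program follows, nothing Tegra- or Android-specific; all names and numbers are illustrative:)

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Trivial element-wise kernel: each thread scales one array element.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1024;
    float *d = nullptr;
    cudaMalloc(&d, n * sizeof(float));            // device allocation (contents left uninitialized; this only sketches the mechanics)
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);  // launch: 4 blocks x 256 threads
    cudaDeviceSynchronize();                      // wait for the kernel to finish
    cudaFree(d);
    printf("done\n");
    return 0;
}
```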
ahmadamaj - Tuesday, March 19, 2013

Getting 10 times the performance of Tegra 1 shouldn't have been a big issue. But now the A15s are already pushing the power limits for small form factors. Imagine: in 2 years today's A15 will be the LITTLE in big.LITTLE!!

DanNeely - Tuesday, March 19, 2013
Something with A15 performance levels might be the LITTLE core; but it won't be the A15 itself, any more than the A8 or A9 (or ARM11) cores are being used in current implementations; instead the new ultra-power-efficient A7 core is being used. Even more than the A8/A9, the A15 is optimized for high performance/high power consumption (relative to other ARM designs), while the LITTLE core needs to be tuned for minimum idle power above all else.

ARM's next generation after the A15 is the A53/A57; both use a newer instruction set and are paired for big.LITTLE operation.
Performance per core, per MHz, is only 20% higher for the A53/A57 than for the A7/A15. Clock speeds might go up a bit more, but that's expensive in power consumption, and per-core performance is already a bit past where Moore's law broke for higher-powered CPUs, because all the easy ways to speed them up are done. Since ARM already went multi-core, I suspect most of the transistor growth in future SoCs will end up being in the GPU and/or integration of more of the handful of still off-chip components, while CPU performance levels off.
ahmadamaj - Tuesday, March 19, 2013
Yeah, I guess it doesn't have to be a 10 times increase in CPU performance. Although it would be interesting to see what 3D transistors will do for ARM.

mabellon - Tuesday, March 19, 2013
In the past with the Tegra 3, and I believe with the Exynos 5, you are correct in that the little core is a previous, more efficient architecture. However, with Tegra 4 the 5th core is an A15. I suspect the core is more efficient due to a lower clock speed, voltage, and other tunings.

Source: http://www.anandtech.com/show/6550/more-details-on...
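(The intuition: dynamic CMOS power scales roughly as C·V²·f, so running the same core at a lower clock and voltage cuts power disproportionately. A back-of-the-envelope sketch with made-up numbers:)

```cuda
#include <cstdio>

// Relative dynamic power, assuming the usual CMOS approximation
// P ~ C * V^2 * f, with switched capacitance C held fixed.
double rel_power(double v_rel, double f_rel) {
    return v_rel * v_rel * f_rel;
}

int main() {
    // Hypothetical companion-core operating point: 70% voltage, 50% clock.
    printf("relative power: %.2fx\n", rel_power(0.7, 0.5)); // ~0.25x of the full-speed core
    return 0;
}
```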
cmikeh2 - Tuesday, March 19, 2013
Rumor has it that Nvidia is using different types of transistors for different parts of the die. The quad-core A15s use high-performance, high-leakage transistors, while the +1 A15 uses a low-leakage, low-performance transistor design.

extide - Wednesday, March 20, 2013
Tegra 4 is NOT big.LITTLE btw.

MrSpadge - Wednesday, March 20, 2013
big.LITTLE is different from Nvidia's 4+1: the former uses different cores compatible with the same instruction set, whereas the latter uses similar cores, with the transistors of one of them tuned for low power consumption.

Alketi - Tuesday, March 19, 2013
Yeah, so I don't see how 20% generational gains get ARM to the golden ring before Intel.

Intel is two generations away from having i7 desktop-class computing in a mobile power profile.
Can ARM reach i7-class computing power in two generations??
Krysto - Tuesday, March 19, 2013
Nvidia is being misleading. They are doing something like this: "5x CPU improvement + 5x GPU improvement = 10x improvement!!"

I hate it when they do that. That sort of calculation makes no sense whatsoever. In that case it would be a 5x improvement for both the CPU and the GPU, not a "10x" improvement. Same for the 100x one.
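(To make the arithmetic concrete: with independent CPU and GPU speedups, the overall gain is a weighted harmonic mean of the two, so 5x on each part can never exceed 5x overall. A quick sketch assuming a hypothetical 50/50 workload split with the two parts running sequentially:)

```cuda
#include <cstdio>

// Overall speedup when a workload splits into a CPU part and a GPU part
// that run sequentially and are sped up independently (Amdahl-style):
// new_time = cpu_frac / s_cpu + (1 - cpu_frac) / s_gpu.
double overall(double cpu_frac, double s_cpu, double s_gpu) {
    return 1.0 / (cpu_frac / s_cpu + (1.0 - cpu_frac) / s_gpu);
}

int main() {
    // Hypothetical 50/50 split: 5x CPU and 5x GPU give 5x overall, not 10x.
    printf("%.1fx\n", overall(0.5, 5.0, 5.0)); // prints 5.0x
    return 0;
}
```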
ET - Wednesday, March 20, 2013
What are you basing this assertion on? If that were the case, Tegra 4 would have shown on the graph as a 10x improvement over Tegra 3.

jjj - Tuesday, March 19, 2013
We need more details on Kayla, like how many SATA ports it has, what PSU one needs, and when we get a Tegra 4 version.

jwcalla - Tuesday, March 19, 2013
Looks like one SATA port from the photo. It might be limited to Linux only though.

blckgrffn - Tuesday, March 19, 2013
Exactly. That is where things will start to get interesting, I think. If I want a little Tegra 3 system, the Ouya will scratch that itch well enough. Bring on a Tegra 4 mITX system.

Brian Klug - Tuesday, March 19, 2013
I'm digging on that one, hopefully I can find out more!

-Brian
lmcd - Wednesday, March 20, 2013
Hopefully they've got one SATA II and one mSATA, or even two mSATA; such a setup would be great for something like that tiny Antec behind-the-monitor case.

silenceisgolden - Tuesday, March 19, 2013
Forgive me for being naive, but what could I be doing on a phone or a tablet or a Chromebook that I would need an application coded with CUDA for?

Blammar - Tuesday, March 19, 2013
What good is a baby, eh? I can think of: accurate speech-to-text, nearly intelligent agents, lip reading, HDR and light field imaging, ...
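(Imaging tasks like the HDR example map naturally onto CUDA, since they are per-pixel parallel. A minimal, purely illustrative Reinhard-style tone-mapping sketch, not from any real app:)

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Reinhard-style tone mapping: maps HDR values in [0, inf) into [0, 1).
// Purely per-pixel work, so it parallelizes trivially across GPU threads.
__global__ void tonemap(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] / (1.0f + in[i]);
}

int main() {
    const int n = 4;
    float h_in[n] = {0.0f, 0.5f, 2.0f, 8.0f}, h_out[n];
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);
    tonemap<<<1, n>>>(d_in, d_out, n);   // one block is plenty for 4 pixels
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; i++) printf("%.2f -> %.2f\n", h_in[i], h_out[i]);
    cudaFree(d_in); cudaFree(d_out);
    return 0;
}
```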
"After Logan is Parker, which NVIDIA shared will contain the codename Denver CPU NVIDIA is working on, with 64 bit capabilities and codename Maxwell GPU"You're saying that as if Logan will NOT be 64 bit?! Oh god, please tell me that's just a mistake on your part and Nvidia is not actually going to release a 32-bit only chip in 2014, when everyone else will have 64 bit chips based on ARMv8.
cmikeh2 - Tuesday, March 19, 2013
According to their article on the A53 and A57, those chips will be sampling in late 2013 and early 2014 and won't be in production until 6-12 months into 2014. The soonest we'll be seeing consumer products with them would then probably be Q4 2014, and for most uses the 64-bit change will not yet have been optimized for. In the consumer environment this does not appear to be a big issue. The 64-bit transition, from what I've heard and read, will be felt more in server use.

Exophase - Thursday, March 21, 2013
That comment doesn't necessarily mean that Logan won't be 64-bit; it's just noting that Denver is. Although I suspect Logan may still be using A15s. I don't know about everyone else using 64-bit chips in 2014. While a few companies have said they're going to license the A57, I don't think anyone has announced an actual product aside from maybe Calxeda. We don't know when Apple or Qualcomm will transition their CPU designs to 64-bit, but based on RAM capacity growth in iPads over the last few years I can see Apple being comfortable with staying 32-bit until the end of 2014.

mayankleoboy1 - Tuesday, March 19, 2013
Do we know if Logan will have unified shaders?

Krysto - Wednesday, March 20, 2013
It's Kepler, so yes. It's going to be the most advanced GPU architecture in the mobile market by far. But that doesn't necessarily guarantee success. If Nvidia keeps the die size at only 80mm² when everyone else is at 120mm², and if they release a 28nm chip instead of a 20nm one in 2014, then it's very possible Tegra 5 could be behind again in graphics performance in 2014.

Also, the fact that they will not have a 64-bit chip in 2014 is very disappointing. I'm pretty sure Qualcomm will have one early in 2014, and made at 20nm, too.