From Ubergizmo -- When Apple announced the new iPad, the part about the Apple A5X chip created a lot of confusion, hype, joy and sometimes sheer anger. First, many now believe that the new iPad is a quad-core tablet; secondly, Apple claims that the A5X graphics unit is 4X faster than NVIDIA’s Tegra 3, which is currently (one of) the fastest chips for Android devices and Windows 8 tablets, and also the first mobile quad-core chip. Because Apple used the term “quad-core” in an unusual context (the graphics unit), it left many users confused about what quad-core means. Coincidentally, this was exactly what Apple needed from a PR/marketing standpoint. To make a long story short: the Apple A5X is NOT a quad-core processor, but let’s take a closer look…
Most definitely not a “quad-core processor”
Typically, the term quad-core describes how many central processing units (CPUs), or cores, are integrated into a chip. Those cores handle general tasks like parsing web pages, installing and executing apps, and most things that “computers” do. To help those CPUs, chip makers often add a graphics processing unit (GPU) that offloads graphics tasks like user-interface rendering, game graphics and so on; many also have a dedicated video unit that decodes and encodes video streams. When we say “dual-core”, it really means “two CPU cores”. In the Apple presentation, they were talking about graphics cores – this is totally different.
Mobile processors are highly integrated and are called Systems On a Chip, or SoCs. You can learn more about them by reading SoC: The Force That Rules All Smartphones and Tablets.
So what are GPU cores?
Graphics processors typically use an array of basic computing “building blocks” that are also called “cores”. Because computer graphics is highly parallelizable, adding more cores is one way to scale graphics performance. GPU vendors like AMD, PowerVR, Qualcomm, NVIDIA and others have done this for more than a decade, and a modern desktop GPU can host 512+ “cores”.
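To see why per-pixel work parallelizes so well, here is a minimal Python sketch – a toy brighten filter, not real GPU code: every pixel can be shaded independently, so splitting the work across more workers (“cores”) changes nothing about the result, only how fast it finishes.

```python
from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    """Toy per-pixel operation: brighten, clamped to 8 bits."""
    return min(255, pixel + 40)

pixels = list(range(0, 256, 16))  # a tiny "framebuffer"

# Serial reference: one worker shades every pixel.
serial = [shade(p) for p in pixels]

# Data-parallel version: 4 workers share the same independent work.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(shade, pixels))

assert parallel == serial  # same image, regardless of worker count
```

Real GPUs run thousands of such independent shading operations every cycle, which is why adding cores scales graphics performance so readily.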
The thing is, GPU cores have been marketed so much that the term doesn’t mean much anymore. It’s useful when comparing several GPU models within a single generation, but from one brand to the next, the definition of a graphics core can vary greatly. For instance, AMD usually declares having many more “cores” than NVIDIA, but in the end, actual performance does not reflect the difference at all.
Basically, the past two paragraphs say that GPU cores have turned into a marketing gimmick, and that real-world performance is what really matters.
So… is the Apple A5X truly 4X faster than NVIDIA’s Tegra 3 in graphics speed?
This is where the second controversy starts. The best thing to do is to test performance on both platforms with the same applications, and depending on your goal (synthetic performance or real-world performance), you want to choose benchmarks that stress the system in a particular way. There are currently no benchmarks that test the A5X directly against Tegra 3, but there are benchmarks that pit the A5 against Tegra 3.
Because GLBenchmark Egypt doesn’t represent anything close to current games (compare it with Infinity Blade, now an “old” game), the numbers derived from it are not very meaningful. In the end, we really need in-game cross-platform benchmarks, but none are available. Why? This is another story… but from what I have gathered in the development community, a lot of politics gets in the way of in-game benchmarking. Why? Because most brands do not want to market their products based on hardware performance.
The other way to conclude that Apple’s A5X is 4X faster than Tegra 3 is to add up the total computing capability (multiply-adds, or MADs) of each GPU design (in GFLOPS) and arrive at a theoretical number that assumes 100% utilization of all the transistors. Depending on the clock rates, and assuming that Apple is using the PowerVR SGX543MP4 GPU design, there’s a reasonable chance of getting to the magic number of “4X”.
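As a rough sketch of that back-of-the-envelope math (all figures below are illustrative assumptions, not confirmed specs for either chip):

```python
def peak_gflops(alus, flops_per_alu_per_cycle, clock_mhz):
    """Theoretical peak throughput, assuming every ALU is 100% utilized
    every cycle. A multiply-add (MAD) counts as 2 FLOPs."""
    return alus * flops_per_alu_per_cycle * clock_mhz / 1000.0

# Hypothetical configurations -- placeholders for the sake of the arithmetic:
a5x    = peak_gflops(alus=16, flops_per_alu_per_cycle=8, clock_mhz=250)  # assumed SGX543MP4-style setup
tegra3 = peak_gflops(alus=12, flops_per_alu_per_cycle=2, clock_mhz=520)  # assumed Tegra 3-style setup

print(a5x, tegra3, a5x / tegra3)
```

Note how sensitive the result is to the assumed clocks and FLOPs-per-cycle: bump any of them and the same formula can land near “4X”, which is exactly why such theoretical peaks say little about in-game performance.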
To answer the question raised in the heading of this paragraph, I would say that the A5X seems to have a very fast graphics engine, and that in some conditions, it is certainly faster than NVIDIA’s Tegra 3. That said, I don’t think that in any real-world game you will see anything close to a 4X performance difference (would Riptide run at 240FPS at 1280×720 with an A5X?). As it stands, my opinion is that Apple’s priority was to process more pixels to accommodate the higher resolution (fillrate), while NVIDIA’s priority is to focus on more complex rendering (shading).
In reality, it depends on how apps take advantage of whatever hardware is available (again, software is king). Some developers could decide to play to the strengths of the A5/A5X and orient their game design to use them to their advantage. Others may want to port console games that may perform very well on Tegra 3. In the end, both platforms will do very well.
So… is the new iPad a quad-core tablet? (No)
Simply put: “no”. Quad-core refers to having four CPU cores; the A5X is a dual-core chip, and the new iPad is a dual-core tablet.
It’s not a bad thing at all. Few people have actually complained about the CPU performance of the iPad 2, but in terms of CPU workload, an actual quad-core chip like the NVIDIA Tegra 3 could be up to 2X faster than the A5X, given that both designs use a variant of the ARM Cortex-A9 CPU core (clock speeds should be taken into account too). *If* the apps are multi-core friendly, the chip with more cores will perform better. It’s that easy. If the apps are not multi-core friendly, then the additional cores won’t matter much.
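Amdahl’s law captures this: the serial portion of an app caps what extra cores can deliver. A quick sketch (the parallel fractions below are made-up examples, not measurements of any real app):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n),
    where p is the fraction of the work that can run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A single-threaded app gains nothing from extra cores:
assert amdahl_speedup(0.0, 4) == 1.0

# A well-threaded app (90% parallel -- a made-up figure) on dual vs quad core:
dual = amdahl_speedup(0.9, 2)  # ~1.82x over a single core
quad = amdahl_speedup(0.9, 4)  # ~3.08x over a single core
print(quad / dual)             # the quad-core lead is well under 2x
```

Even in this generous scenario, four cores beat two by roughly 1.7x, not 2x – and the less parallel the app, the smaller the gap.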
Why all the noise then?
Quad-core chips are a very sensitive topic these days, for several reasons. The most important one is that the concept is easy for prospective buyers to grasp – much like “megapixels” for cameras. Four obviously sounds better than two, and quad-core has been heavily promoted by Intel and AMD for years, so users are very much aware of it.
Secondly, Android tablets started using quad-core chips back in November with the ASUS Transformer Prime, so new tablets that feature “only” dual-core chips may look a bit weak or dated – there’s no winning that argument. There is tremendous manufacturer anxiety around releasing a dual-core tablet. In the end, the “specs-sensitive” crowd will frown upon “older” dual-core technology, and Apple will have to face the quad-core argument for the remainder of 2012 (at least with the new iPad).
This may explain why Apple has put forward “something (anything?) Quad-Core” and in this instance, the GPU was it. Of course, saying that the A5X graphics unit is “Quad-Core” is semantically perfectly valid, but it can be confusing. Was it a deliberate attempt to blur the meaning of Quad-Core? I don’t know. What’s important is that the confusion did happen, so at best, the message was not clear.
Conclusion: real-world benchmarks matter
Confusion is never good when you are about to invest a significant amount of money in a device. At Ubergizmo, we think that informed buyers become happy users, so it was important to talk about this. The new iPad is a dual-core tablet with a graphics engine that is 2X the speed of the iPad 2’s – that much we know for sure.
When the independent benchmarks come out, we expect to see results that are in line with the theory formulated above, and we look forward to testing real-world applications and running more benchmarks on the units we have purchased. Stay tuned for a complete Ubergizmo review of the new iPad on March 19th, as our retail unit will arrive on the 16th. Feel free to drop a comment to give us your opinion. If you feel like saving some money by getting an older model, check our iPad 1 and iPad 2 reviews.