
scottrichardson

macrumors 6502a
Original poster
Jul 10, 2007
716
293
Ulladulla, NSW Australia
Yo, smart dudes. What can Apple do to make the Mac Studio/Mac Pro competitive with the i9 and 4090 in 3D applications that demand a beefy GPU?
Basically, I feel the M3 generation will hit that mark and possibly exceed it. Given that the M2 Ultra sits somewhere between a 4070 Ti and a 4080 - that's pretty damn good for a 'mobile phone CPU'.
 

MRMSFC

macrumors 6502
Jul 6, 2023
371
381
Yo, smart dudes. What can Apple do to make the Mac Studio/Mac Pro competitive with the i9 and 4090 in 3D applications that demand a beefy GPU?
In 3D? Probably just add dedicated ray-tracing hardware and MOAR COREZ.

If I’m just spitballing, render times are the Achilles' heel of the current M series (which is why, at every opportunity, Apple shows off viewport performance, the strength of the M series). And render times are bound by lighting calculations, where anything with dedicated hardware has an advantage.

In theory, Apple COULD go a similar route to Nvidia and have the RT hardware work in tandem with the neural networks to do the lighting work. But that’s easier said than done.

Against the x86 competition, it’s a core war. Intel is adding their “efficiency” cores, which are not very efficient, as a die-space-efficient way to improve multicore performance.

Taking that and the stupid amount of memory throughput that the M series has into account, it shouldn’t be too hard (again, easier said than done) to throw cores at the problem.

I suppose Apple could “just” clock their processors higher, but that assumes that they’re not going to run into stability issues or other problems.

Maybe Apple could take a lesson from AMD and go the chiplet route, with different dies talking over an interposer (which, iirc, the M2 Ultra already does). This is the way I believe they’ll go.

All in all, Apple does have several options to take on the competition. I think they need to iterate faster though.
 

scottrichardson

macrumors 6502a
Original poster
Jul 10, 2007
716
293
Ulladulla, NSW Australia
In 3D? Probably just add dedicated ray-tracing hardware and MOAR COREZ.

If I’m just spitballing, render times are the Achilles' heel of the current M series (which is why, at every opportunity, Apple shows off viewport performance, the strength of the M series). And render times are bound by lighting calculations, where anything with dedicated hardware has an advantage.

In theory, Apple COULD go a similar route to Nvidia and have the RT hardware work in tandem with the neural networks to do the lighting work. But that’s easier said than done.

Against the x86 competition, it’s a core war. Intel is adding their “efficiency” cores, which are not very efficient, as a die-space-efficient way to improve multicore performance.

Taking that and the stupid amount of memory throughput that the M series has into account, it shouldn’t be too hard (again, easier said than done) to throw cores at the problem.

I suppose Apple could “just” clock their processors higher, but that assumes that they’re not going to run into stability issues or other problems.

Maybe Apple could take a lesson from AMD and go the chiplet route, with different dies talking over an interposer (which, iirc, the M2 Ultra already does). This is the way I believe they’ll go.

All in all, Apple does have several options to take on the competition. I think they need to iterate faster though.

I think it's actually pretty incredible that Apple ALREADY has a GPU capable of keeping up with MOST of the Nvidia 4000 series. Given that those GPUs are insanely powerful, and physically HUGE, Apple has done an amazing job in the relatively short time they've been producing Mac chips. I appreciate that they have technically been developing GPUs for much longer in the iPhone/iPad devices. Still, the M2 Ultra is up there with a 4070 Ti in raster performance. Obviously ray tracing is another story. And all of that performance with less than 50% of the power draw, often substantially less. CRAZY!

I think that to achieve 4090 performance, they are going to have to crank up everything: cores, clocks, IPC, etc. Given that the M2 Ultra pulls around 27.2 TFLOPS while the 4090 is pulling 83 TFLOPS, that's roughly 3 times the theoretical performance. I don't believe the real-world performance is 3x the M2 Ultra, but there's still a wide gap.

The 4080 pulls 49 TFLOPS, which is 80% more than the M2 Ultra.

What's interesting is that on PAPER the 4070 Ti hits 40 TFLOPS, which is more than the M2 Ultra, but the M2 Ultra can beat it in real-world scenarios. So it's not a cut-and-dried comparison, I don't think.

So, using rough guesswork, if Apple could hit something in the order of 50 TFLOPS with the M3 Ultra, I think we'd see it neck and neck with the 4090, and in SOME cases beating it. The only question is ray tracing. I have a gut feeling the first iteration of ray-tracing hardware will fall short of the likes of RTX or Radeon, but STILL be a huge leap compared to NO dedicated hardware at all.
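
For anyone curious where those paper numbers come from, here's a rough back-of-the-envelope sketch of the usual formula (theoretical FP32 FLOPS = ALU lanes x 2 ops per FMA x clock). The core counts and clocks are assumptions for illustration, and the M3 Ultra line in particular is pure speculation:

```swift
import Foundation

// Back-of-the-envelope: theoretical FP32 TFLOPS = ALU lanes * 2 (FMA) * clock.
// Core counts and clocks below are rough public figures / assumptions, not
// confirmed specs; the "M3 Ultra" entry is entirely hypothetical.
struct GPUConfig {
    let name: String
    let fp32Lanes: Int      // total FP32 ALU lanes (GPU cores * 128 lanes each, assumed)
    let clockGHz: Double
}

func theoreticalTFLOPS(_ g: GPUConfig) -> Double {
    Double(g.fp32Lanes) * 2.0 * g.clockGHz / 1000.0   // GFLOPS -> TFLOPS
}

let configs = [
    GPUConfig(name: "M2 Ultra (76-core GPU, ~1.4 GHz assumed)", fp32Lanes: 76 * 128, clockGHz: 1.4),
    GPUConfig(name: "Hypothetical M3 Ultra (96 cores, 2.0 GHz)", fp32Lanes: 96 * 128, clockGHz: 2.0),
]

for g in configs {
    print("\(g.name): ~\(String(format: "%.1f", theoreticalTFLOPS(g))) TFLOPS")
}
// M2 Ultra (76-core GPU, ~1.4 GHz assumed): ~27.2 TFLOPS
// Hypothetical M3 Ultra (96 cores, 2.0 GHz): ~49.2 TFLOPS
```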

On the CPU front, I think we will see the stock M3 single-core performance being right up there with the fastest Intel and AMD CPUs. Didn't leaked benchmarks already come out indicating just as much?
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Looking at benchmarks that are optimized for the M series, the M2 Ultra is actually around 20% faster than the AD103-based laptop 4090.

Which puts that GPU in the range of a desktop 4080 in optimized software.
 
  • Like
Reactions: scottrichardson

leman

macrumors Core
Oct 14, 2008
19,517
19,664
If I’m just spitballing, render times are the Achilles' heel of the current M series (which is why, at every opportunity, Apple shows off viewport performance, the strength of the M series). And render times are bound by lighting calculations, where anything with dedicated hardware has an advantage.

In pure compute, Apple is fairly competitive. In fact, it's outperforming GPUs with comparable peak advertised FLOPs. The weaknesses are lack of RT acceleration (as you mention) as well as very low operational clocks.

In theory, Apple COULD go a similar route to Nvidia and have the RT hardware work in tandem with the neural networks to do the lighting work. But that’s easier said than done.

Neural networks are involved in denoising, and AFAIK Apple already uses them for this purpose. As to hardware RT, there are around a dozen relevant Apple patents that paint a very clear picture of the implementation Apple is pursuing. It looks very interesting and more advanced than any other current RT tech. If it works out, Apple might be the first to deliver real-time, ultra-energy-efficient RT.

The secret sauce? The RT unit operates at reduced precision, which allows it to quickly test (and reject) a large number of nodes in parallel. The suspected ray hits are then compacted, reordered, and handed off to a freshly launched shader that does the final full-precision hit testing and shading. What's important here is that general-purpose shaders are only invoked for rays that are suspected hits, which should in theory dramatically improve hardware utilisation efficiency.
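
Purely to illustrate that two-phase idea (this is NOT Apple's actual design, which is only known from patents), a CPU-side sketch of "cheap conservative test, compact the survivors, then precise test" might look like this:

```swift
// Illustrative sketch of a two-phase traversal pipeline, not Apple's implementation:
// a cheap conservative test rejects most candidates, the survivors are compacted,
// and only the compacted list gets full-precision hit testing / shading.
struct Ray { var origin: SIMD3<Float>; var direction: SIMD3<Float> }
struct Box { var min: SIMD3<Float>; var max: SIMD3<Float> }

// Phase 1: reduced-precision, conservative slab test. It may return false
// positives (phase 2 filters those out) but never false negatives.
func coarseHit(_ ray: Ray, _ box: Box, epsilon: Float = 1e-2) -> Bool {
    var tMin: Float = 0
    var tMax: Float = .greatestFiniteMagnitude
    for axis in 0..<3 {
        let invD = 1.0 / ray.direction[axis]
        var t0 = (box.min[axis] - epsilon - ray.origin[axis]) * invD
        var t1 = (box.max[axis] + epsilon - ray.origin[axis]) * invD
        if invD < 0 { swap(&t0, &t1) }
        tMin = max(tMin, t0)
        tMax = min(tMax, t1)
    }
    return tMax >= tMin
}

// Phase 2: exact test, standing in for the "freshly launched shader" that does
// the final hit confirmation and shading on the compacted candidates.
func preciseHit(_ ray: Ray, _ box: Box) -> Bool {
    coarseHit(ray, box, epsilon: 0)
}

func trace(rays: [Ray], boxes: [Box]) -> [Int] {
    // Cheap test on every (ray, box) pair; keep only suspected hits...
    let candidates = rays.indices.filter { i in
        boxes.contains { coarseHit(rays[i], $0) }
    }
    // ...then run the expensive path only on the compacted survivors.
    return candidates.filter { i in
        boxes.contains { preciseHit(rays[i], $0) }
    }
}
```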


I suppose Apple could “just” clock their processors higher, but that assumes that they’re not going to run into stability issues or other problems.

It should be fairly clear at this point that the A14/A15 u-arch is not designed to scale to higher clocks. We will see whether the next u-arch will be more flexible here. Given Apple's ludicrous lead in the smartphone market, it would make some sense to sacrifice a bit of energy efficiency for a higher clock ceiling and just clock the smartphone cores lower to compensate. But of course, there are a lot of variables.

I'd say the CPU is mostly fine (and Apple can always add more multicore throughput by increasing the number of area-efficient E-cores, like Intel), but where the low clock really hurts is the GPU. Even the Mac Studio should be able to handle 1.7-1.8 GHz, which would result in a flat 30% compute improvement.
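
For what it's worth, that clock math checks out under a simple linear-scaling assumption (the ~1.4 GHz baseline here is an assumption, not an official figure):

```swift
// Quick sanity check of the "flat 30%" figure, assuming compute scales linearly
// with clock and an M2-class GPU clock of roughly 1.4 GHz (assumed).
let currentClockGHz = 1.4
let targetClockGHz = 1.8
let uplift = (targetClockGHz / currentClockGHz - 1.0) * 100
print("~\(Int(uplift.rounded()))% more compute at the same core count")   // ~29%
```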

Maybe Apple could take a lesson from AMD and go the chiplet route, with different dies talking over an interposer (which, iirc, the M2 Ultra already does). This is the way I believe they’ll go.

Chiplets as AMD does them are primarily a cost-saving measure. I doubt this would be a good solution for Apple with their focus on mobile SoCs, as they want to keep their energy efficiency (AMD uses monolithic dies on mobile too, as their desktop chiplets waste too much power...).

However, there is another avenue Apple is exploring, and that's 3D die stacking. They have advanced patents here going back to 2020, so they must have been experimenting with it for some time now. From what I understand, the idea is to put compute logic (CPU/GPU) on a die manufactured on a smaller, more expensive node, while putting memory controllers and cache on another die manufactured on an older process. These dies are then stacked on each other and connected directly. This gives you the best of both worlds while retaining the energy efficiency of the monolithic approach. The resulting package is also supposed to be smaller than a monolithic die. I wouldn't be surprised if we see this technology in the 3nm products, simply because 3nm is so damn expensive and doesn't scale for cache memory. So it might make sense to spend a bit more on advanced packaging that uses "cheap" 5nm for cache — the resulting cost will be lower than a purely 3nm chip, with comparable performance and energy efficiency. If not on M3, maybe M3 Pro/Max.

Where we will certainly continue seeing "chiplets" is the larger systems, aka Ultra/Extreme, as energy efficiency is less of a concern there. The M1/M2 Ultra seems to be a fairly straightforward interconnect — at least from a logical standpoint — they "simply" join the two on-chip networks into one big network. But a very recent Apple patent depicts a four-chip configuration using a new routing network interconnect — similar to what Intel uses in their new Xeons, only that Apple's solution will likely have much higher bandwidth.

Of course, all this is speculation, albeit informed speculation. It is also very possible that the M3 family will continue to be "boring" monolithic dies with conservative clocks. It's just that I hope, for Apple's sake, that they have some exciting new tech in the pipeline, because otherwise they might run out of momentum.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Chiplets as AMD does them are primarily a cost-saving measure. I doubt this would be a good solution for Apple with their focus on mobile SoCs, as they want to keep their energy efficiency (AMD uses monolithic dies on mobile too, as their desktop chiplets waste too much power...).

However, there is another avenue Apple is exploring, and that's 3D die stacking. They have advanced patents here going back to 2020, so they must have been experimenting with it for some time now. From what I understand, the idea is to put compute logic (CPU/GPU) on a die manufactured on a smaller, more expensive node, while putting memory controllers and cache on another die manufactured on an older process. These dies are then stacked on each other and connected directly. This gives you the best of both worlds while retaining the energy efficiency of the monolithic approach. The resulting package is also supposed to be smaller than a monolithic die. I wouldn't be surprised if we see this technology in the 3nm products, simply because 3nm is so damn expensive and doesn't scale for cache memory. So it might make sense to spend a bit more on advanced packaging that uses "cheap" 5nm for cache — the resulting cost will be lower than a purely 3nm chip, with comparable performance and energy efficiency. If not on M3, maybe M3 Pro/Max.

Where we will certainly continue seeing "chiplets" is the larger systems, aka Ultra/Extreme, as energy efficiency is less of a concern there. The M1/M2 Ultra seems to be a fairly straightforward interconnect — at least from a logical standpoint — they "simply" join the two on-chip networks into one big network. But a very recent Apple patent depicts a four-chip configuration using a new routing network interconnect — similar to what Intel uses in their new Xeons, only that Apple's solution will likely have much higher bandwidth.

Of course, all this is speculation, albeit informed speculation. It is also very possible that the M3 family will continue to be "boring" monolithic dies with conservative clocks. It's just that I hope, for Apple's sake, that they have some exciting new tech in the pipeline, because otherwise they might run out of momentum.
Intel's CPUs AND GPUs are monolithic and draw stupid amounts of power, whereas Meteor Lake and Arrow Lake mobile are going the tile (chiplet) route and are rumored not to exceed 65W of power. Also, chiplet CPUs from AMD fit in 65W thermal envelopes and stay within their boundaries.

What are we to make of that, if it contradicts the theory that chiplets use large amounts of power?

Power is a result of process technology and clocks. Intel's process sucks, hence their designs use stupid amounts of power. TSMC is way ahead of Intel on this front, which is why Meteor Lake and Arrow Lake mobile CPUs are going to be completely manufactured on TSMC processes.

With Apple's clocks, there would be absolutely no difference in power draw between a monolithic and a chiplet/tile approach to the design of an SoC/SoP.

And with density gains shrinking on smaller nodes, that is the ONLY OPTION if Apple wants any meaningful performance increases.
 

leman

macrumors Core
Oct 14, 2008
19,517
19,664
Intel's CPUs AND GPUs are monolithic and draw stupid amounts of power, whereas Meteor Lake and Arrow Lake mobile are going the tile (chiplet) route and are rumored not to exceed 65W of power. Also, chiplet CPUs from AMD fit in 65W thermal envelopes and stay within their boundaries.

Note that I never claimed chiplets per se aren't energy-efficient, just that AMD's current implementation is not. And we are not discussing 65W TDP envelopes (which in AMD's terminology means around 95W of actual power consumption), we are talking about mobile technology where every watt counts. And the idle power consumption of Zen 4 desktop CPUs is known to be rather high (at 20+ watts).

It is entirely possible that Intel's tile approach will be more optimised for energy efficiency. Especially since from what I understand they are targeting fairly low bandwidth.

With Apple clocks - there would be absolutely no difference in power draw between monolithic vs chiplet/tile approach to design of a SOC/SOP.

I would think that an inter-chip connection will always use more energy, simply because it has to overcome larger electrical resistance. Also, this connection is often wrapped in some additional protocol (like UCIe), which can be low-overhead but never zero-overhead.

Of course, it's quite possible that I am overestimating the overhead. The UCIe marketing info mentions a cost of 0.5 pJ per bit of transmitted information; I have no idea whether that's a lot compared to on-chip communication.
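
To put that figure into perspective, a quick back-of-the-envelope calculation (the 2.5 TB/s is Apple's advertised UltraFusion bandwidth, used here purely for scale):

```swift
// Rough scale check for the ~0.5 pJ/bit UCIe marketing figure. The 2.5 TB/s
// number is Apple's advertised UltraFusion bandwidth, used only as an
// illustrative ceiling on sustained cross-die traffic.
let energyPerBitJoules = 0.5e-12            // 0.5 pJ per transmitted bit
let bandwidthBytesPerSec = 2.5e12           // 2.5 TB/s interconnect bandwidth
let powerWatts = bandwidthBytesPerSec * 8 * energyPerBitJoules
print("Cross-die traffic at full rate: ~\(powerWatts) W")   // ~10 W
```

So roughly 10 W if the link ran flat out; whether that counts as "a lot" next to on-chip wiring is exactly the open question.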

And with lower densities on smaller nodes - that is the ONLY OPTION if Apple wants to get any performance increases that are meaningful.

I have described an alternative solution to the problem in my previous post, based on a published Apple patent. One can describe it as "chiplets", but the dies are very closely integrated rather than tiled, which allows for reduced package area and power consumption. I am inclined to believe that this is the sort of cost-effective, high-tech solution that is more likely to address Apple's needs. The more traditional horizontal tile MCM will likely continue to be used in large desktop units (Ultra/Extreme).

At any rate, the only Apple chiplet-related patents are those describing Ultra/Extreme-style systems with symmetrical SoCs. All other patents discuss die stacking and other kinds of integration (which is not AMD/Intel-style tiling). Apple is also notably absent from the list of companies that support UCIe. Based on all this, I would expect them to bake their own, more intricate solution.
 

dmccloud

macrumors 68040
Sep 7, 2009
3,138
1,899
Anchorage, AK
However, there is another avenue Apple is exploring, and that's 3D die stacking. They have advanced patents here going back to 2020, so they must have been experimenting with it for some time now. From what I understand, the idea is to put compute logic (CPU/GPU) on a die manufactured on a smaller, more expensive node, while putting memory controllers and cache on another die manufactured on an older process. These dies are then stacked on each other and connected directly. This gives you the best of both worlds while retaining the energy efficiency of the monolithic approach. The resulting package is also supposed to be smaller than a monolithic die. I wouldn't be surprised if we see this technology in the 3nm products, simply because 3nm is so damn expensive and doesn't scale for cache memory. So it might make sense to spend a bit more on advanced packaging that uses "cheap" 5nm for cache — the resulting cost will be lower than a purely 3nm chip, with comparable performance and energy efficiency. If not on M3, maybe M3 Pro/Max.

Where we will certainly continue seeing "chiplets" is the larger systems, aka Ultra/Extreme, as energy efficiency is less of a concern there. The M1/M2 Ultra seems to be a fairly straightforward interconnect — at least from a logical standpoint — they "simply" join the two on-chip networks into one big network. But a very recent Apple patent depicts a four-chip configuration using a new routing network interconnect — similar to what Intel uses in their new Xeons, only that Apple's solution will likely have much higher bandwidth.

Of course, all this is speculation, albeit informed speculation. It is also very possible that the M3 family will continue to be "boring" monolithic dies with conservative clocks. It's just that I hope, for Apple's sake, that they have some exciting new tech in the pipeline, because otherwise they might run out of momentum.

One thing to note here: AMD already uses 3D stacking in its X3D CPUs (Ryzen 7 5800X3D, Ryzen 7 7800X3D, Ryzen 9 7900X3D, Ryzen 9 7950X3D, et al.). AMD brands it as "3D V-Cache", but it is relevant here given that both Apple and AMD use TSMC to build their silicon.
 

257Loner

macrumors 6502
Dec 3, 2022
456
635
I would like to ask all y'all a question: Wouldst thou agree with the analysis that Apple's high-end M-series chips (Max and above) are basically dedicated GPUs with integrated CPUs? The reason provided for this analysis, and I can't remember who said it first, was that most of the die space is dedicated to GPU cores, and so these chips are GPUs first, and CPUs second.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
Wouldst thou agree with the analysis that Apple's high-end M-series chips (Max and above) are basically dedicated GPUs with integrated CPUs?
Yes, as you can see in these die shots.

[Die shot: M1 Pro]

[Die shot: M1 Max]

 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Note that I never claimed chiplets per se aren't energy-efficient, just that AMD's current implementation is not. And we are not discussing 65W TDP envelopes (which in AMD's terminology means around 95W of actual power consumption), we are talking about mobile technology where every watt counts. And the idle power consumption of Zen 4 desktop CPUs is known to be rather high (at 20+ watts).

It is entirely possible that Intel's tile approach will be more optimised for energy efficiency. Especially since from what I understand they are targeting fairly low bandwidth.



I would think that an inter-chip connection will always use more energy, simply because it has to overcome larger electrical resistance. Also, this connection is often wrapped in some additional protocol (like UCIe), which can be low-overhead but never zero-overhead.

Of course, it's quite possible that I am overestimating the overhead. The UCIe marketing info mentions a cost of 0.5 pJ per bit of transmitted information; I have no idea whether that's a lot compared to on-chip communication.



I have described an alternative solution to the problem in my previous post, based on a published Apple patent. One can describe it as "chiplets", but the dies are very closely integrated rather than tiled, which allows for reduced package area and power consumption. I am inclined to believe that this is the sort of cost-effective, high-tech solution that is more likely to address Apple's needs. The more traditional horizontal tile MCM will likely continue to be used in large desktop units (Ultra/Extreme).

At any rate, the only Apple chiplet-related patents are those describing Ultra/Extreme-style systems with symmetrical SoCs. All other patents discuss die stacking and other kinds of integration (which is not AMD/Intel-style tiling). Apple is also notably absent from the list of companies that support UCIe. Based on all this, I would expect them to bake their own, more intricate solution.
You're missing the point. Intel has monolithic CPUs in mobile, and they draw much, much more power than AMD's solutions. The 6P/8E 13900H draws 115W of power.

And the mobile solutions from AMD clock past 4 GHz within a 35W thermal envelope, which is the same as the M2 Pro.

Is it monolithic? Yes, but there is absolutely nothing in Dragon Range CPUs with a similar clock configuration that would rule out the same thermal envelope. It's just clock speeds.

The power drawn by an SoC is a byproduct of clocks and process technology, not of the underlying packaging technology. Inter-chip connections do draw power, but there is a very good reason why AMD designs their CPUs around CPU PACKAGE POWER, not core power, which is a completely different approach from Intel's. Intel designs their CPUs around core power first, and because they have crap process tech, we end up with 250W mainstream CPUs that clock very high and perform well but use way too much power, and are monolithic.
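
That "power is a byproduct of clocks and process" point roughly follows the textbook dynamic-power relation P ≈ C·V²·f, where hitting higher clocks usually also requires higher voltage. A toy illustration, with every number invented purely for the example:

```swift
import Foundation

// Toy illustration of why power climbs much faster than clock speed:
// dynamic power ~ switched capacitance * voltage^2 * frequency, and higher
// frequencies generally demand higher voltage. All numbers are invented for
// the example; none are measured values for any real chip.
func dynamicPowerWatts(capacitanceFarads: Double, volts: Double, hertz: Double) -> Double {
    capacitanceFarads * volts * volts * hertz
}

let base  = dynamicPowerWatts(capacitanceFarads: 2e-9, volts: 0.85, hertz: 3.5e9)
let boost = dynamicPowerWatts(capacitanceFarads: 2e-9, volts: 1.20, hertz: 5.5e9)
print(String(format: "%.1f W -> %.1f W: a %.1fx power increase for a ~1.6x clock bump",
             base, boost, boost / base))
// 5.1 W -> 15.8 W: a 3.1x power increase for a ~1.6x clock bump
```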


Chiplets, and moving cache off the compute dies, are the only option not only for Apple, but for basically everybody in this industry. Cache, and for the same reason memory controllers, have reached their limit in scaling down with process tech. Without it, there will NOT be any meaningful perf uplift on next-gen nodes, while costs will rise stupidly high.

There simply is no other choice. And even if the M3 is not chiplet-based and stays completely monolithic, the M3 Pro and Max will most likely have to be systems-on-package with advanced packaging solutions. They are too big and carry very large caches.

It opens the door to wider memory buses, and as I have said previously in one of the threads, if that leaked spec of an M3-series chip in a MacBook Pro with 6P/6E cores, 36 GB of RAM and 18 GPU cores is the actual M3, it either has to be a large monolithic SoC, or chiplet-based with a 192-bit bus and 20 GPU cores.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
I would like to ask all y'all a question: Wouldst thou agree with the analysis that Apple's high-end M-series chips (Max and above) are basically dedicated GPUs with integrated CPUs? The reason provided for this analysis, and I can't remember who said it first, was that most of the die space is dedicated to GPU cores, and so these chips are GPUs first, and CPUs second.
No. They are just systems on chip/package with a unified memory architecture.
 
  • Like
Reactions: Basic75

leman

macrumors Core
Oct 14, 2008
19,517
19,664
I would like to ask all y'all a question: Wouldst thou agree with the analysis that Apple's high-end M-series chips (Max and above) are basically dedicated GPUs with integrated CPUs? The reason provided for this analysis, and I can't remember who said it first, was that most of the die space is dedicated to GPU cores, and so these chips are GPUs first, and CPUs second.

It‘s not so much about the die space as about the memory subsystem. CPUs traditionally use only a few memory controllers and narrow memory interfaces, and the entire system is optimized for latency. GPUs, on the other hand, have more memory controllers, wider memory interfaces, and are optimized for serving many memory requests at the cost of latency. Apple's implementation looks more like a GPU in this regard, or - if you prefer - a gaming console.
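
A rough feel for the gap being described, using peak bandwidth = bus width x transfer rate (the two configurations below are commonly cited figures, used only for illustration):

```swift
// Peak memory bandwidth ~ bus width (bytes) * transfer rate. The two configs
// are commonly cited figures, used only to illustrate the CPU-vs-GPU-style
// memory subsystem gap described above.
func peakGBps(busBits: Int, megaTransfersPerSecond: Double) -> Double {
    Double(busBits / 8) * megaTransfersPerSecond / 1000.0
}

let desktopCPU = peakGBps(busBits: 128, megaTransfersPerSecond: 6400)  // dual-channel DDR5-6400
let m1MaxStyle = peakGBps(busBits: 512, megaTransfersPerSecond: 6400)  // wide LPDDR5, M1 Max-class
print("Typical desktop CPU: ~\(desktopCPU) GB/s")   // ~102.4 GB/s
print("M1 Max-class SoC:    ~\(m1MaxStyle) GB/s")   // ~409.6 GB/s
```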

Of course, if one were to earnestly insist that the M1 Max should be called “a GPU with iCPU”, that would be very silly. It’s not analysis, just trying to fit a square peg into a round hole. Once device categories become fluid, the old labels lose their utility.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Guys, does the operating system run on the GPU, or the CPU?

If you know the answer to that, you know that M-series chips are NOT a GPU with an integrated CPU.
 
  • Like
Reactions: iPadified and leman

257Loner

macrumors 6502
Dec 3, 2022
456
635
Guys, does the operating system run on the GPU, or the CPU?

If you know the answer to that, you know that M-series chips are NOT a GPU with an integrated CPU.
My quote was not an analysis of how software runs on the computer. It was a statement evaluating the priority that parts of the chip are receiving these days and how we talk about it.
 
  • Like
Reactions: Tagbert

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
My quote was not an analysis of how software runs on the computer. It was a statement evaluating the priority that parts of the chip are receiving these days and how we talk about it.
The CPU is still the priority among those parts, because it is what runs ALL OF THE SOFTWARE, starting with the OS.
 

257Loner

macrumors 6502
Dec 3, 2022
456
635
There's room for debate there. Check out these die shots, and tell me which component has received the lion's share of the die:
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
There's room for debate there. Check out these die shots, and tell me which component has received the lion's share of the die:
No mate, there is no room for debate. Until GPUs become so robust that they can run the OS themselves, there is nothing to talk about.

The hierarchy of hardware/software priority is this:

CPU -> memory -> accelerators. Die sizes don't matter at all here, because they are a byproduct of market competition, customer expectations, and product marketing.
Do you also think that AMD and Microsoft prioritized the CPU over that huge GPU when developing the SoC for the Xbox series?
View attachment 2234274
Why did MS and Sony even bother adding a CPU to the console if the GPU was the priority? /s

Why do you think PS5 and Xbox Series are the first consoles in history with 120 FPS capabilities?

BECAUSE OF THE CPU. The slower the CPU, the lower the framerate you will get, and the bigger the CPU-side bottleneck on the GPU.

Do I really have to explain to people the differences between a GPU and a CPU, what they do in a computer, and why GPUs require such huge amounts of logic on the die?
 
  • Like
Reactions: iPadified and leman

leman

macrumors Core
Oct 14, 2008
19,517
19,664
There's room for debate there. Check out these die shots, and tell me which component has received the lion's share of the die:

That’s a weak argument. There is an even larger disparity between GPU and CPU silicon area in any workstation or gaming PC; it's just that there they are two separate components.

The GPU takes more area simply because it is designed for much larger workloads than the CPU. And parallel throughput is becoming increasingly important.
 
  • Like
Reactions: Basic75

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
That’s a weak argument. There is an even larger disparity between GPU and CPU silicon area in any workstation or gaming PC; it's just that there they are two separate components.

The GPU takes more area simply because it is designed for much larger workloads than the CPU. And parallel throughput is becoming increasingly important.
Not larger, but "WIDER". That's the difference. It's an important distinction that defines how GPUs work.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
The GPU takes more area simply because it is designed for much larger workloads than the CPU.
And since the GPU is more than 3 times larger than the CPU in the M1 Max, some describe it as a GPU with a CPU.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
And since the GPU is more than 3 times larger than the CPU in the M1 Max, some describe it as a GPU with a CPU.
Only people who do not know the difference between a CPU and a GPU, their roles, and the reason such a GPU needs to be so big.
 

257Loner

macrumors 6502
Dec 3, 2022
456
635
Only people who do not know the difference between a CPU and a GPU, their roles, and the reason such a GPU needs to be so big.
Millions of dollars are being poured into research and development to erode the difference between CPUs and GPUs. The evidence?

This evidence:
General-purpose computing on GPUs
CUDA
OpenCL
OpenGL
Unified Memory Architecture (What Apple Silicon uses)
Heterogeneous System Architecture
AMD APUs (Used in the PlayStation 4 and Xbox One)

If you give computer engineers enough time and money, they'll continue to erode the difference between CPUs and GPUs and change computing again.
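
As a small, concrete taste of that convergence (a generic sketch, not something from this thread), here's a minimal Metal compute dispatch in Swift. The kernel just doubles four floats; the interesting part is that on Apple Silicon the shared MTLBuffer is plain unified memory, so the CPU and GPU touch the same allocation with no explicit copy.

```swift
import Metal

// Minimal GPGPU example: a toy Metal compute kernel that doubles an array.
// On Apple Silicon the .storageModeShared buffer lives in unified memory, so
// the CPU writes inputs and reads results from the same allocation the GPU uses.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;
kernel void double_values(device float *data [[buffer(0)]],
                          uint id [[thread_position_in_grid]]) {
    data[id] = data[id] * 2.0f;
}
"""

let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeLibrary(source: kernelSource, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "double_values")!)

var values: [Float] = [1, 2, 3, 4]
let buffer = device.makeBuffer(bytes: &values,
                               length: values.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!   // shared = unified memory

let queue = device.makeCommandQueue()!
let commands = queue.makeCommandBuffer()!
let encoder = commands.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreads(MTLSize(width: values.count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: values.count, height: 1, depth: 1))
encoder.endEncoding()
commands.commit()
commands.waitUntilCompleted()

// Read the results straight out of the same buffer the GPU just wrote.
let results = buffer.contents().bindMemory(to: Float.self, capacity: values.count)
print((0..<values.count).map { results[$0] })   // [2.0, 4.0, 6.0, 8.0]
```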
 