
crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
I think just filling up all the slots is going to be easier than dealing with this lol.
[Attachment 1908297: diagram showing which RAM slots to populate]


I think Apple might have had an Ice Lake Mac Pro planned initially, but it may never see the light of day.
The AS "Mac Pro" could end up being an entirely new system, and Apple might as well drop the Mac Pro name. They could just settle on either a 256GB or 512GB (if they manage to double the density next year) maximum memory configuration on 4x Jade-C.
Sure, but that's just what I'm saying: modularity has its own costs, and that's one reason Apple might choose not to go with a modular RAM system. They still might, but they might not, and this is one reason why. Because, again, the memory will likely have to feed the GPU too.

For Ice Lake, in reference to our earlier discussions about Intel and leaked names being a pretty good indication of what's coming down the pipe: generally, if a product's code name shows up in macOS/Xcode, that product has been released relatively soon after.

This is from just earlier this year:


In the initial transition video, Tim Cook referenced releasing new Intel hardware, an Intel leaker said there will be one, and Xcode recently gained a reference to it. Apple could, yes, in theory change their mind, but that's pretty strong evidence that an Ice Lake Mac Pro is coming. Historically, that confluence of evidence was pretty much a guarantee of a product.
 

theorist9

macrumors 68040
May 28, 2015
3,881
3,060
The modularity is also an issue though. In order to get the full bandwidth you need a lot of DDR slots and all the slots have to be filled. You have to rely on the user to do that right. This is even more critical if it’s feeding the GPU as well as the CPU.

I think just filling up all the slots is going to be easier than dealing with this lol.
[Attachment 1908297: diagram showing which RAM slots to populate]
IMO, the above is not a big deal for a pro user. What could be easier than following the above RAM diagram? It's like paint-by-numbers, except simpler, because there's only one color.

I.e., suppose you gave Mac Pro buyers the following choice: We can either give you soldered-in memory that you have to choose at purchase, or we can give you upgradeable memory, but you need to fill all the slots to get full performance (and if you don't fill all of them, you need to follow a scheme like the above). My guess is that the overwhelming majority would choose the latter. I certainly would.

Question: Let's suppose you have all the slots filled with the minimum-size modules (let's suppose it's 8 GB). If upgrading, do you need to replace the memory in all the slots with same-size modules, or can you simply replace pairs? If it's the latter, then here's one possible solution: Apple's minimum memory configuration could be, say, 16 x 8 GB = 128 GB. Then users could easily upgrade while maintaining full performance. E.g., the next upgrade would be 14 x 8 GB + 2 x 16 GB = 144 GB.
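
To make the arithmetic concrete, here's a quick sketch of those two configurations (the slot count and module sizes are just the ones from my example above, not anything Apple has announced, and whether mixed sizes keep full interleaving is exactly the open question):

Code:
// Toy illustration: report total capacity and whether every slot is filled,
// which is what you'd want for full bandwidth. Numbers are hypothetical.
let slotCount = 16
let baseConfig = Array(repeating: 8, count: slotCount)                 // 16 x 8 GB
let upgraded = Array(repeating: 8, count: slotCount - 2) + [16, 16]    // 14 x 8 GB + 2 x 16 GB

func describe(_ modulesGB: [Int]) {
    let totalGB = modulesGB.reduce(0, +)
    let allSlotsFilled = modulesGB.count == slotCount
    print("\(modulesGB.count) modules, \(totalGB) GB total, all slots filled: \(allSlotsFilled)")
}

describe(baseConfig)   // 16 modules, 128 GB total, all slots filled: true
describe(upgraded)     // 16 modules, 144 GB total, all slots filled: true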
 
Last edited:

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
What could be easier than following the above RAM diagram?

You would think so ... but the things I've seen and read ...

I'm not saying Apple absolutely won't use DDR5 modules - honestly I don't know what they'll do. What I am saying is that there are reasons why Apple, given their SoC design combined with their overall approach to computing, might choose to go with something more bespoke that guarantees a given level of performance rather than emphasizing modularity and upgradeability. They may still go with the latter for the AS Mac Pro, but I wouldn't put all my money on it. :)
 

throAU

macrumors G3
Feb 13, 2012
9,204
7,355
Perth, Western Australia
The thing with Alder Lake E-cores is that if you want to run AVX-512 workloads (which Intel has previously been trying to sell as an advantage of their architecture), the E-cores need to be turned off in the BIOS.

Yes, that's right. The E-cores don't support AVX-512, and rather than have a scheduler that can handle that, AVX-512 is simply not functional unless they are turned off.

 
  • Like
Reactions: JMacHack

Kpjoslee

macrumors 6502
Sep 11, 2007
417
269
You would think so ... but the things I've seen and read ...

I'm not saying Apple absolutely won't use DDR5 modules - honestly I don't know what they'll do. What I am saying is that there are reasons why Apple, given their SoC design combined with their overall approach to computing, might choose to go with something more bespoke that guarantees a given level of performance rather than emphasizing modularity and upgradeability. They may still go with the latter for the AS Mac Pro, but I wouldn't put all my money on it. :)

Well, moving from LPDDR5 to DDR5 doesn't really incur a performance or latency penalty. Power consumption may go up by around 20~30%, but that would be a non-issue on a desktop platform like the Mac Pro. DDR5 is pretty much the only option that can go above 1TB of memory capacity, and it wouldn't require significant modification of the Jade SoC in terms of memory I/O. That's the reason I think there is a pretty good chance the next Mac Pro (or whatever it ends up being) may end up using DDR5.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
Well, moving from LPDDR5 to DDR5 doesn't really incur a performance or latency penalty. Power consumption may go up by around 20~30%, but that would be a non-issue on a desktop platform like the Mac Pro. DDR5 is pretty much the only option that can go above 1TB of memory capacity, and it wouldn't require significant modification of the Jade SoC in terms of memory I/O. That's the reason I think there is a pretty good chance the next Mac Pro (or whatever it ends up being) may end up using DDR5.

I was thinking in terms of not requiring the user to do anything to get the stated memory bandwidth, rather than DDR5 not being up to the task.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
Well, moving from LPDDR5 to DDR5 doesn't really incur a performance or latency penalty. Power consumption may go up by around 20~30%, but that would be a non-issue on a desktop platform like the Mac Pro. DDR5 is pretty much the only option that can go above 1TB of memory capacity, and it wouldn't require significant modification of the Jade SoC in terms of memory I/O. That's the reason I think there is a pretty good chance the next Mac Pro (or whatever it ends up being) may end up using DDR5.
I would think so too. Not sure if LPDDR5 memory modules come with ECC channels. Since LPDDR5 is meant for mobile, I don't think those will have ECC.

The question now is, if Apple goes with DDR5 DIMM modules, how wide they will go on the memory bus.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
I would think so too. Not sure if LPDDR5 memory modules come with ECC channels. Since LPDDR5 is meant for mobile, I don't think those will have ECC.

The question now is, if Apple goes with DDR5 DIMM modules, how wide they will go on the memory bus.

I believe LPDDR5 can have ECC. Although I'll be honest, I can't remember if it's ECC or … "ECC" (all DDR5 DIMMs have on-chip "ECC", which is not the same thing as true ECC memory, and I can't remember if the LPDDR5 ECC was true ECC or like that).
 
Last edited:

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I believe LPDDR5 can have ECC. Although I'll be honest, I can't remember if it's ECC or … "ECC" (all DDR5 DIMMs have on-chip "ECC", which is not the same thing as true ECC memory).

I have a vague recollection that it uses ECC for transmission, but doesn't store ECC bits (so if a stored bit flips, it isn't caught).
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
I believe LPDDR5 can have ECC. Although I'll be honest, I can't remember if it's ECC or … "ECC" (all DDR5 DIMMs have on-chip "ECC", which is not the same thing as true ECC memory).
AFAIK, ECC is mandated for DDR5 within the DRAM memory cells. It is not mandated for the channel between the memory module and the memory controller.
 

eoblaed

macrumors 68040
Apr 21, 2010
3,088
3,202
If Intel can convince people that the speed of their CPUs will predict a faster user experience than M(x) SoCs, they will have successfully swept aside one of the biggest advantages of the Apple Silicon SoCs: the fully integrated suite of systems on a single die. The fact that all these components can, for example, access data in RAM with zero copies and without having to go through a traditionally much slower RAM bus is huge, and is something a discrete Intel chip cannot reasonably match.

The performance and performance-to-power ratio are only one aspect of the brilliance of the M1 design; removing the need to copy data, transport data across slow buses, etc., is a big, big part of it as well.

It's not just about CPU speed when your CPU is throttled by the limitations of the hardware integration points.
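
For anyone curious what "zero copies" looks like in practice, here's a rough sketch using Metal's shared storage mode (just an illustration of the general idea of unified memory, not a claim about how any particular app or Apple framework is implemented):

Code:
import Metal

// With unified memory, the CPU and GPU can share a single allocation,
// so there is no staging copy over a discrete bus.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }

// .storageModeShared places the buffer in memory visible to both CPU and GPU.
let count = 1024
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU writes directly into the same memory the GPU will later read.
let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { values[i] = Float(i) }
// On a discrete GPU, this data would instead have to be copied across PCIe
// into a separate VRAM allocation before the GPU could touch it.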
 
  • Like
Reactions: throAU

altaic

Suspended
Jan 26, 2004
712
484
Sure, but that's just what I'm saying: modularity has its own costs, and that's one reason Apple might choose not to go with a modular RAM system. They still might, but they might not, and this is one reason why. Because, again, the memory will likely have to feed the GPU too.

For Ice Lake, in reference to our earlier discussions about Intel and leaked names being a pretty good indication of what's coming down the pipe: generally, if a product's code name shows up in macOS/Xcode, that product has been released relatively soon after.

This is from just earlier this year:


In the initial transition video, Tim Cook referenced releasing new Intel hardware, an Intel leaker said there will be one, and Xcode recently gained a reference to it. Apple could, yes, in theory change their mind, but that's pretty strong evidence that an Ice Lake Mac Pro is coming. Historically, that confluence of evidence was pretty much a guarantee of a product.
I'd really like to see Apple release a $30k Ice Lake Mac Pro with fancy GPUs alongside a $10k M1 Apex Mac Pro that bests it. 'Course a more affordable $10k Intel Mac Pro configuration wouldn't hold a candle to the ASi machine.

Also, yes, I call the quad M1 Max SoC "Apex" now. Apexes are cool.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I'd really like to see Apple release a $30k Ice Lake Mac Pro with fancy GPUs alongside a $10k M1 Apex Mac Pro that bests it. 'Course a more affordable $10k Intel Mac Pro configuration wouldn't hold a candle to the ASi machine.

Also, yes, I call the quad M1 Max SoC "Apex" now. Apexes are cool.

I call it Octamax.
 
  • Haha
Reactions: Technerd108

throAU

macrumors G3
Feb 13, 2012
9,204
7,355
Perth, Western Australia
If Intel can convince people that the speed of their CPUs will predict a faster user experience than M(x) SoCs, they will have successfully swept aside one of the biggest advantages of the Apple Silicon SoCs: the fully integrated suite of systems on a single die. The fact that all these components can, for example, access data in RAM with zero copies and without having to go through a traditionally much slower RAM bus is huge, and is something a discrete Intel chip cannot reasonably match.

The performance and performance-to-power ratio are only one aspect of the brilliance of the M1 design; removing the need to copy data, transport data across slow buses, etc., is a big, big part of it as well.

It's not just about CPU speed when your CPU is throttled by the limitations of the hardware integration points.
The other is that a lot of the heavy compute is handled by dedicated non-CPU ASICs, leaving the CPU free even while the machine is working hard.
 

theorist9

macrumors 68040
May 28, 2015
3,881
3,060
In the initial transition video, Tim Cook referenced releasing new Intel hardware, an Intel leaker said there will be one, and Xcode recently gained a reference to it. Apple could, yes, in theory change their mind, but that's pretty strong evidence that an Ice Lake Mac Pro is coming. Historically, that confluence of evidence was pretty much a guarantee of a product.
If Apple does this, I wonder if they will allow the CPUs (which are socketed) on existing Mac Pros to be upgraded to Ice Lake. Probably not, but it would be cool if they did.
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
Alder Lake reviews are out, and given recent discussions on the topic I thought it would be interesting to revisit this, this time with proper numbers in hand.

The bottom line is: yes, Alder Lake performance cores are faster than M1 performance cores... barely (by ~10%)... while consuming more than 10 times the power. In multi-core performance, the top-of-the-line desktop i9 (8+8 cores, 24 threads) is up to 50% faster in integer workloads than the M1 Max/Pro (8+2 cores, 10 threads), while consuming 6x as much power... and it has no performance advantage on SPEC fp workloads. But hey, Intel has overtaken Zen 3... slightly... while still consuming 2x the power on desktop. Thermally constrained i9 laptops will probably have 5% higher scores in single core compared to M1 chips (while revving up the fans like crazy), and will likely lose at least 20% in sustained workloads. Or maybe even more, if the 45W TDP is a hard sustained ceiling (which it probably is not going to be).

This again illustrates that Apple did the right thing by switching. There is no meaningful innovation happening in the x86 world. Intel CPUs run hotter than ever, and the promises that the new E-cores perform like Skylake at much lower power consumption were, of course, greatly exaggerated. Intel is squeezing out some more performance by literally cranking up the burner. And let's hope your workflow is parallel enough to properly schedule 24 asymmetric hardware threads...
The real apples-to-apples comparison will come when Alder Lake laptops start shipping.

Apple made the right move. Looking back, it's clear that Intel didn't properly communicate its development issues to Apple, and it definitely communicated poorly to the public about its roadmap.

Intel still has a lot to lose in all of this. The silicon market hasn't been this fierce in... I don't think ever.

Intel got arrogant, and spent a long time thinking no one could touch them, and this perfect storm brewed up.

I hope Intel ACTUALLY becomes competitive again. The industry needs as much competition as possible. I also REALLY hoped Blackberry would start innovating after the release of the iPhone, but a very specific type of arrogance and a lack of understanding of what made the iPhone popular killed their phone business.

There are lots of things going against them. I haven't looked at Alder Lake in any real detail outside of an AnandTech article on it, but if it's a chiplet-based design I have questions about the core-to-core interconnect and its speed. I am also curious about real-world testing under sustained load.

The Intel MacBook Pro 16 ramped down aggressively shortly after starting a video call. Fans were going full blast. Sure, Alder Lake looks great in a desktop environment, but does it fall off under continuous load? Can it sustain the performance with liquid cooling? Does PCIe 5 give them the same types of performance gains between the CPU and GPU that having them on the same package would give you?

By no means am I trying to argue or talk negatively about your statement. You have lots of valid points.

I'd have to review my original reading on what Intel was doing, but when the Surface Neo was announced, the "Hybrid CPU" wasn't an actual SoC. It was a SiP (System in Package), which is great because you can fab a bunch of Intel Core series processors, take the good ones, and connect them with Intel Atom cores (just saying that after I thought Atom was dead is making me roll my eyes). Theoretically all of these things are promising. But it's less of an architected solution than it is trying to get all of the pizza crusts from Chuck E. Cheese to prove they reuse pizza. Sure, you can do it, but if the connectivity isn't there, and if there aren't shared caches across distinct cores, it just looks like more latency.

I'll do more of a review over the weekend. I am BEYOND happy to be wrong on all of this. I want Intel to sock Apple in the jaw. That way, Apple has to up its game.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
If Apple does this, I wonder if they will allow the CPUs (which are socketed) on existing Mac Pros to be upgraded to Ice Lake. Probably not, but it would be cool if they did.

I think that would depend more on Intel than Apple if it's socketed - the new chip would have to be socket-compatible. But yeah, if you're able to buy the same chip, it should work without being a hackintosh or anything. I think.
 

theorist9

macrumors 68040
May 28, 2015
3,881
3,060
I think that would depend more on Intel than Apple if it's socketed - the new chip would have to be socket-compatible. But yeah, if you're able to buy the same chip, it should work without being a hackintosh or anything. I think.
Good point. Ice Lake does use a different socket. So Apple would instead have to be willing to sell the whole logic board as an upgrade (unless it wouldn't be compatible for other reasons). Maybe the swap could be done at an Apple Store or authorized repair center. IIRC from the iFixit teardown, it does easily pull out. If they're developing an Ice Lake logic board that fits the existing case anyway, then their development costs are already sunk. But they'd need to decide what percentage of their customers would, in the absence of this option, simply upgrade to the entirely new machine -- those are sales they'd lose if they offered a logic board upgrade.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,521
19,679
In a desktop AS implementation, would DDR's reduced latency provide enough benefit to outweigh LPDDR's efficiency?

Who knows, maybe? But I doubt it would make sense economically for Apple. The AS platform is based on economies of scale; doing a separate RAM implementation for the desktop would probably defeat the purpose.

For the Mac Pro, DDR as system memory will be problematic, as the number of sockets needed to provide enough bandwidth would quickly surpass a manageable number (e.g. 32 sockets to get to 1.6 TB/s).
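
To put rough numbers on that (a back-of-the-envelope sketch; the DDR5-6400 speed and one 64-bit DIMM per socket are my assumptions, and the 1.6 TB/s target is just the speculated quad-die figure):

Code:
// How many DDR5 DIMM sockets would it take to match a given aggregate
// bandwidth target? Assumes one 64-bit DIMM per socket at DDR5-6400.
let transfersPerSecond = 6_400_000_000.0          // DDR5-6400: 6.4 GT/s
let bytesPerTransfer = 8.0                        // 64-bit DIMM interface
let perDIMMBandwidth = transfersPerSecond * bytesPerTransfer   // ~51.2 GB/s

let targetBandwidth = 1.6e12                      // 1.6 TB/s
let socketsNeeded = Int((targetBandwidth / perDIMMBandwidth).rounded(.up))
print("Sockets needed: \(socketsNeeded)")         // 32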

Personally, I think depending on which direction Apple takes with the Mac Pro, we'd either see on-package RAM just like in the laptops, or on-package RAM plus slower (maybe 6 or 8 channel) modular RAM that acts like an additional level of cache.

BTW, from what I've subsequently learned, LPDDR can also be modular, though that is not its typical implementation, plus it might be more complicated to implement: https://news.ycombinator.com/item?id=18408496

Everything can be modular… but just think about the complexity. You are talking about RAM modules with a pinout higher than a modern server CPU's. The cost of such a system would be prohibitive, and there would be major consequences for its energy efficiency.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
I like this thread, since I'm getting so many of my questions answered!

Given this, I'd like to ask my three I/O questions:

1) Apple specifies a limit of two external displays on the M1 Pro. Owners of the M1 Pro have confirmed this is a hard limit (there are probably workarounds that allow more, but I'm talking about direct connection).

With 3 x TB4 and 1 x HDMI 2.0, and a powerful GPU, this maximum of two external displays seems like a surprising limitation. Heck, my mid-2014 MBP, with 2 x TB2 and 1 x HDMI 1.4, can (and does) drive three external displays. I.e., it drives displays from each of its video-capable ports.

What is it about the M1 Pro's architecture that explains this limitation? And likewise for the M1 Max, which is limited to three external displays (rather than being able to drive displays from all four of its video-capable ports, like the 16" Intel MBP could).

2) The TB4 standard calls for 40 Gb/s full duplex (bidirectionally). It also includes DisplayPort Alt Mode 2.0, which enables the interface to alternately support 80 Gb/s unidirectional transmission (see https://en.wikipedia.org/wiki/Thunderbolt_(interface)). However, from what poster Krevnik wrote below, it sounds like neither of these is available in practice, including on the M1 Pro/Max. Is there a consensus that this is the case?
And might we perhaps see one or both of these capabilities in the 2022 iMac Pro/Max's TB4 implementation?


3) I've pasted, at the bottom, a nice explanation from usernames need to be uniq for why Apple limited the SD port to UHS-II (mainly, if they went with UHS-III, then UHS-II cards would downgrade to UHS-I; plus UHS-III cards don't exist). But what's the explanation for why Apple decided to limit its HDMI port to 2.0? Was it a bandwidth limitation, a lack of reliable HDMI 2.1 controller chips, or something else?
1. M1, M1 Pro, and M1 Max all use an internal SoC building block called DCP for display outputs. I am not fully sure about this, but I think it's true that each DCP block handles one DisplayPort output stream, so the supported display count is a function of how many DCPs that particular SoC has. M1 Max uses some of its bigger floorplan to add more DCPs.

And yes, no typo there, M1 only natively supports DisplayPort. All M1 Macs with a HDMI port drive it with a DP-to-HDMI converter chip.

2. I think @Krevnik might have been missing that in DP Alt Mode 2.0, the way they get to 80 Gbps isn't by increasing per-lane speed. USB-C cabling and connectors provide four high speed differential pairs. In USB or TB mode, these are generally allocated as two transmit (TX), two receive (RX). DP Alt Mode 2.0 supports a configuration where all four are transmitters at 20 Gbps - that's how they get to 80 Gbps throughput. It also supports 2x20 for DP with the other two pairs used as high speed USB TX/RX.

Intel contributed the Thunderbolt PHY spec to VESA, so the 20 GT/s signaling standard used in DP Alt 2.0 is the TB4/USB4 20 GT/s standard. I don't know whether Apple implemented DP Alt 2.0 in M1 Pro/Max, but they've got the physical layer for it.
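
Just to make the lane arithmetic concrete, a quick sketch (the type and the per-pair figure are simply my restatement of the configurations above, not any official API or spec naming):

Code:
// USB-C exposes four high-speed differential pairs; throughput depends on
// how they are allocated. Figures assume 20 Gbps per pair, as in TB4/USB4.
enum LaneConfig {
    case usbOrThunderbolt     // 2 pairs TX + 2 pairs RX
    case dpAltModeFourLane    // all 4 pairs transmit DisplayPort
    case dpAltModeTwoPlusUSB  // 2 pairs DisplayPort + 2 pairs USB TX/RX

    var displayPortGbps: Int {
        let perPair = 20
        switch self {
        case .usbOrThunderbolt:    return 0            // DP is tunneled over TB instead
        case .dpAltModeFourLane:   return 4 * perPair  // 80 Gbps, one direction only
        case .dpAltModeTwoPlusUSB: return 2 * perPair  // 40 Gbps, USB on the other pairs
        }
    }
}

print(LaneConfig.dpAltModeFourLane.displayPortGbps)  // 80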

3. I tried to confirm the claim that UHS-III would force UHS-II cards to fall back to UHS-I and I think it's completely false. This SD Association (SDA) document says that UHS-III is a strict superset of UHS-II; it merely adds new speeds on top of existing UHS-II speeds.


The real issue is that the SDA seems to have misjudged where the camera industry wanted to go. SDA released the SD 6.0 spec with UHS-III nearly 5 years ago. After so much time and zero products shipped, it seems safe to declare that UHS-III was DOA and will never be seen in public.

Instead, everyone interested in pushing beyond UHS-II speeds has jumped on to CF Express. And it strikes me as the right thing to do - CFE is just PCIe+NVMe, which is the right thing to do if you want ultra high performance. The SDA seems to agree, as they've defined a new successor standard, SD Express, which me-toos CFE - it's also based on PCIe+NVMe. But they didn't manage to get any products launched until this year, so I suspect that CFE has already won the high end flash card market.

But wait, you were really asking why HDMI 2.0. The glib answer is: that's what the DP-to-HDMI converter chip Apple selected provides. The real answer is that I suspect they regard the HDMI port as an auxiliary thing you use to drive a TV or projector in a conference room. For that purpose, 2.0 is probably fine for 99.9% of people. If you want a big high res display, they think you should be using some form of DisplayPort.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
1. M1, M1 Pro, and M1 Max all use an internal SoC building block called DCP for display outputs. I am not fully sure about this, but I think it's true that each DCP block handles one DisplayPort output stream, so the supported display count is a function of how many DCPs that particular SoC has. M1 Max uses some of its bigger floorplan to add more DCPs.

And yes, no typo there, M1 only natively supports DisplayPort. All M1 Macs with a HDMI port drive it with a DP-to-HDMI converter chip.

2. I think @Krevnik might have been missing that in DP Alt Mode 2.0, the way they get to 80 Gbps isn't by increasing per-lane speed. USB-C cabling and connectors provide four high speed differential pairs. In USB or TB mode, these are generally allocated as two transmit (TX), two receive (RX). DP Alt Mode 2.0 supports a configuration where all four are transmitters at 20 Gbps - that's how they get to 80 Gbps throughput. It also supports 2x20 for DP with the other two pairs used as high speed USB TX/RX.

Intel contributed the Thunderbolt PHY spec to VESA, so the 20 GT/s signaling standard used in DP Alt 2.0 is the TB4/USB4 20 GT/s standard. I don't know whether Apple implemented DP Alt 2.0 in M1 Pro/Max, but they've got the physical layer for it.

3. I tried to confirm the claim that UHS-III would force UHS-II cards to fall back to UHS-I and I think it's completely false. This SD Association (SDA) document says that UHS-III is a strict superset of UHS-II; it merely adds new speeds on top of existing UHS-II speeds.


The real issue is that the SDA seems to have misjudged where the camera industry wanted to go. SDA released the SD 6.0 spec with UHS-III nearly 5 years ago. After so much time and zero products shipped, it seems safe to declare that UHS-III was DOA and will never be seen in public.

Instead, everyone interested in pushing beyond UHS-II speeds has jumped on to CF Express. And it strikes me as the right thing to do - CFE is just PCIe+NVMe, which is the right thing to do if you want ultra high performance. The SDA seems to agree, as they've defined a new successor standard, SD Express, which me-toos CFE - it's also based on PCIe+NVMe. But they didn't manage to get any products launched until this year, so I suspect that CFE has already won the high end flash card market.

But wait, you were really asking why HDMI 2.0. The glib answer is: that's what the DP-to-HDMI converter chip Apple selected provides. The real answer is that I suspect they regard the HDMI port as an auxiliary thing you use to drive a TV or projector in a conference room. For that purpose, 2.0 is probably fine for 99.9% of people. If you want a big high res display, they think you should be using some form of DisplayPort.

Yep. Not hearing any photographers clamoring for UHS-III. My new super-fancy camera has CFE, as do all the other high end cameras. Very questionable whether UHS-III ever is a thing.

And the HDMI port is nice because you don't need a dongle or a special USB-C-terminated cable. But if you are at your own desk at your own office or home, the dongle/cable isn't a hassle. You buy it once and are done with it. Now, with the HDMI socket, there's no need to drag a cable around with you when you travel.
 
  • Like
Reactions: theorist9

Kpjoslee

macrumors 6502
Sep 11, 2007
417
269
Yep. Not hearing any photographers clamoring for UHS-III. My new super-fancy camera has CFE, as do all the other high end cameras. Very questionable whether UHS-III ever is a thing.

And the HDMI port is nice because you don't need a dongle or a special USB-C-terminated cable. But if you are at your own desk at your own office or home, the dongle/cable isn't a hassle. You buy it once and are done with it. Now, with the HDMI socket, there's no need to drag a cable around with you when you travel.

Well, I still believe they should have supported HDMI 2.1 if they really bothered to put in an HDMI port at the expense of another USB-C port. Using smaller OLED TVs as high-res/high-refresh-rate displays would have been a nice alternative option for MacBook Pros if they supported HDMI 2.1.
 
  • Like
Reactions: turbineseaplane

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Well, I still believe they should have supported HDMI 2.1 if they really bothered to put in an HDMI port at the expense of another USB-C port. Using smaller OLED TVs as high-res/high-refresh-rate displays would have been a nice alternative option for MacBook Pros if they supported HDMI 2.1.

For most people, they didn't lose a USB-C port, because they no longer need to use one for charging. So I think it's generally a wash (other than folks who are using a single USB-C connection for both power and monitor).
 