
257Loner

macrumors 6502
Dec 3, 2022
456
635
I would say Apple is chasing specs, but based on their own product visions and needs. Apple likely planned for yearly refreshes of the Mac AS SoC, much like the iPhone SoCs, but the pandemic likely threw a big spanner into the works.

Many advocating that Apple release an SoC to rival what AMD/Nvidia/Intel have are asking Apple to skate to where the puck is. The world is moving increasingly into the mobile space, and devices are getting smaller. That is where the puck is headed. AMD/Nvidia/Intel's strategy does not allow Apple to go where the world is moving. I don't see the trio having solutions that can power a personal device like the rumoured Apple Glass without also carrying a heavy battery in your backpack.

I don't think Apple's management is going to go into a pissing contest.
I think you're right. I can't believe Nvidia thinks the market is moving towards $1500 GPUs! For those who buy their own dedicated GPUs, Jay Langevin from JayzTwoCents argues that most people are buying $200-$300 graphics cards. But as you said, Apple thinks iGPUs are the future. And there's good reason.

Apple is the largest gaming console maker in America today. In 2021, Sony sold 17 million PlayStation 5’s, Microsoft sold 9 million Xboxes, and Nintendo sold 8 million Switches. That same year, Apple shipped more than 240 million iPhones, 7 times more than the other console makers combined.
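
Quick sanity check on that ratio, using the 2021 unit figures cited above:

```python
# Rough sanity check of the "7 times" figure, using the 2021 unit numbers cited above.
console_units_m = {"PS5": 17, "Xbox": 9, "Switch": 8}   # millions of units sold
iphone_units_m = 240                                     # millions of iPhones shipped

consoles_total = sum(console_units_m.values())           # 34 million
ratio = iphone_units_m / consoles_total                  # ~7.1

print(f"Consoles combined: {consoles_total}M units")
print(f"iPhones shipped:   {iphone_units_m}M units (~{ratio:.1f}x the consoles combined)")
```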

Apple’s technological strategy has been to use iGPUs on their phones’ SoCs, along with Metal 3’s graphical features, upscaling among them. Apple has extended this strategy to their other computers, both to their laptops and to their small form factor desktop computers.

With Apple having found so much success in the console gaming market and iGPUs, it is unlikely they will devote R&D to dedicated GPUs and traditional high-end hardware.
 

sam_dean

Suspended
Sep 9, 2022
1,262
1,091
I think you're right. I can't believe Nvidia thinks the market is moving towards $1500 GPUs! For those who buy their own dedicated GPUs, Jay Langevin from JayzTwoCents argues that most people are buying $200-$300 graphics cards. But as you said, Apple thinks iGPUs are the future. And there's good reason.

Apple is the largest gaming console maker in America today. In 2021, Sony sold 17 million PlayStation 5’s, Microsoft sold 9 million Xboxes, and Nintendo sold 8 million Switches. That same year, Apple shipped more than 240 million iPhones, 7 times more than the other console makers combined.

Apple’s technological strategy has been to use iGPUs on their phones’ SoCs, along with Metal 3’s graphical features, upscaling among them. Apple has extended this strategy to their other computers, both to their laptops and to their small form factor desktop computers.

With Apple having found so much success in the console gaming market and iGPUs, it is unlikely they will devote R&D to dedicated GPUs and traditional high-end hardware.
Apple thinks there should be an iPhone that surpasses the Pro Max.


Nearly 100% of all PS5s, Xboxes and Nintendo Switches are used to run games.

That cannot be said of all 240 million plus iPhones, as not that large a percentage of their owners play any games.

Within the PC market, dGPUs are an outcome of modularization. Intel/AMD had to create products that are general purpose. Any specialization like a dGPU is only commercially viable as an add-on.

Apple is a systems maker so they can design to their exacting specifications.

At the time the M1 came out in Nov 2020, it had the most powerful iGPU of any laptop/desktop chip.

When the M1 Ultra came out in 2022, its iGPU was comparable to an RTX 3090 while drawing <200W.

Many here were not expecting, and even refused to consider, that any iGPU would reach the performance of a dGPU. I do not blame them; it had never been done before, and it would not work outside of Apple's ecosystem.

But if they had looked closely enough at the 2019 and 2020 iPad Pro prior to the M1's release, they'd have known this was inevitable.

We just did not know where to look.

That rumor of Mac chips coming out annually in sync with iPhone chips makes all the sense in the world.

Apple failed to execute largely because of supply chain challenges caused by COVID.

If that were not the case then we'd have been on a 5nm M3 last Oct/Nov.

Looking back... we should have gotten our 5nm M2 on Oct/Nov 2021.

I welcome this improvement but it would sadly impact my iMac 27" replacement.
 

257Loner

macrumors 6502
Dec 3, 2022
456
635
When the M1 Ultra came out in 2022, its iGPU was comparable to an RTX 3090 while drawing <200W.
I had hoped you would read my reply to this here: https://forums.macrumors.com/thread...ly.2379887/page-3?post=31948519#post-31948519

Nearly 100% of all PS5s, Xboxes and Nintendo Switches are used to run games.

That cannot be said of all 240 million plus iPhones, as not that large a percentage of their owners play any games.
According to the Times of India, Apple Arcade is already making Apple more money than Microsoft and Nintendo's gaming businesses:
 

sam_dean

Suspended
Sep 9, 2022
1,262
1,091
I hoped you would have read my reply to this here: https://forums.macrumors.com/thread...ly.2379887/page-3?post=31948519#post-31948519


According to the Times of India, Apple Arcade is already making Apple more money than Microsoft and Nintendo's gaming businesses:
I said it was comparable and equivalent, as I am aware of Apple's word choices, which make it appear superior to the RTX's raw performance.

Apple raked in more profits from games than Xbox maker Microsoft Corp., gaming giants Nintendo Co. and Activision Blizzard Inc. and PlayStation maker Sony Corp.—combined—in its fiscal year 2019, according to a Wall Street Journal analysis of figures released as part of the company’s recent antitrust trial.

When you referenced the units sold of PS5, Xbox & Nintendo as compared to iPhones, you were implying that all those iPhones were used for games. I provided context on the use cases of those video game consoles: they are nearly 100% used to play games.

That is not the same with iPhones or even Android.

In the same sense, PC shipments were over 200 million last year; that does not mean nearly 100% of them will be used for games. A better metric would be dGPU shipments, but that isn't a good one either due to crypto.

What the WSJ and the Times of India are saying is that Apple's game software revenue outperformed that of any other company mentioned in said articles.

When having any discussions I use very specific words to provide precise meaning.
 

257Loner

macrumors 6502
Dec 3, 2022
456
635
I said it was comparable and equivalent, as I am aware of Apple's word choices, which make it appear superior to the RTX's raw performance.

Apple raked in more profits from games than Xbox maker Microsoft Corp., gaming giants Nintendo Co. and Activision Blizzard Inc. and PlayStation maker Sony Corp.—combined—in its fiscal year 2019, according to a Wall Street Journal analysis of figures released as part of the company’s recent antitrust trial.

When you referenced the units sold of PS5, Xbox & Nintendo as compared to iPhones, you were implying that all those iPhones were used for games. I provided context on the use cases of those video game consoles: they are nearly 100% used to play games.

That is not the same with iPhones or even Android.

In the same sense, PC shipments were over 200 million last year; that does not mean nearly 100% of them will be used for games. A better metric would be dGPU shipments, but that isn't a good one either due to crypto.

What the WSJ and the Times of India are saying is that Apple's game software revenue outperformed that of any other company mentioned in said articles.

When having any discussions I use very specific words to provide precise meaning.
That's fine. And I'm saying iGPUs are taking over because of the smartphone gaming industry, which is already huge and will soon eclipse older forms of electronic entertainment in popularity. That doesn't mean iGPUs are equivalent to dGPUs in power, but in smartphones they are more necessary to save battery power, and because of smartphones there will be more of them.
 
  • Like
Reactions: sam_dean

sam_dean

Suspended
Sep 9, 2022
1,262
1,091
That's fine. And I'm saying iGPUs are taking over because of the smartphone gaming industry, which is already huge and will soon eclipse older forms of electronic entertainment in popularity.

Agree

That doesn't mean iGPUs are equivalent to dGPUs in power

In raw performance, Apple proved that it is possible to design an SoC that matches an RTX dGPU while offering better performance per watt. It is just that Apple's business model allows for it.

Apple is a systems vendor, meaning they sell the finished product, not just the processors. So they can use several parts of the vertical process to subsidize others. In this case, Apple can afford to make very good SoCs with a best-in-class iGPU because they don't sell those chips elsewhere, meaning they are not as pressured to make them "cheap" in terms of area, for example, since they're going to recoup the profit from elsewhere in the product.

In contrast, AMD and Intel sell their processors to OEMs, so they only profit from the processor, not the finished system. They have to prioritize cost by optimizing their designs for area first and then focusing on power. This is why both AMD and Intel use smaller cores, which allows for smaller dies, but which then have to be clocked faster in order to compete on performance; unfortunately that also increases power.

This is probably the key difference: Apple can afford the larger design that is more power efficient for the same performance, whereas AMD/Intel have to aim for the smaller design that is less power efficient for the same performance.

Notice how Apple focuses their marketing on core counts rather than clock speeds.

Apple uses their larger/more complex cores to their advantage by running them at a slower clock rate while letting them do more work per clock cycle. This allows them to operate at the frequency/power sweet spot for their process. One has to note that power consumption increases significantly (well beyond linear) at higher frequencies.
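
To illustrate that last point, here's a minimal sketch of the usual dynamic-power relationship (P ≈ C·V²·f, with voltage rising as you chase frequency). The V/f numbers below are made-up illustrations, not measured Apple figures:

```python
# Minimal sketch of why power rises super-linearly with clock speed.
# Dynamic power of CMOS logic is roughly P = C * V^2 * f, and reaching a higher
# frequency generally requires a higher supply voltage, so P grows faster than f.
# All numbers below are illustrative assumptions, not measured Apple figures.

def dynamic_power(freq_ghz: float, base_freq: float = 3.0, base_voltage: float = 0.9,
                  volts_per_ghz: float = 0.1, capacitance: float = 1.0) -> float:
    """Relative dynamic power for a core clocked at freq_ghz."""
    voltage = base_voltage + volts_per_ghz * (freq_ghz - base_freq)  # crude V/f curve
    return capacitance * voltage**2 * freq_ghz

base = dynamic_power(3.0)
for f in (3.0, 3.5, 4.0, 4.5):
    p = dynamic_power(f)
    print(f"{f:.1f} GHz -> {f/3.0:.2f}x clock, {p/base:.2f}x power")
# Under these assumptions, ~1.5x the clock costs roughly 2x the power.
```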
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
In raw performance, Apple proved that it is possible to design an SoC that matches an RTX dGPU while offering better performance per watt. It is just that Apple's business model allows for it.
I have speculated previously that Apple could also go the route of having the CPU/NPU/ME cores in one die and the GPU cores in another die, and tie both together with a variant of their UltraFusion interconnect. This should allow them to go wild in the core count department to hit their performance targets. With the better GPU cores in the M2, Apple may be able to claim a win on the GPU front. They could also clock the CPU cores higher and pack more of them, and maybe claim a win on the CPU front as well.

With both dies further apart, it's probably easier to cool as well, with each die having their own memory controllers.
 

sam_dean

Suspended
Sep 9, 2022
1,262
1,091
I have speculated previously that Apple could also go the route of having the CPU/NPU/ME cores in one die and the GPU cores in another die, and tie both together with a variant of their UltraFusion interconnect. This should allow them to go wild in the core count department to hit their performance targets. With the better GPU cores in the M2, Apple may be able to claim a win on the GPU front. They could also clock the CPU cores higher and pack more of them, and maybe claim a win on the CPU front as well.

With both dies further apart, it's probably easier to cool as well, with each die having their own memory controllers.
UltraFusion is an interconnection technology with a high-speed interface that enables hardware components to be interconnected with one another. The interconnected components fundamentally work as a single hardware component.

Apple specifically designed the M1 Max chip with a silicon interposer as part of its overall chip packaging. Note that packaging represents a stage in chip fabrication that involves mounting and interconnecting integrated circuits and other components, and encapsulating a semiconductor component or an entire chip into a protective enclosure.

On the other hand, an interposer is a bridge or conduit that enables electric signals to pass through and onto another destination. The interposer used by Apple allows two M1 Max chips to connect over 10000 signals and maintain an interprocessor bandwidth of 2.5TB/s.

UltraFusion is basically an implementation of silicon interposer technology and chip packaging techniques to create an interprocessor interconnection technology. Apple implemented these using one of the advanced packaging technologies developed and deployed by TSMC called chip-on-wafer-on-substrate with silicon interposer or CoWoS-S.

Source: https://www.profolus.com/topics/apple-ultrafusion-technology-explained-what-and-how/
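
Taking the article's figures at face value, the per-signal bandwidth works out roughly like this (back-of-the-envelope, assuming the 2.5TB/s and 10,000 signals are aggregate numbers):

```python
# Back-of-the-envelope: what 2.5 TB/s over ~10,000 signals implies per signal.
total_bandwidth_tb_s = 2.5          # TB/s, as quoted for UltraFusion
signal_count = 10_000               # connections quoted for the interposer

bytes_per_signal = total_bandwidth_tb_s * 1e12 / signal_count    # bytes/s per signal
gbit_per_signal = bytes_per_signal * 8 / 1e9                     # Gbit/s per signal

print(f"~{bytes_per_signal/1e6:.0f} MB/s per signal (~{gbit_per_signal:.0f} Gbit/s each)")
# -> roughly 250 MB/s, i.e. about 2 Gbit/s per signal under these assumptions.
```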

I am unsure if UltraFusion will work with a non-identical SoC as you propose. For simplicity Apple just doubles the M1 Max chip to create an M1 Ultra chip.

As designed by Apple, increasing clock speed significantly increases power consumption without a linear increase in raw performance.

Moving them apart may nullify UltraFusion, as the current implementation has the dies adjacent to each other; separating them would increase material/component cost and power consumption overhead.

The solution for all the above is always being at the next process node before anyone else, and after that, optimization.

Synchronizing Mac chips to the iPhone chip generation will help increase economies of scale and simplify and reduce supply chain and fab movements.
 

senttoschool

macrumors 68030
Original poster
Nov 2, 2017
2,626
5,482
The most interesting thing I read in that article is that the Apple Silicon team is eager to "seed" Macs with more powerful GPUs and a more sophisticated Metal API as soon as possible to entice game developers to sell their games on the Mac. Now that Apple is in charge of what kind of iGPU finds its way into a Mac and not Intel, whether or not the Mac becomes a gaming platform will be their responsibility going forward.
You might be interested in this thread that I started years ago: https://forums.macrumors.com/thread...le-of-playing-aaa-games-will-be-macs.2275962/

My arguments were exactly the same as in the article.
 

sam_dean

Suspended
Sep 9, 2022
1,262
1,091
You might be interested in this thread that I started years ago: https://forums.macrumors.com/thread...le-of-playing-aaa-games-will-be-macs.2275962/

My arguments were exactly the same as in the article.
When I read other threads about how "wrong" Apple was to move Macs from Intel to Apple Silicon, the counterarguments assume that tech stopped advancing and that new or improved methods are not among the factors in Apple's success.

Not to mention changing consumer behavior and use case.

This especially applies to any thread that concerns the Mac Pro.

It is akin to mainframe users putting down PCs because of their physical size, and because the workflow they have grown accustomed to would have to change, even when the newcomer offers better performance per $ or performance per watt.

When anyone talks about performance per watt, they instantly get told that power consumption is not a factor, especially with desktops.

Truth is, with improved performance per watt, future replacements for the gaming PCs they have now will run cooler, be smaller and be faster, all at a lower price point. Lower power bills are just a bonus.

Compare casual games on 90s PCs vs casual games on smartphones over the last decade. They're largely the same, but the cost to purchase and operate has improved.

The same will happen to the PC Master Race types. Eventually the gaming PC will go the way of the mainframe.

Look at where mainframes are now. Only legacy systems owned by old-economy companies from before Y2K still use them, out of fear of disrupting day-to-day operations and losing nearly a century of data. Those mainframes probably have the same computational power as last year's iPhone.

Look at the posts on your thread claiming an iGPU can never be fast enough to play games.

Or that an iGPU can never have dGPU performance.

All their doubts were proven wrong within a year or 2.

Largely because they are trapped in the business model of parts makers rather than systems makers.

From 2006-2012 Apple observed that >50% of Mac Pro users rarely if ever added to or modified their PCIe expansion slots.

So they developed these Pro desktops without PCIe expansion slots to lower material and manufacturing costs for I/O that was unwanted by >50% of users:

- 2013 Mac Pro
- 2017 iMac Pro
- 2022 Mac Studio

This is not to say that the <50% of Mac Pro users who always add to or modify their PCIe expansion slots have invalid use cases. It simply means that Apple produced products that addressed the majority of users' needs.

That's why, after the 2012 Mac Pro, they produced the 2019 Mac Pro with all the PCIe expansion slots intact. As that model approaches four years old this December, will we see a Mac Pro with Apple Silicon in 2023? Odds are yes, but I am unsure if it will have expandable LPDDR5 SODIMMs or M.2 SSDs.

Give it a decade or two, and odds are the Mac Pro will be retired while the Mac Studio will still be with us.

There are more economical ways to do things that use less time, money and effort.

It is the equivalent of transitioning from the horse-drawn carriage to a Ford Model T. Many will protest that the horse eats grass and not noxious gasoline/kerosene/diesel, but today no one wants to deal with horse sh*t.
 

sam_dean

Suspended
Sep 9, 2022
1,262
1,091
Do you have a link to confirm this? Is GPU performance linearly scalable?

I was speaking of an iGPU having dGPU raw performance. No one had ever done it before, and many countered that it was technically impossible.

What limits it isn't technical but business and commercial reasons. What is the market for an SoC combining a Threadripper Pro with an RTX 4090? Would PC Master Race types buy that? No parts maker like Intel/AMD/Nvidia would have the volume to make it cost effective, as the PC market prefers extreme modularization.

A system maker does.

The 2022 Mac Studio with M1 Ultra was compared to an RTX 3090.

For the system, it is rated at <415W but measures much lower in practice.

For the dGPU it draws 355-365W while gaming and occasionally spikes to 464W.

Such a difference in power consumption gives the Mac Studio (system) superior performance per watt compared to the RTX 3090 (dGPU).

As for the M1 Ultra being superior to the RTX 3090 in raw performance, without regard for power consumption or thermal output, you have to pick your benchmarks and apps to fit your position. ;-)
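
For what it's worth, here's how the performance-per-watt math shakes out with those power figures. The benchmark scores below are placeholders, not real results; plug in numbers from whatever benchmark you trust:

```python
# Hedged sketch of the performance-per-watt comparison using the power figures above.
# The "score" values are hypothetical placeholders; substitute results from whichever
# benchmark you consider representative.

def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

m1_ultra_score, m1_ultra_watts = 100.0, 200.0   # placeholder score; ~<200W GPU draw as claimed above
rtx_3090_score, rtx_3090_watts = 120.0, 360.0   # placeholder score; 355-365W typical gaming draw

m1 = perf_per_watt(m1_ultra_score, m1_ultra_watts)
rtx = perf_per_watt(rtx_3090_score, rtx_3090_watts)
print(f"M1 Ultra: {m1:.2f} points/W, RTX 3090: {rtx:.2f} points/W")
print(f"Even giving the 3090 a ~20% higher score, the M1 Ultra is ~{m1/rtx:.1f}x ahead per watt.")
```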

If any Apple Silicon chip gets overclocked, performance does not improve linearly. To get more performance at the same performance per watt would instead require more transistors/cores/engines/etc., or even a process node improvement.

This is why Apple is looking for ways to increase their utilization of the next process node by trying to put all their SoCs on the same generation as the latest iPhone chip. This improves economies of scale.

Examples of this would be:

- M1/M2 chips, originally in Macs, finding their way into the iPad Air & iPad Pro

- iPhone chips finding their way into iPads, Apple TV, iPod touch, Studio Display, etc.

So say the next iPhone moves to a 3nm A17 Bionic chip with improvements to:

- Max CPU clock rate
- L2 cache
- Architecture
- Instruction Set
- Transistor count
- Efficiency core count & performance
- Performance core count & performance
- GPU core count & performance
- Raw performance
- Performance per watt
- Power consumption

It all trickles up to the M3/Pro/Max/Ultra/Extreme within 11-13 months.

Then repeat in an infinite loop.

If COVID had not occurred in 2019, then we would have been on 2022's iPhone chip tech, the A16 Bionic, with M3 (not M2) chips since Oct/Nov 2022. M2 chips would have started Oct/Nov 2021 instead of some time in 2022.

The M4 would be released Oct/Nov 2023, based on the 2023 A17 Bionic for the 2023 iPhone 15 Pro.

So long as the TDP and SoC package comply with the smaller device's tech requirements, they will be repurposed for a new form factor.

That's economies of scale catering to Apple's internal requirements. It's the reason Apple bought Intel's smartphone modem division, so they could R&D their own 5G modem. They can leverage the 3nm process for that, for lower power consumption.

When successful, that means no longer paying as much in licensing/royalty fees to Qualcomm/Broadcom. IIRC the fees are based on the MSRP of the device the modem is installed into. That's why 5G pocket WiFis are cheap but modems built into laptops cost way more.

Before 2008, Apple spent R&D money on a MBP with a built-in 3G modem. Apple's compromise led to the iPhone being the 3G/4G/5G modem for all Macs, iPads and other devices.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
  • Like
Reactions: T'hain Esh Kelch

sam_dean

Suspended
Sep 9, 2022
1,262
1,091
Well, my point is that Apple has not shown their hand on how they will scale their GPU solutions for the Mac Pro, which was the question posed.
They may offer an M2 Extreme with four M2 Max dies or an M2 Ultra with two M2 Max dies.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
2022 Mac Studio M1 Ultra was compared to a RTX 3090.

For the system it uses <415W but was measured much lower.

For the dGPU it draws 355-365W while gaming and occasionally spikes to 464W.

Such a difference in power consumption gives the Mac Studio (system) superior performance per watt compared to the RTX 3090 (dGPU).

As to the M1 Ultra being superior to the RTX 3090 in terms of raw performance without regard of power consumption or thermal output then you have to pick your benchmarks and apps to fit your position. ;-)
Any comparison with Nvidia's current GPU? RTX 30 is at least a node and a half behind M1.
 

sam_dean

Suspended
Sep 9, 2022
1,262
1,091
Any comparison with Nvidia's current GPU? RTX 30 is at least a node and a half behind M1.
I would not bother until an M2 Ultra or M2 Extreme appears.

Why compare an M2 Max 12-core CPU to an Intel Core i9 24-core CPU?
 

Confused-User

macrumors 6502a
Oct 14, 2014
850
984
UltraFusion is an interconnection technology with a high-speed interface that enables hardware components to be interconnected with one another. The interconnected components fundamentally work as a single hardware component.

Apple specifically designed the M1 Max chip with a silicon interposer as part of its overall chip packaging. Note that packaging represents a stage in chip fabrication that involves mounting and interconnecting integrated circuits and other components, and encapsulating a semiconductor component or an entire chip into a protective enclosure.

On the other hand, an interposer is a bridge or conduit that enables electric signals to pass through and onto another destination. The interposer used by Apple allows two M1 Max chips to connect over 10000 signals and maintain an interprocessor bandwidth of 2.5TB/s.

UltraFusion is basically an implementation of silicon interposer technology and chip packaging techniques to create an interprocessor interconnection technology. Apple implemented these using one of the advanced packaging technologies developed and deployed by TSMC called chip-on-wafer-on-substrate with silicon interposer or CoWoS-S.

Source: https://www.profolus.com/topics/apple-ultrafusion-technology-explained-what-and-how/
Unfortunately, some of this is just nonsense, while some was once thought to be true, but we know better now.

The Max was not designed "with a silicon interposer". It was designed with the "ultrafusion" crossconnect for use in the Ultra, but that is just part of the die. The *Ultra* is designed with an interposer, but it does not use any form of CoWoS. As discussed here at length a couple of weeks ago, it uses InFO-LI. See for example https://www.tomshardware.com/news/tsmc-clarifies-apple-ultrafusion-chip-to-chip-interconnect

I am unsure if UltraFusion will work with a non-identical SoC as you propose.
Of course it will. There is nothing magic about that interconnect (though it's extraordinary engineering).

Now, whether that would produce a better result than what we have currently with the Ultra is an open question and not one likely to be answered easily. I'm going to bet that it would beat the M1 Ultra, but that's only because scaling in the Ultra, especially for the GPU, is very poor in most applications. I expect the next Ultra (probably an M3, not M2, but who knows, and anyway we hashed that out ad nauseam in another thread) will do drastically better, and that moves the ball back to midcourt again.

Note that at the very least, doing this would make the hardware decidedly NUMA, which... might be OK. Or maybe not. But despite many claims by the ignorant, it would *not* necessarily break the "unified memory" model. You could still have all cores (CPU/GPU/NPU/etc) have first-class access to all memory lanes. You'd just have to pay a latency penalty for some memory and not others. Which is, after all, already true on the Ultra.

As designed by Apple increasing in clockspeed will significantly increase power consumption without linearly increasing raw performance.
This has nothing to do with "As designed by Apple". It's physics, and it applies to all chips by all designers.

[...]The solution for all the above is always being at the next process node before anyone else. After which optimization.
That's not a solution. That's a business advantage, and a very valuable one, but it doesn't solve the issues you're talking about. It pushes the problem out a ways because you can get more transistors in the same area, but it doesn't eliminate it.

Synchronizing Mac chips to iPhone chip generation will help increase economies of scale and simplifying and reducing supply chain and fab movements.
That is true, and part of why I said earlier that I suspect that Apple is trying to move to once-a-year for the Mac line. It may well not rise to the level of "overwhelming advantage", which is why I wouldn't bet too much on my prediction.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
Note that at the very least, doing this would make the hardware decidedly NUMA, which... might be OK. Or maybe not. But despite many claims by the ignorant, it would *not* necessarily break the "unified memory" model. You could still have all cores (CPU/GPU/NPU/etc) have first-class access to all memory lanes. You'd just have to pay a latency penalty for some memory and not other. Which is, after all, already true on the Ultra.
Well, one can say that the M1/M2 Pro & Max are already "micro"-NUMAish, seeing that they have 2/4 memory controllers independently controlling 2/4 different banks of LPDDR memory providing data to the various IP cores.

Splitting the GPU cores from the rest of the IP cores into different dies does not make it any different from putting them in the same die, other than accounting for more signal propagation delays between the dies.

What is important to Apple, IMHO, is that macOS does not need a NUMA overhaul.
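
To put a toy number on that latency penalty (all latencies below are invented placeholders, not measurements of any Apple part):

```python
# Toy model of NUMA-style average memory latency on a two-die package.
# The latency numbers are invented placeholders, not measurements of any Apple part.

LOCAL_NS = 100.0    # hypothetical latency to memory behind the local die's controllers
REMOTE_NS = 130.0   # hypothetical latency when a request crosses the die-to-die link

def avg_latency_ns(remote_fraction: float) -> float:
    """Average latency when remote_fraction of accesses land on the other die's memory."""
    return (1 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS

for frac in (0.0, 0.25, 0.5):
    print(f"{frac:.0%} remote accesses -> {avg_latency_ns(frac):.0f} ns average")
# Unified memory still holds: every core can reach every address; only the cost differs.
```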
 
  • Like
Reactions: bcortens

sam_dean

Suspended
Sep 9, 2022
1,262
1,091
Why not compare the efficiency between Nvidia's RTX 40 and M2 Max/Pro? Do you expect M2 Ultra to be more efficient than M2 Max?
I am unaware of Apple or Nvidia making such comparisons between an RTX 40 and an M2 Max, much less a Pro.

Until such time I leave it to you to furnish all pertinent benchmarks. :)
 
  • Like
Reactions: satcomer

scottrichardson

macrumors 6502a
Jul 10, 2007
716
293
Ulladulla, NSW Australia
5nm and future 3nm process nodes will reduce thermal output. A future Ultra chip will become cool enough not to need such a beefy HSF to meet silent PC requirements.

When you think about it, 3nm M3-series chips will likely have a bunch more cores given the extra die area available within the same power constraints, as everything will be smaller. So it's fair to say that an M3 Max could very well end up having the core count of an M1 Ultra, all while using the same power as an M2 Max.
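
For fun, here's a crude way to bound that kind of guess. The numbers are my own assumptions: a ~1.6x logic-density gain from N5 to N3 is TSMC's marketing ballpark, SRAM shrinks far less, and real designs spend area on more than GPU cores:

```python
# Crude speculation aid: how many GPU cores could fit if density gains were reinvested
# in more cores at roughly constant die area and power. The density factor is a rough
# marketing ballpark (and SRAM barely shrinks), so treat this as a bound, not a prediction.

m2_max_gpu_cores = 38          # shipping M2 Max top configuration (5nm-family)
logic_density_gain = 1.6       # assumed N5 -> N3 logic density improvement (ballpark)
reinvestment_fraction = 0.5    # assume only half the freed-up area goes to extra GPU cores

upper_bound = m2_max_gpu_cores * logic_density_gain
more_conservative = m2_max_gpu_cores * (1 + (logic_density_gain - 1) * reinvestment_fraction)

print(f"If every saved transistor became a GPU core: ~{upper_bound:.0f} cores")
print(f"With only half the gain reinvested:          ~{more_conservative:.0f} cores")
```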

And just for fun, here’s my M3 series chip speculation:

M3 - 12 cores, 3.78 GHz
6P + 6E cores / 16 GPU cores / 32GB max RAM

M3 Pro - 16 cores, 3.78 GHz
10P + 6E cores / 24 GPU cores w/ ray tracing / 48GB max RAM

M3 Max - 16 cores, 3.89 GHz
10P + 6E cores / 48 GPU cores w/ ray tracing / 144GB max RAM

M3 Ultra - 32 cores, 3.98 GHz
20P + 12E cores / 96 GPU cores w/ ray tracing / 288GB max RAM
 
  • Like
Reactions: sam_dean