
soggywulf

macrumors 6502
May 24, 2003
319
0
Originally posted by Rower_CPU
This community, on the whole, is quite "zealot-free". :)

That's true, I have to say this board is usually pretty level-headed relative to others.
 

ddtlm

macrumors 65816
Aug 20, 2001
1,184
0
It just occurred to me that the G5 has about the same upgradability as a Shuttle cube; it takes more RAM and more PCI cards but nothing else. Hmmm. This leads me to think it's too bad the single-CPU G5s aren't in a smaller form factor, more the size and layout of the Shuttle (not Apple) cube. That would be a machine I could get excited about!
 

wizard

macrumors 68040
May 29, 2003
3,854
571
Originally posted by The Shadow
Well Dave, you were the one claiming that the G5 was too expensive compared to PCs, yet no one in this forum has come up with a PC cheaper than the DP G5 Mac! Since this price/performance was your initial main concern, a retraction would be nice.;)

I'm not surprised that no one has come up with a high-performance PC that is competitive with the G5; apparently many have not left the event horizon of the WWDC. There is no need for a retraction; if you can't find competitive machines then that is your problem, not mine. There are advertised Opteron systems for as little as $1390.

You can't design a machine that suits everyone. The G5 is aimed mainly at graphics, video and music pros and scientists. Nearly everything they do that is time-consuming involves floating point and requires a lot of bandwidth. These are the G5's strengths, and where it will massively outperform the "old technology" you speak of. By the way, Intel doesn't call it that. They've just released a 3.2 chip. Funny, that!;)
I agree with your statement here, about suitability, 100%, but I've already acknowledged the G5's floating point performance as being pretty good. So in effect you have verified what I've been saying for some time in this thread. Remember, I've been referencing Apple's released performance information and other material floating around to make the statements I've made.

A speed-bumped P4 is not a new processor architecture by any measure. Everyone agrees that the G5 is a new and innovative processor architecture, and similar fresh designs are coming onto the market in the x86 world. The 970, the Opteron, and whatever Intel is about to introduce are processors of the same generation and should be compared together.

For the market and real-world apps the G5 is aimed at, they probably have more than parity for the short term. But you're missing the point - Apple only needs parity. There are other reasons to use a Mac. And at the moment, one of those is price, which seems to have taken many by surprise.
Well, this is the problem: Apple is marketing this unit as a general-purpose personal computer. Yes, this machine will be a beast for float work, but for other applications it simply achieves parity. Yes, I agree that they need parity and better. I still find the price high and the configurations low-end. You would think that the high-end machine would have a high-end graphics adapter in the base configuration.

Generally speaking, if 2 systems had parity, including the apps you want them to run, who in their right mind would choose a PC? IMHO
I would rephrase that to: who in their right mind would choose a Microsoft operating system? Without a doubt Panther is a compelling operating system and is a much bigger factor in my G5 interest than the G5 itself. My hope is that it is compelling enough that businesses and professionals pick it up; Apple's existing user base is simply too small.
:p

Thanks anyway for your contribution; we can choose to disagree. It has been an entertaining debate.

Best Regards,

TS

I'm not even sure we disagree on that much, TS. The only thing I'm concerned about is that people think Apple has an overwhelming lead in performance; this is not the case. Apple's own data is enough to support that opinion. What has not been factored into the equation is the performance of x86 processors that are of the same generation as the G5 (that is, currently being introduced into the market, as is the G5). Performance can be many things to many people, but wild claims about processor performance really need to be moderated, at least until a bigger picture is developed.

Dave
 

wizard

macrumors 68040
May 29, 2003
3,854
571
Except for some stated concerns, I believe that pretty well sums up the machines. The low-end machine is grossly overpriced or under-configured, depending on how you look at it. I would be surprised if they sell many of these.

The high-end machine is a bit of a mystery also. Between the limited expansion capability and the fact that high-end users will have to spend a lot of money right off the bat to bring it up to professional standards, it is confusing to say the least.

Yes, the data is thin right now, but Apple does have very good documentation on how the SPEC tests were done. Careful reading of those tests should temper one's opinion of the machine; some seem to perceive these machines as the ultimate in performance, but that is not justified. At least not based on publicly available info.

I see it this way: the G5 will probably whip 99.9% of the PCs out in the wild right now. To that I can say FINALLY!!!!!! How it will do in September against x86 machines also available then is another story, which cannot be fully written yet.


Thanks
Dave

Originally posted by soggywulf
Indeed it would. :(

Personally I think the G5s are pretty good. I'm a bit curious as to why they didn't adopt the 256 MB version of the 9800, but I suppose that's a minor issue (until you start antialiasing :( ). The prices are a little disappointing, but that is just one part of price/perf. As for performance, I don't think we have the data to come to any conclusions yet. All we have is a few app tests done on-stage, hardly what I would call an objective comparison. I think we will really have to wait until the machines become available and get tested independently against new PCs available in August (or whenever the G5s actually arrive), on a wider range of apps. My prediction is that the G5s will be slightly behind price/perf-wise, but well within the value difference represented by OS X. Which would be great news. :)

And because Man and Woman do not live on bread alone...I think the G5 actually looks pretty good. It has a sophisticated, understated air. And the tremendously free-flowing ventilation system gives me a warm fuzzy feeling. The lack of expandability is disappointing, but then again messing up that great airflow with a bunch of clunky HDs would be a shame. No matter, I can get a spare ATX case cheap and stick the extra HDs there. :)
 

Rower_CPU

Moderator emeritus
Oct 5, 2001
11,219
2
San Diego, CA
Originally posted by wizard
Yes, the data is thin right now, but Apple does have very good documentation on how the SPEC tests were done. Careful reading of those tests should temper one's opinion of the machine; some seem to perceive these machines as the ultimate in performance, but that is not justified. At least not based on publicly available info.

There are reports that contradict this assertion.

http://macedition.com/soup/soup_20030629.php
 

wizard

macrumors 68040
May 29, 2003
3,854
571
I really don't know how that piece of garbage contradicts anything I've said. Some assertions in the referenced article are so far out in left field as to be worthless. The claim that "SSE2 was engineered to artificially inflate SPECint scores, or that SPECint was engineered to let SSE2 artificially inflate Intel's scores" is uninformed. This and other problems completely invalidate the message the author was trying to deliver. Who the intended audience was I don't know, but it certainly was not people with even a modest understanding of SPEC and microprocessors.

I really don't understand why people seem to be missing one very important point: my assertions are based on Apple's information. The deduction is parity performance with Intel technology, plus rather good floating point performance. This is not an insult to Apple engineering, but just a reality of the implementation. To dwell on it is to ignore the positive aspects of the machine, one of which is hitting 2.5GHz or faster in very short order, possibly before Christmas.

Let's face it; what is really needed is some time in user hands with this hardware, running applications in a normal environment. I believe that Apple made an honest effort to do the SPEC scores right; I do not have the same confidence in the bake-offs. Jobs is foremost a marketer; the demos would be constructed in such a way as to facilitate the marketing program, so they are suspect. You can be sure that some of the demos were chosen for the deltas they had over the PC hardware. It is better to calm wild expectations until hard evidence can be compiled.

My guess from the currently available information is that the G5 will perform about the same as, or slightly better than, the most recent P4 implementations for single-processor machines running non-floating-point code. They will run dramatically faster if the application makes heavy use of floating point math and AltiVec code, often but not always being better. Panther may tilt the results even more in Apple's direction. Now, it could be that SPEC is totally useless when it comes to quantifying the 970's performance, but I don't see reason to believe in a giant error.

One interesting tidbit is that the dual-processor machines throw some of this out the window. When running SMP it looks like Apple can make up some performance advantage. A sign of a good hardware implementation possibly, but then Apple has had excellent SMP support in the OS for some time now.

At this point I'd rather be wrong and be rewarded with a much faster machine than expected, than get my hopes up and have them crushed when experience with the real hardware is gained.

Thanks
Dave


Originally posted by Rower_CPU
There are reports that contradict this assertion.

http://macedition.com/soup/soup_20030629.php
 

Rower_CPU

Moderator emeritus
Oct 5, 2001
11,219
2
San Diego, CA
No offense, but I'd take the opinion of someone writing for a reputable website over some random person on a discussion forum. Calling it a "piece of garbage" is nothing short of inflammatory.

As for your SSE2 comments, let's put them in context:
And finally, SSE2, the Pentium’s sad and sorry little answer to the VMX/Altivec/Velocity Engine SIMD unit on the PowerPC, was enabled for the test. In GCC 3.3, the SSE flag covers both SSE and SSE2, and turning it on increases the Integer scores dramatically.

Which is probably cheating.

Apple didn’t compile with Altivec turned on... that would require the altivec and force_cpusubtype_all flags to be set... presumably because it wouldn’t help their scores since SPEC isn’t coded with AltiVec extensions. This means that either SSE2 was engineered to artificially inflate SPECint scores, or that SPECint was engineered to let SSE2 artificially inflate Intel’s scores. Either way, it smells really, really fishy. But we’ll give ’em the benefit of the doubt, and the edge in raw integer, as (contrary to “spl’s” protestations otherwise) almost nothing is dependent on raw integer performance these days. It’s all floating point math and SIMD vector performance... two areas where the G5 is untouchable.

Why does SSE2 inflate SPEC scores so substantially if not for the benefit of SPEC? Since you are positioning yourself as someone with a "modest understanding of SPEC and microprocessors" clue us in.

Writing off the rest of the author's claims wholesale is an easy out and fails to address the issues he takes with the article posted at Haxial and its apparently shoddy "investigative journalism".

And finally (as I've been saying from the start), I agree that all this debate is silly until we can evaluate real-world performance when the G5s are available for purchase, and compare that to currently available PCs. Saying, with any certainty, that the G5s will underperform PCs in August is just as ridiculous as saying they will outperform them. Not a single person here knows, or can know, what will happen then (unless they are psychic), so let's put the bickering aside and stop predicting doom and gloom for Apple when you have nothing of substance on which to base your claims.
 

ddtlm

macrumors 65816
Aug 20, 2001
1,184
0
wizard:

A speed-bumped P4 is not a new processor architecture by any measure. Everyone agrees that the G5 is a new and innovative processor architecture, and similar fresh designs are coming onto the market in the x86 world. The 970, the Opteron, and whatever Intel is about to introduce are processors of the same generation and should be compared together.
You are painting with way too broad a brush here. I assert that:

1) The P4 is newer than you believe. The current P4 is probably as different from the original P4 as the Opteron is from a modern Athlon. The whole layout was redone and optimized, and Hyper-Threading was added. This is far more than a speed bump. It is probably the same scope of change that we will see again when Intel moves to the next P4/P5.
2) The PPC970 is older than you believe. It is thought to be very closely related to the Power4, and I'm unaware of anyone credible who has claimed otherwise. I'm not sure of the exact launch date, but the Power4 is probably nearly as old as the current P4 design.

You would think that the high-end machine would have a high-end graphics adapter in the base configuration.
This is one of the more repetitive and silly complaints I've heard. A lot of G5 purchasers will have no use for a high-end gaming card, or even a high-end "workstation card", which is of course just a rebranded gaming card with optimized drivers, optimized tie-ins to special software (which OS X lacks) and sometimes a few hardware tweaks. Perhaps you'd be happy if they packaged a rebranded GeForce4 MX as a Quadro for the base card, like some of the Opteron vendors do? (I bought one of those cards in January... $271.29 pre-shipping for a glorified GeForce4 MX that does dual DVI in Linux, Y-shaped DVI adapter sold separately. Boy, did that make me feel like I got my high-end money's worth. Hey, it's a "Quadro"!)

Rower_CPU:

Of course all the integer gains SSE/SSE2 provide would more than be made up by AltiVec if a compiler would do it automatically.
 

The Shadow

macrumors regular
Mar 25, 2003
216
0
Sydney, Australia
Originally posted by wizard
There is no need for a retraction; if you can't find competitive machines then that is your problem, not mine. There are advertised Opteron systems for as little as $1390.
Dave

$1390? Tell me where please! I'll go buy a dozen and flog them off second hand for a small fortune!:D

I've just checked 12 "cheapest" sites selling the latest 1.6GHz Opteron CPUs, and to purchase 2 x CPUs alone will cost you around $1600. Add to that the rest of the system.:eek:

The only top-end dual Opteron you've come up with is the Boxx, which according to their site, as I have stated previously, is over $1000 more expensive than the dual G5. The other reasons you give for not buying an Apple, such as only having a 2% market share, are trite and irrelevant and really do make you sound extremely biased, like you would never buy a Mac no matter how big the performance lead. Which makes me wonder why you are making these posts in a Mac forum. You should be aware that it comes across as rude and arrogant.

I use PCs at work and a PowerMac at home every day, and I know perfectly well why my next computer will also be a PowerMac! But I don't presume to lecture PC users about it. They can make up their own minds.

For what it's worth, I work for a global implementation division for Nestle. Nestle is by far the biggest food company in the world and currently the 59th largest company of any sort in terms of revenue. Do you know what Nestle's estimated global market share is? You guessed it, 2%!:D


EDIT: I do agree that few people will probably buy the 1.6 GHz G5 - read my earlier posts. If you upgrade the HDD to 160 GB and the RAM to 512 MB like the 1.8 GHz model, it's only $150 cheaper. And the 1.8 has a faster system bus and a slightly better graphics card, I think from memory. So, as I've previously posted, I don't know what pro would buy the 1.6. Hopefully these are signs of a speed bump real soon, and the 1.6 will go into a consumer desktop!;)
 

tychay

macrumors regular
Jul 1, 2002
222
30
San Francisco, CA
Re: Re: Lies, Damn Lies, and Benchmarks

Originally posted by Cubeboy
Tychay, you're joking, right? Because that's the funniest piece of garbage that I've ever heard...look up the SciMark scores for ICC and GCC since you mentioned scientific and engineering apps)...Might I also suggest that you wait for some real-world benchmarks to come out from a third party before you shoot your mouth off, or do you just believe whatever Apple tells you to believe?

I don't know why I'm responding because this thread is probably dead. In any case, I'm not joking.

There is a world of difference between well-documented optimizations, which will be enabled by default in patches that will no doubt be folded into GCC, and the underhanded "compiler recognizes SPEC and replaces it with optimized assembly or lookup table" crap that all microprocessor manufacturers (Motorola, IBM, Intel...) are guilty of. There is no "digging" involved, as the basic conditions for the test were on Apple's website and the full report is publicly available. Heck, when Jobs uttered the words "GCC" and "SPEC" in the same sentence in the WWDC keynote, I noticed a strange smell wafting over Denmark.

The fundamental problem is that there is no valid benchmark for comparing a mature architecture to an unreleased one, and, other than the non-thread-safe malloc library, all the "cheating" claims easily fall under honest attempts to deal with that problem.

As for the malloc library, I don't really know what is going on there. Maybe Apple hasn't had time to optimize the thread-safe version? Maybe they wanted to juice the benchmark? However, if I wanted to "juice" the SPECmark there are far more trivial hacks one can do to GCC that would give more gain with less work. Hell, the fact that these modifications are documented is highly unusual for a SPEC score. And, no, it's nothing like LibMoto, which patches certain functions used in FPU benchmarks with lookup tables--this sort of thing is actually allowed in SPEC!

The fact that compilers and operating systems can be rigged in this way is why SPECmarks are system benchmarks, and not CPU ones. Tell me, are you running your scientific simulations on some internally developed OS running a patched ICC with all the right flags turned on (and somehow using sin but not arcsin; arctan but not tan)?

The choice of GCC and NAGware f95 is obviously to make Apple look better (there is no way even VisualAge on a G5/970 will yield as much of a gain as ICC on a P4--that dang new-vs.-mature thing rears its ugly head again). But they both happen to be cross-platform and to have an x86 bias in their history (another cross-platform compiler is CodeWarrior; any bets on how that shootout would have turned out?). The fact that GCC is open source and focuses on portability means it is not as vulnerable to benchmark tampering. By some coincidence, I used GCC when I was in science. I'd guess about half of the computational research done in the physics department was probably using GCC (the others needed either an f90/f95 compiler or they were on Windows NT using Visual Studio). I'd imagine that the percentage of GCC users in the biosciences is even higher.

Finally, SciMark is a Java benchmark, so I cannot "look up its scores in ICC and GCC". If you mean ScienceMark 2, which is quoted in a lot of AMD vs. Intel shootouts, the guy who wrote it was my roommate in graduate school. Last time I talked to him, it only compiled on x86 Visual Studio, so I can't do your comparison on that either. I think it may also have contained some inlined assembly to take advantage of SIMD in SSE or 3DNow (I may be wrong).

I can't honestly say I'm the one who is "shooting off at the mouth" or even that I "believe what Apple tells me". After all, in the same post I mentioned that it is impossible to "normalize out" the compiler in SPEC as Apple claims to have done, nor does standardizing on a compiler mesh with standard practice for reporting SPEC scores.

However, I have yet to find one post stating that running the benchmark on a dual Xeon system configured as in the report, with exactly those flags set, yields anything better than the numbers obtained by VeriTest. This lends some credence to Apple's numbers. Add to that the fact that the report is coming from a third party which has made more money benchmarking PCs than it ever could from Macs.

Apple's test may have some use to scientists and engineers, but it is useless to users who are, most likely, using software compiled in CodeWarrior (Mac) or Visual Studio (Windows). I'll reiterate my previous post: I don't see that many developers using VisualAge or ICC. And this is considering that ICC produces the best SSE2 code around and is a drop-in for Visual Studio! I really question the applicability of SPEC benchmarking standard practices to any user.

You said you'd consider SPEC numbers along with application benchmarks, etc. in making your next purchase. As for me, I know how SPEC scores are obtained and personally, I'm going to use the same criteria, minus the SPEC numbers.

Well, I'll consider those criteria and my wallet. Right now my wallet is saying I really don't need a G5 or an Opteron...
 

wizard

macrumors 68040
May 29, 2003
3,854
571
Originally posted by The Shadow
$1390? Tell me where please! I'll go buy a dozen and flog them off second hand for a small fortune!:D

I've just checked 12 "cheapest" sites selling the latest 1.6GHz Opteron CPUs, and to purchase 2 x CPUs alone will cost you around $1600. Add to that the rest of the system.:eek:

The only top-end dual Opteron you've come up with is the Boxx, which according to their site, as I have stated previously, is over $1000 more expensive than the dual G5. The other reasons you give for not buying an Apple, such as only having a 2% market share, are trite and irrelevant and really do make you sound extremely biased, like you would never buy a Mac no matter how big the performance lead. Which makes me wonder why you are making these posts in a Mac forum. You should be aware that it comes across as rude and arrogant.

I have never suggested not buying an Apple, and I have certainly never referenced their market share. You must have confused quotations. What I have said repeatedly is that the information Apple supplies does not jibe with the statement that they have the fastest computer on the market.

I'm neither rude nor arrogant; apparently you are attributing parts of this thread to me that were not my material originally. Further, I will not go shopping for somebody else; there are many reasons for that, not the least of which is the endless qualifications you then have to deal with.
I use PCs at work and a PowerMac at home every day, and I know perfectly well why my next computer will also be a PowerMac! But I don't presume to lecture PC users about it. They can make up their own minds.

For what it's worth, I work for a global implementation division for Nestle. Nestle is by far the biggest food company in the world and currently the 59th largest company of any sort in terms of revenue. Do you know what Nestle's estimated global market share is? You guessed it, 2%!:D


EDIT: I do agree that few people will probably buy the 1.6 GHz G5 - read my earlier posts. If you upgrade the HDD to 160 GB and the RAM to 512 MB like the 1.8 GHz model, it's only $150 cheaper. And the 1.8 has a faster system bus and a slightly better graphics card, I think from memory. So, as I've previously posted, I don't know what pro would buy the 1.6. Hopefully these are signs of a speed bump real soon, and the 1.6 will go into a consumer desktop!;)

Now you are talking - G5 iMac, here we come. At this time, though, we do not know why Apple has such high prices on the G5. That is, is the cost of the chipsets an issue? If it is, it may take a while before we see a G5 iMac.

Apple has publicly stated that they expect to hit 3 GHz in 12 months. That would mean a rather large speed bump in 6 months' time. I'm actually hoping for Christmas, but January or February is more likely. Like many, I can't jump up and purchase a new computer right now, so I will have to digest the market reception of the G5 while my bank account goes through a restructuring.
 

tychay

macrumors regular
Jul 1, 2002
222
30
San Francisco, CA
Originally posted by wizard
The P4 is old technology, as a die shrink and a few new features do not make a new processor.

I agree with the second part of the statement. However, the P4 of today is not just a "die shrink and a few new features", neither is the P4 Xeon.

This is actually a good sign for Apple; if a faster G4, with a correspondingly faster I/O bus, can be stuffed into a laptop, the PowerBooks could remain competitive for a while longer.

I'd assert that the G4 is already competitive in the notebook market and is guaranteed to be so for a while longer.

As to the current state, Intel pushes Centrino, which uses the Pentium M, not the Pentium 4-M. So even though the Pentium 4-M clocks in higher, it becomes so cache-starved on notebook I/O that its performance is actually worse. Battery life dwindling to zero doesn't help matters either.

As to the future, Motorola uses an old process for the G4 so a process shrink alone will give a lot of wiggle room. Also, they could always underclock a G5 if there is enough room to get the peak power requirements within notebook tolerances. I think the latter isn't as likely to happen given the former, though. It's nice to have options--something Apple didn't have until this summer, I might add.

The P4 was designed to "rev" higher to keep intel ahead of the competition for marketing purposes. The chip itself is not an outstanding performer at the frequency it runs.

I agree that the P4 design was marketing-driven. I even agree that it is a poor performer for its clockspeed.

But I'd like to state that just because Intel is driven by a marketing guy now doesn't mean that they don't have some pretty good engineers.* Look at how quickly they were able to push a DDR-capable P4 chipset out the door after the RAMBUS fiasco, or at Hyper-Threading, or at how quickly they ramped up the performance of the P4.

What I disagree with is that you should be normalizing a processor by its clockspeed. The simple fact is that the P4 achieves by design clockspeeds impossible to reach by another processor using the same process. The analogy Hannibal uses is that the Pentium 4 has a "McDonald's drive thru" approach (many servers, one line) while the G4 is like the counters inside McDonald's (many lines, each with one server). (The G5 tries to be both at the same time.) The P4 today is a solid performer for its price. Hyperthreading may not help the SPEC, but it certainly has real-world benefits.

Of course, you are free to normalize however you want. Normalize by price to compare price/performance; or normalize by power for notebooks; or normalize by time of introduction for CPU wars. Normalize by clockspeed if you are Apple marketing these last three years and want to draw attention to the fact that a certain company based in Chicago seems to think that Moore's Law is a linear, not exponential, relationship.

It is interesting to note that the last time there was a processor parity between Mac and PC was in 1999 (G4, Athlon, P3), but at the time Mac OS X didn't exist.

Yep, interesting times indeed...

* OBSomeOfMyBestRoomatesAreIntelEngineers ;)

Take care,
 

wizard

macrumors 68040
May 29, 2003
3,854
571
Originally posted by Rower_CPU
No offense, but I'd take the opinion of someone writing for a reputable website over some random person on a discussion forum. Calling it a "piece of garbage" is nothing short of inflammatory.

I could make the argument that the article itself was inflammatory. Frankly, I find such articles harmful to the reputation of a web site.
As for your SSE2 comments, let's put them in context:


Why does SSE2 inflate SPEC scores so substantially if not for the benefit of SPEC? Since you are positioning yourself as someone with a "modest understanding of SPEC and microprocessors" clue us in.
What the article, and the statement you made above, implies is that Intel designed SSE2 to inflate SPEC scores. I do not believe that this is the case; SSE2 is an architecture to handle different data streams than a conventional ALU in a processor does. It may have the side effect of increasing SPEC scores, but so do most other architecture improvements. The point is that claiming SSE2 was implemented to inflate SPEC scores is wrong; it was implemented to accelerate the processing of alternative data streams.
Writing off the rest of the authors claims wholesale is an easy out and fails to address the issues he takes with the article posted at haxial and its apparently shoddy "investigative journalism".
I'm by no means supporting some of the arguments I've seen elsewhere on the web related to Apple's benchmarking. On the other hand, if you're going to write an article supporting Apple, you (the author of the referenced article) really should avoid claims, like the reference to SSE2, that simply fall apart.

And finally (as I've been saying from the start), I agree that all this debate is silly until we can evaluate real-world performance when the G5s are available for purchase, and compare that to currently available PCs. Saying, with any certainty, that the G5s will underperform PCs in August is just as ridiculous as saying they will outperform them. Not a single person here knows, or can know, what will happen then (unless they are psychic), so let's put the bickering aside and stop predicting doom and gloom for Apple when you have nothing of substance on which to base your claims.

I'm beginning to think that the problem here is that too many people are blowing what I say out of proportion. What I am saying is that Apple's SPEC scores indicate that it would be wise for some to moderate their expectations.

That does not mean that the G5 is all washed up; you will need a state-of-the-art AMD or Intel system to compete with it. It will be interesting to see how things shake out by October. It is not likely that my bank account will allow me to experience a G5 before the end of the year, so I'm expecting to see a lot of info to tide me over.

Dave
 

wizard

macrumors 68040
May 29, 2003
3,854
571
Originally posted by tychay
I agree with the second part of the statement. However, the P4 of today is not just a "die shrink and a few new features", neither is the P4 Xeon.

Hmmm, I guess this all depends on what you call a few new features. I would have to say that the various P4s are more closely related than, say, the G4 and the G5, or the Athlon and the Opteron.

I'd assert that the G4 is already competitive in the notebook market and is guaranteed to be so for a while longer.

As to the current state, Intel pushes Centrino, which uses the Pentium M, not the Pentium 4-M. So even though the Pentium 4-M clocks in higher, it becomes so cache-starved on notebook I/O that its performance is actually worse. Battery life dwindling to zero doesn't help matters either.
I would not use the term guaranteed; on the other hand, no one seems to have come up with a power-efficient system to run x86. Apple, though, is at its worst when its products become stagnant, so keeping well ahead of the alternatives is very important.
As to the future, Motorola uses an old process for the G4 so a process shrink alone will give a lot of wiggle room. Also, they could always underclock a G5 if there is enough room to get the peak power requirements within notebook tolerances. I think the latter isn't as likely to happen given the former, though. It's nice to have options--something Apple didn't have until this summer, I might add.
I'd love to get ahold of a G5 with an adjustable clock. I'm not too sure that it would be competitive underclocked.

The problem with die shrinks on the G4 is that it is already I/O throttled and really needs a new bus interface. That does not mean, however, that the next rev won't offer a lot to any PowerBook rev. You are right in that choices are good, and with some of the rumored G3 revisions they have more choices than ever before.
I agree that the P4 design was marketing driven. I even agree that it is a poor performer for its clock speed.

But I'd like to state that just because Intel is driven by a marketing guy now doesn't mean that they don't have some pretty good engineers.* Look at how quickly they pushed a DDR-capable P4 chipset out the door after the RAMBUS fiasco, or Hyperthreading, or how quickly they ramped up the performance of the P4.

What I disagree with is that you should be normalizing a processor by its clockspeed. The simple fact is that the P4 achieves by design clockspeeds impossible to reach by another processor using the same process. The analogy Hannibal uses is that the Pentium 4 has a "McDonald's drive thru" approach (many servers, one line) while the G4 is like the counters inside McDonald's (many lines, each with one server). (The G5 tries to be both at the same time.) The P4 today is a solid performer for its price. Hyperthreading may not help the SPEC, but it certainly has real-world benefits.
My references to normalizing clock speed have only been with respect to the G4 and G5. This is very relevant with respect to laptops, as it does not appear that an underclocked G5 would have an overwhelming advantage over a newer G4 running at a high clock speed. Now that doesn't mean a G5 would not offer certain laptop users advantages, but a G4 running faster and at lower power will most likely serve the average user better.

Of course, you are free to normalize however you want. Normalize by price to compare price/performance; or normalize by power for notebooks; or normalize by time of introduction for CPU wars. Normalize by clockspeed if you are Apple marketing these last three years and want to draw attention to the fact that there is a certain company based in Chicago that seems to think that Moore's Law is a linear, not exponential, relationship.

It is interesting to note that the last time there was a processor parity between Mac and PC was in 1999 (G4, Athlon, P3), but at the time Mac OS X didn't exist.

Yep, interesting times indeed...
Yes, very interesting times indeed. Apple has a new PowerMac that is a good design and an operating system that has gotten far better in a very short period of time. As a Linux user, I cannot overstate how appealing the G5 with Panther will be.

Appealing or not, it will be some time before one is on my desk. By that time the world will have a good handle on the G5's performance. If I'm lucky they might even rev the case to address a few problem areas.

* OBSomeOfMyBestRoomatesAreIntelEngineers ;)

Take care,
 

Cubeboy

macrumors regular
Mar 25, 2003
249
0
Bridgewater NJ
Wizard:

SPEC's source code consists of large blocks of code, not little snippets, so it can't be hand-optimized for SSE2 or any other instruction set. What you can do is optimize the compiler so that it vectorizes the code and hopefully produces enough packed floating-point SSE2 code to improve performance by a significant amount. In the Pentium 4's case, it's hardly anything spectacular, as tests done by Intel show only a 5% improvement over x87-only code. Link below:

http://developer.intel.com/technology/itj/q12001/articles/art_2.htm

As for real world uses, nearly all the most used rendering programs (3dsMax, Lightwave, etc) encoding programs (Windows Media Player, MP3 Maker Platinum, Main Concept, Pinnacle), archiving programs (WinRAR, etc), have some degree of SSE2 optimizations, and it's also found in some scientific/engineering apps as well as a few games.

Regarding the PPC970 with AltiVec, a competent auto-vectorizing compiler would do nicely.
 

Ensoniq

macrumors regular
Jul 16, 2002
131
1
Bronx, NY
The point Rower_CPU was making about SSE2 and SPEC was this I believe...

Some people originally complained that SSE2 had not been enabled for the testing...which turned out to be false. But the assertion was that if SSE2 HAD been turned on (which it was) that the SPEC scores would have been higher.

For this to be true, it means that not only would SSE2 need to be enabled in the compiler (which it was) but that SPEC would have to recognize SSE2 in order for the test scores to come out better. Meaning, SPEC has SOME code in it specifically to check for and use SSE2 if it's available. I don't know if that's true, but that's the assertion I believe Rower_CPU made.

Whether it's true or not, what's perfectly clear is that the SPEC test does NOT have any code in it to detect and use AltiVec if it's available. So even if Apple had used special flags in GCC to enable the use of AltiVec (which they specifically did NOT use, even though they turned on SSE2 for their x86 counterparts), SPEC wouldn't have looked for it.

So if SPEC is designed to use SSE2 (not certain) but is NOT designed to use AltiVec (an absolute fact), then right out of the gate it shows that SPEC would have an inherent bias BUILT-IN to skew scores more favorably to the x86 processors. How come no one goes out of their way to point out how THAT might be considered cheating at worst, an unfair advantage at best?

So far, the only claims of "cheating" have come when something may have given the Mac an edge. And though there is STILL no shred of evidence of true cheating by Apple (the argument that since Intel, AMD, and ATI admit to doing it, Apple MUST do it too remains less than credible), it's the only drum for the Wintelers to keep banging...so that is what they have done.

-- Ensoniq
 

ddtlm

macrumors 65816
Aug 20, 2001
1,184
0
wizard:

I would have to say that the various P4 are more related than say the G4 and the G5 or the Athlon and the Opteron.
Clearly the G4 and G5 are about as unrelated as PPC chips get; to make an analogy in the x86 world, I'd say as different as the P3 and P4. However, as I already said once (in a comment aimed at you), I see no reason to suspect that the various P4s are any more closely related to each other than the various Athlon/Opteron products are to each other. Perhaps the Opteron seems new and different with x86-64 and an on-die memory controller, but the location of the memory controller is hardly a significant modification to the processor core itself, and x86-64 is unlikely to have been any more complex a change than Hyperthreading. With the Opteron the core was also optimized and re-laid-out compared to the Athlon XP/MP, just as the current P4 was compared to the P4 "classic".
 

ddtlm

macrumors 65816
Aug 20, 2001
1,184
0
Ensoniq:

No, there isn't any reason that AltiVec can't be used in SPEC except that compilers don't know how to auto-generate it, yet. SPEC is written 100% in high-level languages without regard to processor-specific anything, as far as I know. When the compiler detects a situation where SSE/SSE2/AltiVec can be applied, it is up to the compiler to insert that vector code, along with any sort of enable/disable features that it might add to allow the executable to be run on chips not supporting SSE/SSE2/AltiVec.
 

Rower_CPU

Moderator emeritus
Oct 5, 2001
11,219
2
San Diego, CA
Originally posted by Ensoniq
The point Rower_CPU was making about SSE2 and SPEC was this I believe...

Some people originally complained that SSE2 had not been enabled for the testing...which turned out to be false. But the assertion was that if SSE2 HAD been turned on (which it was) that the SPEC scores would have been higher.

For this to be true, it means that not only would SSE2 need to be enabled in the compiler (which it was) but that SPEC would have to recognize SSE2 in order for the test scores to come out better. Meaning, SPEC has SOME code in it specifically to check for and use SSE2 if it's available. I don't know if that's true, but that's the assertion I believe Rower_CPU made.

Whether it's true or not, what's perfectly clear is that the SPEC test does NOT have any code in it to detect and use AltiVec if it's available. So even if Apple had used special flags in GCC to enable the use of AltiVec (which they specifically did NOT use, even though they turned on SSE2 for their x86 counterparts), SPEC wouldn't have looked for it.

So if SPEC is designed to use SSE2 (not certain) but is NOT designed to use AltiVec (an absolute fact), then right out of the gate it shows that SPEC would have an inherent bias BUILT-IN to skew scores more favorably to the x86 processors. How come no one goes out of their way to point out how THAT might be considered cheating at worst, an unfair advantage at best?

So far, the only claims of "cheating" have come when something may have given the Mac an edge. And though there is STILL no shred of evidence of true cheating by Apple (the argument that since Intel, AMD, and ATI admit to doing it, Apple MUST do it too remains less than credible), it's the only drum for the Wintelers to keep banging...so that is what they have done.

-- Ensoniq

Well said. Thank you.
 

ddtlm

macrumors 65816
Aug 20, 2001
1,184
0
Rower_CPU:

It was well said, but partly erroneous. ;) SPEC is not designed to benefit from SSE2; Intel was simply wise to design a compiler that could auto-generate SSE2 code.
 

Rower_CPU

Moderator emeritus
Oct 5, 2001
11,219
2
San Diego, CA
Originally posted by ddtlm
Rower_CPU:

It was well said, but partly erroneous. ;) SPEC is not designed to benefit from SSE2; Intel was simply wise to design a compiler that could auto-generate SSE2 code.

That's not the point. Apple was more than fair to the Intel chips by enabling SSE, but not taking any special measure to enable Altivec.

The performance gain by enabling SSE results in an unfair advantage to Intel, and Apple still comes out performing just fine.
 

ddtlm

macrumors 65816
Aug 20, 2001
1,184
0
Rower_CPU:

The performance gain by enabling SSE results in an unfair advantage to Intel, and Apple still comes out performing just fine.
Is it even known that there is a significant performance gain to be had by enabling SSE/SSE2 in GCC for SPEC?

Cubeboy:

The link you provide for that 5% gain figure is at least 2 years old, so I would expect that some progress has been made on the compiler front.
 

wizard

macrumors 68040
May 29, 2003
3,854
571
Hi Cubeboy;

That is a clear and rational explanation, well done.

What is not well done is the point I was trying to make. An article referenced a few messages back quite clearly said that SSE2 was engineered to enhance SPEC numbers. It is my contention that that is garbage; SSE was implemented to handle certain data streams quicker than a standard ALU can, much as AltiVec was integrated into PowerPC to handle alternate data streams.

The fact that Intel can make use of this component of the processor in its compilers and get better throughput for non-traditional vector operations is a plus for them. That does not mean, though, that SSE was designed with SPEC in mind. In a like manner, Apple has made very good use of the AltiVec extensions to PowerPC, but everyone should agree that AltiVec was not designed to enhance SPEC scores. AltiVec was designed to enhance short-vector math operations and certain other operations; the fact that Apple is rumored to use this facility to enhance Mac OS X is a feather in their cap.

Neither AltiVec nor SSE/SSE2 was designed or engineered to enhance SPEC scores; seeing that claimed in that article is a little too much to leave unchallenged. It would be foolish on Intel's or Apple's part not to use all the features of their processors.


I have to run; I'll get back to this later.
Thanks

Dave


Originally posted by Cubeboy
Wizard:

SPEC's source code consists of large blocks of code, not little snippets, so it can't be hand-optimized for SSE2 or any other instruction set. What you can do is optimize the compiler so that it vectorizes the code and hopefully produces enough packed floating-point SSE2 code to improve performance by a significant amount. In the Pentium 4's case, it's hardly anything spectacular, as tests done by Intel show only a 5% improvement over x87-only code. Link below:

http://developer.intel.com/technology/itj/q12001/articles/art_2.htm

As for real world uses, nearly all the most used rendering programs (3dsMax, Lightwave, etc) encoding programs (Windows Media Player, MP3 Maker Platinum, Main Concept, Pinnacle), archiving programs (WinRAR, etc), have some degree of SSE2 optimizations, and it's also found in some scientific/engineering apps as well as a few games.

Regarding the PPC970 with AltiVec, a competent auto-vectorizing compiler would do nicely.
 

ddtlm

macrumors 65816
Aug 20, 2001
1,184
0
Rower_CPU:

Yeah, that's the trick... I don't know of any SPEC scores with GCC 3.3 and a P4 without the SSE flag. Aceshardware recently posted a FLOPS test that does show the effect of the flag, but it only deals with floating point, and it's not clear if it applies to SPEC.

http://www.aceshardware.com/forum?read=105020636
 