Originally posted by Rower_CPU
This community, on the whole, is quite "zealot-free".
That's true, I have to say this board is usually pretty level-headed relative to others.
Originally posted by The Shadow
Well Dave, you were the one claiming that the G5 was too expensive compared to PCs, yet no one in this forum has come up with a PC cheaper than the DP G5 Mac! Since this price/performance was your initial main concern, a retraction would be nice.
I agree with your statement here, about suitability, 100%, but I've already acknowledged the G5's floating point performance as being pretty good. So in effect you have verified what I've been saying for some time in this thread. Remember, I've been referencing Apple's release performance information and other material floating around to make the statements I've made.

You can't design a machine that suits everyone. The G5 is aimed mainly at graphics, video and music pros and scientists. Nearly everything they do that is time consuming involves floating point and requires a lot of bandwidth. These are the G5's strengths, and where it will massively outperform the "old technology" you speak of. By the way, Intel doesn't call it that. They've just released a 3.2 GHz chip. Funny that!
Well, this is the problem: Apple is marketing this unit as a general purpose personal computer. Yes, this machine will be a beast for float work, but for other applications it simply achieves parity. Yes, I agree that they need parity and better. I still find the price high and the configurations low end. You would think that the high end machine would have a high end graphics adapter in the base configuration.

For the market and real world apps the G5 is aimed at, they probably have more than parity for the short term. But you're missing the point - Apple only needs parity. There are other reasons to use a Mac. And at the moment, one of those is price, which seems to have taken many by surprise.
I would rephrase that to: who in their right mind would choose a Microsoft operating system? Without a doubt Panther is a compelling operating system and is a much bigger factor in my G5 interest than the G5 itself. The problem is that I hope it is compelling enough that businesses and professionals pick it up; Apple's existing user base is simply too small.

Generally speaking, if 2 systems had parity, including the apps you want them to run, who in their right mind would choose a PC? IMHO
Thanks anyway for your contribution, we can choose to disagree. It has been entertaining debate.
Best Regards,
TS
Originally posted by soggywulf
Indeed it would.
Personally I think the G5s are pretty good. I'm a bit curious as to why they didn't adopt the 256MB version of the 9800, but I suppose that's a minor issue (until you start antialiasing). The prices are a little disappointing, but that is just one part of price/perf. As far as performance, I don't think we have the data to come to any conclusions yet. All we have is a few app tests done on-stage, hardly what I would call an objective comparison. I think we will really have to wait until the machines become available, and get tested independently against new PCs available in August (or whenever the G5s actually arrive), on a wider range of apps. My prediction is that the G5s will be slightly behind price/perf-wise, but well within the value difference represented by OSX. Which would be great news.
And because Man and Woman do not live on bread alone...I think the G5 actually looks pretty good. It has a sophisticated, understated air. And the tremendously free-flowing ventilation system gives me a warm fuzzy feeling. The lack of expandability is disappointing, but then again messing up that great airflow with a bunch of clunky HDs would be a shame. No matter, I can get a spare ATX case cheap and stick the extra HDs there.
Originally posted by wizard
Yes, the data is thin right now, but Apple does have very good documentation on how the SPEC tests were done. Careful reading of those tests should temper one's opinion of the machine; some seem to perceive these machines as the ultimate in performance, but that is not justified. At least not based on publicly available info.
Originally posted by Rower_CPU
There are reports that contradict this assertion.
http://macedition.com/soup/soup_20030629.php
And finally, SSE2, the Pentium's sad and sorry little answer to the VMX/AltiVec/Velocity Engine SIMD unit on the PowerPC, was enabled for the test. In GCC 3.3, the SSE flag covers both SSE and SSE2, and turning it on increases the Integer scores dramatically.
Which is probably cheating.
Apple didn't compile with AltiVec turned on... that would require the altivec and force_cpusubtype_ALL flags to be set... presumably because it wouldn't help their scores, since SPEC isn't coded with AltiVec extensions. This means that either SSE2 was engineered to artificially inflate SPECint scores, or that SPECint was engineered to let SSE2 artificially inflate Intel's scores. Either way, it smells really, really fishy. But we'll give 'em the benefit of the doubt, and the edge in raw integer, as (contrary to spl's protestations otherwise) almost nothing is dependent on raw integer performance these days. It's all floating point math and SIMD vector performance... two areas where the G5 is untouchable.
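For anyone following along, here is a rough sketch of the compiler invocations being argued about. The source file name is a placeholder, and the exact option spellings are my assumption based on the flags named in the quote (Apple's GCC used -faltivec and -force_cpusubtype_ALL for AltiVec code generation):

```shell
# Hypothetical GCC 3.3 invocations illustrating the flags under discussion.
# bench.c stands in for the benchmark source.

# x86: per the quoted article, enabling SSE in GCC 3.3 covers SSE2 as well.
gcc -O3 -march=pentium4 -msse2 -o bench_x86 bench.c

# PowerPC: AltiVec code generation would need both of these flags set --
# which, per the post above, Apple reportedly did not do for its SPEC runs.
gcc -O3 -faltivec -force_cpusubtype_ALL -o bench_ppc bench.c
```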
You are painting with way too broad a brush here. I assert that: a speed-bumped P4 is not a new processor architecture by any measure. Everyone agrees that the G5 is a new and innovative processor architecture, and similar fresh designs are coming on the market in the i86 world. The 970, Opteron and whatever Intel is about to introduce are processors of the same generation and should be compared together.
This is one of the more repetitive and silly complaints I've heard. A lot of G5 purchasers will have no use for a high-end gaming card, or even a high-end "workstation card", which is of course just a rebranded gaming card with optimized drivers, optimized tie-ins to special software (which OSX lacks) and sometimes a few hardware tweaks. Perhaps you'd be happy if they packaged a rebranded GeForce4 MX as a Quadro for the base card, like some of the Opteron vendors do? (I bought one of those cards in January... $271.29 pre-shipping for a glorified GeForce4 MX that does dual DVI in Linux, Y-shaped DVI adapter sold separately. Boy, did that make me feel like I got my high-end money's worth. Hey, it's a "Quadro"!)

You would think that the high end machine would have in the base configuration a high end graphics adapter.
Originally posted by wizard
There is no need for a retraction; if you can't find competitive machines, then that is your problem, not mine. There are advertised Opteron systems for as little as $1390.
Dave
Originally posted by Cubeboy
Tychay, you're joking, right? Because that's the funniest piece of garbage that I've ever heard... look up the SciMark scores for ICC and GCC (since you mentioned scientific and engineering apps). Might I also suggest that you wait for some real world benchmarks to come out from a third party before you shoot your mouth off, or do you just believe whatever Apple tells you to believe?
Originally posted by The Shadow
$1390? Tell me where, please! I'll go buy a dozen and flog them off second hand for a small fortune!
I've just checked 12 "cheapest" sites selling the latest 1.6GHz Opteron CPUs, and to purchase 2 x CPUs alone will cost you around $1600. Add to that the rest of the system.
The only top end dual Opteron you've come up with is the Boxx, which according to their site, as I have stated previously, is over $1000 more expensive than the Dual G5. The other reasons you give for not buying an Apple, such as only having a 2% market share, are trite and irrelevant and really do make you sound extremely biased, like you would never buy a Mac no matter how big the performance lead. Which makes me wonder why you are making these posts in a Mac forum. You should be aware that it comes across as rude and arrogant.
I use PCs at work and a PowerMac at home every day, and I know perfectly well why my next computer will also be a PowerMac! But I don't presume to lecture PC users about it. They can make up their own minds.
For what it's worth, I work for a global implementation division for Nestle. Nestle is by far the biggest food company in the world and currently the 59th largest company of any sort in terms of revenue. Do you know what Nestle's estimated global market share is? You guessed it, 2%!
EDIT: I do agree that few people will probably buy the 1.6 GHz G5 - read my earlier posts. If you upgrade the HDD to 160 GB and the RAM to 512 like the 1.8 GHz model, it's only $150 cheaper. And the 1.8 has got a bigger system bus and a slightly better graphics card, I think from memory. So, as I've previously posted, I don't know what pro would buy the 1.6. Hopefully, these are signs of a speed bump real soon, and the 1.6 will go in a consumer desktop!
Originally posted by wizard
The P4 is old technology, as a die shrink and a few new features do not make a new processor.
This is actually a good sign for Apple; if a faster G4, with a corresponding faster I/O bus, can be stuffed into a laptop, the PowerBooks could remain competitive for a while longer.
The P4 was designed to "rev" higher to keep Intel ahead of the competition for marketing purposes. The chip itself is not an outstanding performer at the frequency it runs.
Originally posted by Rower_CPU
No offense, but I'd take the opinion of someone writing for a reputable website over some random person on a discussion forum. Calling it a "piece of garbage" is nothing short of inflammatory.
What the article and the statement you made above imply is that Intel designed SSE2 to inflate SPEC scores. I do not believe that this is the case; SSE2 is an architecture to handle different data streams than a conventional ALU in a processor. It may have the side effect of increasing SPEC scores, but so do most other architecture improvements. The point is that claiming that SSE2 was implemented to inflate SPEC scores is wrong; it was implemented to accelerate processing of alternative data streams.

As for your SSE2 comments, let's put them in context:
Why does SSE2 inflate SPEC scores so substantially if not for the benefit of SPEC? Since you are positioning yourself as someone with a "modest understanding of SPEC and microprocessors" clue us in.
I'm by no means supporting some of the arguments I've seen elsewhere on the web related to Apple's benchmarking. On the other hand, if you're going to write an article supporting Apple, you (the author of the referenced article) really should avoid claims, like the reference to SSE2, that simply fall apart.

Writing off the rest of the author's claims wholesale is an easy out and fails to address the issues he takes with the article posted at Haxial and its apparently shoddy "investigative journalism".
And finally (as I've been saying from the start), I agree that all this debate is silly until we can evaluate real-world performance when the G5s are available for purchase, and compare that to currently available PCs. Saying, with any certainty, that the G5s will underperform PCs in August is just as ridiculous as saying they will outperform them. Not a single person here knows, or can know, what will happen then (unless they are psychic), so let's put the bickering aside and stop predicting doom and gloom for Apple when you have nothing of substance on which to base your claims.
Originally posted by tychay
I agree with the second part of the statement. However, the P4 of today is not just a "die shrink and a few new features", and neither is the P4 Xeon.
I would not use the term guaranteed; on the other hand, no one seems to have come up with a power efficient system to run i86. Apple though is at their worst when their products become stagnant, so keeping well ahead of the alternatives is very important.

I'd assert that the G4 is already competitive in the notebook market and is guaranteed to be so for a while longer.
As to the current state, Intel pushes the Centrino, which uses a Pentium M, not the Pentium 4-M. So even though the Pentium 4-M clocks in higher, it becomes so cache starved on notebook I/O that its performance is actually worse. Battery life dwindling to zero doesn't help matters either.
I'd love to get ahold of a G5 with an adjustable clock. I'm not too sure that it would be competitive underclocked.

As to the future, Motorola uses an old process for the G4, so a process shrink alone will give a lot of wiggle room. Also, they could always underclock a G5 if there is enough room to get the peak power requirements within notebook tolerances. I think the latter isn't as likely to happen given the former, though. It's nice to have options--something Apple didn't have until this summer, I might add.
My references to normalizing clockspeed have only been with respect to the G4 and G5. This is very relevant with respect to laptops, as it does not appear that an underclocked G5 will have an overwhelming advantage over a newer G4 version running at a high clock speed. Now that doesn't mean that a G5 would not offer certain laptop users advantages, but a G4 running faster and at lower power will most likely serve better for the average user.

I agree that the P4 design was marketing driven. I even agree that it is a poor performer for its clockspeed.
But I'd like to state that just because Intel is driven by a marketing guy now doesn't mean that they don't have some pretty good engineers.* Take how quickly they were able to push a DDR capable P4 chipset out the door after the RAMBUS fiasco, or Hyperthreading, or how quickly they ramped up the performance of the P4.
What I disagree with is that you should be normalizing a processor by its clockspeed. The simple fact is that the P4 achieves by design clockspeeds impossible to reach by another processor using the same process. The analogy Hannibal uses is that the Pentium 4 has a "McDonald's drive thru" approach (many servers, one line) while the G4 is like the counters inside McDonald's (many lines, each with one server). (The G5 tries to be both at the same time.) The P4 today is a solid performer for its price. Hyperthreading may not help the SPEC, but it certainly has real-world benefits.
Yes, very interesting times indeed. Apple has a new PowerMac that is a good design and an operating system that has gotten far better in a very short period of time. As a Linux user, I cannot overstate how appealing the G5 with Panther will be.

Of course, you are free to normalize however you want. Normalize by price to compare price/performance; or normalize by power for notebooks; or normalize by time of introduction for CPU wars. Normalize by clockspeed if you are Apple marketing these last three years and want to draw attention to the fact that there is a certain company based in Chicago that seems to think that Moore's Law is a linear, not exponential, relationship.
It is interesting to note that the last time there was a processor parity between Mac and PC was in 1999 (G4, Athlon, P3), but at the time Mac OS X didn't exist.
Yep, interesting times indeed...
* OBSomeOfMyBestRoomatesAreIntelEngineers
Take care,
Clearly the G4 and G5 are about as unrelated as PPC chips get; to make an analogy in the x86 world, I'd say as different as the P3 and P4. However, as I already said once (in a comment aimed at you), I see no reason to suspect that the various P4s are any more closely related to each other than the various Athlon/Opteron products are to each other. Perhaps the Opteron seems new and different with x86-64 and an on-die memory controller, but the location of the memory controller is hardly a significant modification to the processor core itself, and x86-64 is unlikely to have been any more complex a change than Hyperthreading. With the Opteron the core was also optimized and re-laid out compared to the Athlon XP/MP, just as the current P4 was when compared to the P4 "classic".

I would have to say that the various P4s are more related than, say, the G4 and the G5 or the Athlon and the Opteron.
Originally posted by Ensoniq
The point Rower_CPU was making about SSE2 and SPEC was this I believe...
Some people originally complained that SSE2 had not been enabled for the testing...which turned out to be false. But the assertion was that if SSE2 HAD been turned on (which it was) that the SPEC scores would have been higher.
For this to be true, it means that not only would SSE2 need to be enabled in the compiler (which it was) but that SPEC would have to recognize SSE2 in order for the test scores to come out better. Meaning, SPEC has SOME code in it specifically to check for and use SSE2 if it's available. I don't know if that's true, but that's the assertion I believe Rower_CPU made.
Whether it's true or not, what's perfectly clear is that the SPEC test does NOT have any code in it to detect and use AltiVec if it's available. So even if Apple had used special flags in GCC to enable the use of AltiVec (which they specificially did NOT use, even though they turned on SSE2 for their x86 counterparts), SPEC wouldn't have looked for it.
So if SPEC is designed to use SSE2 (not certain) but is NOT designed to use AltiVec (an absolute fact), then right out of the gate it shows that SPEC would have an inherent bias BUILT-IN to skew scores more favorably to the x86 processors. How come no one goes out of their way to point out how THAT might be considered cheating at worst, an unfair advantage at best?
So far, the only claims of "cheating" have come when it may have given the Mac an edge. And though there is STILL no shred of evidence of true cheating by Apple (even the admission on the PC side that Intel, AMD, and ATI do it, so Apple MUST too, remains less than credible), it's the only drum for the Wintelers to keep banging... so that is what they have done.
-- Ensoniq
Originally posted by ddtlm
Rower_CPU:
It was well said, but partly erroneous. SPEC is not designed to benefit from SSE2; Intel was simply wise to design a compiler that could auto-generate SSE2 code.
Is it even known that there is a significant performance gain to be had by enabling SSE/SSE2 in GCC for SPEC?

The performance gain by enabling SSE results in an unfair advantage to Intel, and Apple still comes out performing just fine.
Originally posted by Cubeboy
Wizard:
SPEC's source code consists of large data blocks, not little snippets, so it can't be hand optimized for SSE2 or any other instruction set for that matter. What you can do is optimize the compiler so that it can vectorize the code and hopefully produce enough packed floating-point SSE2 code to improve performance by a significant amount. In the Pentium 4's case, it's hardly anything spectacular, as tests done by Intel show only a 5% improvement over x87-only code. Link below:
http://developer.intel.com/technology/itj/q12001/articles/art_2.htm
As for real world uses, nearly all the most used rendering programs (3dsMax, Lightwave, etc) encoding programs (Windows Media Player, MP3 Maker Platinum, Main Concept, Pinnacle), archiving programs (WinRAR, etc), have some degree of SSE2 optimizations, and it's also found in some scientific/engineering apps as well as a few games.
Regarding the PPC970 with AltiVec, a competent auto-vectorizing compiler would do nicely.
Originally posted by ddtlm
Is it even known that there is a significant performance gain to be had by enabling SSE/SSE2 in GCC for SPEC?