I'm not dec, but I worked for DEC.

x86 is gone - we're on something like the sixth to tenth generation of x64. New processors are rolling out with AVX-512: 512-bit register instructions that can do 16 simultaneous 32-bit integer or floating-point operations per instruction. Few people care if the ISA isn't elegant - that's a problem for the compilers, and they seem to be doing a great job.
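For anyone curious what that looks like from the programmer's side, here is a minimal C sketch using the Intel intrinsics. This is just an illustration: it assumes an AVX-512F capable CPU and a compiler flag like -mavx512f, and the array names are made up.

[CODE]
/* One _mm512_add_epi32 issues a single 512-bit instruction that adds
   sixteen 32-bit integers at once. */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    int a[16], b[16], out[16];
    for (int i = 0; i < 16; i++) { a[i] = i; b[i] = 100 * i; }

    __m512i va = _mm512_loadu_si512(a);     /* load 16 x 32-bit ints */
    __m512i vb = _mm512_loadu_si512(b);
    __m512i vc = _mm512_add_epi32(va, vb);  /* 16 additions, one instruction */
    _mm512_storeu_si512(out, vc);

    printf("%d %d\n", out[0], out[15]);     /* prints: 0 1515 */
    return 0;
}
[/CODE]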

Alpha died not because it was a bad design, but because Intel's P6 shifted to a RISC-like internal design that matched Alpha on performance but was far less expensive. DEC couldn't afford to keep up with the fab race, and couldn't keep up with the architecture changes. (Although DEC pioneered modern SMT designs, the Alpha disappeared before the SMT Alphas showed up.)
Didn't Intel license some of the Alpha technology after DEC brought a case against them for infringement? I seem to remember something about that, back in the mid-late 90s.
 
Intel bought DEC's semiconductor operations - fabs and all. This included Digital's IP for SMT - which moved to the Pentium 4 and was called Hyper-Threading.

Digital was fab'ing StrongARM - one of the leading ARM chips. This moved to Intel as well. https://en.wikipedia.org/wiki/StrongARM

[Attached image: DEC StrongARM chip]

I'm not sure, but I think they just licensed ARM.

Laugh all you want but I see ARM in the future for Macs.
So Apple will decide to ignore the entire enthusiast and professional camp, and focus its efforts on competing with Chromebooks? :eek:
 
You know what I mean. I'm not so absent-minded that I haven't heard of x86 and AVX.
I'm talking about the whole arch in general, call it what you will. OK, it's not accurate to just call it x86, and I'm sure not referring to 8088/8086/80186/80286/80386/80386SX/80386DX... should I go on?!
The thing is, the old, legacy stuff is still there, no matter how you wrap it up to make it into something else. The patching alone gives me the creeps. Does it work? Sure. Does it work well? No doubt. Is it the most efficient and lean design possible some 40 years or so later?
It might not be important to you that it still boots in real mode, switches modes, translates instructions, whatever.
I admit I have a problem with legacy stuff, patchwork, things amended to look like something better. And that's what x86-64 is now: layers on top of layers.
The same goes for Windows, and for the same reason I'd like to drop it altogether. You just need to dir the Windows folder to see what I mean.
OK, you'll now tell me that Unix (and variants) are older and suffer the same illness. Fair enough. But it seems easier to live with it.
What I like (at least more) about Apple is that they're not afraid to cut ties with the past, as much as it hurts some.
It will cost you money buying new stuff, sure, but eventually you'd have to anyway.
Enough rant for today.

Too bad Alpha couldn't keep up indeed. It was a great design at the time. Had it survived, maybe we'd now have an alternative. There are still designs based on it, though.
You're very ignorant of how a modern x86 processor works. The legacy "problems" you complain of no longer exist. Today's x86 processors function very differently internally from the processors you have in mind. Aiden informed you as much in post 1716. Perhaps you'd be so good as to detail what issues x86 has that cause problems?

As to why someone doesn't come up with something "better", well... there's no reason to. x86 is a very capable architecture and runs an awful lot of software.
 
What I've understood (and I've been in the business nearly 30 years) is that x86-64 software is still a prisoner of the legacy code, and all these SSE, AVX, TXE, etc. are patches, "extensions" that make the old fart fly. Intel Core CPUs are CISC/RISC hybrids: x86-64 instructions are translated for an internal RISC-like unit that is hidden from programmers. If the x86 instruction set goes, Intel will lose a lot of customers, because if software has to be rewritten anyway, there are cheaper, more powerful, or more efficient options than Intel. Intel won't undermine x86-64 by any means.

A CISC/RISC hybrid is not the most efficient way to do things, but it is better than having two completely different instruction sets on the same CPU. x86 is a railroad that software developers have to follow... to Intel's stations. The AMD commuter train travels on the same tracks too, but it has been the bargain, less efficient option these days.
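To underline the "patches/extensions" point: software can't even assume SSE or AVX exist; it has to probe for them at run time and carry fallback paths. Here is a minimal sketch using GCC/Clang's built-in feature check (just an illustration, not anyone's actual code; which features report "yes" obviously depends on the machine).

[CODE]
/* Each ISA extension is optional, so portable x86 code has to check
   for it at run time and fall back to plain scalar code if it's absent. */
#include <stdio.h>

int main(void) {
    __builtin_cpu_init();  /* initialise the CPU feature detection support */
    printf("sse2    %s\n", __builtin_cpu_supports("sse2")    ? "yes" : "no");
    printf("avx     %s\n", __builtin_cpu_supports("avx")     ? "yes" : "no");
    printf("avx2    %s\n", __builtin_cpu_supports("avx2")    ? "yes" : "no");
    printf("avx512f %s\n", __builtin_cpu_supports("avx512f") ? "yes" : "no");
    return 0;
}
[/CODE]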
 
Right, that's what I've been saying.
And I was just blowing off some steam; I'm not expecting x86 to go anywhere soon, exactly because there's a whole lot of dependency on it.
If only Intel had started fresh with a new design back then. But it was a way to get you hooked, and it has prevailed to this day.
I'm not expecting a major turn around here.
At least Apple had the guts to make the switch from PPC to x86 (for known reasons; to me, Apple adopting x86 goes a bit against what I would expect, although I can understand why).
Maybe they'll come up with something of their own in the future.
Skylake-E?! Highly doubtful.
 
Well, I wrote something similar a very long time ago, specifically on the Polish forum myapple.pl: that there will come a point in time when the power consumption of computers will be regulated, in the name of energy efficiency.

Maybe there was a point to the design of the Mac Pro after all? ;)

I think this was discussed inside the European Union in 2014 and 2015, but nothing has come of that discussion so far. So far...
 
In Europe, maybe, but not where electricity is hydro-based, dirt cheap, and use-it-or-lose-it (it can't be stored). And before you reply, keep in mind that I work for one of the top hydroelectricity producers in the world.
 
I see that it is a matter for Californian regulators. And as far as I know, California is in the USA, so it is not only a European matter ;).

Energy efficiency is a problem around the world, but that's not what we should discuss here. What should be discussed here is the effect we will see around the world of regulated computer power consumption, and the need to design computers with efficiency in mind in the first place.

The Mac Pro 6.1 design was not so pointless after all, however much it was spun by people here, because it is probably the first computer designed with this in mind. Others will have to follow suit.

And even if all of this regulation of PSUs and computer designs fails at first, in the end we will see an upper limit on power consumption, which is a truly great thing to see.

One last thing: haven't I been writing for a very long time that energy efficiency will be important? ;)
 
Yes, you were writing.
We also need some scalability and flexibility in these efficient designs, a way to become more powerful... efficiently.
 
External expansion.

How can you have flexibility if your environment is constrained by the power design in the first place? The whole point of this is reducing the power footprint of each desktop computer.

External expansion is the only way we will get more power out of our computers in the future, whether we like it or not.
 
Of course, but aren't we still far from that point?
Even the expected TB3 is not enough in many cases, I think.
That is not a question for me. I think that for the whole idea of external expansion to take off, we need a technology similar to Thunderbolt from another vendor, so there would be competition.

Because Intel develops TB, there is no push for breakthroughs in it, and I don't think there is a similar technology available on the mainstream market that works with mainstream CPUs.
 
Also, as long as you don't use too many external solutions (or better, none), you're OK and within the reduced power footprint for each desktop computer.
Isn't this completely untrue when you need multiple external solutions?
Isn't this footprint then enlarged by a large proportion?
That is not a question for me. I think that for the whole idea of external expansion to take off, we need a technology similar to Thunderbolt from another vendor, so there would be competition.

Because Intel develops TB, there is no push for breakthroughs in it, and I don't think there is a similar technology available on the mainstream market that works with mainstream CPUs.
True, and here is where we need Apple's good old independent innovation, like when Apple was Intel's competition (in innovation, not in massive sales).
 
It depends how efficient those external solutions are. Soon NASes will be using SSDs rather than HDDs, and SSDs consume much less power than HDDs.

If you read the techpowerup article, you will ask yourself one question: why do they define a high-end GPU by memory bandwidth? Because GDDR5 memory on a 512-bit bus at 6000 MHz consumes 80 W of power.
HBM can use a quarter of that.

GPUs in the upcoming months will be much more efficient than previous generations, not in raw power, but in performance per watt consumed. Look at what Nvidia was able to achieve: 5.5 TFLOPS from a 100 W GPU (Tesla P4). That is something.
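Quick back-of-the-envelope arithmetic on the numbers quoted above (taking the figures as stated in this thread, not verified vendor specs):

[CODE]
/* Back-of-the-envelope math using the figures quoted in this thread. */
#include <stdio.h>

int main(void) {
    /* 512-bit GDDR5 bus at 6000 MHz effective (6 Gb/s per pin) */
    double bus_bits = 512.0, gbps_per_pin = 6.0;
    double bandwidth_gbs = bus_bits * gbps_per_pin / 8.0;    /* = 384 GB/s */

    /* Quoted Tesla P4 figures: 5.5 TFLOPS from a 100 W board */
    double tflops = 5.5, watts = 100.0;
    double gflops_per_watt = tflops * 1000.0 / watts;        /* = 55 GFLOPS/W */

    printf("GDDR5 512-bit @ 6 Gb/s: %.0f GB/s\n", bandwidth_gbs);
    printf("Tesla P4 (as quoted):   %.0f GFLOPS per watt\n", gflops_per_watt);
    return 0;
}
[/CODE]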

As for GPU designs: AMD touts that, because of increased costs, future designs will be built around the idea of scalability. What this means is that GPU designs will be scalable both in the number of compute units/SM units and in the number of GPU dies on a single GPU board.

We were talking about efficiency, yes? Imagine this: a single PCB inside an external GPU enclosure resembling the Mac Pro 6.1 design. On that PCB there is an interposer and 4 GPU dies connected by that interposer, which also plays the role of a coherent fabric. Then there is a feature called a dual-link interposer, which connects the HBM memory not only to the particular GPU die its chips are attached to, but also to the other GPU dies on the interposer. It is, in a very complicated way, a hardware implementation of a software idea: HSA 2.0 and unified memory.

In the future, because of increased silicon design costs, we will see GPUs made from multiple smaller dies instead of single big ones.

And one last thing: people complain that Apple does not update the Mac Pro line very often. Well, you'd better get used to it, because when the regulation comes to fruition, on the high-end side the only possible jumps in performance will come with each new process node, and the time spans between node jumps will only increase. We waited 4 years for the 14/16 nm node in GPUs. 7 nm will come in 2020, at the earliest! We are 4 years from next-generation GPUs.
 
That is not a question for me. I think that for the whole idea of external expansion to take off, we need a technology similar to Thunderbolt from another vendor, so there would be competition.

Because Intel develops TB, there is no push for breakthroughs in it, and I don't think there is a similar technology available on the mainstream market that works with mainstream CPUs.

That's because few people really need anything like it. I'm sure I'm in the upper 1% of computer users, but I hardly ever use TB.

This whole regulation thing stinks. Companies buying hundreds or thousands of desktop computers for basic office work are already concerned with power usage, as are data centers and clusters. The only people getting hosed here will be those buying home computers. Now we're going to have classes of computers that we have to upgrade to in order to justify certain power needs? Holy perverse incentives, Batman! So if I want to push the power beyond a certain point, I need an X GB/s GPU or a Y-watt power supply? And forcing LEDs as the only option, or "setting a new default brightness standard (since most consumers never change their monitor brightness)", really? Tough luck if you're on a budget with a monitor or just want some basic thing that would hardly be on anyway. And seriously, "most consumers" don't change the brightness on their display? I want to see that data and where it's coming from. And Jesus Christ, even if that's true, you don't think more people will start changing the brightness if you just ship monitors with dimmer defaults?

I really, really hate living in California...
 
I think we should start to read between the lines. All of this is meant to push silicon designers to build more efficient hardware. The push to dim monitors will result in much more efficient monitors, so they can keep brightness at its previous levels. The same goes for the hardware: rather than working up against the limits, why not just make the hardware more efficient? It will be easier for manufacturers and companies to think this way than to adjust to the requirements proposed by the Californian government.

Why not design a desktop computer with extremely low power consumption compared to previous desktop paradigms, and squeeze every bit of efficiency out of it in terms of raw power? For this, we need the fastest possible CPU, the fastest and largest amounts of RAM, and GPUs locked to certain power levels and possibly downclocked to trade a small amount of raw power for lower power consumption. And to get more efficiency, we need to get rid of internal expansion, because then we can use a really low-power PSU design. Sounds familiar?
 
Also, as long as you don't use too many external solutions (or better, none), you're OK and within the reduced power footprint for each desktop computer.
Isn't this completely untrue when you need multiple external solutions?
Isn't this footprint then enlarged by a large proportion?
The answer to both is: Yes. Something largely ignored by the nMP advocates.
 