
Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
What does that even mean? Are you saying performance doesn’t matter because Apple doesn’t use an Intel/Amd cpu?
Even back when they did use Intel, performance didn't really matter. For iMac and Mac mini Apple only ever used laptop versions of Intel's newest CPUs. The form factor experience was always more important than raw performance. Only the Mac Pro came with proper cooling.
I don’t know how to process that. You would really have hated the 90s when Sparc, Alpha, MIPS, PowerPC and X86 were around and creating great competition.
Until the iPhone I'd never seen a computer that wasn't Commodore, Amiga or x86 PC.
Is there no competition between Apple and Samsung?
Not really, no.
Can’t pull the cpu out of your phone, or generally out of your laptop. What a bizarre opinion.
Competition is between devices, for tasks you perform on those devices.
Among Android phones the competition is between devices, and among x86 PCs there is an Intel vs AMD CPU competition. But Apple only ever competes with other companies on the basis of their entire platform ecosystem. Different Apple devices compete among each other over which is best suited for the job. An M2 MBA competes with an M2 Pro MBP. Apple wouldn't be the most profitable company in history if they allowed others to challenge them!
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
Even back when they did use Intel, performance didn't really matter. For iMac and Mac mini Apple only ever used laptop versions of Intel's newest CPUs.
It's incorrect to say performance didn't matter. It's more correct to say that they cared about performance, but not about ultimate performance. Also, the later Intel iMacs did feature desktop chips, such as the i9-9900K in my 2019 27" iMac. This is the fastest non-extreme consumer desktop processor Intel made at the time.
Only the Mac Pro came with proper cooling.
The iMac Pro came with proper cooling as well. Though it is true that the cooling on the 27" iMacs is inadequate.

My 2019 27" i9 iMac gets noisy when I'm using all the CPU cores; I'd imagine it would be noisier still if I were also stressing the GPU. Apple should have designed the 27" iMacs with cooling similar to what they gave the iMac Pro, which was known for being quiet under load. The additional cost of an extra fan, some extra plastic air ducts, and possibly a larger heat sink (if needed) would have been small.
 
Last edited:

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
It's incorrect to say performance didn't matter. It's more correct to say that they cared about performance, but not about ultimate performance.
On some Macs they sacrificed all performance for a stunning form factor.


Steve Jobs' stated belief was that the designers should be tasked to shape a device and the engineers had to fit their stuff in there somehow. This led to many beautiful, compromised and flawed designs over the years. Only lately, with ARM chips and thicker MacBooks of uniform thickness rather than a wedge shape, have we gotten truly reliable ugly computers. There was definitely a change of heart at Apple after Steve Jobs' death.
Also, the later Intel iMacs did feature desktop chips, such as the i9-9900K in my 2019 27" iMac. This is the fastest non-extreme consumer desktop processor Intel made at the time.
And did it throttle under sustained load?
The iMac Pro came with proper cooling as well. Though it is true that the cooling on the 27" iMacs is inadequate.
Or Intel's processors are inadequate for the kind of computers Apple wants to build and customers want to buy. Intel was (and still is) focused on the market for big beige boxes. Time and again Apple explained that they only care about performance per watt.
My 2019 27" i9 iMac gets noisy when I'm using all the CPU cores; I'd imagine it would be noisier still if I were also stressing the GPU.
Here is somebody who will fall in love with the upcoming new large iMacs. 🥰 🖥️
Apple should have designed the 27" iMacs with cooling similar to what they gave the iMac Pro, which was known for being quiet under load. The additional cost of an extra fan, some extra plastic air ducts, and possibly a larger heat sink (if needed) would have been small.
2019 is just one year before the M1. They already knew they were about to abandon Intel for good. Not a good time for a complete retooling of an entire production line.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866

You said "can't simulate", so I don't see a reason to expand the scope of what should be a trivial discussion. It's unreasonable to limit the point to existing operating systems, compilers, etc. When we needed to make one processor look like many, we built the tools to do it. GPU programming and shaders are typically many cores working on a single workload in a way that, even if barred from async execution, would appear as a single host thread calling a peripheral.

But the details of how aren't really important. It is possible, it is often necessary, it is just not preferred.
Look, no. This isn't a matter of preference or will.

We do in fact have a great need to make many processors work like one while also providing the aggregate of their individual performance. We've got so many cores in modern CPUs, and we frequently run software which doesn't provide enough runnable threads to utilize all those cores. So we could really use some technique which would let us use two or more cores to work together on one thread, running it faster than a single core can.

Unfortunately, there's no known general solution to this problem. In fact, forget about "general", there aren't even any narrow and limited solutions. It's simply not a thing anyone can do. (And that's not because nobody's ever tried, either.)

That wikipedia article you linked isn't what you think. Autopar tools analyze source code (not running binaries) and rewrite it to spawn multiple threads. Autopar sometimes helps to provide more threads, but it doesn't actually meld two cores together.

But even if we try to look past that, there are still instructive difficulties. Unlike timeslicing (which lets us treat one CPU core as many lower speed CPU cores), autopar isn't general and trivial and universal. It relies on detecting patterns in source code which can be safely transformed into multiple worker threads, just as if the program's author had explicitly done so. When such patterns don't exist, which is quite common, autopar does nothing. Furthermore, most software which can benefit from autopar will benefit more from programmers doing the work themselves: autopar often does a mediocre job, and is fragile.
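To make that concrete, here's roughly the kind of pattern involved (my own sketch in C++, not taken from any particular autopar tool; the OpenMP pragma just stands in for the transformation an auto-parallelizer would try to make on its own):

#include <cstddef>
#include <vector>

// Independent iterations: element i depends only on element i, so the
// loop can safely be split across worker threads. This is the shape
// autopar tools look for; the pragma makes the split explicit.
void scale(std::vector<double>& v, double k) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(v.size()); ++i)
        v[i] *= k;
}

// Loop-carried dependency: iteration i needs the result of iteration
// i - 1, so the iterations can't simply be handed to separate cores,
// and an auto-parallelizer has to leave this loop alone.
void prefix_sum(std::vector<double>& v) {
    for (std::size_t i = 1; i < v.size(); ++i)
        v[i] += v[i - 1];
}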

I know it might seem logical that if you can go one direction quite easily, the inverse direction should be reasonable too. Maybe not as easy, but still common and well understood. But this isn't always true! In mathematics, there are a lot of heavily studied "trapdoor" functions where f(X) = Y is trivial to compute, but doing the inverse (if you know a value of Y, what value(s) of X produce it?) is insanely hard. (Trapdoor functions are very important, they're the foundation of cryptography.)
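(A toy illustration of that asymmetry, nothing cryptographically serious, just my own sketch: the forward direction of y = g^x mod p takes a handful of multiplies even for a huge exponent, while recovering x from y with the only generic tool available here, brute force, takes time proportional to the modulus.)

#include <cstdint>
#include <cstdio>

// Forward direction: modular exponentiation by repeated squaring,
// O(log e) multiplies. (Toy code: assumes m is small enough that the
// 64-bit multiplications never overflow.)
std::uint64_t modpow(std::uint64_t b, std::uint64_t e, std::uint64_t m) {
    std::uint64_t r = 1;
    b %= m;
    while (e > 0) {
        if (e & 1) r = (r * b) % m;
        b = (b * b) % m;
        e >>= 1;
    }
    return r;
}

// Inverse direction: brute-force discrete log, O(m) multiplies.
// Fine for the toy modulus below, hopeless at real-world sizes.
std::uint64_t dlog(std::uint64_t g, std::uint64_t y, std::uint64_t m) {
    std::uint64_t acc = 1;
    for (std::uint64_t x = 0; x < m; ++x) {
        if (acc == y) return x;
        acc = (acc * g) % m;
    }
    return m;  // not found
}

int main() {
    const std::uint64_t p = 1000003, g = 5, x = 123456;
    const std::uint64_t y = modpow(g, x, p);  // fast
    std::printf("recovered x = %llu\n", (unsigned long long)dlog(g, y, p));  // slow
}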

This is a bit like a very good trapdoor function. It may be trivial to make one full-speed CPU behave approximately like two half-speed CPUs, but it's essentially impossible to do the inverse.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
Sorry about that. Was thinking '32-bit registers' and wrote '32 registers'. 32 registers would have been a huge deal for a 1980-era system that had to be affordable under the transistor budget restrictions of those days.

In terms of register pressure it is further away from the PowerPC, but still not eyeball-deep in 'complicated' instructions. It isn't an extreme RISC adherent, but it isn't quite CISC either.
68K is very much a CISC, not a RISC. In fact, it was significantly more CISCy than x86, especially the post-68020 version of 68K.
 
  • Like
Reactions: Basic75

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
And did it throttle under sustained load?
Whether it throttles or not doesn't change the fact that this statement you made (recopied below) is factually incorrect. All the 27" Retina iMacs, starting with the first model introduced in late 2014, used desktop CPUs. How about being forthright and acknowledging that?
For iMac and Mac mini Apple only ever used laptop versions of Intel's newest CPUs.
*******
2019 is just one year before the M1. They already knew they were about to abandon Intel for good. Not a good time for a complete retooling of an entire production line.
You misunderstood my point. I wasn't saying they should have retooled in 2019. I'm saying they should have incorporated adequate cooling from the start of the new model line. I.e., they knew they were putting high-powered desktop chips into the Retina iMacs when they designed them, and they knew what the thermal consequences would be. Thus, when they introduced the new design in 2014, they should have incorporated sufficient cooling at that time.

Here's the thermal profile of the late 2014 27" Retina iMac, from Apple's own data. Power consumption with max CPU usage is 288W.
 
Last edited:

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
Look, no. This isn't a matter of preference or will.

We do in fact have a great need to make many processors work like one while also providing the aggregate of their individual performance. We've got so many cores in modern CPUs, and we frequently run software which doesn't provide enough runnable threads to utilize all those cores. So we could really use some technique which would let us use two or more cores to work together on one thread, running it faster than a single core can.

Unfortunately, there's no known general solution to this problem. In fact, forget about "general", there aren't even any narrow and limited solutions. It's simply not a thing anyone can do. (And that's not because nobody's ever tried, either.)

That wikipedia article you linked isn't what you think. Autopar tools analyze source code (not running binaries) and rewrite it to spawn multiple threads. Autopar sometimes helps to provide more threads, but it doesn't actually meld two cores together.

But even if we try to look past that, there are still instructive difficulties. Unlike timeslicing (which lets us treat one CPU core as many lower speed CPU cores), autopar isn't general and trivial and universal. It relies on detecting patterns in source code which can be safely transformed into multiple worker threads, just as if the program's author had explicitly done so. When such patterns don't exist, which is quite common, autopar does nothing. Furthermore, most software which can benefit from autopar will benefit more from programmers doing the work themselves: autopar often does a mediocre job, and is fragile.

I know it might seem logical that if you can go one direction quite easily, the inverse direction should be reasonable too. Maybe not as easy, but still common and well understood. But this isn't always true! In mathematics, there are a lot of heavily studied "trapdoor" functions where f(X) = Y is trivial to compute, but doing the inverse (if you know a value of Y, what value(s) of X produce it?) is insanely hard. (Trapdoor functions are very important, they're the foundation of cryptography.)

This is a bit like a very good trapdoor function. It may be trivial to make one full-speed CPU behave approximately like two half-speed CPUs, but it's essentially impossible to do the inverse.

Please look through my comments in this thread for context. In particular, you should find that I am repeatedly making the point that parallelization is hard and often inefficient.

My problem is with statements like this:

Unlike timeslicing (which lets us treat one CPU core as many lower speed CPU cores), autopar isn't general and trivial and universal.

Time slicing also isn't general or trivial, and in particular isn't universal. It is simply a set of tradeoffs that we've come to accept for a particular class of usage scenarios. There are significant added latencies in thread execution well beyond the clock scaling, there are all sorts of challenges in maintaining thread priorities and avoiding deadlock, there is a ton of extra code that needs to run to supervise the system, etc, etc, etc. All of that because we are trying to simulate a parallel system with a serial one.

If you and I each need to run a batch, do you think it is faster and more efficient for us to share one computer or to have two? I'd argue two is better, even if each is half as fast. If we had to share a computer that is twice as fast then I'd argue it is better to run one of our batches exclusively, and then follow it with the other rather than time slice between them making us both wait until both batches are complete.
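To put rough, made-up numbers on it: say each of our batches needs 10 minutes of pure compute on the fast machine. Run back to back, one of us is done at 10 minutes and the other at 20. Time sliced on that same machine, we're both waiting until roughly the 20 minute mark. On two half-speed machines we both also finish at 20 minutes, but neither of us ever has to care what the other is doing.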

The same is arguably true if I have two fully independent high performance computation loads to execute-- if they are independent, then there's no reason for them to spend time fighting for resources and slowing each other down. There are benefits to independence and reduced latency that time slicing can't hide.

And, further, the same is true for high performance systems where performance is measured by latency rather than throughput, and for so-called real-time systems, for a given definition of real time.

This is why, even in modern systems with fast processors, it's common to have separate co-processors for timing-critical functions (audio, video, motion tracking, radio communications, etc). Those functions are quite sensitive to timing and can't afford the tradeoffs inherent in time slicing even if they're relatively light loads.

Let's not forget that there have been decades of work trying to figure out how to make parallel tasks appear to work on a single execution core (which itself is a collection of parallel bits, but I'll leave that for a different rabbit hole), and designing hardware, drivers, operating systems and applications to function in that environment. We didn't get to this world without a ton of hard work.


So no, this isn't a "trapdoor function", there's just a number of criteria to consider and you're only thinking of some of them. There are tradeoffs in simulating in either direction-- your use case may just be less sensitive to one than the other. Most use cases may be more sensitive to one than the other. But that doesn't mean one direction is trivial and the other is impossible.
 
Last edited:

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
Whether it throttles or not doesn't change the fact that this statement you made (recopied below) is factually incorrect. All the 27" Retina iMacs, starting with the first model introduced in late 2014, used desktop CPUs. How about being forthright and acknowledging that?
Sure, I acknowledge the fact (implicitly, by not disputing it) and highlight that although they eventually did put desktop chips in the iMac, the cooling solution was never really designed to let them sustain full performance.
You misunderstood my point. I wasn't saying they should have retooled in 2019. I'm saying they should have incorporated adequate cooling from the start of the new model line. I.e., they knew they were putting high-powered desktop chips into the Retina iMacs when they designed them, and they knew what the thermal consequences would be.
And they didn't care. Form factor and silent operation are more important to Apple than peak, raw and overall performance.
Thus, when they introduced the new design in 2014, they should have incorporated sufficient cooling at that time.
No, they should set their own priorities and communicate them to customers via keynote.

 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
Please look through my comments in this thread for context. In particular, you should find that I am repeatedly making the point that parallelization is hard and often inefficient.
I did. I held off commenting because you were saying things which were so out there, I assumed I was misinterpreting, but you just keep saying nonsense.

Time slicing also isn't general or trivial, and in particular isn't universal. It is simply a set of tradeoffs that we've come to accept for a particular class of usage scenarios. There are significant added latencies in thread execution well beyond the clock scaling, there are all sorts of challenges in maintaining thread priorities and avoiding deadlock, there is a ton of extra code that needs to run to supervise the system, etc, etc, etc. All of that because we are trying to simulate a parallel system with a serial one.
* Time slicing is in fact trivial. Context switching code is tiny and easy to write, and so is a basic scheduler. Undergrad CS students taking OS courses often do it as a homework project. The only hardware required aside from a CPU and memory is a periodic timer interrupt, which is trivial. People have implemented timesliced operating systems for the 6502, the 8-bit CPU which powered the Apple II and many other home computers of the 1970s and early 1980s. (There's a small sketch of the idea after this list.)

* Most software doesn't care about these added latencies. This is why I am willing to call it "universal" - as in, it's a technology useful for probably 99.9% of the programs out there. And I'm probably being far too pessimistic about the number of 9s there, especially in the context of personal computing.

* There are no significant additional deadlock/priority/etc challenges compared to having two (or more) real cores. In fact, reality tends to work the other way: back when the personal computer industry started migrating to multi-core, developers started noticing lots of concurrency bugs that had been present for years, but were very hard to actually trigger on single-core machines. (There's a sketch of the classic failure mode after this list, too.)

* There is not a ton of extra code to supervise the system. Why would there be?

* Nobody actually "simulates" a parallel system with a serial one when they do this, everyone does it by yanking the CPU away from one process/thread and giving it to another for the next few milliseconds. There's literally no effort put into actually simulating simultaneity. I say this only because you keep using 'simulate' in contexts where it's not a good fit for what we're talking about, and it's unclear whether you understand that.
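To back up the first point about how small the core of timeslicing really is, here's a toy round-robin 'scheduler' in plain C++ (my own sketch, and cooperative rather than preemptive: in a real kernel a periodic timer interrupt forces the register save/restore instead of the task returning voluntarily, but the rotation logic is no bigger than this):

#include <cstdio>
#include <functional>
#include <vector>

// A "task" is just some state plus a step function that does a small
// amount of work and then returns, i.e. it yields the CPU voluntarily.
struct Task {
    const char* name;
    int remaining;                   // pretend units of work left
    std::function<void(Task&)> step;
};

int main() {
    std::vector<Task> tasks = {
        {"A", 3, [](Task& t) { std::printf("%s gets a slice\n", t.name); --t.remaining; }},
        {"B", 2, [](Task& t) { std::printf("%s gets a slice\n", t.name); --t.remaining; }},
    };

    // The entire scheduler: rotate through the runnable tasks, giving
    // each one slice per turn, until nothing is left to run.
    bool any_runnable = true;
    while (any_runnable) {
        any_runnable = false;
        for (auto& t : tasks) {
            if (t.remaining > 0) {
                t.step(t);
                any_runnable = true;
            }
        }
    }
}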
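And for the point about latent concurrency bugs, the classic shape looks like this (again my own sketch, not anyone's real code): two threads bumping a plain int. Timesliced on one core the threads rarely get interrupted mid-increment, so the lost updates are rare enough to go unnoticed for years; on two real cores the total comes up short on practically every run.

#include <cstdio>
#include <thread>

int counter = 0;  // plain int: no atomic, no lock, so this is a data race

void bump() {
    for (int i = 0; i < 1000000; ++i)
        ++counter;  // read-modify-write that is not atomic
}

int main() {
    std::thread a(bump);
    std::thread b(bump);
    a.join();
    b.join();
    // We'd like 2000000, but with two cores genuinely running at once
    // the increments clobber each other and the result falls short.
    std::printf("counter = %d\n", counter);
}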

This is why, even in modern systems with fast processors, it's common to have separate co-processors for timing-critical functions (audio, video, motion tracking, radio communications, etc). Those functions are quite sensitive to timing and can't afford the tradeoffs inherent in time slicing even if they're relatively light loads.
The actual reason we have separate coprocessors for these functions: we now have an embarrassment of transistors to spend on performance, power efficiency, and power-efficient performance.

Few of the things you listed actually have hard realtime requirements, and back in the day when transistors were a lot more expensive, they were frequently done on the CPU. As an example, in 2001's Mac OS X 10.0, audio was handled just fine by a timeslicing scheduler on single core Macs with zero assistance from coprocessors.

Let's not forget that there have been decades of work trying to figure out how to make parallel tasks appear to work on a single execution core (which itself is a collection of parallel bits, but I'll leave that for a different rabbit hole), and designing hardware, drivers, operating systems and applications to function in that environment. We didn't get to this world without a ton of hard work.
Dude. Timeslicing a CPU to run multiple tasks is technology which was reasonably mature by the end of the 1960s. It's not something which we had to slowly perfect over the decades since. This is not to say that there was no effort involved, but you seem to have a very strange view of the timeline and the difficulty.

So no, this isn't a "trapdoor function", there's just a number of criteria to consider and you're only thinking of some of them. There are tradeoffs in simulating in either direction-- your use case may just be less sensitive to one than the other. Most use cases may be more sensitive to one than the other. But that doesn't mean one direction is trivial and the other is impossible.
Nonsense. One direction is trivial, about 60 years old, and you are using it right at the very moment you read this post. The other is fundamentally very difficult, has defeated major attempts at cracking the problem, and you are not using it because it does not exist as a fielded technology.

I cannot overemphasize how out of touch you are if you think otherwise. It would be awesome if there really was magical technology which could, in a reasonably general way (I'd be happy about a tenth as much generality as timeslicing!), add together two independent cores and provide single-thread performance that is roughly the sum of their individual performance. I would love to see that. I don't expect to in my lifetime, and I don't expect anyone ever will.
 

leman

macrumors Core
Oct 14, 2008
19,518
19,668
@mr_roboto is right. Preemptive multitasking had been used for a very long time before we got multiple cores, and it's cheap and easy to implement. Pretty much the only potential problem is buggy multithreaded code that doesn't release its locks properly, and multi-core systems have the same problem.

Multi-core computing on the other hand is really, really hard, at the hardware, OS, and software level. Communicating changes between multiple cores and scheduling them is extremely complex. On the software side, you not only have to deal with the non-trivial problem of making your algorithms scale to multiple cores, but also with the fact that changes to data might appear out of order and happen while you are doing something else. I consider myself to be a reasonably competent system programmer and yet I still have difficulty wrapping my head around the C++ memory model and its implications.
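To give a flavour of the "changes might appear out of order" part, here is the textbook message-passing pattern (my own sketch): it is only correct because of the release/acquire pair; with memory_order_relaxed on both sides the reader would be allowed to see ready == true while still observing the old value of data.

#include <atomic>
#include <cassert>
#include <thread>

int data = 0;
std::atomic<bool> ready{false};

void producer() {
    data = 42;                                     // plain write
    ready.store(true, std::memory_order_release);  // publish it
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) { }  // wait for the flag
    // The acquire load synchronizes-with the release store, so the write
    // to data is guaranteed to be visible here. With relaxed ordering on
    // both sides, this assert would be allowed to fire.
    assert(data == 42);
}

int main() {
    std::thread c(consumer);
    std::thread p(producer);
    p.join();
    c.join();
}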
 
  • Like
Reactions: wegster and Basic75

leman

macrumors Core
Oct 14, 2008
19,518
19,668
The same is arguably true if I have two fully independent high performance computation loads to execute-- if they are independent, then there's no reason for them to spend time fighting for resources and slowing each other down. There are benefits to independence and reduced latency that time slicing can't hide.

Context switch time plus time lost to cache misses is under 3-4 microseconds on modern hardware. If your time slice is 1ms that’s less than 0.5% added overhead. Completely negligible. If you are running demanding independent work it makes sense to set the timeslice even longer, which will make the overhead respectively smaller.

And of course, doing this will make your system sluggish, as you'd need to interrupt the thread to handle system stuff. But the same is also true for two slower cores. So where I agree with you is that having a third (even small) core to handle basic system stuff and user interaction would be helpful. That's the beauty of Apple's E-cores.
 

leman

macrumors Core
Oct 14, 2008
19,518
19,668
By the way, something that might be relevant to the current discussion. When I was doing my power/frequency tests on the A17 I was surprised to see that the OS was routinely moving threads between P and E cores. When running a multi-core workload, each thread spent at least 20% of its time running on an E-core. So even if you are running an “optimal” number of threads the system still uses time-slicing and even migrates the threads between clusters. I suspect this is done to minimize the latency on an asymmetric system (you don’t want a thread to stay on an E-core for too long).
 

Muziekschuur

macrumors regular
Oct 14, 2023
120
5
Sun was heavily invested in its SPARC computers. I think when Oracle bought Sun, Apple was in on the deal. Apple and Oracle have very different markets. It is not a surprise that macOS is built on Unix.

To develop the current product line of Apple M processors ... I think 40 years of development doesn't even cover it.
 
  • Like
Reactions: talking pipe

picpicmac

macrumors 65816
Aug 10, 2023
1,239
1,833
Sun was heavily invested in its SPARC computers. I think when Oracle bought Sun, Apple was in on the deal. Apple and Oracle have very different markets. It is not a surprise that macOS is built on Unix.
Sun Microsystems was a big deal at one time, and yes their SPARC stuff was going to be the cat's meow. But much time has passed since then.

MacOS has its ancestry in UNIX because Apple was adrift with Jobs gone, Jobs had NeXT which had a nifty little box running a grandchild of UNIX, and then Apple brought Jobs back and obviously he brought his little nifty box with him.

The rest, as they say, is history.
 
  • Like
Reactions: wegster

JouniS

macrumors 6502a
Nov 22, 2020
638
399
Multi-core computing on the other hand is really, really hard, at the hardware, OS, and software level. Communicating changes between multiple cores and scheduling them is extremely complex. On the software side, you not only have to deal with the non-trivial problem of making your algorithms scale to multiple cores, but also with the fact that changes to data might appear out of order and happen while you are doing something else. I consider myself to be a reasonably competent system programmer and yet I still have difficulty wrapping my head around the C++ memory model and its implications.
Multicore programming is usually easy, as long as you only care about throughput. You can already do a lot of things with the trivial OpenMP approach, where you have shared immutable data, each thread has its own private mutable data, and you can only access shared mutable data within a single critical section. If you need something more flexible, you can extend the approach beyond the single critical section without too much trouble, as long as you are careful.
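Roughly this shape, to illustrate (my own sketch, assuming a compiler with OpenMP enabled): the input is shared and immutable, each thread keeps its own private running result, and the one critical section is the only place the shared mutable result is touched.

#include <cmath>
#include <vector>

double best_score(const std::vector<double>& input) {
    double best = -INFINITY;                // shared mutable state
    #pragma omp parallel
    {
        double local_best = -INFINITY;      // private to each thread
        #pragma omp for nowait
        for (long i = 0; i < static_cast<long>(input.size()); ++i) {
            double s = std::sin(input[i]);  // stand-in for the real per-element work
            if (s > local_best) local_best = s;
        }
        #pragma omp critical                // the single critical section
        if (local_best > best) best = local_best;
    }
    return best;
}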

But if you care about latency, things can get a lot more difficult. You may need to do something clever, which is a cardinal sin in multicore programming.
 

wegster

macrumors 6502a
Nov 1, 2006
642
298
Yes that's definitely true. With that load I had described, I couldn't have had any more than 4 active threads, and probably only a couple.
That’s not true, or can’t be said for certain. Modern software has a fairly complex dependency chain of libraries and numerous ‘helpers’ which often take the form of other threads.

I'm going to have to assume somewhere in 'the other 6-8 more pages' someone finally got across that:
1. Single-core scores are relevant, as in many to most cases the additional cores are similar to the first one, and for various reasons 8 cores does not translate into a straight 8x multiple of one core's performance.
2. The 'prefer 2 2.5GHz cores over 1 5GHz core' claim is, as stated, wrong. Running multiple threads and processes on a single fast core is more efficient in general: more of the working set stays in fast CPU/core-specific cache, there is no data transfer over much slower buses, and there's no need to marshal data between 'jobs' spread across cores. I stopped paying attention to the intimate details of CPU design, but I have worked in software for decades, including building numerous single- and multi-threaded as well as distributed systems. Some jobs, as stated, are difficult to split up effectively, but yes, many are, and that's become the standard. Still, having said that, a single core of X speed will outperform 2 cores of X/2 speed due to the additional latencies introduced.
3. Note that even single cores manage MANY tasks via the OS scheduler, and the faster that single core, and the more effective its scheduler, task mgmt, IPC, etc., the more overall load you can handle. On any modern CPU, your two-web-browser example is basically trivial, even for a single-core system.
 
  • Like
Reactions: APCX

wegster

macrumors 6502a
Nov 1, 2006
642
298
People thinking Intel and AMD don't make comparable chips are not following along. If you mean ARM, because they don't want to make ARM chips. Intel/AMD chips are still more diverse in what they can do. The real question should be why doesn't Apple push the server market.
Eh, they did once upon a time. G5 PPC servers. They've been making some strides towards improving things like 'enterprise management' (think of even a small school district needing to ensure specific software and update levels are in place across 500 MacBook Airs and iPads; it gets worse from there, but Apple does seem to be making inroads at this level anyway), but they still lack things like a form of Lights Out Management (think of this as a tiny OS used for overall system monitoring and, if need be, a hard reset of the entire server in the event the normally operating OS hangs or locks).

Further, part of NeXTSTEP (the core of what became OS X) was, as someone stated elsewhere, effectively a BSD Unix variant, BUT they effectively wrapped native BSD threads with 'Mach' (NeXT kernel) interfaces. They did this elsewhere in the overall NeXT/OS X OS as well, although I can't say for certain whether it was limited to kernel space or not, but the net effect is that instead of calling a 'user level function' which in some cases maps 1:1 onto an underlying heavily optimized core C or system function, there was another layer in between, going from Mach to BSD, performing additional context switching and increasing latency.

The short of it is that I did a handful of benchmarks way back, and the G5 server in general did not perform very well versus x86 Linux-based alternatives. I suspect a fair amount of it was down to the added latency and wrapping, as especially for heavy I/O or compute types of systems you wind up with a huge number of threads.

There are other things that Apple would need to seriously invest in before ‘servers’ were even remotely a viable market, from management software and things like (Azure or not) Active Directory, bulk update, software policies at scales to manage thousands of devices, etc… so yeah, I don’t think they’ll be heading back into that space any time soon, and it’s probably for the best honestly.
 

wegster

macrumors 6502a
Nov 1, 2006
642
298
IMHO, the difference between a great race car driver vs an average one is that the former knows how to exploit the best characteristics of his/her equipment, as different race circuits calls for different equipment setup that suits the driver's driving style. If you don't know your car, you will not be able to set it up properly to win races.

Back to software, I'm in the camp that in order to create great software, you need to know the hardware you're creating the software for. A processing element like a GPU calls for differently structured code than, say, a CPU, which will be different again from an FPU or NPU, e.g.

A solution like Electron is a terrible one.
There's a balance in there - a solid driver knows the behaviors and at least the high-level systems, but they don't need to know, for example, what the injector pulse/on times are vs. rpm and load in order to drive the heck out of it.

Same on the software comment, but it does depend on the software. I was once in massive 'shock' when working on a large-scale system management system. Basic stuff nowadays but not at the time - managing bare metal provisioning in datacenters, patch and compliance policies, etc. Every time someone would reconfigure a server's networking, the system would never return to mgmt visibility, so I took a quick look at one of the disconnected systems, then went to the dev owning that part of the code, asking them if they knew HOW to reconfigure networking (IP, netmask, gateway, etc.) at the command line/OS level. The response still annoys me to this day - 'no, I only know Java, so I use this method.' Yeah, so NOT a good education with Java as language 1, and apparently zero OS fundamentals, etc.

For near-real-time or realtime systems, embedded, drivers, ASIC and firmware type development - yeah, someone had BEST have a solid understanding of hardware, instruction sets, and many other things in order to ensure performance and base functionality. But for typical app development, especially e.g. dedicated front end/UI developers, but not JUST them, in-depth hardware knowledge just isn't needed, although I certainly still expect they're able to decipher WHY their code is running so slowly and how to improve it. Likewise for backend or services: understand the fundamentals at least well enough to profile your code using tools of your choice to measure, find, and reduce bottlenecks.

I'm not entirely disagreeing with you, and yeah, Electron is kind of a pig, but if the software runs 'well enough on the systems it's sold for' and isn't rt/embedded/HPC or similar, low-level hw knowledge just isn't 'needed', although it's usually a bonus.
 
  • Like
Reactions: quarkysg

FlyingTexan

macrumors 6502a
Jul 13, 2015
941
783
Eh, they did once upon a time. G5 PPC servers. They've been making some strides towards improving things like 'enterprise management' (think of even a small school district needing to ensure specific software and update levels are in place across 500 MacBook Airs and iPads; it gets worse from there, but Apple does seem to be making inroads at this level anyway), but they still lack things like a form of Lights Out Management (think of this as a tiny OS used for overall system monitoring and, if need be, a hard reset of the entire server in the event the normally operating OS hangs or locks).

Further, part of NeXTSTEP (the core of what became OS X) was, as someone stated elsewhere, effectively a BSD Unix variant, BUT they effectively wrapped native BSD threads with 'Mach' (NeXT kernel) interfaces. They did this elsewhere in the overall NeXT/OS X OS as well, although I can't say for certain whether it was limited to kernel space or not, but the net effect is that instead of calling a 'user level function' which in some cases maps 1:1 onto an underlying heavily optimized core C or system function, there was another layer in between, going from Mach to BSD, performing additional context switching and increasing latency.

The short of it is that I did a handful of benchmarks way back, and the G5 server in general did not perform very well versus x86 Linux-based alternatives. I suspect a fair amount of it was down to the added latency and wrapping, as especially for heavy I/O or compute types of systems you wind up with a huge number of threads.

There are other things that Apple would need to seriously invest in before ‘servers’ were even remotely a viable market, from management software and things like (Azure or not) Active Directory, bulk update, software policies at scales to manage thousands of devices, etc… so yeah, I don’t think they’ll be heading back into that space any time soon, and it’s probably for the best honestly.
Today is nothing like yesterday with Apple. They were resource-thin and had completely separate products. Now they're a $2.5T company, with all the funding they could want, all the programming power they could want, and many new tools at their disposal. They have a much more integrated OS, much better diagnostics, and hardware that definitely has areas where it excels on a performance-per-watt basis. They have their own hardware now. The company and capabilities now are nothing like the company and capabilities then. They're so large now, with so much data storage, AI, front- and back-end software and services, that they could be their own biggest client. Tesla is out there investing in a next-gen computer that's been holding their company valuation up. Apple is still focusing on the consumer but not on a very large and very untapped market that is extremely thirsty. The server market is vastly outpacing consumer PCs as its capability grows. I used to get a new laptop every year or year and a half because so much new tech and capability was coming out. My MacBook Pro 14" is built to last for what I do. They already have almost all the tech they'd need. They spent more making that damn VR headset. AppleCare+ has something like a 60% margin on it; why? Because services is where it's at. Apple has become a de facto powerhouse with the ecosystem we've bought into. Time to focus that integration capability on the business sector. Especially not a freakin' car.
 

SoldOnApple

macrumors 65816
Jul 20, 2011
1,280
2,186
Today is nothing like yesterday with Apple. They were resource-thin and had completely separate products. Now they're a $2.5T company, with all the funding they could want, all the programming power they could want, and many new tools at their disposal.
You're right. Apple today can afford to spend billions developing radio chips they can't even use. Steve Jobs was fired from yesterday's Apple for wasting less money on a more promising product.
 

Muziekschuur

macrumors regular
Oct 14, 2023
120
5
Sun Microsystems was a big deal at one time, and yes their SPARC stuff was going to be the cat's meow. But much time has passed since then.

MacOS has its ancestry in UNIX because Apple was adrift with Jobs gone, Jobs had NeXT which had a nifty little box running a grandchild of UNIX, and then Apple brought Jobs back and obviously he brought his little nifty box with him.

The rest, as they say, is history.
I like good marketing. But Jobs was in my humble opinion just a front man. I think behind it was a big, big force that still is in control of both Oracle and Apple. The apple with a bite out of it is a giveaway.
 

wegster

macrumors 6502a
Nov 1, 2006
642
298
Today is nothing like yesterday with Apple. They were resource-thin and had completely separate products. Now they're a $2.5T company, with all the funding they could want, all the programming power they could want, and many new tools at their disposal.
...

Yet Apple Business Manager is more of a mobile device mgmt system (e.g. for phones) than one for enterprise-type assets. Yes, you can view 'John's laptop' and deploy App Store apps, and they made some headway into 'custom apps', but e.g. I've seen no ability to check for compliance on specific policies, update levels, etc. They have made progress (and Education has a variant of Business Manager), which is good on them. I was a beta tester on their initial Exchange integration way back, to the point I got some free Apple swag (an Apple-branded hoodie, some other bits) from a lot of contributions - I still wouldn't use Mail.app over Outlook however, and it's not perfect at several levels.

Point being - work still needs to be done to get to 'simple device management' and that work continues to grow significantly once theoretical 'future servers' come into the mix.

Apple Device Mgmt solutions for anyone interested:
Apple's Business Manager: https://support.apple.com/guide/apple-business-manager/view-device-information-axm02774ff54/web

Jamf (third party, but seems to be more full-featured. Can't make out what their chart halfway down the page is, but would expect when dealing with hundreds or thousands of devices to have similar, e.g. show me systems not logging into corp SSO, show me device breakdown on OS/patchlevel, etc.)

They've done a good job of progressing some things forward, but not convinced they will re-enter the very competitive server space and enterprise mgmt software and services required of same.
 

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
68K is very much a CISC, not a RISC. In fact, it was significantly more CISCy than x86, especially the post-68020 version of 68K.
The only thing that makes the 68020 look "more CISCy" than x86 is the elaborate indirect addressing mode, which was as ill-considered as Intel's original idea of making the string operations one byte long. Even the BF (bitfield) ops have analogs in modern RISC processors. Claiming one architecture is more one or the other type is a fraught assertion.
 

MRMSFC

macrumors 6502
Jul 6, 2023
371
381
They've done a good job of progressing some things forward, but not convinced they will re-enter the very competitive server space and enterprise mgmt software and services required of same.
I think Apple in the enterprise space as a whole is a non-starter.

In my experience as an electronics lab technician at a fortune 500 company, I don’t think Apple could even get off the ground.

I don’t even know how well Macs can run things like MatLab, Simula, CAN adapters or any of the proprietary software that the companies that build the simulators have.

Hell, I wouldn’t bet on them being able to use serial ports.

And MacOS, as much as I love it, just can’t compete with Windows in terms of backwards compatibility. Which in an industry RIDDLED with legacy baggage is critical.

And if you need a Unix-compatible system, the machines ship with Ubuntu.

And there's the cost-sensitive issue that Dell, HP, Lenovo, etc. sell their hardware in bulk for dirt cheap to enterprises. Something that I have never heard of Apple doing.

There’s also the factor of things like bulk discounts for Microsoft Office, Sharepoint, etc. Not even addressing how the Mac versions of Office are gimped compared to the Windows version.

I love my Mac, don’t get me wrong. Apple has done some amazing work with them lately. But they’re what I call an “individual user centric” experience. They’re really more for individual or small business creative types, or “prosumers” rather than enterprise types.
 