
ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
We may need to rewind a bit to the start of the discussion, where it was claimed that porting frameworks/libraries over to different architectures is no problem.

Once you have done that, you might start looking into issues like cache misses, which may not even be related to the architecture but to different implementations/generations of the same one. So Intel vs. AMD, Ryzen vs. Threadripper, 9th-gen i7 vs. 12th-gen, or Apple vs. Qualcomm.

But you still have a working binary for every one of those; some may just fall 10% short of their maximum performance.
In some applications, like HFT, being 10% slower is the same as not working at all. I was responding specifically to the claim that "As others have said, most developers don't (need to) care about what processor they are designing for anymore".
 

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
The only optimizing you do to avoid cache misses is to maximize locality. Nobody takes the cache size into account, because the cache size will change from machine to machine even within the same processor family and generation, and because you are running other apps and the operating system simultaneously, so you can never know how much cache you actually have at any given time. Not to mention the complications caused by multiple levels of cache.

The only minor exceptions would be for things like embedded devices running RTOS or no OS at all, and those are fewer and fewer.
I agree that you will always be running the OS; other apps, maybe not. As for the cache size varying, that is certainly true, but if you pick the CPU you use, you will know the cache size.
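As a minimal sketch of what maximizing locality looks like in practice (a hypothetical plain-C example; the array name and size are made up, not from either post), the same work can mostly hit or mostly miss the cache purely depending on traversal order, without knowing the cache size at all:

Code:
#include <stddef.h>

#define N 1024

/* Row-major traversal: memory is walked contiguously, so each cache line
   that gets fetched is fully used before moving on -- good locality. */
long sum_row_major(const int a[N][N]) {
    long sum = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

/* Column-major traversal of the same data: each access jumps N * sizeof(int)
   bytes, touching a new cache line almost every time -- poor locality,
   regardless of how big the cache actually is. */
long sum_col_major(const int a[N][N]) {
    long sum = 0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}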
 

poorcody

macrumors 65816
Jul 23, 2013
1,339
1,584
You are still using an embedded web view to render the UI. Nothing new about that; it's conceptually not that different from an Electron app, but on the .NET Core platform. Many mobile apps use the same approach.
No, there are a lot of differences. Using a web view is optional; you can also use only native code, use a cross-platform UI layer over native components, and mix and match the combinations per platform. My Blazor web code also doesn't run via an internal web server; it runs natively on the .NET runtime. Unlike Electron, you don't embed and instantiate a Chromium (and JavaScript engine) for each instance; it uses the native machine's browser component, among many other things. It's much lighter-weight, higher-performance, and more flexible than Electron.
 

cocoua

macrumors 65816
May 19, 2014
1,011
628
madrid, spain
ARM has structural advantages over x86 and Apple just showed the world what those are. In response, Intel and AMD will have to eventually go to ARM or RISC. And they will have to go through the same, painful transition that Apple is going through right now.

Intel's i9-12900 is faster than Apple's M1 in single-core and multi-core Geekbench 5. The i9-12900 should start shipping later this year or early next year. The i9-12900 needs 250 watts to beat the M1 running at 20 watts, though. Intel wins!

AMD recently won a big contract with Cloudflare for edge servers. Intel's performance was fine, but their CPUs used hundreds of watts more than AMD's. I don't think that Apple will care to compete in many of the places where Intel and AMD compete, like servers. But there will be other companies, like Nvidia, making ARM server chips which should have the same big performance-per-watt advantages over x86, just like Apple.

The i9-12900 ships next year; by then, there will be an M2.
Interesting. So you think that the whole market will shift towards ARM, because it is inherently superior? Which, in turn, would force developers to focus on developing for ARM, and it won't matter if they develop for Apple ARM or AMD Arm, since it will be the same thing, like in the case of AMD and Intel today?
This is something everyone in the processor industry has known since the M1's worldwide presentation.
There is no doubt! Apple did the transition as soon as they could, which means there are more leaps coming in the next few years.
Moreover, Intel could make similar or faster processors for workstations, but laptops are going to be ARM no matter what.
 

poorcody

macrumors 65816
Jul 23, 2013
1,339
1,584
"March 2011"

What needed "special HW" back then could be done with an rPi today...
No, the problem is that the trading software is running an OS, which just has too much overhead to accomplish what they wanted. The FPGA simply shortcuts execution of a specific task. Not even an RTOS can compete with that. Remember, they are competing with other traders, so no matter how fast it goes, it needs to be faster than "the other guys".
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
For office work and posting stuff on MacRumors, for sure, but there are still jobs where, even with the fastest PC, you end up "waiting for results" half your working hours.
Not any jobs our PCs do, though I definitely know those jobs exist, and one should buy the right computer for the job! Our Power9 gets those tasks.
 

Realityck

macrumors G4
Nov 9, 2015
11,433
17,223
Silicon Valley, CA
I think that the biggest problem will be getting developers on board for Apple. Apple will have to take a huge chunk of the market for developers to bother developing specifically for Apple Silicon, and that's something we just can't predict at this moment.
Did you ever go through this June 2021 thread?

 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
To be fair to the C++ guys, templates were added to the language in 1990, long before C# and Java had generics and indeed before those languages even existed.
Sure, but the templates thing was just one example. Point was more just that I generally find the C++ community can be a bit exclusionary at times. That's again not a dig at anyone, just my personal experiences and only online. In person everyone's lovely. But even Bjarne Stroustrup seemed to think it was a bit of a problem last I saw him. At least he showed a lot of concern for getting new people involved with an ever more complex language like C++ and making the environment nice for them to come into.
Swift is actually my favorite language right now but I find it works best for apps targeting the Apple platforms. I have not used OCaml but I did play around a bit with F# which is sort of OCaml.NET.
Swift is great. Presently my favourite language too. And yeah, F# is basically the .NET OCaml :) Truth be told, I don't like OCaml all that much. It's got a design that makes some things just super simple to do, but it also feels like it makes other things really hard to do. Last time I used it I made a compiler with it (compiled to LLVM, so Chris Lattner really has been a hero for me, both Swift and LLVM :p ); it was pretty good for that because it was fairly easy to do things in a pretty functional manner and really utilise pattern matching heavily. The project I am working on now with OCaml is something completely different though: distributed systems code. So far I miss C, which isn't a great sign, haha. But the university would like it done in OCaml because the goal is to do formal verification on implementations of advanced distributed systems algorithms and we have a lot of tooling in OCaml - the whole thing is partially funded by Amazon. So yeah, that's the fun I get up to, haha. But no, Swift is great, and while I haven't experimented much with it yet, the new structured concurrency model looks pretty solid. Hopefully it lives up to the name and does for concurrency what structured programming did for programming in general.
Anyone writing high performance code in C or C++ (e.g. low latency trading applications) is going to care about the hardware the code is running on.
In some applications, like HFT, being 10% slower is the same as not working at all. I was responding specifically to the claim that "As others have said, most developers don't (need to) care about what processor they are designing for anymore".
I'd argue that in a great deal of cases you can even do all this in a portable way, making the amount of developers who "need" to worry about this an even smaller subset of the already small subset who do write code to such a performance standard that 10% slower == not working at all.

Though I mean I guess it depends what we mean when we say caring about the hardware it runs on. I've never written code for HFT or anything similar to that - but I have code where I consider the platform it's likely to run on. It *will* run on any chip but it's tuned for some assumptions - Well, I say "any chip" - come to think of it; If you send data over a socket on a big endian chip and read it on a little endian chip do you need to swap the ordering yourself or does the socket layer enforce the ordering? Cause if you need to worry about that, then some of my stuff only works when machines have the same endianness, haha. But ARM and x86 are both little endian anyway. Think the only code I've ever written that's x86 specific is the OS I made for my bachelor project, and the final part of the compiler I made - And the final part wasn't really necessary. We already had LLVM-IR so we could just pass that to LLVM and we'd be done. We only made a backend as a learning exercise.
Some Metal code I've written will work optimally if you can assume a minimum threadGroupSize being supported by the GPU hardware. Similarly you may utilise AVX and have no NEON/SVE alternative because you mainly target x86 chips with the relevant extensions - but whatever the case; For the consumer devices Apple is making, the most important factor is "will it run" so I don't see any of these considerations to be a hurdle for developer adoption of the Apple Silicon platform
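To make that AVX-but-no-NEON case concrete, here is a minimal sketch (hypothetical plain C; the function and names are made up): the tuned path is only compiled in where the compiler advertises the extension, and everywhere else, Apple Silicon included, the scalar loop still simply runs:

Code:
#include <stddef.h>

#ifdef __AVX2__
#include <immintrin.h>
#endif

/* Sum an array of floats. The AVX2 fast path only exists on x86 builds
   that advertise the extension; on every other target (ARM included)
   the portable scalar loop below does all the work -- slower, but it runs. */
float sum_floats(const float *x, size_t n) {
    size_t i = 0;
    float total = 0.0f;
#ifdef __AVX2__
    __m256 acc = _mm256_setzero_ps();
    for (; i + 8 <= n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(x + i));
    float lanes[8];
    _mm256_storeu_ps(lanes, acc);
    for (int k = 0; k < 8; k++)
        total += lanes[k];
#endif
    for (; i < n; i++)   /* scalar tail, and the whole loop on non-AVX2 targets */
        total += x[i];
    return total;
}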
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Sure, but the templates thing was just one example. Point was more just that I generally find the C++ community can be a bit exclusionary at times. That's again not a dig at anyone, just my personal experiences and only online. In person everyone's lovely. But even Bjarne Stroustrup seemed to think it was a bit of a problem last I saw him. At least he showed a lot of concern for getting new people involved with an ever more complex language like C++ and making the environment nice for them to come into.

Swift is great. Presently my favourite language too. And yeah, F# is basically the .NET OCaml :) Truth be told, I don't like OCaml all that much. It's got a design that makes some things just super simple to do, but it also feels like it makes other things really hard to do. Last time I used it I made a compiler with it (compiled to LLVM, so Chris Lattner really has been a hero for me, both Swift and LLVM :p ); it was pretty good for that because it was fairly easy to do things in a pretty functional manner and really utilise pattern matching heavily. The project I am working on now with OCaml is something completely different though: distributed systems code. So far I miss C, which isn't a great sign, haha. But the university would like it done in OCaml because the goal is to do formal verification on implementations of advanced distributed systems algorithms and we have a lot of tooling in OCaml - the whole thing is partially funded by Amazon. So yeah, that's the fun I get up to, haha. But no, Swift is great, and while I haven't experimented much with it yet, the new structured concurrency model looks pretty solid. Hopefully it lives up to the name and does for concurrency what structured programming did for programming in general.


I'd argue that in a great deal of cases you can even do all this in a portable way, making the amount of developers who "need" to worry about this an even smaller subset of the already small subset who do write code to such a performance standard that 10% slower == not working at all.

Though I mean I guess it depends what we mean when we say caring about the hardware it runs on. I've never written code for HFT or anything similar to that - but I have code where I consider the platform it's likely to run on. It *will* run on any chip but it's tuned for some assumptions - Well, I say "any chip" - come to think of it; If you send data over a socket on a big endian chip and read it on a little endian chip do you need to swap the ordering yourself or does the socket layer enforce the ordering? Cause if you need to worry about that, then some of my stuff only works when machines have the same endianness, haha. But ARM and x86 are both little endian anyway. Think the only code I've ever written that's x86 specific is the OS I made for my bachelor project, and the final part of the compiler I made - And the final part wasn't really necessary. We already had LLVM-IR so we could just pass that to LLVM and we'd be done. We only made a backend as a learning exercise.
Some Metal code I've written will work optimally if you can assume a minimum threadGroupSize being supported by the GPU hardware. Similarly you may utilise AVX and have no NEON/SVE alternative because you mainly target x86 chips with the relevant extensions - but whatever the case; For the consumer devices Apple is making, the most important factor is "will it run" so I don't see any of these considerations to be a hurdle for developer adoption of the Apple Silicon platform

Sockets don’t automatically resolve endianness. Usually what you do is use macros (htonl(), etc.) to send things in an agreed-upon format. You are just sending bytes, anyway (if you are operating at the lowest level), so it all depends on what functions you are using to write values into the socket and receive them.
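A minimal sketch of that htonl() pattern (hypothetical helper functions; error handling omitted): the sender converts to network byte order before writing the raw bytes, the receiver converts back, and the two ends agree regardless of the native endianness of either chip:

Code:
#include <arpa/inet.h>   /* htonl(), ntohl() */
#include <stdint.h>
#include <unistd.h>      /* write(), read() */

/* Host -> network (big-endian) order before the bytes hit the wire. */
void send_u32(int sockfd, uint32_t value) {
    uint32_t wire = htonl(value);
    write(sockfd, &wire, sizeof wire);   /* just bytes, as far as the socket cares */
}

/* Network -> host order on the way back out. */
uint32_t recv_u32(int sockfd) {
    uint32_t wire = 0;
    read(sockfd, &wire, sizeof wire);
    return ntohl(wire);
}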
 

satcomer

Suspended
Feb 19, 2008
9,115
1,977
The Finger Lakes Region
The power is there, but AMD and even possibly Intel will catch up in time. I fear the future market fragmentation, with developers having to develop specifically for Apple Silicon ARM and just not having the time to do so.

Not to mention that games for Apple Silicon are just not a thing, and gaming is a huge part of the PC market, and realistically will probably never be a thing, since Apple and gaming just don't work together.

Even the new 10nm Intel CPUs will be much better than before, and AMD is already doing great in raw power.

The idea of Apple controlling both software and hardware is great, something they've been trying to do for decades, but the big question is how the support from the developers will be.

I look forward to the power, but I'm just not so sure about the future.

I am a complete noob and have no idea what I'm talking about in this area, but I'm just wondering what other people here think.

The draconian Windows 11 specs seem to lock out non-Intel Macs, so most gamers on AMD machines will have to stay on 10! Too bad about that!
 

bradl

macrumors 603
Jun 16, 2008
5,952
17,447
Lame, you're clearly wrong, since I'm talking about now in the consumer market while we are talking about Apple Silicon. I told you it's about the consumer market and yet you brought up server-grade parts. I'm not wrong. You are.

And the bolded part goes back to your goalpost shift. That is where you are wrong. You made a broad claim, were called out on it, and have now shifted your line in the sand to maintain your stance. That is the problem here, and even that has been called out:

It doesn’t really matter if we constrain ourselves to consumer devices. Raspberry Pi is a consumer device running a Broadcom ARM SoC, Snapdragon has made several laptop ARM chips. Microsoft has, in beta, a Rosetta equivalent for running x86 on ARM on Windows now, and while the whole consumer market couldn’t currently be satisfied without AMD, Nvidia and Intel, there certainly already exist consumer devices that don’t rely on them. Take out TSMC and Global Foundries on the other hand and we get serious issues, haha.

Good points here. I didn't even bring in the Raspberry Pi, whose SoC Broadcom makes, and while they are SoCs, they are geared to the consumer market. So even that kills the final argument.

With that said, this part is completely buried. And like I said before about Apple Silicon: if these other companies have been able to make and produce their own CPUs over the past 30 to 40 years, with all the aforementioned fragmentation and contraction, and still continue to make them to this day, then Apple Silicon is in no danger of being made obsolete by any other CPU manufacturer.

BL.
 
Reactions: KeithBN and sunny5

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
Sockets don’t automatically resolve endianness. Usually what you do is use macros (htonl(), etc.) to send things in an agreed-upon format. You are just sending bytes, anyway (if you are operating at the lowest level), so it all depends on what functions you are using to write values into the socket and receive them.

Figured as much. The code I've worked with at the socket layer has all just been on homogeneous machines, so it hasn't really mattered.
Thanks for confirming it :)
 

Wolfpup

macrumors 68030
Sep 7, 2006
2,929
105
The power is there, but AMD and even possibly Intel will catch up in time. I fear the future market fragmentation, with developers having to develop specifically for Apple Silicon ARM and just not having the time to do so.

Not to mention that games for Apple Silicon are just not a thing, and gaming is a huge part of the PC market, and realistically will probably never be a thing, since Apple and gaming just don't work together.

Even the new 10nm Intel CPUs will be much better than before, and AMD is already doing great in raw power.

The idea of Apple controlling both software and hardware is great, something they've been trying to do for decades, but the big question is how the support from the developers will be.

I look forward to the power, but I'm just not so sure about the future.

I am a complete noob and have no idea what I'm talking about in this area, but I'm just wondering what other people here think.
“Catch up” is the wrong framing. Apple is the one that caught up more or less to Intel and AMD, not the other way around. Apple is making CPUs that are legitimately on par with high-end stuff. Right now AMD’s architecture is the most powerful, but all three companies have very similar, very high-end CPUs.

Apple includes a bunch of hardware separate from the CPU and GPU… Arm is doing some of that also, as is I think Intel, and I don’t know how well any of that hardware is ever going to be used by anyone but like Apple for their own programs. It seems somewhat dubious to me, I would rather have more or bigger CPUs, bigger GPUs that can be used by all programs.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
“Catch up” is the wrong framing. Apple is the one that caught up more or less to Intel and AMD, not the other way around. Apple is making CPUs that are legitimately on par with high-end stuff. Right now AMD’s architecture is the most powerful, but all three companies have very similar, very high-end CPUs.

Apple includes a bunch of hardware separate from the CPU and GPU… Arm is doing some of that also, as is I think Intel, and I don’t know how well any of that hardware is ever going to be used by anyone but like Apple for their own programs. It seems somewhat dubious to me, I would rather have more or bigger CPUs, bigger GPUs that can be used by all programs.

AMD and Intel need to catch up with Apple in performance/watt. If they don’t, the fact that they can sell CPUs that are 5% faster but require a nuclear reactor to power up and an A/C unit to cool isn’t going to do them much good.
 