
casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
They had the tree-based std::map. Prior to C++98, though, we were using third-party libraries like RogueWave, which did have support for hash dictionaries. RogueWave support was bundled with the Solaris C++ compilers, which slowed down the migration to C++98 and the STL.
Huh, that’s interesting. Thanks for the brief history lesson; appreciate it. Hash maps are so useful it’s hard to imagine an STL without them, especially with how much stuff is in the STL these days.

I like a lot about C++, but it really feels like it’s a victim of its long development history; we got to where we are not as a design goal but as a coincidence of development trajectories. If we could just start over with C++ today, forget all backwards compatibility, and have all the people who’ve worked on it share their experience and form a new design from the bottom up, it’d be fun to see what’d come of it.

I also sometimes feel like the C++ community hurts itself by using terminology that makes things sound a lot scarier than they are. Before I touched C++ myself I heard all these things about templates and template metaprogramming. It seemed a bit frightening; made me think I’d have to learn brand-new concepts. But in reality it’s just the C++ spin on generics. I don’t think there’s anything wrong with calling them templates - that’s not what I mean - I just feel like when people talk about things like this they often fail to capture the essence of the idea, and the first thing they throw at you when you ask “what are templates?” is extreme esoteric nonsense, when all you wanted to know was “it’s a way of making functions generic in C++”. Same with the new Concepts - “What are Concepts in C++?” “Oh, it’s this feature that was proposed for C++14 but is only now being implemented in C++20, and it has undergone these revisions for these reasons with all these considerations […]”. Maybe that’s just been the people I’ve seen online, but with other languages it typically feels much simpler to get a first-look understanding of something.
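
To make that one-liner concrete (a trivial sketch of my own, nothing deeper than the “generics” point):

[CODE]
#include <iostream>
#include <string>

// A template really is just C++'s take on generics: one definition,
// instantiated for whatever types you call it with.
template <typename T>
T largest(T a, T b) {
    return (a < b) ? b : a;
}

int main() {
    std::cout << largest(3, 7) << '\n';                       // int
    std::cout << largest(2.5, 1.0) << '\n';                   // double
    std::cout << largest(std::string("a"), std::string("b")); // std::string
}
[/CODE]
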
I don’t want to have a dig at C++ - I like C++. But as I think I may have mentioned, I almost use it like C With Classes, haha. I don’t write good idiomatic C++. I’m getting more and more up to date with the idiomatic way of doing C++, but I’m experienced in C, so my go-to way of solving a problem is a C procedure where I may use std::vector instead of a C array. I had a point with saying all this, but I kinda forgot it along the way. Something about the nice simplicity of C; the features of C++ being lovely additions that make your life easier; the overhead of getting familiar with all of it being very high; and taking a pragmatic approach with little-by-little adoption of more idiomatic C++, probably.

Alas, for the foreseeable future I have only OCaml and Swift projects on the table :p
 

grandM

macrumors 68000
Oct 14, 2013
1,520
302
This is absolutely inane thinking. You're suggesting that Apple will stop innovating right now with AS, has no further plans to make more powerful and efficient processors, and has zero interest past the current M1. You realize that's the ONLY way Intel or AMD could catch up, right? Or are you in denial?

Also, I'm not sure why you see this as some "competition". Apple is making processors for their computers to run macOS. AMD and Intel processors, moving forward, will be built into Windows machines, so why does it matter what they do vs. what Apple does? It's not like people will dump Macs simply because other Windows machines have better processors. Not everybody wants to run Windows.

On the one hand, people complain that Apple is no longer innovating in regards to the Mac. Then Apple makes something revolutionary, enough to make Intel cry like a child, and the M1 gets not only great press but amazing reviews, and people just want to bring Apple down, often because they can't stand to see them on top. SMH.

Yeah, and nobody saw the M1 coming either. Apple doesn't have to say publicly what their plans are. If people don't think Apple knows what they're doing at this point, then please leave Apple alone and focus on another company's.....cough cough, "Lack of Innovations".

Good for AMD....

Then you should support companies you trust. Simple as that.

Agreed.
Apple wrote the book on innovation.
 
  • Like
Reactions: Maconplasma

Shreducator

Cancelled
Oct 17, 2020
201
309
For most consumer products I believe they are positioned well. As far as the highest-end workstations go, AMD currently seems to be ahead with the Threadripper chips.
 

poorcody

macrumors 65816
Jul 23, 2013
1,339
1,584
Sure we can; running the software we need is far more important than that. In fact, electricity for computers is a VERY minimal part of a business's costs, so much so that nobody cares about it. It comes under the lights budget.

Big server farms, yes, there it makes a difference; desktop PCs and local servers, not so much.
Electricity usage and laptop battery life are just ancillary advantages. Performance per watt is about room to grow: you simply can't keep increasing performance by generating more heat. They are already bumping up against thermodynamic limits (the fact that liquid-cooled desktops are mainstream is kind of nuts).

If x86 processors want to stay in the performance race, they are going to have to do something fundamentally different pretty soon. Microsoft sees that -- and they have to hedge their bets.
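
A back-of-the-envelope way to see the wall (first-order approximation on my part, not exact figures): dynamic CPU power scales roughly as P ≈ α·C·V²·f (activity factor × switched capacitance × supply voltage squared × clock frequency), and pushing f higher usually requires raising V too. So power grows close to cubically with clock speed while performance grows at best linearly -- chasing clocks torches performance per watt, which is exactly the headroom Apple is exploiting.
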
 

Bug-Creator

macrumors 68000
May 30, 2011
1,785
4,717
Germany
Yeah, and nobody saw the M1 coming either.

Erm, plenty of random YouTubers made spot-on predictions of what an "A14X" would perform like, based on the A12Z in the DTK and the A13 in the iPhones.
Heck, these "Mac on ARM" rumours have been going around since at least 2018, and for sure you will find good info on how much an A10 could scale up.

So anybody who didn't see it coming (+/- 20%) just wasn't looking....
 
  • Like
Reactions: KeithBN

poorcody

macrumors 65816
Jul 23, 2013
1,339
1,584
The power is there, but AMD and possibly even Intel will catch up in time. I fear future market fragmentation, with developers having to develop specifically for Apple Silicon ARM and just not having the time to do so.
As others have said, most developers don't (need to) care about what processor they are designing for anymore. It's all about the APIs they are dealing with. The native processor is relevant to the people who make the OS, compilers, toolkits, etc. (Legacy code and some exceptions apply, of course.)

I think even the API race is going to consolidate. Native desktop apps were always better than those built with the various cross-platform toolkits because they felt higher quality. With the proliferation of the web, however, web-like UIs are now so common that cross-platform apps with web-based UIs are more accepted (hence Electron apps like Slack and Visual Studio Code).

I think this trend will continue, and the tools will get much better compared to the clunkiness of things like Electron (e.g. Microsoft's .NET MAUI, coming out in November, looks to handle cross-platform/web apps quite elegantly). So the competition may be over APIs (why learn .NET and SwiftUI and HTML5/CSS when you can learn just one?).

If this turns out to be the case, Apple having a processor advantage may be all the more important, since there will be less to differentiate platforms...
 
  • Like
Reactions: bobcomer

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
Huh, that’s interesting. Thanks for the brief history lesson; appreciate it. Hash maps are so useful it’s hard to imagine an STL without them, especially with how much stuff is in the STL these days.

I like a lot about C++, but it really feels like it’s a victim of its long development history; we got to where we are not as a design goal but as a coincidence of development trajectories. If we could just start over with C++ today, forget all backwards compatibility, and have all the people who’ve worked on it share their experience and form a new design from the bottom up, it’d be fun to see what’d come of it.

I also sometimes feel like the C++ community hurts itself by using terminology that makes things sound a lot scarier than they are. Before I touched C++ myself I heard all these things about templates and template metaprogramming. It seemed a bit frightening; made me think I’d have to learn brand-new concepts. But in reality it’s just the C++ spin on generics. I don’t think there’s anything wrong with calling them templates - that’s not what I mean - I just feel like when people talk about things like this they often fail to capture the essence of the idea, and the first thing they throw at you when you ask “what are templates?” is extreme esoteric nonsense, when all you wanted to know was “it’s a way of making functions generic in C++”. Same with the new Concepts - “What are Concepts in C++?” “Oh, it’s this feature that was proposed for C++14 but is only now being implemented in C++20, and it has undergone these revisions for these reasons with all these considerations […]”. Maybe that’s just been the people I’ve seen online, but with other languages it typically feels much simpler to get a first-look understanding of something.
I don’t want to have a dig at C++ - I like C++. But as I think I may have mentioned, I almost use it like C With Classes, haha. I don’t write good idiomatic C++. I’m getting more and more up to date with the idiomatic way of doing C++, but I’m experienced in C, so my go-to way of solving a problem is a C procedure where I may use std::vector instead of a C array. I had a point with saying all this, but I kinda forgot it along the way. Something about the nice simplicity of C; the features of C++ being lovely additions that make your life easier; the overhead of getting familiar with all of it being very high; and taking a pragmatic approach with little-by-little adoption of more idiomatic C++, probably.

Alas, for the foreseeable future I have only OCaml and Swift projects on the table :p
To be fair to the C++ guys, templates were added to the language in 1990, long before C# and Java had generics - indeed, before those languages even existed.

Swift is actually my favorite language right now, but I find it works best for apps targeting the Apple platforms. I have not used OCaml, but I did play around a bit with F#, which is sort of OCaml.NET.
 

sunny5

macrumors 68000
Jun 11, 2021
1,838
1,706
The only problem with Apple Silicon might be the unified memory. Clearly, LPDDR4 or 5 is not enough to replace GDDR6 and HBM2 for the GPU in terms of bandwidth, and that will affect performance. You could use HBM2 as unified memory, but it's very expensive. Apple really needs to prove this with the upcoming M1X MacBook Pro, because that's the most doubtful part of Apple Silicon.
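
Rough numbers to show the gap (representative figures I'm assuming, not official specs): peak bandwidth ≈ bus width × transfer rate. The M1's 128-bit LPDDR4X-4266 works out to 16 B × 4.266 GT/s ≈ 68 GB/s. A midrange GDDR6 card with a 256-bit bus at 14 Gb/s per pin gets 32 B × 14 GT/s ≈ 448 GB/s, and HBM2 parts go well beyond that. That's the bandwidth deficit Apple has to close or design around.
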
 
  • Haha
Reactions: thenewperson

Maconplasma

Cancelled
Sep 15, 2020
2,489
2,215
Erm, plenty of random YouTubers made spot-on predictions of what an "A14X" would perform like, based on the A12Z in the DTK and the A13 in the iPhones.
Heck, these "Mac on ARM" rumours have been going around since at least 2018, and for sure you will find good info on how much an A10 could scale up.

So anybody who didn't see it coming (+/- 20%) just wasn't looking....
You missed the point entirely. I was referring to the kind of performance the M1 has delivered, coupled with the amazing battery life and low heat output, giving rival Windows machines in the same class a huge slam-down.
 

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
As others have said, most developers don't (need to) care about what processor they are designing for anymore. It's all about the APIs they are dealing with. The native processor is relevant to the people who make the OS, compilers, toolkits, etc. (Legacy code and some exceptions apply, of course.)

I think even the API race is going to consolidate. Native desktop apps were always better than those built with the various cross-platform toolkits because they felt higher quality. With the proliferation of the web, however, web-like UIs are now so common that cross-platform apps with web-based UIs are more accepted (hence Electron apps like Slack and Visual Studio Code).

I think this trend will continue, and the tools will get much better compared to the clunkiness of things like Electron (e.g. Microsoft's .NET MAUI, coming out in November, looks to handle cross-platform/web apps quite elegantly). So the competition may be over APIs (why learn .NET and SwiftUI and HTML5/CSS when you can learn just one?).

Anyone writing high-performance code in C or C++ (e.g. low-latency trading applications) is going to care about the hardware the code is running on.

Electron apps are nowhere near as nice to use as a well-written native app, and they consume lots of RAM. However, Microsoft's .NET MAUI is an evolution of Xamarin.Forms, which is in no way a web-based technology. It is a cross-platform mobile GUI framework that has more in common with other XAML/.NET-based GUI frameworks such as WPF than it does with web technologies. I am actually a little surprised it took Microsoft this long to extend it to support desktop environments. Being able to write one app to support all major mobile and desktop platforms is an interesting idea. OTOH, Microsoft does have a sad history of half-abandoning its GUI platforms.
 
  • Like
Reactions: grandM

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
Electricity usage and laptop battery life are just ancillary advantages. Performance per watt is about room to grow: you simply can't keep increasing performance by generating more heat. They are already bumping up against thermodynamic limits (the fact that liquid-cooled desktops are mainstream is kind of nuts).

If x86 processors want to stay in the performance race, they are going to have to do something fundamentally different pretty soon. Microsoft sees that -- and they have to hedge their bets.
Performance isn't as important as what it runs. It's a nice cherry on top if you can have both, but Intel PCs are already fast enough that users don't really notice any difference (given the PC has an SSD rather than a spinning disk).
 

Bug-Creator

macrumors 68000
May 30, 2011
1,785
4,717
Germany
You missed the point entirely. I was referring to the kind of performance the M1 has delivered, coupled with the amazing battery life and low heat output, giving rival Windows machines in the same class a huge slam-down.

And that for sure could have been (and was) anticipated by those who understood how (and why) those older A-series SoCs could scale up if Apple decided to make a desktop/laptop version of them.

Anyone writing high-performance code in C or C++ (e.g. low-latency trading applications) is going to care about the hardware the code is running on.

"Anyone" writing C or C++ just wants to get it through the compiler and have it run on capable HW. Your "low-latency trading applications" are still far down that totem pole, and the question with them is how well the API and the database behind them are designed. Sure, you don't want some super-laggy language used, but HW dependency is not an issue.

That is only an issue when you're writing stuff that needs to run in an interrupt (drivers) or a calculation that is done a zillion times. At that point you might go for (inline) assembler, making it HW-specific.

Anyone writing clean C or C++ code that relies on specific HW characteristics, rather than just a minimum level of performance, shouldn't be writing any code.
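
The usual shape of that, as a rough sketch (my own toy example with hypothetical function names): portable code by default, with the HW-specific hot path behind a compile-time guard.

[CODE]
#include <cstddef>

#if defined(__aarch64__)
#include <arm_neon.h>
#endif

// Portable by default: any C++ compiler on any ISA handles this.
float sum_portable(const float* data, std::size_t n) {
    float total = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        total += data[i];
    return total;
}

// HW-specific hot path, compiled only where NEON is guaranteed;
// every other architecture falls back to the portable version.
float sum(const float* data, std::size_t n) {
#if defined(__aarch64__)
    float32x4_t acc = vdupq_n_f32(0.0f);
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)
        acc = vaddq_f32(acc, vld1q_f32(data + i));
    float total = vaddvq_f32(acc);  // horizontal add of the 4 lanes
    for (; i < n; ++i)
        total += data[i];
    return total;
#else
    return sum_portable(data, n);
#endif
}
[/CODE]
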
 
  • Like
Reactions: KeithBN

Bug-Creator

macrumors 68000
May 30, 2011
1,785
4,717
Germany
but Intel PCs are already fast enough that users don't really notice any difference (given the PC has an SSD rather than a spinning disk)

For office work and posting stuff on MacRumors, for sure, but there are still jobs where even the fastest PC leaves you "waiting for results" half your working hours.
 
  • Like
Reactions: JMacHack

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Anyone writing high-performance code in C or C++ (e.g. low-latency trading applications) is going to care about the hardware the code is running on.
What?

I’ve written millions of lines of code in C and C++. All my code was designed to run fine on multiple ISAs (x86, SPARC, etc.), with the only change being some optimization flags in the makefile.
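
Something like this hypothetical makefile fragment is the full extent of the per-ISA variation (a sketch of the idea, not my actual build files):

[CODE]
# Pick per-ISA optimization flags; the C++ sources are untouched.
UNAME_M := $(shell uname -m)

ifeq ($(UNAME_M),x86_64)
    OPTFLAGS = -O2 -march=native
else ifeq ($(UNAME_M),sparc64)
    OPTFLAGS = -O2 -mcpu=ultrasparc
else
    OPTFLAGS = -O2
endif

CXXFLAGS = -Wall $(OPTFLAGS)

app: main.o
	$(CXX) $(CXXFLAGS) -o $@ $^
[/CODE]
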
 
  • Like
Reactions: KeithBN

poorcody

macrumors 65816
Jul 23, 2013
1,339
1,584
Anyone writing high-performance code in C or C++ (e.g. low-latency trading applications) is going to care about the hardware the code is running on.
Low-latency trading applications? Yeah, that's not a niche at all. I mean, it's not like some companies tried to implement those on programmable FPGA chips for performance.

Electron apps are nowhere near as nice to use as a well-written native app, and they consume lots of RAM.
True, Electron is a brute-force approach to cross-platform. But it's the first cross-platform framework I've seen that is "good enough" for mainstream applications. If you had told me Microsoft would create a highly popular IDE with it, I would have thought you were nuts. And yet, here we are.

However, Microsoft's .NET MAUI is an evolution of Xamarin.Forms, which is in no way a web-based technology. It is a cross-platform mobile GUI framework that has more in common with other XAML/.NET-based GUI frameworks such as WPF than it does with web technologies.
You're a little out of date with that. That's how it started, but it's evolved. I have a web-based UI written in Blazor that is wrapped in a cross-platform desktop app with MAUI. It's pretty slick. And unlike Electron, it's not carrying a heavyweight browser and JavaScript engine -- it's tight.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Low-latency trading applications? Yeah, that's not a niche at all. I mean, it's not like some companies tried to implement those on programmable FPGA chips for performance.


True, Electron is a brute-force approach to cross-platform. But it's the first cross-platform framework I've seen that is "good enough" for mainstream applications. If you had told me Microsoft would create a highly popular IDE with it, I would have thought you were nuts. And yet, here we are.


You're a little out of date with that. That's how it started, but it's evolved. I have a web-based UI written in Blazor that is wrapped in a cross-platform desktop app with MAUI. It's pretty slick. And unlike Electron, it's not carrying a heavyweight browser and JavaScript engine -- it's tight.

Why would you ever use FPGAs for performance? They are terrible for performance. If you want performance, you build an ASIC. You only use an FPGA if the economics mean the cost outlay for an ASIC doesn't make sense, or if you think you will be changing the design on the fly in the field.
 

poorcody

macrumors 65816
Jul 23, 2013
1,339
1,584
Why would you ever use FPGAs for performance? They are terrible for performance. If you want performance, you build an ASIC. You only use an FPGA if the economics mean the cost outlay for an ASIC doesn't make sense, or if you think you will be changing the design on the fly in the field.
They were all the rage on Wall Street some years ago. For HFT (high-frequency trading), using them for certain aspects of executing trades could accelerate orders -- we're talking shaving off just hundreds of nanoseconds, which matters when competing with other HFTers. I didn't get into it myself, so I don't know a lot of details. See this paper:

 

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059

"Anyone" writing C or C++ just wants to get it through the compiler and have it run on capable HW. Your "low-latency trading applications" are still far down that totem pole, and the question with them is how well the API and the database behind them are designed. Sure, you don't want some super-laggy language used, but HW dependency is not an issue.

That is only an issue when you're writing stuff that needs to run in an interrupt (drivers) or a calculation that is done a zillion times. At that point you might go for (inline) assembler, making it HW-specific.

Anyone writing clean C or C++ code that relies on specific HW characteristics, rather than just a minimum level of performance, shouldn't be writing any code.

If you didn't care about performance, you probably wouldn't be using C++ to begin with. Not understanding the performance implications of whatever language constructs you choose when writing your code can have a significant impact on performance. One obvious example is optimizing the memory layout of your software to avoid cache misses.
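
For illustration (a toy sketch of my own, not production code): the classic layout change is array-of-structs vs. struct-of-arrays. If a hot loop touches only one field, the SoA layout keeps that field contiguous, so every cache line fetched is full of useful data.

[CODE]
#include <vector>
#include <cstddef>

// Array-of-structs: each Particle is 32 bytes, so a loop that reads
// only `x` drags the other seven fields through the cache too.
struct Particle { float x, y, z, vx, vy, vz, mass, pad; };

float sum_x_aos(const std::vector<Particle>& ps) {
    float s = 0.0f;
    for (const Particle& p : ps) s += p.x;  // 4 useful bytes per 32 fetched
    return s;
}

// Struct-of-arrays: all x values are contiguous, so every byte of
// every cache line fetched is useful.
struct Particles {
    std::vector<float> x, y, z, vx, vy, vz, mass;
};

float sum_x_soa(const Particles& ps) {
    float s = 0.0f;
    for (float v : ps.x) s += v;
    return s;
}
[/CODE]
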
 

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
You're a little out of date with that. That's how it started, but it's evolved. I have a web-based UI written in Blazor that is wrapped in a cross-platform desktop app with MAUI. It's pretty slick. And unlike Electron, it's not carrying a heavyweight browser and JavaScript engine -- it's tight.

You are still using an embedded web view to render the UI. Nothing new about that; it's conceptually not that different from an Electron app, just on the .NET Core platform. Many mobile apps use the same approach.
 

Bug-Creator

macrumors 68000
May 30, 2011
1,785
4,717
Germany
One obvious example is optimizing the memory layout of your software to avoid cache misses.

We may need to rewind a bit to the start of the discussion, where it was claimed that frameworks/libraries wouldn't get ported over to different architectures - when in fact porting is no problem.

Once you have done that, you might start looking into issues like cache misses, which may not even be related to the architecture but to different implementations/generations of the same one: Intel vs. AMD, Ryzen vs. Threadripper, 9th-gen i7 vs. 12th-gen, or Apple vs. Qualcomm.

But you still have a working binary for every one of those; some may just miss out on 10% of their maximum performance.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
If you didn't care about performance, you probably wouldn't be using C++ to begin with. Not understanding the performance implications of whatever language constructs you choose when writing your code can have a significant impact on performance. One obvious example is optimizing the memory layout of your software to avoid cache misses.

The only optimizing you do to avoid cache misses is to maximize locality. Nobody takes the cache size into account, because cache size changes from machine to machine even within the same processor family and generation, and because you are running other apps and the operating system simultaneously, so you can never know how much cache you actually have at any given time. Not to mention the complications caused by multiple levels of cache.
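
A toy example of what "maximize locality" means in practice (my own illustration; note it assumes nothing about cache sizes): traverse memory in the order it's laid out.

[CODE]
#include <vector>
#include <cstddef>

// n x n matrix stored row-major in one contiguous buffer.
double sum_rows_first(const std::vector<double>& m, std::size_t n) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i)      // sequential access:
        for (std::size_t j = 0; j < n; ++j)  // neighboring elements
            s += m[i * n + j];               // share cache lines
    return s;
}

double sum_cols_first(const std::vector<double>& m, std::size_t n) {
    double s = 0.0;
    for (std::size_t j = 0; j < n; ++j)      // strided access: each read
        for (std::size_t i = 0; i < n; ++i)  // jumps n*8 bytes, likely a
            s += m[i * n + j];               // miss for any large n
    return s;
}
[/CODE]

Same result, same big-O, wildly different memory behavior on any machine, whatever its caches look like.
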

The only minor exceptions would be things like embedded devices running an RTOS or no OS at all, and those are fewer and fewer.
 