
casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
But yes, almost everyone I’ve worked with in Swift is also an Obj-C developer and/or a C/C++ developer. But in the space I’m in, it seems like large companies want to TypeScript all the things. :(
Oh the horror. Please, for all that is sacred, fight back (though at least it’s TypeScript and not just plain JS)
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
I’ve used Swift to interact directly with memory-mapped hardware registers on the Pi for a small project around the house. I actually thought it was pretty straightforward coming from C/C++. There’s definitely some stuff Swift does that isn’t well documented, but if you’re aware of it, it can lead to surprisingly clean code once you define your types properly. There are some cool quirks in the language for systems development that even let you lean on modern type-safety features, even if they aren’t the focus.
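
To give a flavour of what that looks like, here's a minimal sketch of the usual mmap-and-poke approach on Linux. The register indices follow the BCM283x GPIO layout, the pin number is just an example, and /dev/gpiomem is an assumption about the setup, so treat it as illustrative rather than the exact code from that project:

```swift
import Glibc

// Map the GPIO register block. /dev/gpiomem exposes just the GPIO registers,
// so no root access and no hard-coded physical base address is needed.
let pageSize = 4096
let fd = open("/dev/gpiomem", O_RDWR | O_SYNC)
guard fd >= 0 else { fatalError("open failed") }

guard let raw = mmap(nil, pageSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0),
      raw != UnsafeMutableRawPointer(bitPattern: -1) else { fatalError("mmap failed") }

// Treat the mapping as an array of 32-bit registers.
let gpio = raw.bindMemory(to: UInt32.self, capacity: pageSize / 4)

let pin = 17
// GPFSEL1 (index 1) holds the function bits for pins 10-19: 3 bits per pin, 0b001 = output.
let shift = UInt32((pin % 10) * 3)
gpio[1] = (gpio[1] & ~(0b111 << shift)) | (0b001 << shift)

// GPSET0 (index 7) drives a pin high, GPCLR0 (index 10) drives it low.
gpio[7]  = 1 << UInt32(pin)
sleep(1)
gpio[10] = 1 << UInt32(pin)

munmap(raw, pageSize)
close(fd)
```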

But yes, almost everyone I’ve worked with in Swift is also an Obj-C developer and/or a C/C++ developer. But in the space I’m in, it seems like large companies want to TypeScript all the things. :(

I should give it a try.
 

mguzzi

macrumors 6502
Sep 12, 2014
270
175
Columbia SC
We bought an M1 Air to try, and there isn't much I can do on my Intel Mac that I can't do on the M1, and the M1 is definitely faster. I have been impressed with the new Macs and the M1 chip over the last few months.
 

Slartibart

macrumors 68040
Aug 19, 2020
3,145
2,819
Pointers encourage implicit changes rather than explicit ones. And often, IMHO, they are complex where they could be simple (especially for beginners). Even worse, they beg for ways to shoot yourself in the foot, or to do something really dangerous like read from a section of memory you were not supposed to.

DISCLAIMER: I rapid-prototype in Python. That said, programming languages like Python tend to abstract away implementation details like memory addresses from their users, and they often focus on usability instead of speed. As a result, pointers in Python don't really make sense, do they?

And yes, I am aware that Python can, by default, give you some of the benefits of using pointers.

Then again, there is at least one universe out there where knowledge about pointers probably doesn't matter.
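
To make the "implicit changes" point concrete, here's a tiny made-up illustration in Swift, which happens to have all three flavours (value semantics, Python-style reference semantics, and raw pointers); none of this is anyone's real code:

```swift
// A made-up illustration only. Struct assignment copies (explicit, no spooky
// action at a distance); class assignment aliases, which is roughly what
// Python objects give you; and raw pointers let you wander off the end.

struct PointValue { var x = 0 }      // value type
final class Counter { var n = 0 }    // reference type

var a = PointValue()
var b = a
b.x = 5
print(a.x)      // 0: changing b did not implicitly change a

let c = Counter()
let d = c
d.n = 5
print(c.n)      // 5: c and d refer to the same object, so the change is implicit

// The foot-gun: nothing stops a raw pointer from reading memory it shouldn't.
let buffer = UnsafeMutablePointer<Int>.allocate(capacity: 4)
buffer.initialize(repeating: 0, count: 4)
let oops = buffer[10]   // out of bounds: undefined behaviour, may print garbage or crash
print(oops)
buffer.deinitialize(count: 4)
buffer.deallocate()
```

That last read compiles and often even "works", which is exactly why it's so easy to shoot yourself in the foot.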
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
You used to start out in college with a course in data structures, with linked lists and hash tables and whatnot, with extensive use of pointers. Those courses were often used as weedout courses: they were so hard that anyone who couldn’t handle the mental challenge of a CS degree would give up, which was a good thing, because if you thought pointers were hard, wait until you try to prove things about fixed point theory.

All the kids who did great in high school writing pong games in BASIC for their Apple II would get to college, take CompSci 101, a data structures course, and when they hit the pointers business their brains would just totally explode, and the next thing you knew, they were majoring in Political Science because law school seemed like a better idea. I’ve seen all kinds of figures for drop-out rates in CS and they’re usually between 40% and 70%. The universities tend to see this as a waste; I think it’s just a necessary culling of the people who aren’t going to be happy or successful in programming careers.



We had an MIT EECS grad, and he was given a small task involving simple C string operations. He looked at it for a few days, couldn't figure it out, and asked the project leader for help. That was quite a surprise to me, but then I started using C back in the 1980s, I think. I still have my ancient K&R and an updated version. It was very easy to find on the shelves in the office, as this was way back before the public internet, when we had shelves of books and manuals for reference.
Ha, I actually skipped compsci 101 and started at Data Structures! I guess my education is approved by this guy.
 
  • Like
Reactions: pshufd

827538

Cancelled
Jul 3, 2013
2,322
2,833
The power is there, but AMD and even possibly Intel will catch up in time. I fear the future market fragmentation, with developers having to develop specifically for Apple Silicon ARM and just not having the time to do so.

Not to mention that games for Apple Silicon are just not a thing, and realistically probably never will be, since Apple and gaming just don't work together, and gaming is a huge part of the PC market.

Even the new 10nm Intel CPUs will be much better than before, and AMD is already doing great in raw power.

The idea of Apple controlling both software and hardware is great, something they've been trying to do for decades, but the big question is how the support from the developers will be.

I look forward to the power, but I'm just not so sure about the future.

I am a complete noob and have no idea what I'm talking about in this area, but I'm just wondering what other people here think.

I'm an EEE with experience in the world of chips and have done a lot of investing in fabs and chip designers.

You've sort of got this all backwards: fragmentation has always been there. There are far more ARM chips out there than x86. One of the major issues with x86, and by extension x86_64, is that it's an almost 50-year-old instruction set and a CISC design. It's incredibly bloated and has to support instructions from before I was born. ARM, while not as lean as it once was, is a much more efficient and modern instruction set, and one I would argue is far more efficient for modern multi-tasking applications. Having every instruction take one cycle (ARM) is a big advantage in a lot of ways, even if it takes more instructions.

Intel's 10nm designs are still suffering from low yields, high heat, and high energy consumption, and the IPC gains are still to be proven in the real world. We are only really starting to see what really good high-end, higher-TDP ARM designs can do, and there's a lot more headroom for improvement. AMD has been doing great and I expect Zen 4 to have considerable IPC gains. Apple's chip designers have shown an immense ability to consistently deliver great gains and I don't expect that to change; they also have first dibs on TSMC's latest and greatest processes for years to come, which puts them at a competitive advantage over Intel and AMD. It's Intel and AMD that have to worry, not Apple.

I'd bank on ARM and RISC-V winning out in the long term against x86 but x86 is not going anywhere for a long time.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
It’s nice to be the young guy in the group again! I learned in Java.
Java was my first university-taught language. But I took the unconventional approach of making my "first" programming language about seven different ones that I never learned properly. Before I started uni I had written half a dozen or so simple programs (calculations for math classes, random number generators with specific properties, etc.), all of them in different languages. I was effectively distro-hopping languages: "What if this one is better and nicer to learn?"
I don't advise that unfocused approach. At the same time, however, I do advise learning several languages, since each encourages a different pattern of thinking that'll make you a better problem solver in general, applicable in any language.
(Most of that text is just general and not in response to you specifically, haha)
 

joema2

macrumors 68000
Sep 3, 2013
1,646
866
....One of the major issues with x86 and by extension x86_64 is this is an almost 50 year old instruction set and a CISC design. It's incredibly bloated and has to support instructions from before I was born. ARM...is a much more efficient and modern instruction set...AMD has been doing great and I expect Zen 4 to have considerable IPC gains...
Within your comment is the debate over ISA relevance vs microarchitecture implementation of that ISA. E.g., if Intel's problems are significantly caused by the archaic ISA, why is AMD "doing great" -- using essentially the same ISA?

Before around 1996 the prevailing view was x86 has unsolvable, unsustainable ISA baggage and the only way forward was RISC. Then with the P6 Intel decoupled the front end and decoded macroinstructions to micro-ops.

Starting in 2011 Sandy Bridge further refined that by using a micro-op cache, whereby most of the x86 CPU is dealing with cached, pre-decoded RISC-like operators.

There was a long interval whereby many came to accept the ISA was just the CPU "language" which could be somewhat independent of the microarchitectural implementation. At first this seemed borne out by the data center. Unlike the client side, the server side is much less dependent on a massive consumer software base. IOW Oracle running on a Xeon server competes head-to-head with Oracle on an IBM Power9 RISC server. If RISC is so vastly superior in performance or price/performance or TCO or watts per transaction, then where is the evidence?

In this 2013 paper, researchers concluded that "whether the ISA is RISC or CISC is irrelevant".

That was why when Apple's version of ARM began making such rapid progress it was initially dismissed. E.g, "It's only good for low power", "It can't scale to high perf. levels", "If it scaled to high perf. levels it would burn the same power as x86", etc.

Then when Apple Silicon began really heading up the performance curve on M1 Macs, there was no good explanation for that except "It must be because it's RISC". Then a few people decided it must be because the fixed-length ARM64 instructions permit wider parallel decoding, vs x86 that could "never" go beyond 4-wide decoding. Well, now Intel's Golden Cove microarchitecture will apparently do 6-wide decoding and use a larger micro-op cache, which further insulates the dispatch and execution systems from the ISA.

Design guru Jim Keller has been interviewed several times recently by web personalities but unfortunately the opportunity was squandered to have him discuss this in focused detail. In general Keller is of the "ISA doesn't matter that much" school, but all those interviews would have been a priceless opportunity to probe that specific area in light of the most recent microarchitectural advances by Apple Silicon, Intel and AMD. There are several on this forum who could have done a better job of interviewing him and getting meaningful detailed commentary on this important issue.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
There was a long interval whereby many came to accept the ISA was just the CPU "language" which could be somewhat independent of the microarchitectural implementation. At first this seemed borne out by the data center. Unlike the client side, the server side is much less dependent on a massive consumer software base. IOW Oracle running on a Xeon server competes head-to-head with Oracle on an IBM Power9 RISC server. If RISC is so vastly superior in performance or price/performance or TCO or watts per transaction, then where is the evidence?

Oracle is doing engineered systems these days with Exadata, and those are x86; I don't think they make their own CPUs anymore. They technically did when they bought Sun, but I don't believe they still do; there might be third-party companies that make them. That said, Oracle does offer ARM cloud systems.
 

joema2

macrumors 68000
Sep 3, 2013
1,646
866
Oracle is doing engineered systems these days with Exadata and those are x86 as I don't think that they make their own CPUs anymore. I think that they technically did when they bought Sun but I don't think that they make the CPUs anymore...
Yes, understood. My point was not about the SPARC CPUs that Oracle acquired from Sun, but that server-side software somewhat bypasses the argument that "RISC is actually superior, x86 was only successful because of the massive consumer software base". Oracle is available on both Power9 RISC and x86 server platforms. On the server side there is no massive consumer software base to "prop up" x86. Hence if ISA is really a dominant factor in performance or efficiency, where is the evidence on the server side?

A better argument might be that Power9 is not purely RISC; it's a complex out-of-order machine with lots of specialized instructions, maybe not that different from Xeon in the big picture.

Apple Silicon seems to have a significant advantage in the low power arena, and maybe that could be attributed to ISA. But it doesn't address whether ISA has a significant impact on higher end desktop machines and servers.

As yet unrevealed is the CPU configuration of Apple's higher-end AS machines -- and also the performance and power/performance which can be sustained as you scale upward. E.g, in a unified memory architecture where do you put 256GB or 1TB of RAM? If you need more GPU performance than will fit on a SoC, how is that done without breaking the unified memory model?

Apple Silicon is very impressive but it's easy to look good if your competitor is asleep at the switch. Now that Intel has awakened, it will be interesting to see how future Apple Silicon CPUs compare to 6-wide Golden Cove and successors fabricated on "Intel 7" (similar to TSMC 5nm). The current momentum is on Apple's side, not just because of ISA but the ability to rapidly develop off-core assets and integrate with software. A good example is the sluggish development of Intel's Quick Sync vs how rapidly Apple has improved their video accelerators on AS.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
Yes, understood. My point was not about the SPARC CPUs that Oracle acquired from Sun, but that server-side software somewhat bypasses the argument that "RISC is actually superior, x86 was only successful because of the massive consumer software base". Oracle is available on both Power9 RISC and x86 server platforms. On the server side there is no massive consumer software base to "prop up" x86. Hence if ISA is really a dominant factor in performance or efficiency, where is the evidence on the server side?

A better argument might be that Power9 is not purely RISC; it's a complex out-of-order machine with lots of specialized instructions, maybe not that different from Xeon in the big picture.

Apple Silicon seems to have a significant advantage in the low power arena, and maybe that could be attributed to ISA. But it doesn't address whether ISA has a significant impact on higher end desktop machines and servers.

As yet unrevealed is the CPU configuration of Apple's higher-end AS machines -- and also the performance and power/performance which can be sustained as you scale upward. E.g, in a unified memory architecture where do you put 256GB or 1TB of RAM? If you need more GPU performance than will fit on a SoC, how is that done without breaking the unified memory model?

Apple Silicon is very impressive but it's easy to look good if your competitor is asleep at the switch. Now that Intel has awakened, it will be interesting to see how future Apple Silicon CPUs compare to 6-wide Golden Cove and successors fabricated on "Intel 7" (similar to TSMC 5nm). The current momentum is on Apple's side, not just because of ISA but the ability to rapidly develop off-core assets and integrate with software. A good example is the sluggish development of Intel's Quick Sync vs how rapidly Apple has improved their video accelerators on AS.

The focus is on the cloud, and has been on Exadata systems for at least a decade, since Exadata lets them do engineered systems. So there's much less emphasis on ports, though they still have to do them for customers that want the hybrid cloud.

But I'm not talking about SPARC. Oracle is supporting another ARM platform on their cloud (don't recall the name of it).

What's PL2 on Intel 12th gen? 280 watts?

Part of the momentum with Apple is their ability to hire the best and brightest from other companies.
 

bradl

macrumors 603
Jun 16, 2008
5,952
17,447


I took Data Structures in Pascal so that gives you an idea of when I went to school.

That's me. They started CompSci 1000 with BASIC, then went to Pascal, then C. If you were lucky, you could test out of those and go straight into the 200 level, which got you into Assembly. And to make it worse, all of it was on a VAX running either VMS or Ultrix. I tested straight into the 1600 level, which got me into Pascal.

However, you remind me of our last assignment in that class. Our midterm project was to code our own simple machine language; our last assignment was to code a translator that took that machine language and translated it into C so it could be compiled... and our instructor did not care what language we used to code the translator.
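
Just to give a flavour of what that meant, a toy version of such a translator might look something like the sketch below. To be clear, the little instruction set here is invented for illustration, it isn't the one from the class, and it's in Swift rather than anything we actually had back then:

```swift
// Toy "machine language" to C translator: one C statement per instruction,
// so the output can be handed straight to a C compiler.
let program = """
LOAD r0 5
LOAD r1 7
ADD r2 r0 r1
PRINT r2
"""

var cLines = ["#include <stdio.h>",
              "int main(void) {",
              "    long r0 = 0, r1 = 0, r2 = 0, r3 = 0;"]

for line in program.split(separator: "\n") {
    let parts = line.split(separator: " ").map(String.init)
    switch parts[0] {
    case "LOAD":  cLines.append("    \(parts[1]) = \(parts[2]);")               // LOAD reg imm
    case "ADD":   cLines.append("    \(parts[1]) = \(parts[2]) + \(parts[3]);")  // ADD dst a b
    case "PRINT": cLines.append("    printf(\"%ld\\n\", \(parts[1]));")
    default:      fatalError("unknown op: \(parts[0])")
    }
}

cLines.append("    return 0;")
cLines.append("}")
print(cLines.joined(separator: "\n"))   // paste into a .c file and compile
```

The shape is the whole trick: read an instruction, emit the equivalent C line.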

Most of us coded that translator in C, and got it done in roughly 300-400 lines of code. One student did it in 35 lines of Perl...

One student, because he preferred the language over everything taught, coded it in COBOL. It took him 650 pages of 132-column line printer paper.

I was never happier to get out of that class.

BL.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
One student, because he preferred the language over everything taught, coded it in COBOL. It took him 650 pages of 132-column line printer paper.
This is where I imagine the twist from the Metallica song "Unforgiven" comes in and you sing
"That student, he was meeeeee", which really explains the following:
I was never happier to get out of that class.


All kidding aside I honestly think it sounds like a pretty nice class :)
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Within your comment is the debate over ISA relevance vs microarchitecture implementation of that ISA. E.g, if Intel's problems are significantly caused by the archaic ISA, why is AMD "doing great" -- using essentially the same ISA.

Before around 1996 the prevailing view was x86 has unsolvable, unsustainable ISA baggage and the only way forward was RISC. Then with the P6 Intel decoupled the front end and decoded macroinstructions to micro-ops.

Starting in 2011 Sandy Bridge further refined that by using a micro-op cache, whereby most of the x86 CPU is dealing with cached, pre-decoded RISC-like operators.

There was a long interval whereby many came to accept the ISA was just the CPU "language" which could be somewhat independent of the microarchitectural implementation. At first this seemed borne out by the data center. Unlike the client side, the server side is much less dependent on a massive consumer software base. IOW Oracle running on a Xeon server competes head-to-head with Oracle on an IBM Power9 RISC server. If RISC is so vastly superior in performance or price/performance or TCO or watts per transaction, then where is the evidence?

In this 2013 paper, researchers concluded that "whether the ISA is RISC or CISC is irrelevant".

That was why when Apple's version of ARM began making such rapid progress it was initially dismissed. E.g, "It's only good for low power", "It can't scale to high perf. levels", "If it scaled to high perf. levels it would burn the same power as x86", etc.

Then when Apple Silicon began really heading up the performance curve on M1 Macs, there was no good explanation for that except "It must be because it's RISC". Then a few people decided it must be because the fixed-length ARM64 instructions permit wider parallel decoding, vs x86 that could "never" go beyond 4-wide decoding. Well, now Intel's Golden Cove microarchitecture will apparently do 6-wide decoding and use a larger micro-op cache, which further insulates the dispatch and execution systems from the ISA.

Design guru Jim Keller has been interviewed several times recently by web personalities but unfortunately the opportunity was squandered to have him discuss this in focused detail. In general Keller is of the "ISA doesn't matter that much" school, but all those interviews would have been a priceless opportunity to probe that specific area in light of the most recent microarchitectural advances by Apple Silicon, Intel and AMD. There are several on this forum who could have done a better job of interviewing him and getting meaningful detailed commentary on this important issue.

I wrote something about Jim Keller, who I worked with, but I thought better of it and have removed it.
 
Last edited:
  • Like
Reactions: JMacHack

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Yes, understood. My point was not about the SPARC CPUs that Oracle acquired from Sun, but that server-side software somewhat bypasses the argument that "RISC is actually superior, x86 was only successful because of the massive consumer software base". Oracle is available on both Power9 RISC and x86 server platforms. On the server side there is no massive consumer software base to "prop up" x86. Hence if ISA is really a dominant factor in performance or efficiency, where is the evidence on the server side?

A better argument might be that Power9 is not purely RISC; it's a complex out-of-order machine with lots of specialized instructions, maybe not that different from Xeon in the big picture.

Apple Silicon seems to have a significant advantage in the low power arena, and maybe that could be attributed to ISA. But it doesn't address whether ISA has a significant impact on higher end desktop machines and servers.

As yet unrevealed is the CPU configuration of Apple's higher-end AS machines -- and also the performance and power/performance which can be sustained as you scale upward. E.g, in a unified memory architecture where do you put 256GB or 1TB of RAM? If you need more GPU performance than will fit on a SoC, how is that done without breaking the unified memory model?

Apple Silicon is very impressive but it's easy to look good if your competitor is asleep at the switch. Now that Intel has awakened, it will be interesting to see how future Apple Silicon CPUs compare to 6-wide Golden Cove and successors fabricated on "Intel 7" (similar to TSMC 5nm). The current momentum is on Apple's side, not just because of ISA but the ability to rapidly develop off-core assets and integrate with software. A good example is the sluggish development of Intel's Quick Sync vs how rapidly Apple has improved their video accelerators on AS.

I’ve designed PowerPC, SPARC, MIPS, and x86. The first three are definitely purely RISC, including modern implementations.
 
  • Like
Reactions: thedocbwarren

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Yes, understood. My point was not about the SPARC CPUs that Oracle acquired from Sun, but that server-side software somewhat bypasses the argument that "RISC is actually superior, x86 was only successful because of the massive consumer software base". Oracle is available on both Power9 RISC and x86 server platforms. On the server side there is no massive consumer software base to "prop up" x86. Hence if ISA is really a dominant factor in performance or efficiency, where is the evidence on the server side?

A better argument might be that Power9 is not purely RISC; it's a complex out-of-order machine with lots of specialized instructions, maybe not that different from Xeon in the big picture.

Apple Silicon seems to have a significant advantage in the low power arena, and maybe that could be attributed to ISA. But it doesn't address whether ISA has a significant impact on higher end desktop machines and servers.

As yet unrevealed is the CPU configuration of Apple's higher-end AS machines -- and also the performance and power/performance which can be sustained as you scale upward. E.g, in a unified memory architecture where do you put 256GB or 1TB of RAM? If you need more GPU performance than will fit on a SoC, how is that done without breaking the unified memory model?

Apple Silicon is very impressive but it's easy to look good if your competitor is asleep at the switch. Now that Intel has awakened, it will be interesting to see how future Apple Silicon CPUs compare to 6-wide Golden Cove and successors fabricated on "Intel 7" (similar to TSMC 5nm). The current momentum is on Apple's side, not just because of ISA but the ability to rapidly develop off-core assets and integrate with software. A good example is the sluggish development of Intel's Quick Sync vs how rapidly Apple has improved their video accelerators on AS.
I’d like to point out that your comparison assumes a completely level playing field between ISAs. We know Intel had quite the fab lead until 2015 or so. That surely played a part in leveling any performance or efficiency difference between the various RISC designs and Intel.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I’d like to point out that your comparison assumes a completely level playing field between ISAs. We know Intel had quite the fab lead until 2015 or so. That surely played a part in leveling any performance or efficiency difference between the various RISC designs and Intel.

That was the entirety of any performance/watt lead. As for performance OR power, there were RISC designs that beat Intel at various times. DEC Alpha destroyed Intel in performance. DEC StrongARM destroyed it in watts.

The issue was that RISC designers were designing for niches - either high-end workstations or low-end portable devices. The mainstream market was locked up by the Wintel alliance, and no chip producer had any reason to try and crack that market - they could make the nicest chips in the world, but without running your existing software, who would buy them? Apple is in a unique position - mainstream market, and they control the software and hardware - so they were the first to be in a position to apply RISC to the mainstream.

They tried it once before with PowerPC, but of course the problem there was that they had nothing to do with the actual chip designs, and they couldn’t provide enough volume to keep IBM and Motorola interested in making what Apple actually wanted to buy; instead those companies (and the one I worked for, Exponential) tried to compete on pure performance, which was tough because the market was turning toward mobility, and none of us had fabs that were as good as Intel’s.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
I wrote something about Jim Keller, who I worked with, but I thought better of it and have removed it.
To all interested in Keller interviews, I recommend Ian Cutress' interviews with him (Ian from Anandtech)

Cool you worked with him - Would've been neat to hear your thoughts but I understand keeping it to yourself :)
 
  • Like
Reactions: dgdosen