
casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
I never quite got this. What's so "magic" about pointers? The name suggests "it's not the thing, it just points to the thing". Maybe it would have helped if they had been referred to as "addresses" rather than "pointers", because that's what they really are. I believe that the "black magic" was introduced because of the teaching approach that "the actual hardware is irrelevant" and code is viewed as some abstract mathematical recipe (an approach I object to).

I always explain it like this: a computer's memory can be compared to a (huge) number of boxes all neatly lined up, each of which can store an integer number between 0 and 255. Each of those boxes has a unique number, which we'll call its address. The computer can store and retrieve the contents of each box (the number between 0 and 255). To the computer, they're just numbers, and it depends on context what those numbers mean. If a box contains the number 65, then it could mean just that - the number 65 - or it could mean the letter A, or a pixel with an approximately 25% grey value, or the CPU instruction "LD H, L", or even part of something bigger like the word "Apple" or some floating point number (which takes 4 or 8 of those boxes combined) or whatever.

Since this "meaning" is important, the programming language tries to keep track of it: "These couple of boxes together are actually one thing, namely the word Apple, or this floating point value, or an image, or a database, or an audio file, or the memory address of some other box," etc.

The "drawback" of languages which expose these pointers to the programmer is that this can "break the abstraction". For some things it's fine if you are able to say "increase the value of the number in this box by 1" - in the context of a grey pixel, it just became a tiny bit lighter; if it was the letter A then it will become the letter B. But if it were part of the executable code, and you tried to "add one to the instruction LD H, L", and computers didn't hate being anthropomorphized so much, they would probably say "stop it, you're making me uncomfortable."
I think the magic of pointers has more to do with how they handle things like addition.
That with int* a; the expression a+1 will actually add 4 to the address a holds (on a typical platform), because the compiler automatically knows to move forward by sizeof(int). And that with long long* a; a+1 will add 8 to the address. If you've not properly been introduced to how the pointed-to type affects the arithmetic, it can seem quite unnatural. I also get people's confusion when they see, for example, reinterpreting casts like
float b = *(float*)(void*)&a;
And the syntax for function pointers has always just been insane
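To make the arithmetic concrete, here's a quick sketch (purely illustrative, with made-up names):
Code:
#include <stdio.h>

int main(void) {
    int nums[3] = {10, 20, 30};
    int *p = nums;

    /* p + 1 advances by sizeof(int) bytes (typically 4), not by 1 */
    printf("p     = %p\n", (void *)p);
    printf("p + 1 = %p\n", (void *)(p + 1));
    printf("*(p + 1) = %d\n", *(p + 1));   /* prints 20 */

    long long bigs[2] = {1, 2};
    long long *q = bigs;

    /* q + 1 advances by sizeof(long long) bytes (typically 8) */
    printf("q     = %p\n", (void *)q);
    printf("q + 1 = %p\n", (void *)(q + 1));
    return 0;
}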

I've also seen some confusion relating to pass-by-value and pointers. Too many levels of indirection can get confusing.
So while people can easily get that
int a = 5;
callFunc(a);
gives a copy of a to callFunc and doesn't risk altering a, it's easier to get confused when you see
int changePtr(int** ptr);
taking a double pointer. It eventually clicks for people that it's due to the pass by value thing, but it's just like having 20 nested if blocks - it eventually gets confusing.
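Something like this is what I mean - a quick sketch with made-up names (void returns just to keep it short):
Code:
#include <stdio.h>

int value = 42;

/* The pointer itself is passed by value: reassigning p here is invisible to the caller */
void changeCopy(int *p) {
    p = &value;
}

/* Passing int** lets the function redirect the caller's pointer */
void changePtr(int **pp) {
    *pp = &value;
}

int main(void) {
    int *ptr = NULL;

    changeCopy(ptr);
    printf("after changeCopy: %p\n", (void *)ptr);   /* still NULL */

    changePtr(&ptr);
    printf("after changePtr:  %d\n", *ptr);          /* prints 42 */
    return 0;
}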
 
  • Like
Reactions: JMacHack

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I think the magic of pointers has more to do with how they handle things like addition.
That with int* a; the expression a+1 will actually add 4 to the address a holds (on a typical platform), because the compiler automatically knows to move forward by sizeof(int). And that with long long* a; a+1 will add 8 to the address. If you've not properly been introduced to how the pointed-to type affects the arithmetic, it can seem quite unnatural. I also get people's confusion when they see, for example, reinterpreting casts like
float b = *(float*)(void*)&a;
And the syntax for function pointers has always just been insane

I've also seen some confusion relating to pass-by-value and pointers. Too many levels of indirection can get confusing.
So while people can easily get that
int a = 5;
callFunc(a);
gives a copy of a to callFunc and doesn't risk altering a, it's easier to get confused when you see
int changePtr(int** ptr);
taking a double pointer. It eventually clicks for people that it's due to the pass by value thing, but it's just like having 20 nested if blocks - it eventually gets confusing.

Let’s be honest. Even the idea that you can have two variables pointing to the same thing, so that when you modify the contents of memory addressed by one pointer you are modifying the contents of another, is confusing to the average beginning programmer. That’s why I always code in raw, unmacro’d assembler, using hardcoded memory addresses :)
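To put that aliasing point in miniature (a trivial sketch):
Code:
#include <stdio.h>

int main(void) {
    int x = 1;
    int *p = &x;
    int *q = &x;          /* p and q alias: both hold the address of x */

    *p = 99;              /* a write through p... */
    printf("%d\n", *q);   /* ...is visible through q: prints 99 */
    return 0;
}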
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
Let’s be honest. Even the idea that you can have two variables pointing to the same thing, so that when you modify the contents of memory addressed by one pointer you are modifying the contents of another, is confusing to the average beginning programmer. That’s why I always code in raw, unmacro’d assembler, using hardcoded memory addresses :)
You actually use an assembly language with mnemonics?
I code binary data directly onto punch cards with sexadecimal encoding

But genuinely, back in my first semester of uni, the first bug we ran into that I couldn't troubleshoot myself and had to ask a TA for help with was related to values vs. references. And that was in Java!
Couldn't figure out why
Code:
int a = 5;
int b = a;
b+=2;
had one behaviour, while
Code:
ArrayList<Integer> a = new ArrayList<>();
ArrayList<Integer> b = a;
a.add(5);
b.set(0, b.get(0) + 2);
had another behaviour.
I think in this respect C is actually more transparent: the pointers let you see whether you're dealing with a reference or a value. It's not so bad once you know how Java works, since basically everything is a reference, but the fact that it also has primitive types that it treats as values confused me a bit at first.
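In C that distinction is right there in the types - a rough sketch:
Code:
#include <stdio.h>

void byValue(int n)    { n += 2; }   /* operates on a copy; the caller sees nothing */
void byPointer(int *n) { *n += 2; }  /* the * in the signature tells you the caller's data can change */

int main(void) {
    int a = 5;

    byValue(a);
    printf("%d\n", a);   /* still 5 */

    byPointer(&a);
    printf("%d\n", a);   /* now 7 */
    return 0;
}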

Though I guess that's arguably worse in Swift, where you can declare something as either a struct or a class and the value/reference behaviour depends on which you chose. As my favourite language at present I see that more as a nice feature than a source of confusion, though in reality it's probably both, haha
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
I never quite got this. What's so "magic" about pointers? The name suggests "it's not the thing, it just points to the thing". Maybe it would have helped if they had been referred to as "addresses" rather than "pointers", because that's what they really are. I believe that the "black magic" was introduced because of the teaching approach that "the actual hardware is irrelevant" and code is viewed as some abstract mathematical recipe (an approach I object to).

I always explain it like this: a computer's memory can be compared to a (huge) number of boxes all neatly lined up, each of which can store an integer number between 0 and 255. Each of those boxes has a unique number, which we'll call its address. The computer can store and retrieve the contents of each box (the number between 0 and 255). To the computer, they're just numbers, and it depends on context what those numbers mean. If a box contains the number 65, then it could mean just that - the number 65 - or it could mean the letter A, or a pixel with an approximately 25% grey value, or the CPU instruction "LD H, L", or even part of something bigger like the word "Apple" or some floating point number (which takes 4 or 8 of those boxes combined) or whatever.

Since this "meaning" is important, the programming language tries to keep track of it: "These couple of boxes together are actually one thing, namely the word Apple, or this floating point value, or an image, or a database, or an audio file, or the memory address of some other box," etc.

The "drawback" of languages which expose these pointers to the programmer is that this can "break the abstraction". For some things it's fine if you are able to say "increase the value of the number in this box by 1" - in the context of a grey pixel, it just became a tiny bit lighter; if it was the letter A then it will become the letter B. But if it were part of the executable code, and you tried to "add one to the instruction LD H, L", and computers didn't hate being anthropomorphized so much, they would probably say "stop it, you're making me uncomfortable."
You know, now that you mention it, I can’t remember exactly what gave me the voodoo impression. I understood the pointer-as-an-address analogy, but I believe I struggled when introduced to pointer arithmetic and something else, like exploiting the addresses or some such. It’s been a few years lol. Some other posters could probably explain better than me.
 

springerj

macrumors member
Jan 29, 2004
78
10
Portland, OR
I don't think CPU architecture has anything to do with developer decisions, provided it can do the job. Developers write in high-level code and rarely (never?) get wound up in the hardware. The high-level code they write for the Mac has to use Mac conventions and libraries; that's where they have to do something special to support the Mac. What will drive developers, besides personal preference, is the size of the Mac user base, and the faster ARM processors will absolutely result in more users.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
If it's good enough for the Quake guys it's good enough for us :p - But yeah

It's even worse than that. For example, it is a common C pattern to compare two pointers by value, but the standard makes such comparisons undefined if the pointers don't point to elements within the same array. And so on. The standards committee has really outsmarted itself on the spec... I am still not sure that one can "legally" implement low-level memory or data structure manipulation in standard-compliant C or C++.
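Roughly the kind of thing I mean (a sketch - whether it "works" in practice depends entirely on the compiler):
Code:
#include <stdio.h>

int main(void) {
    int a[4] = {0};
    int b[4] = {0};

    int *p = &a[0];
    int *q = &b[0];

    /* Relational comparison of pointers into different objects is undefined
       behaviour per the C standard, even though it "obviously" works on a
       flat address space. (Equality, p == q, would be fine.) */
    if (p < q)
        puts("a sits below b... or the optimizer did something else entirely");

    /* Fine: both pointers point within (or one past the end of) the same array */
    if (p < &a[4])
        puts("always true");

    return 0;
}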
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
It's even worse than that. For example, it is a common C pattern to compare two pointers by value, but the standard makes such comparisons undefined if the pointers don't point to elements within the same array. And so on. The standards committee has really outsmarted itself on the spec... I am still not sure that one can "legally" implement low-level memory or data structure manipulation in standard-compliant C or C++.
Does this mean the “What the ****” Inverse Square Root is illegal now?
 

kaioshade

macrumors regular
Nov 24, 2010
176
108
iOS apps on the Mac are up to the developers. It's opt-in.

Apple said iOS apps can work on MacOS. But it totally depends on the developer.

I think simple iOS apps could work on a Mac with minimal changes.

But since the Mac doesn't have a touchscreen... the developer might have to modify the input methods for use on a Mac.

And while a Macbook has a webcam... it doesn't have depth-sensing FaceID cameras. So anything with AR is out. Snapchat wouldn't be very fun on a Macbook.

And what about any app that uses GPS like Uber and Lyft?

Or the rotation sensor? Magnetometer?

So yeah... the platforms for macOS and iOS are compatible software-wise.

But there are hardware features in iPhones that simply do not exist in a Macintosh. That's one big reason why developers aren't racing to make their iPhone apps work on a Mac.

Another big reason, like I said, is the whole touchscreen thing. Most iPhone apps aren't built for mouse and keyboard.

It’s not an issue with the Apple Silicon architecture. How many iOS apps could you run on Intel Macs?

Disabling sideloading (which was never a feature - it was a workaround that they never intended to permit) was necessary, due to copyright. If I submit an app to the App Store, I still own the copyright to it, and if I don’t want it to run on Macs (because, for example, it would eat into sales of my mac app, or I don’t want to have the support burden of dealing with customers who expect it to run on a mac, where I haven’t tested it), Apple has no right to permit installation onto Macs.

Yea. I understand these points. And I realize there are a lot of other considerations for making it work on macOS. That’s why it’s a mild disappointment rather than rage. I absolutely love the machine.

But for a game that has full controller & input support, for example, it’s baffling why it wouldn’t be available. Again, minor gripe.
 
  • Like
Reactions: Michael Scrip

thedocbwarren

macrumors 6502
Nov 10, 2017
430
378
San Francisco, CA
Yea. I understand these points. And I realize there are a lot of other considerations for making it work on macOS. That’s why it’s a mild disappointment rather than rage. I absolutely love the machine.

But for a game that has full controller & input support, for example, it’s baffling why it wouldn’t be available. Again, minor gripe.
For me the draw was the ARM platform, the efficiency, and the idea that the iOS Simulator has less to simulate and is thus faster.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Does this mean the “What the ****” Inverse Square Root is illegal now?

I don't think it ever was legal, but I am not sure whether C had type aliasing rules from the start or whether they're a newer addition. In standard-compliant C you have to use memcpy(); cutting-edge C++ can use std::bit_cast :)

To make all this worse, various compilers implement their own rules, so tricks with pointer reinterpretation often work as expected, but sometimes they don't. A lot of bugs have happened that way. C is quite insane in that regard — plenty of operations have undefined results according to the standard, but the language won't prohibit you from using them (or even warn you). Modern approaches like those taken in Rust or Swift, where this kind of type recasting is simply not possible in the regular language (you have to use unsafe primitives and memory transmutation), make much more sense.
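For the curious, here's roughly what the famous Quake routine looks like with the pointer cast replaced by memcpy() - a sketch that assumes 32-bit IEEE 754 floats, which the C standard itself doesn't actually guarantee:
Code:
#include <stdint.h>
#include <string.h>

/* Fast approximate 1/sqrt(x), with the punning done the standard-compliant way */
float rsqrt_approx(float x) {
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);       /* well-defined: copies the bit pattern */
    bits = 0x5f3759df - (bits >> 1);      /* the infamous magic-constant hack */

    float y;
    memcpy(&y, &bits, sizeof y);          /* reinterpret the bits as a float again */
    return y * (1.5f - 0.5f * x * y * y); /* one Newton-Raphson refinement step */
}

Modern compilers turn those small memcpy() calls into plain register moves, so in practice there's no performance penalty versus the original cast.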
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
I don't think it ever was legal, but I am not sure whether C had type aliasing rules from the start or whether they're a newer addition. In standard-compliant C you have to use memcpy(); cutting-edge C++ can use std::bit_cast :)

To make all this worse, various compilers implement their own rules, so tricks with pointer reinterpretation often work as expected, but sometimes they don't. A lot of bugs have happened that way. C is quite insane in that regard — plenty of operations have undefined results according to the standard, but the language won't prohibit you from using them (or even warn you). Modern approaches like those taken in Rust or Swift, where this kind of type recasting is simply not possible in the regular language (you have to use unsafe primitives and memory transmutation), make much more sense.
Well, I remember liking C because of the weird stuff that it would let you do. But also not liking it because it would let you do things that you shouldn’t. Makes sense that fast 1/sqrt wasn’t legal.

Maybe I should start messing around in it again. This thread has piqued my interest
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
You actually use an assembly language with mnemonics?
I code binary data directly onto punch cards with sexadecimal encoding

You laugh, but I honestly don’t really know the mnemonics for most of the instructions in the various chips I designed. I always have to look them up. When you are designing a CPU, you only care about the binary encoding of the instructions, not the assembly mnemonics (which are arbitrary, of course, other than the fact that whatever assembler you are using will expect whatever it expects). The only time I ever really had to think about mnemonics was when I was asked to extend the integer ALU operations to 64-bit. I had to document the new instruction formats, so I had to go look up the 32-bit versions. Actually, I don’t even know if the “official” x86-64 mnemonics are the same as what I wrote down in that document at AMD. :)
 
  • Like
Reactions: casperes1996

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
For me the draw was the ARM platform, the efficiency, and the idea that the iOS Simulator has less to simulate and is thus faster.

The iOS Simulator doesn't do ARM emulation on Intel. Apps that run in the simulator on Intel are compiled for Intel, and everything is fully native. The iOS Simulator on Intel is basically chunks of iOS recompiled for Intel.

There's no speed difference and no "amount to simulate" difference between the two platforms, beyond the inherent speed difference between the two chips. Simulator apps are basically a lot like Universal Mac apps: fully native for the platform you're running on.

Developers have even been distributing Intel versions of all their iOS libraries specifically because of the Simulator.
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
It's even worse than that. For example, it is a common C pattern to compare two pointers by value, but the standard makes such comparisons undefined if the pointers don't point to elements within the same array. And so on. The standards committee has really outsmarted itself on the spec... I am still not sure that one can "legally" implement low-level memory or data structure manipulation in standard-compliant C or C++.
Standards compliance is more important for compiler writers than for other programmers. If you really have to write low-level code, you probably won't care about extreme portability anyway. You should have a specific class of target systems in mind, and you can then take advantage of the way things actually work on them.

For example, I always assume that the system is little-endian. I don't spend any effort trying to support hypothetical big-endian systems I don't have access to – how would I even know if the code works on them? I assume that 64-bit integer operations are fast, and I only use smaller integers in limited situations. When I design a file format, I care more about making the data in a memory-mapped file directly usable than about reading and writing such files on unusual systems. And in order to even support those memory-mapped files, I already have to use system-dependent code.
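For instance, rather than writing byte-swapping paths I'll never exercise, I just check the assumption once - something like this sketch:
Code:
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Returns 1 on a little-endian system, 0 otherwise */
static int is_little_endian(void) {
    uint32_t one = 1;
    uint8_t first_byte;
    memcpy(&first_byte, &one, 1);   /* inspect the lowest-addressed byte */
    return first_byte == 1;
}

int main(void) {
    /* Fail fast on the hypothetical big-endian machine this will never run on */
    assert(is_little_endian());
    return 0;
}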

The real portability challenge with C/C++ is linking to dependencies. { Linux, macOS } x { gcc, clang } x { x86-64, ARM } x { libstdc++, libc++ } is already 16 separate platforms. Trying to make your code fully portable and standards-compliant is a waste of effort, when it's already difficult to support all these common platforms at the same time.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
I don't think it ever was legal, but I am not sure whether C had type aliasing rules from the start or whether it's a newer addition. In standard-compliant C you have to use memcpy(), cutting-edge C++ can use std::bitcast :)
Don't forget reinterpret_cast - bit_cast's little brother :p
To make all this worse, various compilers implement their own rules, so that tricks with pointer reinterpretation often work as expected, but sometimes they don't. A lot of bugs happened that way. C is quite insane that way — plenty of operations might have undefined results according to the standard, but the language won't prohibit (or even warn) you from using them. Modern approaches like taken in Rust or Swift, where this kind of type recasting is simply not possible in the regular language (you have to use unsafe primitives and memory transmutation) make much more sense.
Swift technically does have bitPattern: initializers for a lot of its types. They don't modify an existing value in place; instead they make a new value of some type based on the bit pattern of some other value. But it's kinda the same as reinterpreting raw bits, and it's not considered an unsafe operation or anything.
You laugh, but I honestly don’t really know the mnemonics for most of the instructions in the various chips I designed. I always have to look them up. When you are designing a CPU, you only care about the binary encoding of the instructions, not the assembly mnemonics (which are arbitrary, of course, other than the fact that whatever assembler you are using will expect whatever it expects). The only time I ever really had to think about mnemonics was when I was asked to extend the integer ALU operations to 64-bit. I had to document the new instruction formats, so I had to go look up the 32-bit versions. Actually, I don’t even know if the “official” x86-64 mnemonics are the same as what I wrote down in that document at AMD. :)
Sure, but I imagine that process is all around massively different from software engineering projects too :p
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Well, I remember liking C because of the weird stuff that it would let you do. But also not liking it because it would let you do things that you shouldn’t. Makes sense that fast 1/sqrt wasn’t legal.

The only non-standard-compliant part was casting from float to int. If you use one of the supported ways to do this bitwise type reinterpretation, it's perfectly valid.

Maybe I should start messing around in it again. This thread has piqued my interest

I recommend checking out Zig. It's a modern language that was designed as a replacement for C. I think it's one of the more interesting new languages on the block. Excellent design decisions.

Standards compliance is more important for compiler writers than for other programmers.

Standards are important for programmers because your correctly performing code might actually be buggy, and it can stop working correctly when you use a different compiler (or compiler version) or a different platform. The mostly unknown corner cases of the C/C++ standards, and the differing expectations among the various players, are the main reason the C/C++ ecosystem is such a terrible mess.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Don't forget reinterpret_cast - bit_cast's little brother :p

Using reinterpret_cast for type punning is undefined behavior though. That's why std::bit_cast was introduced in the first place.

Swift technically does have bitPattern: initializers for a lot of its types. They don't modify an existing value in place; instead they make a new value of some type based on the bit pattern of some other value. But it's kinda the same as reinterpreting raw bits, and it's not considered an unsafe operation or anything.

Yep, Swift's is the most elegant solution that directly communicates intent in the code while resulting in optimal performance (all these methods are essentially no-ops). But this only works because Swift explicitly states that floating point values are encoded with IEEE 754. C generally makes no such promises.
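You can at least ask the compiler to confirm the assumption - a small sketch (__STDC_IEC_559__ is how a C implementation advertises IEEE 754 support; #warning is standard only as of C23, though widely supported as an extension):
Code:
/* Bit-pattern tricks quietly assume a 32-bit IEEE 754 float */
_Static_assert(sizeof(float) == 4, "expected a 32-bit float");

#ifndef __STDC_IEC_559__
#warning "this implementation does not promise IEEE 754 floating point"
#endif

int main(void) { return 0; }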
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Don't forget reinterpret_cast - bit_cast's little brother :p

Swift technically does have bitPattern: initializers for a lot of its types. They don't modify an existing value in place; instead they make a new value of some type based on the bit pattern of some other value. But it's kinda the same as reinterpreting raw bits, and it's not considered an unsafe operation or anything.

Sure, but I imagine that process is all around massively different from software engineering projects too :p

When I was designing the PowerPC x704, I actually designed it using C :) It’s been so long that I don’t remember the details, but, essentially, each function represented a specific gate with specific transistor sizes, and the variables were wires. Then you’d use comments to control things like where the gates physically belong. Then we had tools which could parse all that and physically build the layout, but you could also compile it to make sure that the functionality was correct. Essentially we used it instead of structural Verilog (which should not be confused with behavioral Verilog, which is what most Verilog-users use).

I remember it got a little weird because you might call the nand2x1000 function in your code, but then you could override that with a comment to make it a nand2x1600 or whatever. It got kludgey because we kept writing tools like optimizers to change gate sizes. Then we figured out that having to touch the C code to change something like where a cell lives would kick off unnecessary dependencies, so we started having sidecar files with a lot of this stuff. We were making it all up as we went along.
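(For readers who haven't seen the style: a made-up sketch of what a gate-level netlist written as C might look like - invented names, and the real tooling was obviously far more involved. The "behavioural" gate body is only there so the netlist compiles and simulates.)
Code:
#include <stdio.h>

typedef int wire;

/* One function per gate flavour/size; calls are instances, variables are wires */
static wire nand2x1000(wire a, wire b) { return !(a && b); }   /* 2-input NAND, large drive */

/* Sum bit of a half adder, built structurally from four NANDs (an XOR) */
static wire half_adder_sum(wire a, wire b)
{
    wire n1 = nand2x1000(a, b);     /* PLACE: row 12, col 3 */
    wire n2 = nand2x1000(a, n1);    /* PLACE: row 12, col 4 */
    wire n3 = nand2x1000(b, n1);    /* PLACE: row 12, col 5 */
    return nand2x1000(n2, n3);      /* SIZE: nand2x1600 (comment override) */
}

int main(void)
{
    for (wire a = 0; a <= 1; a++)
        for (wire b = 0; b <= 1; b++)
            printf("%d xor %d = %d\n", a, b, half_adder_sum(a, b));
    return 0;
}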

But all that mess did heavily influence the design methodology me and my coworker came up with at AMD (we both came from Exponential, though I took a very short stop at Sun).
 

fuchsdh

macrumors 68020
Jun 19, 2014
2,028
1,831
Realistically in 10 years if Apple's chips are super far behind on performance-per-watt and power, they can... just go with another chip vendor?

The upsides of switching to your own chips are tremendous, despite the costs and risks, but even if Apple's chips weren't years ahead of the competition they'd still likely have done it. It's not like Intel is going to say it hates money and refuse to sell chips to Apple like a scorned ex if they came back. They gain more than they gamble, in this instance.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
Realistically in 10 years if Apple's chips are super far behind on performance-per-watt and power, they can... just go with another chip vendor?

The upsides of switching to your own chips are tremendous, despite the costs and risks, but even if Apple's chips weren't years ahead of the competition they'd still likely have done it. It's not like Intel is going to say it hates money and refuse to sell chips to Apple like a scorned ex if they came back. They gain more than they gamble, in this instance.

I think that it would be difficult for Apple to go back to commodity chips in the future. Look at the new features in Monterey that won't be supported on Intel Macs for lack of hardware support. Apple is going to keep adding stuff to its silicon to improve performance, and I'd guess they would see a big performance regression if they had to go back to doing it in software.
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
Standards are important for programmers because your correctly performing code might actually be buggy, and it can stop working correctly when you use a different compiler (or compiler version) or a different platform. The mostly unknown corner cases of the C/C++ standards, and the differing expectations among the various players, are the main reason the C/C++ ecosystem is such a terrible mess.
In my experience, undefined behavior is a non-issue. Compiler writers are not evil. They don't do arbitrary things just because the standard allows it. They adhere to the principle of least surprise and implement undefined behavior in the most natural way on each platform. And because all relevant platforms are effectively the same in most applications, things generally work as you would expect.

I rarely see language-level issues in cross-platform C/C++ software. Even standard library issues are rare, once you learn to avoid the worst parts (such as std::hash, std::unordered_map, std::unordered_set, and random distributions). Build systems and dependency handling are the real issues, because they are out of the scope of the language. Everyone uses their favorite tools and makes arbitrary assumptions, and the solutions are rarely portable.
 

altaic

Suspended
Jan 26, 2004
712
484
When I was designing the PowerPC x704, I actually designed it using C :) It’s been so long that I don’t remember the details, but, essentially, each function represented a specific gate with specific transistor sizes, and the variables were wires. Then you’d use comments to control things like where the gates physically belong. Then we had tools which could parse all that and physically build the layout, but you could also compile it to make sure that the functionality was correct. Essentially we used it instead of structural Verilog (which should not be confused with behavioral Verilog, which is what most Verilog-users use).

I remember it got a little weird because you might call the nand2x1000 function in your code, but then you could override that with a comment to make it a nand2x1600 or whatever. It got kludgey because we kept writing tools like optimizers to change gate sizes. Then we figured out that having to touch the C code to change something like where a cell lives would kick off unnecessary dependencies, so we started having sidecar files with a lot of this stuff. We were making it all up as we went along.

But all that mess did heavily influence the design methodology me and my coworker came up with at AMD (we both came from Exponential, though I took a very short stop at Sun).

So you defined a hardware description domain-specific language on top of C and its preprocessor. That sounds a bit nightmarish. Strongly typed functional languages are way better suited.

In case you haven’t seen it before, Clash is a DSL atop Haskell for just this purpose; it’s pretty neat. And, yeah, it didn’t exist in the x704 era… but SML did!
 
Last edited:

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
So you defined a hardware description domain-specific language on top of C and its preprocessor. That sounds a bit nightmarish. Strongly typed functional languages are way better suited.

In case you haven’t seen it before, Clash is a DSL atop Haskell for just this purpose; it’s pretty neat. And, yeah, it didn’t exist in the x704 era… but SML did!

Just from the front page of the website, it doesn’t look like Clash would have been much help. We did the synthesizing ourselves, in our heads (or on scratch paper), so we literally defined each structure gate by gate. In other words, we were not coding the functionality - we had another C model for that - but the structure.

A fun side note - the gate didn’t necessarily already exist in our cell library. We had a gate naming convention, so any gate not in the library was synthesized on the fly. Any gate that was used more than a few times later had its layout hand-edited. But part of the reason that worked well is that these were CML or ECL gates, not CMOS, and bipolar transistors are always pre-laid out (because they are vertical, not horizontal, devices).
 
  • Like
Reactions: casperes1996