
Technerd108

macrumors 68030
Original poster
Oct 24, 2021
2,945
4,150
How secure are our laptop and phone chips??

Intel and AMD had Spectre. I believe something similar, or possibly the same flaw, also affected some cellphone chipsets, but I am not sure.

So it seems like hardware attack vectors are more of a target than ever. What can we as consumers do to know whether our hardware is safe, and how likely are these vulnerabilities to actually be exploited?

For example, with Spectre I am pretty sure you needed physical access to the device in order to exploit it. If that is true, then I think customers should have been able to decide whether or not they wanted a software update that massively slowed their devices, all for a threat they were most likely never going to encounter. I understand theft could be an issue, but drive encryption and a locked device should be enough, and if it isn't, then the malicious actor is too good to stop anyway. Wouldn't this apply to any new threat on any device or platform?

So as new threats to new CPU architectures emerge, how do we as customers understand and mitigate the risk? For example, even 13th-gen Intel CPUs use a similar kind of prefetcher; Intel's implementation apparently doesn't leak info in the same way, but these prefetch implementations are common and widespread across the industry, so it is not a problem unique to any one OEM, manufacturer or platform.

How do hardware engineers at Intel, AMD, and Apple, among others, protect against these types of attacks and harden the CPU against future ones? Hackers are always hunting for vulnerabilities, and with the introduction of AI they can scan for them on a scale that wasn't possible before. So these threats are only going to grow exponentially, and no one will be safe unless everyone works together to address the problem.

Microsoft, Google, Apple, Intel, AMD, MediaTek, etc. should all work together to harden their respective software/hardware and share information with each other to target the latest threats.
 

maflynn

macrumors Haswell
May 3, 2009
73,572
43,556
All chip makers have vulnerabilities now. Both Intel and AMD have been able to use microcode to address the issues; it seems that's not an option for Apple.

While these stories are quite popular for sites, i.e., clickbait, does it really impact the majority of people?

While it may sound like I'm sticking my head in the sand, there's not much we can do other than practice safe computing habits. I would not avoid one processor over another because of these reports; just do what you can to minimize your risks.
 

za9ra22

macrumors 65816
Sep 25, 2003
1,441
1,897
Technerd108 said: [original post quoted in full above]
Apologies for the length, which barely even scratches the surface of the subject and is rather generalized and disorganized, but:

Hardware attacks in the form of CPU/chipset vulnerabilities like Meltdown, Spectre, Downfall, Inception and the latest Apple Silicon GoFetch vulnerability are generally not as easy to exploit as other forms of attack, so for the most part, since Spectre at least, they have been more of an academic issue than a practical one.

The problem is rooted in the performance gains to be had from 'prefetching' instructions and data so the CPU is queued up and ready for the next action before it is needed. If you can get the right code onto the system, in theory you can read that data, or extrapolate it from the prefetch behaviour and timing. That is basically why defending against this on an M1 or M2 (we don't know for sure about the M3 yet) would seriously hit system performance: Apple would have to halt prefetching, change the way the prefetcher works, or change how the data is encrypted - though there are other ways of indirectly defending it.
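To make the 'read that data, or extrapolate it from timing' part more concrete, here is a minimal sketch of the cache-timing primitive that Spectre- and GoFetch-class attacks build on. This is my own illustration, not exploit code, and the buffer sizes, line size and clock_gettime-based timer are rough assumptions for a demo; real attacks use much finer-grained measurement. All it shows is that a load which hits in the cache is measurably faster than one which has been evicted, and that timing gap is the 'signal' an attacker reads to work out what a prefetcher or speculative load has touched.

[CODE]
/* cache_timing.c - illustrative sketch only, not an exploit.
 * Shows that reading memory already in the CPU cache is measurably faster
 * than reading memory that has been evicted.
 * Build: cc -O2 cache_timing.c -o cache_timing
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define LINE   128                     /* stride >= cache line size on x86 and Apple Silicon */
#define LINES  64                      /* probe array: 64 lines                              */
#define EVICT  (256u * 1024 * 1024)    /* assumed big enough to flush the caches             */

static uint64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Read one byte from every probe line, in a scrambled order to sidestep the
 * hardware prefetcher, and return how long the whole pass took. */
static uint64_t probe_time(volatile uint8_t *probe) {
    uint64_t sum = 0, start = now_ns();
    for (int i = 0; i < LINES; i++)
        sum += probe[((i * 37) % LINES) * LINE];
    uint64_t elapsed = now_ns() - start;
    if (sum == 123456789) puts("");    /* keep the loads from being optimized away */
    return elapsed;
}

int main(void) {
    uint8_t *probe = malloc((size_t)LINES * LINE);
    uint8_t *evict = malloc(EVICT);
    memset(probe, 1, (size_t)LINES * LINE);
    memset(evict, 1, EVICT);

    (void)probe_time(probe);           /* warm the probe array into the cache */
    uint64_t hot = probe_time(probe);  /* these reads should all hit          */

    uint64_t junk = 0;
    for (size_t i = 0; i < EVICT; i += LINE)
        junk += evict[i];              /* stream a large buffer to evict the probe */

    uint64_t cold = probe_time(probe); /* these reads should now miss to DRAM */

    printf("cached pass: %llu ns, evicted pass: %llu ns (junk=%llu)\n",
           (unsigned long long)hot, (unsigned long long)cold,
           (unsigned long long)junk);
    free(probe);
    free(evict);
    return 0;
}
[/CODE]

On a typical Intel or M-series machine the evicted pass takes several times longer than the cached pass; turning that timing gap into leaked secrets is the hard part the GoFetch researchers demonstrated.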

The good news is that actually attacking a system this way is increasingly difficult; the hard part is somehow getting the right code onto the system in the first place. The bad news is that, where in the past it was easier to effectively attack a Windows system - because the kernel owned the entire system, so compromising it in one place could get you anywhere else inside the machine - with this kind of vulnerability the code only needs a place to execute, from which it can then sniff the data it is looking for.

Some are saying that this vulnerability can only be exploited with physical access, but that isn't true. It 'only' requires malicious code being able, or authorized, to run on the machine. Malicious websites or email attachments could theoretically deliver that code, though sandboxing makes it far more difficult now than it would once have been. The protections in macOS mean that an attacker would need either to craft the code to subvert the macOS kernel itself, which is quite hard to achieve, or to convince the user to click OK on a security dialog requesting permission to execute a program, since the system doesn't permit anything that isn't correctly signed as safe to run without that.
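As an aside on what 'correctly signed' means mechanically, here is a small, hypothetical sketch (my own, not Apple's actual Gatekeeper implementation) that asks the macOS Security framework whether a binary's code signature is intact. Gatekeeper layers notarization, quarantine and the user-approval prompts mentioned above on top of checks like this.

[CODE]
/* checksig.c - hypothetical illustration only, not Apple's Gatekeeper code.
 * Build on macOS: clang checksig.c -framework Security -framework CoreFoundation -o checksig
 */
#include <CoreFoundation/CoreFoundation.h>
#include <Security/Security.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s /path/to/app-or-binary\n", argv[0]);
        return 2;
    }

    CFStringRef path = CFStringCreateWithCString(kCFAllocatorDefault, argv[1],
                                                 kCFStringEncodingUTF8);
    CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, path,
                                                 kCFURLPOSIXPathStyle, false);

    SecStaticCodeRef code = NULL;
    OSStatus status = SecStaticCodeCreateWithPath(url, kSecCSDefaultFlags, &code);
    if (status == errSecSuccess) {
        /* NULL requirement: only checks that a signature exists and is intact,
         * not that it comes from a particular developer or is notarized. */
        status = SecStaticCodeCheckValidity(code, kSecCSCheckAllArchitectures, NULL);
    }

    printf("%s: %s (OSStatus %d)\n", argv[1],
           status == errSecSuccess ? "signature looks valid" : "unsigned or invalid",
           (int)status);

    if (code) CFRelease(code);
    if (url) CFRelease(url);
    CFRelease(path);
    return status == errSecSuccess ? 0 : 1;
}
[/CODE]

Pointing it at something Apple-signed like /bin/ls should report a valid signature, while an unsigned or tampered-with binary will not; the real gatekeeping decision involves far more policy than this single check.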

And that brings us back to the weakest point in the security of any system: the user. You can build the most secure computing platform ever known, but put a human in front of it, and you have a two-legged, half-brained security vulnerability right there.

This is arguably one of Apple's strongest arguments against claims of monopolistic behavior in its ecosystem, because the very nature of them acting as gatekeeper means the onus of responsibility for securing the ecosystem is on them. Open it up, and there's a free-for-all where security, system integrity, user data and device trust can't be ensured for anyone.

But there's another part of the problem - more my area than CPU design - and that is cross-contamination. Given that macOS, iOS, iPadOS and so on all share the same core, an attacker may be able to exploit an older, insecure iOS device, for example, to inject the code, and then carry the exploit across a trusted relationship to a Mac.

It is a valid question what users can and should be told about these kinds of threats, but the problem is that even in places where reasonably technologically minded people gather (such as here, perhaps), there's an awful lot more opinion than there is knowledge. For ordinary users who think nothing of this sort of thing, and who assume that if they have antivirus software, for example, they've fixed the problem, explaining that the internet isn't just a playground for buying stuff and whinging about Apple/Microsoft/whoever else, but is in fact the front line of a war zone as far as their computer is concerned, isn't going to do much good. Which is why Apple in particular try to lock down their systems as tightly as possible, so that they can do the hard work and users don't have to wrestle with it. It is (sometimes/often) the very stifling of user freedoms which helps most in keeping users safe.

Which means that this is one of those times when Apple treating their users like idiots who don't know what they're doing might actually help.

The other side of this coin is what Apple are doing about the threat itself, and those who say 'nothing' are the real idiots. Threat researchers almost always report their findings to the appropriate place - whether Apple, Microsoft, Adobe, Google, Intel, etc. - as soon as they can verify a problem exists, even if only theoretically. What happens next is almost never in the public eye, for obvious reasons. Apple have in-house and external teams who take these reports and work on them, sometimes with the people who found and reported the vulnerability, sometimes not. There's almost never a public face to this work. Even when a patch is released, there's rarely any detail provided to the public, because detail is dangerous to future security.

There are times when vulnerabilities are found by threat actors, of course, and you get zero-day exploits as a result, where a problem surfaces because it is being attacked. In this case, there are no indicators of active exploits or attempts, which gives at least a small clue as to how difficult this vulnerability is to find and to craft a means of using.

So the answer to 'how secure are our laptop and phone chips' (and also desktop, tablet, IoT and embedded) is that inherently they aren't. Their job is to move data and execute code. Securing them isn't about the chip, but about the system as a whole: better securing the perimeter, better managing the data within the system, and hardening the system so that if something does get in, it is constrained to a sandbox where it can't do harm and from which it can't escape.

One point you made is that "Microsoft, Google, Apple, Intel, AMD, MediaTek, etc. should all work together to harden their respective software/hardware and share information with each other to target the latest threats," and the fact is that they do. Threat intelligence and mitigation is an industry-wide activity served by a legion of truly astonishing professional talent, among both independent researchers and corporate engineers. In most instances, corporate boundaries don't exist in this community, except where proprietary information is concerned, and even then there is engineering knowledge that is shared. In the threat intelligence community there is nothing like the kind of partisan tribalism you see here, just some boundaries defined by essential NDAs.
 

Technerd108

macrumors 68030
Original poster
Oct 24, 2021
2,945
4,150
za9ra22 said: [post quoted in full above]
Please feel free to expand as much as possible; I am learning, and it is truly interesting to me.

I am not singling out Apple here, and I agree all computing devices are vulnerable to the single point of failure that, more often than not, is the user.

But I disagree with Apple being the gatekeeper and being entrusted completely with securing my device, because it gives users a false sense of security; they then click OK on something they shouldn't, assuming their system is bulletproof. Apple should focus on educating users about the risks, and maybe on admitting that even they are not immune to malware and hacking.

We as users need to take responsibility to learn as much as we can to keep ourselves as safe as we can. We shouldn't rely on others. When the **** hits the fan, others generally shift the blame to something or someone "out of their control," even if they blatantly screwed up. Obviously, people who are not computer literate have a problem; they are targets. Honestly, without judgement or criticism, I think at this point we shouldn't just give Grandma a laptop and let her have at it. The internet is too dangerous, and just as with driving a car, you need a certain level of proficiency before getting behind the wheel. The same thing is becoming true of the internet: anyone can learn and apply for a license, but everyone needs a basic level of training to be safe. The same should be true of using a laptop, tablet or phone connected to the internet. It would solve a huge number of security issues, exploits, and thefts.

We have to make the user responsible again. This means more freedom, not less. It also means more risk, with the onus of responsibility shifted towards the user, but it certainly doesn't absolve companies from doing their part to make sure their devices are secure.

This is an unpopular view, and "let the corporation handle my security" seems to be the preferred model. People rarely want to take responsibility; security is something they feel is out of their hands and there is nothing they can do about it, so they let others handle it. But this leads to the "others" at some point being able to abuse that power. I don't trust anyone enough to let them handle it if I don't know what they are doing. I think if the average user has to take a more active role and do some basic learning about security threats and how they work, then they will feel more empowered and not be the weak link in the chain.

Turning the user into a hardened target would, I think, be the most beneficial thing you can do in IT security overall!
 

Technerd108

macrumors 68030
Original poster
Oct 24, 2021
2,945
4,150
maflynn said: [post quoted in full above]
I agree all chipmakers, or a majority of them at least, have had these issues now. However, that doesn't lessen the vulnerability just because it is more common. It is a serious threat, just like Spectre, BUT the difference - and correct me if I am wrong - is that Spectre could not be remotely activated. You needed physical access, and that is not the case with the M-series chips. M1 and M2 can't be patched at all; all you can do is turn off the performance cores, and that would be awful! M3 would take a hit like Intel and AMD did. M4 will probably still be affected, as Apple may not have updated the chip design? They knew in December, but is that enough time, since production of the A18 probably started right around the same time?

For now it is a nothingburger. BUT that is not guaranteed to stay that way for long. How many people now have M-series Macs and iPads?

As for there being not much you can do - of course there is. Stay current on your updates. Keep up with the latest information regarding the chip flaw, and then you will know if there is a mitigation. Practice layered safe computing, as you said.

I would very much avoid a chip with an exploitable flaw. I sold all the laptops I had with the Spectre vulnerability as fast as I could. Who wants something that has an unfixable vulnerability? Honestly? I went through exactly the same thing when this happened to Intel and AMD.

And the one thing you can be sure of is that these types of exploits will continue to affect Intel, AMD, Qualcomm, MediaTek, Apple and others. Since they do threat research, as you say, hopefully they will allocate more money to R&D for hardening our chips!
 

za9ra22

macrumors 65816
Sep 25, 2003
1,441
1,897
Technerd108 said: [post quoted in full above]
Ah, the dichotomy.

The thing is that I don't disagree at all that the user really needs to take responsibility. However much I 'enjoy', if that is the right word, working in the information security field, the one thing that would fix our ills faster and more effectively than anything else is the user acting responsibly and in a well-informed way.

I would not mind in the slightest being put out of a job if that happens (I'm due for retirement from this anyway!)

But here's the thing. TWO things, actually. Firstly, you can't communicate these kinds of threats to users in a way that doesn't also hand over the fundamental weaknesses in systems to even the lowest-hanging fruit among the bad guys, who are the ones who can do, and have done, users the most damage.

The second is an unfortunate consequence of western economies: lawyers, and the litigious societies we live in. If Apple themselves removed their 'gatekeeping' role, the people who got hit with the result would sue, likely in their millions.

I'm not arguing that Apple should have this role at all, but after 20+ years of very intensive data security work, I can readily see why they haven't had much option, given they steered this way to begin with, and users just sat back and let them. Actually encouraged them.

I'm not saying that Apple didn't take some liberties with this either, because they did. I'm not even saying that 'big tech' are really good guys and we just misunderstand them - they're not and we don't. What I am saying is that right now, Apple's own Apple ecosystem has put them (and us) between a rock and a hard place.

I don't know that they could have done it any other way, but even if they could and should have, they didn't. Now we have an almost entirely safe and secure ecosystem which will crumble if users have to take control of their own fate. Personally... well, I know what I'm doing, so on that basis I shouldn't care, but it seems easy to say users should take responsibility. I have been waiting 40 years in the IT business for even a glimmer of that, and it hasn't happened, so by all means, tell us how that's going to happen?
 

Technerd108

macrumors 68030
Original poster
Oct 24, 2021
2,945
4,150
za9ra22 said: [quoted from the post above]

I agree; as the market is now, ignorance is actually encouraged. "It just works" has become a mantra in the IT world. Sure, it is nice when it just works, but a fundamental understanding of how it works would also be nice.

As for Apple, I don't see how they would have done anything differently, or will do anything differently unless they are forced. Honestly, it is in Apple's DNA to lock everything down and enforce incompatibility; it is not a new thing. It is just that something like messaging is a basic phone function, like making a call. When you call an iPhone from Android there is very little, if any, loss of features or compatibility between devices (you know, because it is a phone), and the same should be true of texting; it should have the same functionality. This, to me, is where Apple went too far with the proprietary thing, and it set me off a bit.

Now that I think of it, my entire thread about saying goodbye to Apple is something I should have known from the start: you are either all in or you are OUT, no in between. However, there was a time when Apple was more open toward Windows and other platforms, and that was a good time. I think there is a way for Apple to maintain absolute control while opening up compatibility and making some apps cross-platform. It would actually bring them business. But they have chosen not to embrace this, and that is my main gripe. They could make it so we can have the best of both worlds and come and go as we please. If I could use an Android phone with my Mac and get the same connectivity as with my iPhone, or close to it, I would be a more loyal Apple customer. Often an option is all people want; they may take it from time to time, but more often it will reinforce their decision to stay, because they have the freedom to do what they want but are happy where they are.

How are you going to get users to a basic level of competency? To take responsibility for their actions online? It is probably never going to happen - you are right. I wish there were an internet license. Maybe take a test once every 5 years; the test could be updated with the latest info each time so it isn't a burden. Make the fees very cheap and the classes free - the first classes and tests free, and a small fee every 5 years after that. But that will never happen.

It should happen. I think kids should have a mandatory computer literacy course in school. My son was never taught how to use a computer, just given online tasks on the assumption that he already knew, and that was from an early age. I taught him a lot, and he is very competent, but when his friends do things or tell him to do things, he has learned to research them first, because often they are wrong or the sites are malicious. Just teaching him to keep his devices and his gaming laptop up to date - the OS, various drivers, and the Nvidia graphics - was a long process. But how many kids are just given a device and never taught anything? They need to be taught the basics: how Windows or macOS works, how basic internet security works, and so on.
 

za9ra22

macrumors 65816
Sep 25, 2003
1,441
1,897
Technerd108 said: [post quoted in full above]
You avoid the basic question, which is actually the critical one in this - and it isn't just Apple wrestling with this, but the entire IT industry: How do you educate users so they take responsibility, yet not also bankrupt your business with frivolous lawsuits, or (in this instance) telegraph your deepest security vulnerabilities to the very people who will use them, and who everyone knows will use them?

This isn't a philosophical question but an actual and real one. It's easy to talk about what ought to be, and I'd bet that between us you and I would find barely much distance if it was. But Apple (and others in the IT industry) don't live in a philosophical land but an actual and real one. In Apple's case, with an address in California, a real one subject to some very stringent laws.

I didn't raise Apple's gatekeeping role as a discussion point but as a token of how they play a role in the security of the entire ecosystem. It's an actual thing. What I'd like to know - I think they would too - is how to turn that over to the user in actual, factual, practical ways. I've spent a long time trying to work out how to move responsibility from them to the user, and come up mostly blank in the context of how the class action suits would invariably work afterwards.

Which, sadly, is my last post of the day. While you might think otherwise, we're mostly on the same side, but my line of work is about actual ways to fix things, not philosophical ones, so apologies if this comes off a bit blunt!
 

Technerd108

macrumors 68030
Original poster
Oct 24, 2021
2,945
4,150
za9ra22 said: [post quoted in full above]
No it didn't come off blunt.

It makes sense.

The real world is a jungle. Being successful at the level of Apple, Microsoft, and Google is an amazing achievement in itself, no matter how they got there.

They have to be pragmatic first.

There is an easy way: the EULA. No one reads it. Put in a few lines saying the company is not responsible for a user's security or data, but will take all steps necessary to ensure the user's security and privacy, with NO guarantee, and that you use it at your own risk. They may already have that in there, but either way they need to highlight it so users understand that they are at least partially responsible for their own security and privacy, and you slowly go from there.

It is possible, but there is no profit in making such a move, so why go through the hassle? They have more control now and they like it that way, and for a lot of the reasons you brought up, that makes total sense. There is far more risk than upside for Apple, and that matters more than anything else. And I have to agree.


I am not saying Apple was wrong to keep everything locked down in software, like sideloading, or to keep the App Store locked down. Those are safety and security issues where I agree with Apple, even looking at it only from their profit/risk perspective.

What I don't see as a security risk at all is making Apple products more compatible with devices outside of Apple. That doesn't open a security hole. They could have a few basic cross-platform apps, and that wouldn't hurt security or privacy. They could make iMessage more compatible with Android without being forced to. Those are the things Apple could do that would benefit everyone.

Let them be gatekeepers; I get it. But it still doesn't change the fact that the weakest link is the user. It also doesn't change the fact that Apple markets its superior security, which lulls users into a false sense of security and could cause them to engage in risky behavior they wouldn't if they felt responsible for their own security.

My answer to your question is simply that I agree with your assessment, although I wish it weren't right. Lol
 

maflynn

macrumors Haswell
May 3, 2009
73,572
43,556
that doesn't lessen the vulnerability because it is more common
No, but it does mean it's something we shouldn't lose sleep over.

It is a serious threat just like Spectre
The likelihood of these vulnerabilities being exploited is really low. In fact, my googling seems to indicate that it's really hard to exploit and that there are no known instances of Spectre exploits in the wild.

My point, again, is that people get all up in arms over something like this, but it's like worrying about Dengue Fever even if you never travel to a location where it's prevalent.
 

za9ra22

macrumors 65816
Sep 25, 2003
1,441
1,897
Technerd108 said: [post quoted in full above]
Sadly, in practical terms, dumping an exclusion into the EULA won't really work as a defense. You can make actual use of - meaning access to - the software conditional on accepting the EULA, but since it's a form of contract, you can't put unfair or unenforceable terms into it. A condition that says or implies 'we make the software, but it's your job to keep it safe' is not a fair contractual term, since the user has no access to repair faults or failures inside the software, nor any way to understand the risks when those risks can't be openly discussed in non-expert terms.

Apple's responsibility falls under the general legal principle of 'duty of care'. They make the product from the ground up, and if a user of the product suffers a definable harm as a result of that use, they have a claim against Apple. Obviously that extends to a loss of security of personal data resulting in ID theft or the palpable risk of it from within the system itself, and when testing the case in court, the question would be whether Apple could say that a 'reasonable person' would be expected to know how to secure their system. It's incredibly unlikely a court would agree that Apple could pass this duty to a user.

Microsoft face exactly the same problem, though they can defray some of the legal risk because in the majority of cases they are not the manufacturer of the hardware (and also because Windows' poor security footprint is somewhat 'notorious'). But the time and expense they put into patching and securing Windows isn't because they're just nice guys who know someone has to do it; it's because they have a legal department reminding them that a hole in Windows that lets the bad guys in is a class action not yet started.

In both cases, Apple and Microsoft have increasingly built in security provisions, but in fact this actually reinforces their duty of care rather than providing a way to defray it to the user.

How Apple dealt with this back in the Jobs era is their problem, because for simplicity and profit, they created the 'ecosystem'. For whatever reason (many will say 'greed') they became gatekeeper of that ecosystem in the specific sense not of data (music and the like, or even cloud storage) but executable code. The App Store, and tightening restrictions on running non-App Store executables, has vastly improved Apple device security. Microsoft saw this and were rather envious so have been doing the same thing, though absent the high level of control, rather less successfully. Even so, a number of Apple's security ideas have found their way to Microsoft and into Windows too.

The reality is that if you ignore Apple's corporate culture, gatekeeping executable code is a hugely valuable security measure, because securing the perimeter secures everything and everyone inside it. Leaving aside the profit motive, the App Store not only gave a lot of smaller developers access to a ready customer base, but also gave Apple the responsibility to weed out injurious code.

On the Windows side, the latter is done with a vast array of variously useful or less effective anti-malware software, where in some environments, on top of the basic layers in Windows itself, it's necessary to run maybe three security platforms on each system - which is one factor that gives macOS a degree of superiority as an operating system, because the perimeter defense of the ecosystem means this generally isn't necessary.

As I say, you and I would not have all that much daylight between us in how we see Apple and their corporate policies playing out, but the reality of securing our computers isn't a simple one. Breaking Apple away from their gatekeeping won't make us safer and won't save us money. If anything, the macOS, iOS, etcOS environments will begin to leak more data and cost users more money in subscriptions for malware-defense software, which will increase the turnover of systems as older ones bog down faster under the processor load of non-productive background work.

As a security researcher, I'd much rather see the Windows side levered up to macOS, iOS, etcOS levels of security than the other way around - not least because whatever we might save by breaking the gatekeeping/ecosystem Apple has built will cost us a lot more in a fast-splintering malware-defense industry, and in 'experts' feeding off system security at end-user expense.

I'd much rather the gatekeeping stay in place, even if under a form of regulation.
 