How secure are our laptop and phone chips?
Intel and AMD had Spectre, and I believe the same or a similar vulnerability also affected some phone chipsets, but I'm not sure.
So it seems like hardware is more of a target than ever. What can we as consumers do to know whether our hardware is safe, and how likely are these vulnerabilities to actually be exploited?
For example, with Spectre I'm pretty sure you needed physical access to the device in order to exploit it. If that's true, then I think customers should have been able to decide whether or not they wanted a software update that massively slowed their devices, all for a threat they were most likely never going to encounter. I understand theft could be an issue, but hard drive encryption and a locked device should be enough, and if they aren't, the malicious actor is too good to stop anyway. I think this would apply to any new threat on any device or platform.
So as new threats to new CPU architectures emerge, how do we as customers understand and mitigate the risk? For example, even 13th-gen Intel CPUs use a similar prefetcher that could leak info. Intel's implementation reportedly doesn't leak, but these prefetcher designs are common and widespread across the industry, so this is not a problem unique to any one OEM, manufacturer, or platform.
How do hardware engineers at Intel, AMD, and Apple, among others, protect against these types of attacks and harden the CPU against future ones? Hackers are always hunting for vulnerabilities, and with the introduction of AI they can scan for them at a scale not available before. So these threats are only going to grow exponentially, and no one will be safe unless everyone works together to address the problem.
Microsoft, Google, Apple, Intel, AMD, MediaTek, etc. should all work together to harden their respective software/hardware and share information with each other to target the latest threats.
Apologies for the length, which barely even scratches the surface of the subject and is rather generalized and disorganized, but:
Hardware attacks in the form of CPU/chipset vulnerabilities like Meltdown, Spectre, Downfall, Inception, and the latest Apple Silicon GoFetch vulnerability are generally not as easy to exploit as other attack forms and vectors, so for the most part (since Spectre, at least) they have been more of an academic issue than a practical one.
The problem is rooted in the performance gains to be had from 'prefetching' instructions and data so the CPU is queued up and ready for the next action before it is needed. If you can get the right code onto the system, in theory you can read that data, or infer it from the prefetch behaviour and clock cycles. That's basically why defending against this on an M1 or M2 (we don't know for sure about the M3 yet) would seriously hurt system performance: Apple would have to halt prefetching, change the way the prefetch function works, or change how the data is encrypted - though there are other ways of defending against it indirectly.
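To make that concrete, here's a minimal, illustrative sketch of the cache-timing primitive ('flush+reload') that this whole class of attack ultimately relies on, assuming an x86 machine with clflush and rdtscp available. Everything in it - the names, the 100-cycle threshold, the stand-in 'victim' access - is my own invention for demonstration; this is not exploit code for GoFetch or Spectre:

```c
/* Illustrative flush+reload sketch: how timing a memory load can
 * reveal which cache line someone else's code touched. A real attack
 * additionally needs a way to make the victim (or a prefetcher) touch
 * a secret-dependent address; that part is deliberately faked here. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static uint8_t probe[256 * 4096]; /* one page-spaced slot per possible byte value */

static uint64_t time_load(volatile uint8_t *addr) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux); /* timestamp before the load */
    (void)*addr;                     /* the load being timed */
    return __rdtscp(&aux) - start;
}

int main(void) {
    /* Step 1: flush every probe slot out of the cache. */
    for (int i = 0; i < 256; i++)
        _mm_clflush(&probe[i * 4096]);

    /* Step 2: the victim runs and touches a secret-dependent slot.
     * Here we just fake it; in Spectre it happens speculatively,
     * in GoFetch via the data memory-dependent prefetcher. */
    probe[42 * 4096] = 1;

    /* Step 3: time each slot; the one the victim touched is cached
     * and reloads much faster than the flushed ones. */
    for (int i = 0; i < 256; i++) {
        uint64_t t = time_load(&probe[i * 4096]);
        if (t < 100) /* threshold is machine-dependent guesswork */
            printf("likely 'secret' byte: %d (%llu cycles)\n",
                   i, (unsigned long long)t);
    }
    return 0;
}
```

The point isn't the code itself but the shape of it: nothing here reads the secret directly. It's recovered purely from how long loads take, which is why the fix has to change the hardware's timing behaviour rather than just patch a bug somewhere in software.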
The good news is that actually attacking a system this way is increasingly difficult; the hard part is somehow getting the right code into the system. The bad news is that where, in the past, it would have been easier to effectively attack a Windows system (because the kernel owned the entire system, so attacking it in one place could get you anywhere else inside the machine), with this kind of vulnerability the code only needs a place to execute from which it can sniff the data it is looking for.
Some are saying that this vulnerability could only be exploited with an attacker's physical presence, but that isn't true. It 'only' requires malicious code being able, or authorized, to run on the machine. Malicious websites or email attachments could theoretically deliver that code, though sandboxing makes that far more difficult now than it would have been. The protections in macOS mean that an attacker would need either to craft code that subverts the macOS kernel itself, which is quite hard to achieve, or to convince the user to click OK on a security dialog requesting permission to execute a program, since the kernel doesn't permit anything not correctly signed as safe to run without that.
And that brings us back to the weakest point in the security of any system: the user. You can build the most secure computing platform ever known, but put a human in front of it, and you have a two-legged, half-brained security vulnerability right there.
This is arguably one of Apple's strongest defenses against claims of monopolistic behavior in its ecosystem, because the very fact that they act as gatekeeper means the onus of responsibility for securing the ecosystem is on them. Open it up, and there's a free-for-all where security, system integrity, user data, and device trust can't be ensured for anyone.
But there's another part of the problem - more my area than CPU design - and that is cross-contamination. Given that macOS, iOS, iPadOS, and so on all share the same core, an attacker may be able to exploit an older, insecure iOS device, for example, to inject the code and then execute an exploit across a trusted relationship to a Mac.
It is a valid question what users can and should be told about these kinds of threats, but the problem is that even in places where reasonably technologically minded people gather (such as here, perhaps), there's an awful lot more opinion than there is knowledge. For ordinary users who think nothing of this sort of thing, and who assume that if they have antivirus software they've fixed the problem, explaining that the internet isn't a playground for buying stuff and whinging about Apple/Microsoft/whoever else, but is actually the front line of a war zone for their computer, isn't going to do much good. Which is why Apple in particular try to lock down their systems as tightly as possible, so that they can do the hard work and users don't have to wrestle with it. It is (sometimes/often) the very stifling of user freedoms which helps most in keeping users safe.
Which means that this is one of those times when Apple treating their users like idiots who don't know what they're doing might actually help.
The other side of this coin is what Apple are doing about the threat itself, and those who say 'nothing' are the real idiots. Threat researchers almost always report their findings to the appropriate place - whether Apple, Microsoft, Adobe, Google, Intel, etc. - as soon as they can verify a problem exists, even if only theoretically. What happens next is almost never in the public eye, for obvious reasons. Apple have in-house and external teams who take these reports and work on them, sometimes with the people who found and reported the vulnerability, sometimes not. There's almost never a public face to this work. Even when a patch is released, there's rarely any detail provided to the public, because detail is dangerous to future security.
There are times when vulnerabilities are found by threat actors, of course, and you get zero-day exploits as a result, where a problem surfaces because it is already being attacked. In this case, there are no indicators of active exploits or attempts, which gives at least a small clue as to how difficult this vulnerability is to find and to craft a means of using.
So the answer to 'how secure are our laptop and phone chips' (and also desktop and tablet, and IoT and embedded) is that inherently they aren't. Their job is to move data and execute code. Securing them isn't about the chip but about the system as a whole: better securing the perimeter, better managing the data within the system, and hardening the system so that if something does get in, it is constrained to a sandbox where it can't do harm and from which it can't escape.
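As one small, concrete illustration of what 'better managing the data within the system' can mean at the software level - my own example, not something any particular vendor ships - here's a hedged sketch of the constant-time coding style that cryptographic code uses so that execution time doesn't depend on secret data:

```c
/* Two ways to compare secrets. The early-exit version leaks, through
 * timing, how many leading bytes of a guess were correct; the
 * constant-time version always does the same amount of work. */
#include <stddef.h>
#include <stdint.h>

/* Leaky: returns at the first mismatch, so timing reveals its position. */
int leaky_compare(const uint8_t *a, const uint8_t *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Constant-time: XOR-accumulates differences over every byte, so the
 * running time is independent of where (or whether) the inputs differ. */
int ct_compare(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```

The irony GoFetch exposed is that a data memory-dependent prefetcher can leak from careful code like ct_compare anyway, by speculatively dereferencing values that merely look like pointers - which is why that particular mitigation has to come from the chip, not just from disciplined software.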
One point you made is that "Microsoft, Google, Apple, Intel, AMD, MediaTek, etc. should all work together to harden their respective software/hardware and share information with each other to target the latest threats", and the fact is that they do. Threat intelligence and mitigation is an industry-wide activity served by a legion of truly astonishing professional talent, both independent researchers and corporate engineers. In most instances, corporate boundaries don't exist in this community, except of course where proprietary information is concerned; even then, there's engineering knowledge that gets shared. In the threat intelligence community there is nothing like the kind of partisan tribalism you see here, just some boundaries defined by essential NDAs.