
cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
In comparison context, a CPU is an ASIC relative to the FPGA that is emulating it, and FPGAs are slower than ASICs. The CPU interprets code and manipulates data in a specific way. But if you have a bunch of heavily used functions, you can distill the code down to gate patterns and instantiate them into an FPGA, and by eliminating the code interpretation part, the FPGA can do the data manipulation more efficiently than the CPU. Up to a point.

In terms of getting the actual work done, if you code the FPGA for the work, it will generally get it done much faster than a CPU, for a given function. Not as fast as hard-coding, but significantly faster than a CPU, within the limitations of dataflow availability.

That really depends on the given function. Yes, if a Harvard architecture is not the best tool for the job, an FPGA can be faster. But, then, you could just make an ASIC that does the job that other way, and it would blow an FPGA out of the water.

Take something as simple as a 64-bit multiply. You aren’t going to do that faster on an FPGA than on an ASIC.
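As a toy illustration of the "eliminating the code interpretation part" point, here is a Python sketch (purely an analogy, with made-up opcodes): the dispatch loop stands in for a CPU fetching and decoding instructions, while the direct function stands in for the same dataflow distilled into fixed logic, as on an FPGA or, faster still, an ASIC.

```python
# Toy analogy, not real hardware: an interpreter loop vs. a "hard-wired"
# function. The dispatch loop mimics a CPU's fetch/decode stage; the direct
# function mimics logic instantiated as gates.

def interpreted(program, x):
    """Dispatch each opcode through a lookup, like a CPU interpreting code."""
    ops = {"add5": lambda v: v + 5, "dbl": lambda v: v * 2}
    for opcode in program:
        x = ops[opcode](x)  # per-step decode/dispatch overhead
    return x

def hardwired(x):
    """The same dataflow distilled into one fixed pipeline."""
    return (x + 5) * 2

assert interpreted(["add5", "dbl"], 10) == hardwired(10) == 30
```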
 

exoticSpice

Suspended
Jan 9, 2022
1,242
1,952
Last edited:
  • Like
Reactions: JMacHack

BigPotatoLobbyist

macrumors 6502
Dec 25, 2020
301
155
Except in video encoding and rendering...
Did you read what I said? Illiteracy is an ongoing crisis in internet fora (and has been for some time, frankly), I swear.

I was discussing the *CPU* synthetic benchmarks, in which Intel scales 14 cores that, while not as efficient as Apple's on a per-core basis, can be tuned to relatively power-efficient points on their voltage curves to garner impressive aggregate throughput (albeit in the 30-45W range) via that parallelism. Apple, having fewer cores, is unable to scale its superior microarchitectures in a similarly voltage-ideal way with the current 8/10-core configurations vs. the 12900HK.
So for a small part of the performance curve, the increased core count on Intel's 12900K - even with inferior microarchitecture and physical implementations - proves "more efficient". Again, I was discussing this within the confines of the presumably CPU-exclusive benchmark, not Adobe Lightroom or Final Cut etc.
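To make the voltage-curve argument concrete, here is a rough sketch with invented operating points, assuming the classic dynamic-power approximation P ≈ C·V²·f; it shows how many cores parked low on the curve can beat fewer cores pushed high on it in throughput per watt.

```python
# Rough sketch of the voltage-curve argument (illustrative numbers only;
# assumes dynamic power P ~ C * V^2 * f, with frequency roughly tracking voltage).

def core_power(v, f, c=1.0):
    return c * v**2 * f  # classic CMOS dynamic-power approximation

# Hypothetical operating points: many cores parked low on the curve...
many_slow_perf = 14 * 2.0                       # 14 cores x 2.0 throughput units
many_slow_watts = 14 * core_power(v=0.8, f=2.0)

# ...versus fewer cores pushed high up the curve.
few_fast_perf = 8 * 3.0                         # 8 cores x 3.0 throughput units
few_fast_watts = 8 * core_power(v=1.1, f=3.0)

print(many_slow_perf / many_slow_watts)  # ~1.56 throughput per watt
print(few_fast_perf / few_fast_watts)    # ~0.83 throughput per watt
```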
 

Fomalhaut

macrumors 68000
Oct 6, 2020
1,993
1,724
They have a lot of openings, and AI is the space to be in; it writes its own paycheck. They need CentOS/Red Hat/Ubuntu Linux experience, so people with x64 Macs have an advantage in getting up to speed.

https://tenstorrent.com/careers/
Most major Linux distros have ARM64 repos, so the experience would be almost identical to the x86_64 versions in use. You might find third-party software that doesn't have an ARM repo, so you might have to do some work to recompile it. ARM-specific hardware drivers might also be lacking.

But for general Linux skills, I would expect employment prospects to be the same if you learnt Linux on ARM.
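As a small practical aside, the same distro repo is usually labeled x86_64 on Intel and aarch64 on ARM. Here is a minimal sketch of mapping the local machine type to the right label (the alias table is just the common naming convention, not any particular distro's API):

```python
# Minimal sketch: map the local machine type to the architecture label
# most distros use for their repos/ISOs (common convention, adjust per distro).
import platform

ARCH_ALIASES = {"x86_64": "x86_64", "amd64": "x86_64",
                "arm64": "aarch64", "aarch64": "aarch64"}

machine = platform.machine().lower()  # e.g. 'arm64' on Apple Silicon macOS
repo_arch = ARCH_ALIASES.get(machine, machine)
print(f"Local machine: {machine} -> repo architecture: {repo_arch}")
```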
 

mi7chy

macrumors G4
Oct 24, 2014
10,625
11,296
Mmm, I wonder what this is? On Mac, Player is called Fusion.


VMware Fusion on M1 is in public tech preview. You can check it out, though. These are all arm64 ISOs.

This person got VMware Fusion running Linux on an M1 Mac.

This guy also got a UTM Linux VM running on his M1 Mac.

Worse than UTM. It doesn't support CentOS 7 x64, and ARM64 also gets stuck at the install menu. Where's lemon hiding?

Install successes:
  • x64 laptop: 1
  • M1 laptop: -2

Since you guys are experts, maybe you'll have better luck. CentOS 7 x64 is preferred, but ARM64 is better than nothing.

http://isoredirect.centos.org/centos/7/isos/x86_64/
http://isoredirect.centos.org/altarch/7/isos/aarch64/
 
Last edited:

ian87w

macrumors G3
Feb 22, 2020
8,704
12,638
Indonesia
Hopefully, in 10 years, whether you're running Windows or some other OS, the hardware will be capable of emulating any platform for older software flawlessly (or with all the inherent flaws of the older hardware), via sheer compute power or through the use of reprogrammable hardware (e.g., integrated FPGAs).
Or, more and more apps are developed in a way that makes it easier to recompile them for a new architecture, or apps are developed as universal binaries (e.g., an Android APK can be compiled to support both x86 and ARM).
I'm already surprised by how fast some apps have gone native on Apple Silicon, especially apps from the usual "slow" suspects like Adobe.
 

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
That really depends on the given function. Yes, if a Harvard architecture is not the best tool for the job, an FPGA can be faster. But, then, you could just make an ASIC that does the job that other way, and it would blow an FPGA out of the water.

Take something as simple as a 64-bit multiply. You aren’t going to do that faster on an FPGA than on an ASIC.
There can be no doubt about that. I imagine the most efficient form of general processing would involve dedicated math and data-acquisition units connected by a configurable logic/routing fabric into which the process structure would be instantiated. For repetitive and embarrassingly parallel work, such a design could be unbeatable. For general-purpose processing, though, it might be less practical, depending on how quickly and easily the fabric could be configured for brief tasks.
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
Or, more and more apps are developed in a way that makes it easier to recompile them for a new architecture, or apps are developed as universal binaries (e.g., an Android APK can be compiled to support both x86 and ARM).
I'm already surprised by how fast some apps have gone native on Apple Silicon, especially apps from the usual "slow" suspects like Adobe.
I think we are going to see the opposite. Computers once again have a lot of special-purpose hardware, rather than using the CPU and the GPU for everything, and developers have to access that hardware using vendor-specific APIs.

Different platforms also make assumptions that are incompatible with other platforms. My favorite example is that you can't always copy data on macOS by simply copying the files. Some apps (such as Photos) rely on file-system features that platform-independent tools are unaware of. If you copy such data with a tool like rsync, the copy will be incomplete.
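A minimal sketch of how you might check for that kind of loss on macOS, assuming only the stock xattr command-line tool (the two paths are hypothetical): compare the extended-attribute names on an original file and its copy.

```python
# Minimal sketch (macOS): list extended-attribute names on an original file
# and its copy via the stock /usr/bin/xattr tool. A byte-for-byte content copy
# can still drop these attributes, one way "just copying the files" loses data.
import subprocess
import sys

def xattrs(path):
    """Return the set of extended-attribute names on a file."""
    out = subprocess.run(["xattr", path], capture_output=True, text=True)
    return set(out.stdout.split())

original, copy = sys.argv[1], sys.argv[2]  # hypothetical paths, passed as args
missing = xattrs(original) - xattrs(copy)
if missing:
    print(f"Attributes lost in the copy: {sorted(missing)}")
```

For what it's worth, which rsync flag preserves extended attributes also varies by build (-E on Apple's bundled rsync, -X/--xattrs on newer upstream builds), which is easy to get wrong.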
 

ahurst

macrumors 6502
Oct 12, 2021
410
815
I looked at the code for a genomics pipeline once and I was really surprised by how inefficient it was. It made multiple passes over large datasets that could have been done in one pass. The solution was correct, just not optimized. Not necessarily a problem when you have a big hardware budget, but a waste in the eyes of anyone with a CS background.
Extremely and painfully true. The sciences are an area where you need to write and use code to run experiments and analyze data, but also receive zero formal training in basic programming principles, software development, or development tools.

The end result is a group of very smart people writing the most baffling and idiosyncratic code possible, accidentally including Word documents or unrelated PDFs in their git commits (which are frequently just "informatively" named "update files for git"), but also (usually) somehow working well enough to solve their respective use cases.

I once took on the role of optimizing a notoriously slow R preprocessing pipeline for analyzing shape tracing data for motor learning research. All of the code was stuffed in a single 1500-line file and took a good 50-60 minutes to run on a fast computer. With a little tidyverse magic and some mild optimization via profiling, I was able to get it down to ~2 minutes. People usually just don’t know the kind of performance they’re leaving on the table!
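Not the R pipeline in question, but a toy Python illustration of the most common fix in that kind of cleanup: collapsing several full sweeps over a dataset into a single pass.

```python
# Toy example of the multiple-pass pattern vs. the single-pass rewrite.
data = list(range(1_000_000))

# Three separate sweeps over the data...
total = sum(data)             # pass 1
count = sum(1 for _ in data)  # pass 2
maximum = max(data)           # pass 3

# ...versus accumulating everything in one sweep.
total = count = 0
maximum = float("-inf")
for x in data:                # single pass
    total += x
    count += 1
    if x > maximum:
        maximum = x
```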
 
  • Like
Reactions: pshufd and Andropov

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Intel has joined the RISC-V group.

Does Intel believe that its next ISA will be RISC-V instead of ARM?

That is more about Intel trying to find new clients for its foundry services than about instruction set futures. Have to keep up with the competition.

"...SiFive... has successfully taped out the company’s first system-on-chip (SoC) on TSMC’s N5 process technology. ..."
https://riscv.org/news/2021/04/sifi...ith-7-2-gbps-hbm3-anton-shilov-toms-hardware/

or

"... Microsoft and Cadence collaborated with SiFive, a TSMC IP Alliance partner, to tape out the first full SoC design in TSMC’s OIP VDE. It contained its 64-bit multi-core RISC-V CPU, the Freedom Unleashed 540, which is capable of running a RISC-V Linux distribution and its applications via TSMC OIP VDE. The SiFive implementation was done in the U.S. and India. ..."

Similarly.

https://www.sifive.com/press/sifive-and-samsung-foundry-extend-partnership-to-accelerate



Intel's RISC-V investment fund will probably convince some players to try out the RISC-V libraries that Intel will put together with folks like SiFive and others. This is just keeping up with the other two bleeding-edge advanced-node players.


Intel is looking to be the only foundry where customers can pick up x86, Arm, and/or RISC-V libraries that are ready to go.

RISC-V can be a quite small SoC die, which actually might make more sense to do on a marginally risky Intel fab process than Intel's larger products. One of those could be on Intel 3 or Intel 20A before an Intel product gets there. If Intel found something to be a "pipe cleaner" before their own stuff went through, that would probably help. [ The old pattern of "our stuff first" and foundry customers on sloppy seconds is not what Intel needs at this point, or going forward. ]



Google, Qualcomm, IBM, Raspberry Pi, Cadence, Xilinx (now AMD), and Samsung are members too.


Doesn't mean IBM is dumping PowerPC or Samsung is nuking their Arm SoCs. It is a wide collection of folks who are in the foundation.

[ TSMC isn't an official member but they are not "out of the loop" either. ]
 
Last edited:

Mikael H

macrumors 6502a
Sep 3, 2014
864
539
Why start there in 2022? We’re two years away from CentOS 7 being EOL. Admittedly I’ve only tried running Ubuntu Server 20.04 and Fedora 35 on my MacBook Pro, but both work as well as expected (that is: I haven’t yet stumbled across anything I use that doesn’t work) in their Arm versions on UTM.
 
  • Like
Reactions: JMacHack

jeanlain

macrumors 68020
Mar 14, 2009
2,462
957
Did you read what I said?
I know you were talking about CPU performance. I also know that the video encoder you referred to as "crap" is there for a reason. The GPU is also big and fat for good reasons.
My point, which was implicit, is that beating Intel's CPU cores is not Apple's goal. Apple's goal is to produce a product that meets the needs of most users. It's meaningless to speculate about what Apple could do if they didn't include these crucial parts of the SoC.
Intel, OTOH, is keen on beating everyone else in raw CPU perf, at the expense of heat and battery life.

Anyway, nothing suggests that dropping the fat GPU and the video encoders would improve the CPU core performance. Apple has shown that they can produce huge SoCs if they want to.
 
  • Like
Reactions: huge_apple_fangirl

MayaUser

macrumors 68040
Nov 22, 2021
3,178
7,203
Why start there in 2022? We’re two years away from CentOS 7 being EOL. Admittedly I’ve only tried running Ubuntu Server 20.04 and Fedora 35 on my MacBook Pro, but both work as well as expected (that is: I haven’t yet stumbled across anything I use that doesn’t work) in their Arm versions on UTM.
Ignore that user, she's always off topic and always has Mac negativity in mind.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,521
19,679
It's a fail on MBA M1.

What am I doing wrong? Why don't I have any problems?

[Attachment: Screenshot 2022-02-09 at 10.46.17.png]
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,521
19,679
Regarding the ADL mobile vs. M1 efficiency comparison: what makes this so difficult is the lack of good data. Probably the best source is the recent video by the Hardware Unboxed channel, but even then it is complicated, as it is not clear what they are measuring and how.

IMO the most useful part of that video is the slide where they showcase CB23 multicore running on ADL with the package TDP limited to 45W. It is roughly 6% faster than the M1 8+2 in this config (13070 vs. 12378 points), so you get a 6% improvement over M1 at 30% higher package power. That's a roughly 20% efficiency win for M1.

Please note that this is in a benchmark that is strongly biased against the M1: it is optimised for Intel SIMD and scales extremely well with SMT and a high number of cores. The efficiency calculation is also biased against M1: its package power includes DRAM and things like the SSD controller (Intel's does not). To stress this again: taking the most favourable benchmarks, conditions and formulas for Intel, we still end up with an efficiency lead of at least 20% for Apple. And the difference is much higher in the real world.

Now, don't get me wrong, this is not about Intel bashing or other nonsense. This is about the relative merits of each product. Alder Lake brings great performance improvements over previous x86 CPUs, and its sustained multicore throughput is pretty much unmatched. It is a great product for folks who are looking for that kind of thing and don't mind the drawbacks (extremely high power usage that approaches that of desktop CPUs). But to claim that it has comparable efficiency to M1 is just nonsensical... where do folks even get such ideas?
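For anyone who wants to check the arithmetic, here is the back-of-envelope version, with the M1 package power inferred from the "30% higher" figure rather than measured:

```python
# Back-of-envelope check of the numbers above. M1 package power is inferred
# from the "30% higher" claim: 45 W / 1.3 ~= 34.6 W.
adl_score, adl_watts = 13070, 45.0
m1_score, m1_watts = 12378, 45.0 / 1.3

adl_eff = adl_score / adl_watts  # ~290 points per watt
m1_eff = m1_score / m1_watts     # ~358 points per watt
print(f"M1 efficiency lead: {m1_eff / adl_eff - 1:.0%}")  # ~23%
```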
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
Worse than UTM. It doesn't support CentOS 7 x64, and ARM64 also gets stuck at the install menu. Where's lemon hiding?

Install successes:
  • x64 laptop: 1
  • M1 laptop: -2

Since you guys are experts, maybe you'll have better luck. CentOS 7 x64 is preferred, but ARM64 is better than nothing.

http://isoredirect.centos.org/centos/7/isos/x86_64/
http://isoredirect.centos.org/altarch/7/isos/aarch64/
There's no way to install the x64 version in VMware on M1; it's ARM64 only. No idea why you're having a problem installing the ARM version. I haven't tried the M1 version of VMware, but given the symptoms, it may be sitting there looking for something over the internet, and for whatever reason the network isn't working in the VM. Try changing the network connection type to bridged.
 
  • Like
Reactions: JMacHack

BigPotatoLobbyist

macrumors 6502
Dec 25, 2020
301
155
I know you were talking about CPU performance. I also know that the video encoder you referred to as "crap" is there for a reason. The GPU is also big and fat for good reasons.
My point, which was implicit, is that beating Intel's CPU cores is not Apple's goal. Apple's goal is to produce a product that meets the needs of most users. It's meaningless to speculate about what Apple could do if they didn't include these crucial parts of the SoC.
Intel, OTOH, is keen on beating everyone else in raw CPU perf, at the expense of heat and battery life.

Anyway, nothing suggests that dropping the fat GPU and the video encoders would improve the CPU core performance. Apple has shown that they can produce huge SoCs if they want to.
It was a hypothetical.


"Nothing suggests that dropping the GPU and video encoders would improve the CPU core performance"

Okay, that's correct, much like Intel dropping CPU cores won't ipso facto improve graphics or video-encoding performance. But in practice, yield issues and pragmatic concerns about SoC area would leave more room for other functional blocks in either scenario.
 

mi7chy

macrumors G4
Oct 24, 2014
10,625
11,296
What am I doing wrong? Why don't I have any problems?

[Attachment 1956366]

That's not the CentOS Linux 7 stable release specified in the original request, while CentOS Stream 9 is an early beta release relative to the Stream 8 minor beta release. Kudos, though, for spending all that time confirming CentOS Linux 7 and Stream 8 don't work and jumping to Stream 9. That's just one of many dependencies, so there are many more hurdles with PyTorch, TensorFlow, ONNX, Buda, etc.
 

dgdosen

macrumors 68030
Dec 13, 2003
2,817
1,463
Seattle
That's not the CentOS Linux 7 stable release specified in the original request, while CentOS Stream 9 is an early beta release relative to the Stream 8 minor beta release. Kudos, though, for spending all that time confirming CentOS Linux 7 and Stream 8 don't work and jumping to Stream 9. That's just one of many dependencies, so there are many more hurdles with PyTorch, TensorFlow, ONNX, Buda, etc.
Didn’t Red Hat EOL CentOS?
 

mi7chy

macrumors G4
Oct 24, 2014
10,625
11,296
Didn’t Red Hat EOL CentOS?

Historically, CentOS Linux was the free, community-built mirror-alternative to paid Red Hat Enterprise Linux. Since Red Hat acquired CentOS, they've EOL'd the current CentOS Linux 8 stable release and removed the ability to download it (probably to drive people to paid RHEL). However, CentOS Linux 7 stable is still supported until 2024 and available for download. CentOS Stream now serves a different purpose and has become the beta release for RHEL stable. There are Rocky Linux and AlmaLinux alternatives that serve the original purpose of CentOS, but their 8.5 stable releases also don't work on M1, while being a non-issue on x64.
 
  • Like
Reactions: bobcomer

mi7chy

macrumors G4
Oct 24, 2014
10,625
11,296
Interesting, Tenstorrent is adopting the SoC-embedded-in-GPU idea but scaled up to RISC-V embedded in an AI processor.


Forefather of AS, Jim Keller, gives his view on x86-64 vs. ARM vs. RISC-V. Spoiler alert: x86-64 Zen has removed some legacy baggage, ARM has added more baggage, and RISC-V is a clean slate since it's new.


A good engineer is a good teacher.
 
Last edited: