Google landed me here searching for Time Machine performance issues.
I have used Samba for Time Machine for multiple computers at home since it was a brand new feature that you had to patch and compile in manually. My server is an Ubuntu Linux machine with ZFS RAID for storage. Admittedly, I do other software work that requires a Linux machine; if you were doing this on your own just for backups and storage, a BSD machine running FreeNAS would be much easier to configure and use.
The company that writes the FreeNAS software sells pre-configured systems that you just add your own drives to, or you can download the software (which is free/open source) and roll your own hardware if you choose. FreeNAS has a web UI for administration, and the setup for Time Machine is all point and click, no command line required. They sell their pre-built machines on Amazon (search for iX Systems).
Some considerations/caveats:
1) Disk performance isn't a huge consideration unless you need to do other things on the same box, because gigabit LAN speed will be less than your disk performance anyway. ZFS does a very good job of balancing reads and writes across drives in an array, so if you put enough flash drives together using ZFS you can get reads and writes approaching RAM speeds (albeit with more latency), in the neighborhood of 15-20 gigabytes per second. Meanwhile, gigabit LAN tops out at 0.125 gigabytes per second in theory, and 400 megabits per second (a reasonable real-world expectation from gigabit LAN in your house or office) is only about 0.05 gigabytes per second. With that in mind, mechanical drives designed for NAS use last a very long time and are cheaper. My current Western Digital Reds are around 7 years old and none have failed in that time.
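The unit conversion above is easy to get wrong (I know this firsthand), so here it is spelled out. Nothing is assumed here beyond 8 bits per byte and decimal (SI) units:

```python
def mbit_to_gbyte_per_s(mbit_per_s: float) -> float:
    """Convert megabits/second to gigabytes/second (8 bits per byte)."""
    return mbit_per_s / 8 / 1000

# Gigabit LAN at its theoretical best:
print(mbit_to_gbyte_per_s(1000))  # 0.125 GB/s
# A realistic 400 Mbit/s over home gigabit:
print(mbit_to_gbyte_per_s(400))   # 0.05 GB/s
```

Either way, a single modern SSD already outruns the wire, which is the point: on a gigabit network, the network is the bottleneck, not the disks.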
2) If you build your own and prefer Linux, ZFS on Linux is approaching a stable 1.0, and Ubuntu 20.04 treats it as close to a first-class file system option as it currently gets. There were stability issues prior to 0.8.3, but those were all fixed in that version afaik, and Ubuntu 20.04 ships 0.8.4 as the default ZFS version. Ubuntu 20.04 also has Samba 4.11 as the default Samba version, which is fine for all stable Time Machine features as of now. Here's the wiki for recommended Samba settings if you're setting this up manually:
https://wiki.samba.org/index.php/Configure_Samba_to_Work_Better_with_Mac_OS_X
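For reference, a minimal share definition along the lines the wiki describes. The share name and path here are examples, not prescriptions, and this assumes Samba 4.8 or newer (which first shipped `fruit:time machine`):

```ini
[timemachine]
    path = /tank/timemachine
    ; fruit provides the Apple SMB extensions Time Machine needs;
    ; module order matters: fruit must come before streams_xattr
    vfs objects = catia fruit streams_xattr
    fruit:time machine = yes
    fruit:metadata = stream
    read only = no
```

With `fruit:time machine = yes` set, the share advertises itself to macOS as a Time Machine destination, so it shows up in the Time Machine preferences pane without any client-side tricks.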
3) For those unfamiliar, OSX is BSD Unix under the hood, not Linux. FreeBSD was the first BSD to support ZFS and it's still better supported there than on Linux, although Linux will likely catch up eventually. ZFS is a file system developed by Sun Microsystems in the early 2000s with the stated intent of getting rid of the expensive hardware RAID systems that were prevalent then, and of solving in software the hardware issues that had nagged RAID systems over the years. By putting the RAID controller in software in the operating system, with very low-level access to the disks, you also get greatly improved performance versus what was possible in the old days via an external disk control device. Rather than keeping a journal to track disk consistency the way most file systems do, ZFS is copy-on-write and checksums (hashes, not encrypts) every block of data and metadata, verifying those checksums on every read. That design prevents situations like, say... sudden power loss causing inconsistency in the disk array and a resulting loss of data. The end result is that ZFS will do a better job of warning you about a failing drive than the disk's internal monitoring system will. It is a rock solid means of storing data on a server, and when you're talking about network storage there is really no other choice that holds a candle to ZFS, so that's what you should use.
My RAID array in my NAS is too big to back up anywhere else (it's the largest storage device I have), and as of today the array is over 12 years old, has been exported/imported back and forth between Linux and FreeBSD about a half dozen times, has been through dozens of software and OS upgrades and one complete swap of disks when I replaced my original ones with the WD Reds, and I have never lost a single file in all that time.
4) Samba on Unix/Linux servers does not implement the whole Time Machine specification, specifically quotas. Samba may improve this support over time, but a better way is to enforce a quota on the server side via ZFS, and Samba will see it and report it to your Macs that way. When you set a storage quota on a ZFS dataset, it shows the OS a disk with that size as its maximum capacity. So when your Mac connects to that share, it sees a disk the size of the quota you set. Simple, no fuss, no need to worry about which Time Machine features the server does or does not support. If you're using FreeNAS, you can specify quotas in the GUI when you create your Time Machine dataset.
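If you're doing it by hand instead of through the FreeNAS GUI, it's two commands. The pool and dataset names here are hypothetical; substitute your own:

```shell
# Create a dataset capped at 500 GB; Time Machine will see a 500 GB "disk".
zfs create -o quota=500G tank/timemachine

# Confirm the quota took effect:
zfs get quota tank/timemachine
```

You can also raise or lower the quota later with `zfs set quota=1T tank/timemachine` and the Macs will pick up the new size; nothing on the client needs to change.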
5) Apple has started de-prioritizing iCloud-enabled directories in Time Machine backups under the hood in the past few OSX versions. I don't know if they advertise this, but it has caused me grief once or twice: iCloud ate a file I was working on, I went looking for it in my Time Machine backups, and found only shortcuts to the iCloud version of the file there (tl;dr: iCloud has lost more of my data than my NAS has...). So I would handle this explicitly in your Time Machine settings: you can exclude folders, so tell it to ignore whatever folders you sync to iCloud.
6) People are complaining about Catalina being slow, and that's what landed me here from Google, as I mentioned above. I don't remember Time Machine being as painfully slow in versions between 10.10 and 10.13 as what I'm looking at right now: the current estimate for Time Machine to restore my desktop drive over my gigabit LAN is around 24 hours for a 500gb drive. This is after I manually disabled the throttle setting that Time Machine uses by default (a bug on Apple's part; there's no reason the recovery console should be Time Machine throttled, wtf is the user going to do in the recovery console other than restore...). Prior to disabling that setting, it showed me an estimate of four days. See here for an explanation of the setting:
https://blog.shawjj.com/speed-up-time-machine-backups-by-10x-f6274330dc6f
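The setting the post describes is the macOS low-priority I/O throttle, which you can flip off from a terminal. Note it resets to the default on reboot, so this is a per-session tweak:

```shell
# Disable macOS low-priority I/O throttling for this boot.
# Time Machine runs its I/O at low priority, so this speeds it up;
# the default (1) comes back after a reboot.
sudo sysctl debug.lowpri_throttle_enabled=0
```

If you want it permanent you'd need a LaunchDaemon to set it at boot, but for a one-off restore the temporary version is all you need.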
If you're keeping score, my desktop is an Intel NUC hackintosh (basically a higher-end Mac Mini) with a 2-core i7 3.1ghz Haswell mobile CPU, 16gb of RAM, and a 500gb SATA SSD. I also have a stock MacBook Air (also a Haswell i7 from the same chip generation, also with a 500gb SSD, but with only 8gb of RAM), but I've never had to restore it so I'm not sure if it would be much different. Considering it would be over wifi rather than gigabit, I suspect it would be much slower.
Despite the slow performance of Time Machine for doing a full disk restore, I still like it for its ability to keep incremental backups, so I don't intend to replace it with any other solution that uses disk images. ZFS also has the ability to do incremental snapshots of whole disks, so if you need off-site capability you could roll a system with Time Machine on the clients, ZFS and Samba on the server, and then ZFS snapshots sent offsite every day as well. This would be more complex, of course, but it would make your data functionally immortal, without having to use a pay-by-the-GB storage service like AWS S3.
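The offsite leg of that setup is just snapshot-and-send. A sketch with hypothetical pool, dataset, and host names, assuming SSH key access to the remote box and an initial full send already done:

```shell
# Take today's snapshot of the dataset holding the Time Machine backups.
zfs snapshot tank/backups@2024-06-01

# Send only the delta since yesterday's snapshot to the offsite machine.
# -i makes it incremental; the remote side applies it with zfs receive.
zfs send -i tank/backups@2024-05-31 tank/backups@2024-06-01 | \
  ssh offsite zfs receive -F tank/backups
```

Because the send is incremental, the daily transfer is only as big as what actually changed, which is what makes offsite replication of a huge array practical over a home upload link.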
What about ditching Samba?
It is possible to run ZFS on a Mac, too. It's third party, since we all remember Apple dangling ZFS at us and then taking it away when Oracle bought Sun Microsystems and changed the license, but consider the possibilities...
If you use ZFS on your Mac and set up regular automatic snapshots, and then send the snapshots offsite instead of using Time Machine, you would have the same Time Machine functionality, minus the ability to browse backups in a GUI, with none of the Time Machine overhead. Your backup performance would be pegged to your disk and network performance. This is a developer solution, not an end-user-friendly one, but it's an interesting option.
After all, your Mac is a BSD Unix machine under the hood. It has cron.
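A hypothetical crontab entry for hourly snapshots of a pool named tank (one quirk: % is special in crontab entries and has to be backslash-escaped):

```shell
# m h dom mon dow  command
0 * * * * /usr/local/sbin/zfs snapshot tank@auto-$(date +\%Y-\%m-\%d-\%H)
```

The path to the zfs binary depends on which ZFS-on-Mac port you installed, so check `which zfs` and adjust. You'd pair this with a second job that prunes old snapshots, or they accumulate forever.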
The only caveat with this is I would at least mirror the local drives. ZFS checksums every block, and when redundancy exists it repairs a bad block automatically from a good copy elsewhere in the pool. If you only have one disk, ZFS can still detect corruption via those checksums, but it has nothing to repair from: all that anal retentiveness about disk and data consistency would tell you your data is damaged without being able to do anything about it.
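Setting up the mirror is a one-liner. Pool and device names here are hypothetical; check your actual disk identifiers first (`diskutil list` on a Mac, `lsblk` on Linux):

```shell
# Create a two-way mirrored pool from two blank disks.
zpool create tank mirror /dev/disk2 /dev/disk3

# Verify both sides of the mirror are online and healthy.
zpool status tank
```

With the mirror in place, a checksum failure on one disk gets silently repaired from the other copy, which is the self-healing behavior that makes all the checksumming worthwhile.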