Many thanks to the three of you who responded. Following your suggestions, I think I have learned a good bit about my computer, beyond the particular issues I brought up. I will summarize my current understanding.
I looked everywhere that was suggested and did not find any preference setting that would let me modify GPU use. Probably the Photos code is what it is and is not adjustable in this way.
I downloaded VideoProc and the Swordsmith 4K HEVC video suggested for testing. The Hardware Info Detected panel in VideoProc Converter tells me acceleration is available for everything involving H.264 video, but not for HEVC. I get the same result whether Use High Quality Engine is checked or unchecked. Among the various outputs that can be selected, hardware acceleration seems to be available for formats that use H.264 but almost nothing else. The one exception listed, output to HEVC video, is so at odds with the rest that I suspect it is an error in the information provided.
It is very interesting that, according to VideoProc Converter, my Radeon Pro Vega II, which I thought was fairly current, is not capable of working with HEVC video. I also found this post:
"HEVC Hardware Accelerated encoding no longer works in Monterey," so I am not alone in my observation. I suspect Apple might want to obscure this significant limitation.
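Out of curiosity, I also asked the system itself. This is only a minimal sketch using Apple's public VideoToolbox framework (the 4K frame size and the particular calls are my choices for illustration, not necessarily what VideoProc or Photos uses), but it reports whether macOS exposes hardware HEVC decode and encode on a given machine:

```swift
import VideoToolbox

// Hardware HEVC decode: a single yes/no query.
let hevcDecode = VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC)
print("Hardware HEVC decode supported: \(hevcDecode)")

// Hardware HEVC encode: require a hardware-accelerated encoder and see
// whether VideoToolbox can find one for a 4K frame size.
let spec: [CFString: Any] = [
    kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder: true
]
var encoderID: CFString?
var supportedProperties: CFDictionary?
let status = VTCopySupportedPropertyDictionaryForEncoder(
    width: 3840,
    height: 2160,
    codecType: kCMVideoCodecType_HEVC,
    encoderSpecification: spec as CFDictionary,
    encoderIDOut: &encoderID,
    supportedPropertiesOut: &supportedProperties
)
print("Hardware HEVC encode supported: \(status == noErr)")
if let id = encoderID {
    print("Encoder found: \(id)")
}
```

If the encode query fails when a hardware encoder is required, that would at least be consistent with what VideoProc reports for my card.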
The face embedding link was very informative. The new algorithm described there creates and ranks many pair-wise comparisons and is said to greatly increase the speed and accuracy of face recognition. I believe this is processed at least in part by the Neural Engine, which is present in Apple Silicon and iOS devices but lacking in Intel-based Macs such as my Mac Pro. That would help explain why face recognition proceeds faster on my MacBook Pro than on my Mac Pro. It also suggests that Apple's software focus has been on Photos in iOS, not on Photos in macOS.
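To check my own understanding of the pair-wise idea, here is a toy sketch of it in Swift. The embeddings and names below are made up purely for illustration; Apple's actual model and ranking logic are not public:

```swift
import Foundation

// Toy version of the idea: every detected face gets an embedding vector,
// every pair of faces is scored for similarity, and the highest-scoring
// pairs are treated as the most likely "same person" matches.
typealias Embedding = [Double]

func cosineSimilarity(_ a: Embedding, _ b: Embedding) -> Double {
    var dot = 0.0, normA = 0.0, normB = 0.0
    for i in 0..<min(a.count, b.count) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}

// Hypothetical embeddings for three detected faces.
let faces: [(name: String, embedding: Embedding)] = [
    ("face1", [0.9, 0.1, 0.3]),
    ("face2", [0.8, 0.2, 0.4]),
    ("face3", [0.1, 0.9, 0.7]),
]

// Score every pair, then rank by similarity.
var pairs: [(String, String, Double)] = []
for i in 0..<faces.count {
    for j in (i + 1)..<faces.count {
        let score = cosineSimilarity(faces[i].embedding, faces[j].embedding)
        pairs.append((faces[i].name, faces[j].name, score))
    }
}
for (a, b, score) in pairs.sorted(by: { $0.2 > $1.2 }) {
    print("\(a) vs \(b): similarity \(String(format: "%.3f", score))")
}
```

With many thousands of photos the number of pairs grows very quickly, which fits the description of this as heavy background work.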
Perhaps the lack of hardware acceleration I suggested earlier is due to a relatively feeble implementation of Neural Engine functions in macOS software, plus my GPU's inability to process HEVC video. I do not know the extent of this limitation, but I expect it reduces the potential benefit from this software implementation.
This may partially explain the instability I recently observed in Photos. Supposedly Photos analyzes images for faces and then assigns the faces to people through background processing. My Mac has been working on this most of the time since February 2020. It has not completed the analysis, and it often appeared to have stopped working in the background, so I gave up waiting for Photos to automatically identify and name faces. For the last month or so I have been tagging faces manually, and Photos has crashed once or twice a day, or more.
My guess now is that the code used in macOS Photos and iOS Photos is similar but differs in its calls for hardware support, since the available hardware differs. For example, relatively simple iOS calls to Apple Silicon's Neural Engine would correspond in macOS to much more complex calls to Intel-based hardware. If Apple is more focused on iOS, we could expect the more complex programming required for macOS on Intel to be less optimized and less thoroughly debugged. This might explain what I saw as poorer performance and a lack of hardware acceleration.
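I do not know which private frameworks Photos actually calls, but the public route for this kind of work on macOS is the Vision framework, where the same request runs on whatever back end the machine offers (the Neural Engine on Apple Silicon, CPU/GPU on an Intel Mac). A minimal sketch, with a made-up image path just for illustration:

```swift
import Foundation
import Vision

// Ask Vision to find faces in one image; the framework decides which
// hardware to run the model on.
let request = VNDetectFaceRectanglesRequest { req, _ in
    let faces = req.results as? [VNFaceObservation] ?? []
    print("Detected \(faces.count) face(s)")
}

let handler = VNImageRequestHandler(
    url: URL(fileURLWithPath: "/tmp/sample.jpg"),  // hypothetical path
    options: [:]
)
do {
    try handler.perform([request])
} catch {
    print("Face detection failed: \(error)")
}
```

If the same high-level call simply runs on different hardware underneath, Apple would still need Intel-specific optimization work for it to perform well, which fits my guess above.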
A few days ago macOS 12.5 was released, with few changes noted. However, since updating, I have noticed a remarkable change: Photos now runs constantly in the background, with Activity Monitor showing CPU use steady near 100%. My guess is that between 12.4 and 12.5, among other things, Apple debugged and improved Photos. I hope the constant processing I see now reflects a more functional face recognition process.
Lots of speculation and guesses here. Please correct me where I am wrong. Again, thanks to all for your help.