I think you're way off in left field. The Amiga copper was part of the Agnus chip, which executed instructions (copper lists) based on the position of the video beam on the display.
The Amiga's coprocessors were Agnus, Denise and Paula. Paula handled peripherals and audio, Denise handled video, and Agnus handled several other tasks. I was not thinking solely of the Copper, but of the whole system. The mentioned chips had to synchronize with each other and with the CPU, e.g. for access to RAM, ports etc.
While Denise was working on displaying the screen, it may have had to synchronize with Agnus for possible copper-list instructions. But the coprocessors basically worked independently of each other (which contributed big time to the Amiga's magic), and despite their very different silicon structures, they could work together, just like an Intel and an ARM chip could.
In a hypothetical Intel/ARM combination, the ARM chip could, for example, execute low-demand tasks such as system background maintenance, user input handling etc. If the user started a big, demanding program, such as video editing, that thread could be executed by the Intel chip, while the ARM would still take care of the non-demanding tasks it's already dealing with.
Once the demanding program ends, the Intel chip could go back to deep sleep while the ARM carries on, using comparatively little power. Apple has been doing something like that for several years now: the M-series motion coprocessors in the iPhone are meant to avoid waking the main CPU for low-demand tasks.
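Software already expresses this split today via quality-of-service hints in Grand Central Dispatch; the OS scheduler decides which silicon the work lands on. Here's a minimal Swift sketch of the idea, where runBackgroundMaintenance() and exportVideo() are hypothetical stand-ins and the "ARM chip / Intel chip" mapping is purely my speculation, not how any shipping Mac routes threads:

```swift
import Dispatch

// Hypothetical work items, named for illustration only.
func runBackgroundMaintenance() { print("light housekeeping") }
func exportVideo() { print("heavy video export") }

let group = DispatchGroup()

// Low-demand work: the QoS hint tells the scheduler this can live on
// the slow, frugal silicon (the ARM chip in this hypothetical combo).
DispatchQueue.global(qos: .background).async(group: group) {
    runBackgroundMaintenance()
}

// Demanding work: this hint asks for the fast silicon (the Intel chip
// in this sketch), which could drop back to deep sleep once it's done.
DispatchQueue.global(qos: .userInitiated).async(group: group) {
    exportVideo()
}

group.wait()   // keep the sketch alive until both jobs have finished
```

The point is only that the program states how demanding its work is; whether that steers a thread to an efficiency core, a coprocessor, or a hypothetical second CPU is the system's call.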
Once the whole system has migrated to ARM (we're talking years here), a high-power ARM variant could then do the heavy lifting (perhaps in combination with a low-power core for the low-demand tasks, as drafted above), while a medium-class Intel chip could still be available for compatibility reasons.
Hence the idea of offering it as an option in the high-end Macs: at that (hypothetical) point in time, not everyone would need an Intel CPU (just as not every user needs a discrete GPU today). Yes, there would be discussions, just as there were when Apple removed the dGPU option from an increasing number of machines. But eventually most people would be okay with the change.
A coprocessor is entirely the opposite of what you were originally proposing. Your original assertion was for the Intel or the ARM chip to be in low-power mode while the other is executing.
I was thinking more of the synchronization mechanisms in that coprocessor system. When both coprocessors want to access the same resource at the same time, they have to agree which one goes first and which one has to wait. Due to the limitations of '80s technology, however, even the waiting chip kept running at full power (which was no problem, as the Amiga was a desktop machine with "unlimited" energy available). But with the technological improvements I mentioned, in 2018 the waiting chip would go into a sleep mode while waiting.
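In software terms it's the difference between spinning on a lock and blocking on one. A minimal Swift sketch of the blocking version, where busSemaphore and useSharedBus(by:) are hypothetical names and the real arbitration would of course live in hardware or firmware, not in user code:

```swift
import Dispatch

// One shared "resource" both chips contend for; names are made up.
let busSemaphore = DispatchSemaphore(value: 1)
let group = DispatchGroup()

func useSharedBus(by chip: String) {
    // An '80s-style arbiter leaves the loser spinning at full power.
    // A blocking wait() instead parks the caller, so the scheduler can
    // idle that core (or chip) until signal() wakes it up again.
    busSemaphore.wait()
    print("\(chip) owns the bus")
    busSemaphore.signal()
}

DispatchQueue.global().async(group: group) { useSharedBus(by: "Intel") }
DispatchQueue.global().async(group: group) { useSharedBus(by: "ARM") }
group.wait()
```

Whoever loses the race simply sleeps until the resource is free, instead of burning watts while waiting.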
And with a sophisticated system, the waiting phases could be adjusted dynamically: for example, if the demanding program running on the Intel chip is waiting for user interaction, it could go to sleep until the ARM chip has dealt with the user input, such as typing in a name for a file to be saved. Or until the disk I/O is done. Or until the user has finished moving the mouse and some intense calculations have to start.
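As a rough Swift sketch of that hand-off, with the "light side / heavy side" labels and the inputReady semaphore being my own hypothetical illustration rather than any real Intel/ARM mechanism:

```swift
import Foundation

let inputReady = DispatchSemaphore(value: 0)

// Light task (the ARM side in this sketch): deal with the slow human.
DispatchQueue.global(qos: .userInteractive).async {
    print("Save as:", terminator: " ")
    let name = readLine() ?? "untitled"   // blocks cheaply while the user types
    print("(light side) got \"\(name)\", waking the heavy side")
    inputReady.signal()
}

// Heavy task (the Intel side in this sketch): parked until signalled,
// so the fast silicon could stay in a deep sleep state in the meantime.
inputReady.wait()
print("(heavy side) starting the expensive work now")
```

While the heavy side sits in wait(), it consumes no CPU time at all; it only comes back once the cheap side has finished dealing with the user.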
As human users tend to be slow by comparison, the waiting phases for the Intel chip could be pretty long (a couple of seconds is already very long for modern silicon), and all the small waiting phases would eventually add up, leading to less energy consumption and less heat generation. The former is interesting primarily for mobile systems, the latter also for thermally anemic desktops like the iMac or a potential hockey-puck-sized Mac mini.