I think the battery life increase would be substantially less than you think. I'm drawing 2 to 5 watts TOTAL most of the time working on my MacBook Air. My guess is a large percentage of that is the screen. This is nothing like turbo on Intel.
I think the OS is doing a really good job using appropriate cores.
I'm with you on this. The only time I can see this making a big difference would be if you were doing something CPU intensive, in which case you'd be slowing it down by a lot in order to improve battery life. I'm assuming here (I haven't seen benchmarks, but the vague graph Apple published implied this) that the efficiency cores do indeed provide better performance-per-Watt, otherwise you'd just use the same total Wh of energy spread over a longer period to complete the same task.
The whole point of the architecture is if the CPU isn't doing much, then neither are the performance cores. It obviously works well, otherwise iPhones would have miserable battery life.
Food for thought, a quick mental experiment:
Let's say the screen, SSD, and other components draw 5W (I've seen older measurements that indicate it's in that ballpark for an MBA). And let's say the efficiency cores are in the ballpark of 10x slower than the performance cores (you can kind of suss out the general speed from multicore benchmarks; they're probably somewhat faster than that, but not wildly so). We'll round up a bit from your measurements to make the math easier and assume the efficiency cores use up to 2W and the performance cores up to 20W.
If you're doing light-duty tasks that are not really CPU-bound, but macOS ramps up the performance cores, say, 10% of the time, your average power draw is going to be 5W + 2W + (10% x 20W) = 9W. If you disabled the performance cores, it's going to feel less "snappy" because some operations take longer, but power draw will decrease to 5W + 2W = 7W. So you'd get a bit under 30% longer battery life in that hypothetical (9W / 7W ~ 1.29). In reality I doubt it'll be even that much, since I suspect that at low load the screen is using the vast majority of the power.
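Back-of-envelope version of that in Python, using the same made-up numbers (nothing here is a measurement):

    # Hypothetical figures from the scenario above, not measurements
    base, e_cores, p_cores = 5.0, 2.0, 20.0  # watts: screen/SSD/etc, E-cores, P-cores
    duty = 0.10                              # fraction of time the P-cores are lit up

    avg_with_p = base + e_cores + p_cores * duty  # 5 + 2 + 2 = 9W average
    avg_without = base + e_cores                  # 7W average
    print(avg_with_p / avg_without)               # ~1.29, i.e. ~29% longer runtime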
Your best-case scenario is a garbage web page that has some background task doing stupid, useless things and maxing out an entire core. Having it run slower loses you nothing, so instead of 5W + 2W + 5W (one maxed performance core) = 12W, you only use 7W and get about 70% more battery life (12W / 7W ~ 1.7) for the same amount of reading.
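Same sketch for the runaway-tab case, assuming one maxed performance core draws about 20W / 4 = 5W (the even per-core split is my guess, not anything Apple has published):

    # Junk page pegging one P-core vs. letting the same junk crawl on the E-cores
    base, e_cores = 5.0, 2.0
    one_p_core = 20.0 / 4                 # assumption: 4 roughly equal P-cores
    pegged = base + e_cores + one_p_core  # 12W
    crawling = base + e_cores             # 7W; the E-cores are already in the 2W
    print(pegged / crawling)              # ~1.71, so ~70% more reading per charge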
But in the other direction, let's say you're doing something processor-intensive that takes an hour on the efficiency cores and 6 minutes on the performance cores. You're going to use (5W + 2W) * 1hr = 7Wh versus (5W + 2W + 20W) * 0.1hr = 2.7Wh. The power to run all the other stuff in the computer is the same either way, so the faster the task gets finished, the better off your battery is when it's done.
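And the race-to-idle case, comparing energy to finish a fixed job rather than instantaneous draw (same made-up watt figures as before):

    # Fixed job: 1 hour on E-cores alone vs. 6 minutes with the P-cores lit up
    base, e_cores, p_cores = 5.0, 2.0, 20.0
    slow_wh = (base + e_cores) * 1.0            # 7.0 Wh out of the battery
    fast_wh = (base + e_cores + p_cores) * 0.1  # 2.7 Wh out of the battery
    print(slow_wh, fast_wh)                     # finishing fast wins decisively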
Essentially, it comes down to what your goal with the battery life is. If your goal is to just "sit there using the computer", then the best-case web-browsing scenario kicks in and you come out ahead. Sometimes that's a realistic use case. If however your goal is to complete tasks, then it's entirely possible you come out ahead with higher battery drain as long as the tasks are completed faster.
Web browsing on inefficient pages is a good example of the former; a HandBrake encode is a perfect example of the latter--who cares what the instantaneous power draw is, your goal is to get the video encode done, so if you've used fewer watt-hours by the end of the encode you come out ahead (not to mention user time saved).
A variant of this paradox came up when ultra-fast SSDs first came out. They drew substantially more power instantaneously than other storage. But since they also read data dramatically faster, the actual goal--to get data on or off storage--was completed much sooner, letting them sit at idle most of the time. In some cases it appeared to hurt battery life, but in reality you were getting the same tasks done much faster, so even with reduced battery life you got more done.