Why assume that only the GPU/CPU will get a boost in core counts? The M1 and the A-series chips have taught us that the Neural Engine also plays an important part in signal processing, image enhancement, and so on. The bottom line for Apple and its customers is whether the machine is faster for a given workflow, such as image manipulation or video editing/filters/exports. How that work is balanced across the M chip's different compute units doesn't really matter. So will the Neural Engine stay the same as in the M1?
This is really difficult to speculate about, since the implications are unclear. For example, I don't really know where the Neural Engine is actually used. Apple-internal signal processing and possibly image classification, sure, but beyond that? Apple currently has three options for machine learning: the NPU (hidden behind a limited API), the CPU's AMX units (hidden behind a more general-purpose API), and the GPU, which offers intrinsics to do matrix multiplication faster but doesn't accelerate the data types commonly used in ML. It's all quite confusing. Where Nvidia and AMD seem to build ML-relevant hardware into their GPUs, Apple appears to be pushing the AMX units instead.
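To make the "more general-purpose API" concrete: the usual way to reach the AMX units is indirectly, through the Accelerate framework's BLAS routines. A minimal sketch (assuming macOS/Apple Silicon; note that Apple does not document which specific calls get dispatched to AMX, so treating `cblas_sgemm` as AMX-accelerated is an assumption):

```swift
import Accelerate

// Multiply two 2x2 matrices (row-major) via Accelerate's BLAS.
// On Apple Silicon, Accelerate is where AMX acceleration is
// believed to kick in; there is no direct AMX API.
let a: [Float] = [1, 2,
                  3, 4]
let b: [Float] = [5, 6,
                  7, 8]
var c = [Float](repeating: 0, count: 4)

// C = 1.0 * A * B + 0.0 * C
cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            2, 2, 2,      // M, N, K
            1.0, a, 2,    // alpha, A, leading dimension of A
            b, 2,         // B, leading dimension of B
            0.0, &c, 2)   // beta, C, leading dimension of C

print(c) // [19.0, 22.0, 43.0, 50.0]
```

The contrast with Nvidia/AMD is that there you target the matrix hardware explicitly (tensor core intrinsics, WMMA); with Apple you just call a high-level library and trust the dispatch.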