Background assumptions:
- For over a decade, RAM requirements have not grown the way they used to. Demand for laptop RAM plateaued because most common applications run comfortably on an 8GB or 16GB machine.
- Apple relies on the $400 upgrade from 8/256 to 16/512 for the profit margins it normally earns on its products. Without this upgrade, Apple's Mac business would be much less lucrative.
- The 8/256 base is by design: it's just enough for a comfortable experience, but doing anything beyond the basics requires a $400 upgrade. The requirements for a "comfortable experience" must rise before Apple will want to raise the base specs.
- If Apple sets the base at 16/512, it needs to replace the buyers who would have upgraded from 8/256 to 16/512 with buyers who now upgrade from 16/512 to 32/512, or from 16/512 to 16/2TB.
If the future is one where everyone runs an LLM like ChatGPT on local hardware, then RAM demand will increase drastically. And I mean drastically.
An 8GB MacBook Air is simply not enough to run any decent LLM. The practical minimum may be as high as 32GB of RAM, and ideally you'd have 128GB+ to run better and larger LLMs.
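To make those numbers concrete, here's a rough back-of-envelope sketch of the arithmetic. It counts only the model weights (KV cache and OS overhead are ignored), and the 33B parameter count and quantization levels are illustrative assumptions, not a spec:

```python
# Rough RAM estimate for running an LLM locally: weights only.
# KV cache, activations, and OS/app overhead would add more on top.

def model_ram_gb(params_billions: float, bytes_per_param: float) -> float:
    """RAM in GB needed just to hold the model weights."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 33B-parameter model at a few common precisions:
for label, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"33B @ {label}: ~{model_ram_gb(33, bpp):.1f} GB")
```

At fp16 a 33B model needs about 66GB for the weights alone, and even aggressive 4-bit quantization still needs roughly 16.5GB, which is why an 8GB base machine is a non-starter.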
Hence, if Apple raises the base to 16/512, there will still be plenty of demand for upgrades to 32GB/1TB and beyond, preserving its profit margins.
By next year, I predict threads here will shift from "Is 8GB enough for my use case?" to "Is 64GB enough to run the Vicuna 33B parameter model?".
Note: LLMs need high-bandwidth RAM, like the VRAM found on a GPU. Because Apple uses a unified memory architecture, all system RAM already has that high bandwidth. PC makers would have to drastically increase GPU VRAM, not system RAM; Apple just needs to increase system RAM.
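The bandwidth point can be sketched with simple arithmetic: generating each token requires streaming essentially all the model weights through the processor once, so token throughput is roughly bounded by memory bandwidth divided by model size. The bandwidth figures below are illustrative assumptions, not exact hardware specs:

```python
# Why bandwidth matters: tokens/sec for LLM inference is roughly
# bounded by (memory bandwidth) / (model size in bytes), since every
# token touches all the weights. Bandwidth numbers are assumptions.

def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on generation speed from memory bandwidth alone."""
    return bandwidth_gb_s / model_size_gb

model_gb = 33 * 0.5  # 33B params at 4-bit, ~16.5 GB of weights

for name, bw in [("typical laptop DDR (~100 GB/s)", 100),
                 ("unified memory (~400 GB/s)", 400),
                 ("discrete GPU VRAM (~1000 GB/s)", 1000)]:
    print(f"{name}: ~{max_tokens_per_sec(bw, model_gb):.0f} tokens/sec")
```

Under these assumed numbers, ordinary DDR system RAM caps out at a handful of tokens per second on a 33B model, which is why high-bandwidth memory (GPU VRAM, or Apple's unified memory) is the bottleneck rather than capacity alone.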