
flopticalcube

macrumors G4
Original poster
http://www.theregister.co.uk/2012/10/12/nand_shrink_trap/

The NAND flash industry is facing a process size shrink crunch and no replacement technology is ready. Unless 3D die stacking works, we are facing a solid state storage capacity shortage.
...
Park's conclusion is that new memory types are needed; a post-NAND era is beckoning with replacement memory technology offering DRAM-like speed and addressability and NAND non-volatility.
...

Time for a new way.
 

Renzatic

Suspended
Excuse my ignorance in computer science for a second, but this made me think of something...

Park's conclusion is that new memory types are needed; a post-NAND era is beckoning with replacement memory technology offering DRAM-like speed and addressability and NAND non-volatility.

Say someone does find a way to create storage that has the speed and bandwidth of ram. Wouldn't the two basically be one and the same? As in, you wouldn't need separate storage and ram, but rather you'd have a drive that has a partition acting as ram you could resize if you find yourself needing more? Or does a CPU need a buffer between the two that only physical ram can provide?
 

flopticalcube

macrumors G4
Original poster
Excuse my ignorance in computer science for a second, but this made me think of something...



Say someone does find a way to create storage that has the speed and bandwidth of ram. Wouldn't the two basically be one and the same? As in, you wouldn't need separate storage and ram, but rather you'd have a drive that has a partition acting as ram you could resize if you find yourself needing more? Or does a CPU need a buffer between the two that only physical ram can provide?

CPUs already have several levels of memory internally, each increasing in speed and decreasing in size; they are called caches.
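That hierarchy can be sketched as a toy model. The level names, sizes, and latencies below are illustrative placeholders, not measurements of any real CPU:

```python
# Toy model of a CPU memory hierarchy: each level is (name, capacity_bytes,
# latency_ns). All numbers are made up for illustration, not real hardware.
HIERARCHY = [
    ("L1 cache",   32 * 1024,         1),
    ("L2 cache",  256 * 1024,         4),
    ("L3 cache",    8 * 1024**2,     12),
    ("DRAM",       16 * 1024**3,    100),
    ("SSD",       512 * 1024**3, 50000),
]

def access_latency(working_set_bytes):
    """Return (level, latency) of the smallest level that fits the working set."""
    for name, capacity, latency in HIERARCHY:
        if working_set_bytes <= capacity:
            return name, latency
    # Bigger than everything: spills to the last (slowest) level.
    name, capacity, latency = HIERARCHY[-1]
    return name, latency
```

The point of the pyramid shape: a small working set stays in the tiny fast levels, and only spills to slower, bigger ones as it grows.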
 

Renzatic

Suspended
So, in theory, if your storage could be accessed as quickly as ram, you wouldn't need to separate the two at all? You wouldn't even need a second "ram" partition, since the CPU could pull assets off the drive directly, and process code on the cache?
 

flopticalcube

macrumors G4
Original poster
So, in theory, if your storage could be accessed as quickly as ram, you wouldn't need to separate the two at all? You wouldn't even need a second "ram" partition, since the CPU could pull assets off the drive directly, and process code on the cache?

Correct. I would suspect that for most applications, faster dynamic memory (RAM as it is now) would still be beneficial, but I could easily see a point at which, for some devices, there need not be any working RAM as we have it now. Indeed, some microcontrollers already do this, working entirely inside their own registers.
 

chown33

Moderator
Staff member
Aug 9, 2009
10,995
8,878
A sea of green
So, in theory, if your storage could be accessed as quickly as ram, you wouldn't need to separate the two at all? You wouldn't even need a second "ram" partition, since the CPU could pull assets off the drive directly, and process code on the cache?

Yes, it would be possible. Whether it would be economical to have all non-volatile RAM is a separate question. There might also be enough difference in power consumption or speed to have volatile RAM for a dedicated purpose.

Here's an example technology:
http://en.wikipedia.org/wiki/Memristor#Potential_applications
Williams' solid-state memristors can be combined into devices called crossbar latches, which could replace transistors in future computers, taking up a much smaller area.
They can also be fashioned into non-volatile solid-state memory, which would allow greater data density than hard drives with access times potentially similar to DRAM, replacing both components.

Then again, think of how many previous forecasts there were for the doom of semiconductor futures. The "end of Moore's Law" has been 5 years away for almost a decade.
 

Renzatic

Suspended
Correct. I would suspect that for most applications, faster dynamic memory (RAM as it is now) would still be beneficial but I could easily see a point at which for some device there need not be any working RAM as we have it now. Indeed some microcontrollers already do this working entirely inside their own registers.

If we ever got to that point, the only thing I can think of that'd require more pure memory would be previous state saves for programs that toss around tons of data. Like multiple levels of undos in Final Cut, Zbrush, and other similar programs.

That'd be more an issue for the pro scene. For everyone else, it shouldn't be a problem.

...by near-future standards, of course. I'm kinda vague on how much memory is required for code execution, but I think you'd need processors with a good deal more cache than what we've got now.
 

kdarling

macrumors P6
So, in theory, if your storage could be accessed as quickly as ram, you wouldn't need to separate the two at all? You wouldn't even need a second "ram" partition, since the CPU could pull assets off the drive directly, and process code on the cache?

THE GOOD:

The key is that the storage must support random access. That is, any byte or word must be directly addressable for instant read/write.

In some of the must-never-fail embedded systems I've helped build in the past, we used battery backed static RAM for this.
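As a rough sketch of why byte-level random access matters: a block-addressed device pays a full read-modify-write per byte changed, while byte-addressable memory touches only the byte itself. The block size and bookkeeping below are illustrative, not a model of any real controller:

```python
BLOCK = 4096  # bytes per block; an illustrative figure, not a specific device

class BlockDevice:
    """Toy block-addressed store: data moves only in whole blocks."""
    def __init__(self, blocks=4):
        self.data = bytearray(blocks * BLOCK)
        self.bytes_transferred = 0

    def write_byte(self, addr, value):
        # Read-modify-write: fetch the whole block, patch one byte, write it back.
        start = (addr // BLOCK) * BLOCK
        block = bytearray(self.data[start:start + BLOCK])
        self.bytes_transferred += BLOCK   # read the block in
        block[addr - start] = value
        self.data[start:start + BLOCK] = block
        self.bytes_transferred += BLOCK   # write the block back

class ByteAddressableMemory:
    """Toy random-access store: one byte in, one byte out."""
    def __init__(self, size=4 * BLOCK):
        self.data = bytearray(size)
        self.bytes_transferred = 0

    def write_byte(self, addr, value):
        self.data[addr] = value
        self.bytes_transferred += 1
```

In this toy model, ten one-byte updates move ten bytes on the random-access store but ten full blocks in and out of the block device.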

THE BAD?

When I first read the thread title about Flash nearing the end of its life, I automatically assumed it was about the fact that MLC flash has an unpowered data retention life of anywhere from months to a decade, averaging about five years.

That means that if you have any good memories on your original iPhone in your desk drawer, it might be a good idea to make sure you have a backup on another storage medium.

THE UGLY:

As a side note, Windows Mobile before version 5 had only one type of RAM, and it was partitioned as needed between code and storage. The trouble was, it wasn't battery-backed, so whenever you pulled the battery you had to reload all your apps and you lost all your pictures, etc.
 

jnpy!$4g3cwk

macrumors 65816
Feb 11, 2010
1,119
1,302
I've been reading articles like this for 35 years. Once in a while, a change takes place so fast that people lose money on short-term investments, but not very often. It is easy to say, "We need something better, and here are three technologies that might do better." It is another thing to get the cost down, make the investments necessary for scalability, develop the market, and deploy in a mass-production environment. Looking back 35 years, most of the predictions that said "This is going to happen in 3 years" were premature; things turned out to take 7-10 years. Lots of people have lost money trying to push things way too fast. Of course, plenty of people have also lost money making the prediction that things will never change. In this business, timing is everything.
 

Renzatic

Suspended
THE GOOD:

THE BAD?

THE UGLY:

The bad and the ugly wouldn't be much of an issue for any mid-term future tech, though. I didn't know about the memory shelf life of NAND flash, but that's not really too terrible. Magnetic drives, unless stored in pristine, airtight conditions, are usually only expected to reliably maintain data for a decade, aren't they?
 

MorphingDragon

macrumors 603
Mar 27, 2009
5,159
6
The World Inbetween
CPUs already have several levels of memory internally each increasing in speed and decreasing in size, they are called caches.

Caches only help if you're repeatedly accessing the same data, or accessing it in a pattern that suits a particular architecture. They're there to hide the fact that RAM is so slow on conventional computers.
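A toy direct-mapped cache shows how strongly the hit rate depends on access pattern. The line size, slot count, and access patterns below are made up for illustration, not any real CPU:

```python
LINE = 64     # bytes per cache line (illustrative)
SLOTS = 512   # direct-mapped slots, i.e. a 32 KB toy cache

def hit_rate(addresses):
    """Fraction of byte accesses served from the toy direct-mapped cache."""
    cache = [None] * SLOTS          # one line tag per slot
    hits = 0
    for addr in addresses:
        line = addr // LINE
        slot = line % SLOTS
        if cache[slot] == line:
            hits += 1
        else:
            cache[slot] = line      # miss: evict whatever was in the slot
    return hits / len(addresses)

# A sequential scan reuses every fetched line 64 times; a page-sized stride
# keeps bouncing between lines that collide in the same few slots.
sequential = range(64 * 1024)                   # walk 64 KB byte by byte
strided = [i * 4096 for i in range(256)] * 16   # jump a page at a time
```

In this model, the sequential scan hits about 98% of the time, while the strided pattern misses on every single access, which is the "pattern ideal for a particular architecture" point in a nutshell.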
 