Yeah, well, I don't think it's debatable that what you say is true. If the file is kept in memory then that's faster than if kept on a RAM drive. Of course. :p But there are a lot of files not kept in memory - which is also not really debatable. ;) Probably the best use of memory exclusively for OS speed would be to keep every OS component that ever gets accessed resident. I'm not arguing that. I've of course been specifically talking about application data.

If you wanted to put your OS on a RAM drive, a dedicated PCIe device would probably be the ticket. There used to be a few that acted like ATA devices - the i-RAM and HyperDrive were two. Violin used to offer a few DRAM-based solutions as well but I dunno what they're up to these days. (Here's their site if anyone is interested: http://www.violin-memory.com/products/ ) And MRAM has been looming as a promising tech for almost 10 years I think. I dunno of any products based on it, tho there probably are some. <shrug>
 
Why? Because the conventional wisdom has always been that memory is a rare and expensive commodity. :) So for OS apps you mean something like Finder? Try it: go to a large folder of 1000 files, all or most with custom icons. Scroll through to the bottom to get them all rendered and into the cache. Close that folder and load a few different similar ones. Now open Apple Mail, assuming you have as many emails as I do, and read for 20 to 40 minutes. Now go back to that first 1000-file folder. Oops, all the icons were dumped out of memory. :(
That's handling memory efficiently. Something like a cache of icons isn't all that important. There are many applications that use memory for caching but also for operations, because it's faster than a disk. They didn't do that back when the ramdisk was invented. The way they handled memory back then is completely different from what they are doing now, and for a good reason: the most expensive part of a computer used to be its memory. It's now one of the cheapest parts.

Look at photo editing applications. Many photographers have something like a Mac Pro with 32GB or more (64GB and 128GB aren't all that uncommon!) for their job. The reason is very simple: these apps load the photos into memory and run almost all of the editing in memory. It's only when you save that they write back to disk. In these scenarios the use of a ramdisk is about the most pointless thing you can possibly do. The content will be in memory twice, it won't give you any speed benefit, and it will be unsafe as hell because the contents are not written to non-volatile storage. No more power means data loss.

Now back to your example of the icons in Finder. The OS is in control of the memory. We should not see it as RAM but as virtual memory addresses, since there are also things like swap (or a page file). Anyway, the OS handles the memory. The app requests what memory it needs and what it wants. The OS gives it what it needs, or what it wants - the latter only if there is enough memory available. Now, what if the available memory declines because you fired up another app? The part that the OS gave to apps for free (i.e. everything beyond what they need) has to be handed back to the OS.

You can test this by monitoring Safari's memory usage when you try to fill your entire memory. You'll see its memory usage drop. That's because it was given more memory than it needed to make things go faster (it's used for caching webpages). This is also the reason why most Macs have little to no "free memory". Memory problems usually arise in this part: applications do not easily return memory. There are quite a few that don't return it at all; you have to close them in order to return the memory to the OS. That behaviour is a bug and should be fixed. The icons being cached in Finder are a good example of "nice if you have additional memory where I can store stuff, but you can have it back if you need it". It returns that memory when needed. Caching icons isn't a critical task. I'd be shocked if Finder did still cache them. That would be erratic behaviour (it would mean that it would eat up memory).
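For anyone who wants to watch this hand-back happen, OS X ships a command-line tool for it. A minimal sketch - the 262144-page figure in the comment is a made-up example number, not something your machine will necessarily report:

```shell
# Inspect OS X's memory bookkeeping from Terminal (vm_stat reports page
# counts, not bytes). Guarded so the command only runs on OS X.
if [ "$(uname)" = "Darwin" ]; then
    vm_stat    # "Pages inactive" is cached data the OS can hand back to apps
fi

# Page count * page size (4096 bytes) = bytes. For example, a hypothetical
# 262144 inactive pages would be:
echo $((262144 * 4096))    # 1073741824 bytes, i.e. 1 GB of reclaimable cache
```

Watching "Pages inactive" shrink while you launch memory-hungry apps is the same effect as watching Safari's usage drop in Activity Monitor.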

Even with 32GB and my dynamic pager turned off (mostly only affecting anonymous data pages and cache accesses) my system doesn't use much more than about 3 to 4GB. But the OS data I use in even a few hours is much much larger than that - maybe 20GB. It all gets kicked out.
It depends on what the application needs and how much it requests when it knows there is enough memory to be used. Some are more conservative in this than others. Also, not every part of a file is kept in memory. It's usually portions (think changes to the file, something like that).

To the snake oil remark all one needs is a sense of speed or a stopwatch to prove or disprove that. And, use a proper RAM Drive "that works well on OS X". ;)
If you do some research you'll quickly find out there are no proper ramdrives. There were some issues with ramdrives in previous OS X versions, and there have been in many other operating systems. I would not call them reliable. If you really need fast I/O, go buy an ssd and leave the memory handling to the operating system. That's the cheapest and most reliable way to get more I/O performance.
 
I agree with most of what you're saying. Most of it is the conventional wisdom we've all heard over the years. Some of it applies all of the time and some only in certain situations. I do take exception to a few minor points, and one worth mentioning is in your very first paragraph:

That's handling memory efficiently. Something like a cache of icons isn't all that important.

Efficient to what end? Isn't important to who? It's extremely critical to me. I chose Apple products almost exclusively based on their icon handling and performance (in 2006) and I almost switched to Windoze in 2008 because of the differences at that time between the two. When evaluating the importance of various OS conventions, house-keeping, and so on, we need to consider the individual - not just the cost/performance of the machine! I'm NOT the machine and, as Charlie Chaplin clearly dramatized, I don't wanna be either. :) Displays like the following, and using the OS to manipulate files and folders of files, are (for me) more efficient than using apps, and the performance of that is critical to me:

[attached image]

BTW, with the settings I posted earlier Finder does "still cache them" and it's wonderful!!! Truly wonderful!!! :) Nothing "erratic" about it.

And finally, it doesn't really matter to me if a RAM Drive is "proper" or not. If it doesn't cause OS instability and it's faster then that's all there is to it. The fact is that for application data RDU Pro fits that description. About 10x speed increase and no crashes that I've seen so far. It's a great advantage if used right.
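For anyone who wants to put a stopwatch on claims like this, a rough sequential-write comparison can be done with dd. This is only a sketch; the /Volumes/RAMDisk path is an assumption (use wherever your RAM drive is actually mounted):

```shell
# Rough sequential-write timing: write a 4 MB test file and report its size.
# bs=1024 works with both BSD dd (OS X) and GNU dd.
write_test() {
    dd if=/dev/zero of="$1" bs=1024 count=4096 2>/dev/null
    wc -c < "$1"     # should report 4194304 bytes
    rm "$1"
}

time write_test /tmp/dd_test                 # boot disk
# time write_test /Volumes/RAMDisk/dd_test   # RAM drive, if one is mounted
```

Sequential writes understate the gap; small random reads are where RAM drives pull furthest ahead of spinning disks.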
 
Let me pose a question... if you fill your Mac Pro 12 core with 128GB 8x 16GB sticks, OSX 10.6~10.8 can only address 96GB but is it possible to still harness any of the extra RAM into a RAMDrive?

I would think yes, once you configure the RAMDrive the OS then has the remaining available

Curious
 
Efficient to what end? Isn't important to who? It's extremely critical to me.
Efficient to the OS and applications. Efficiency in this case means that memory is actually being used and not sitting still. In some cases it means you have more memory for something and the user can benefit from that because the user experience improves, but that's just luxury. However, you also need memory to actually do something. It's nice that Finder caches icon previews and such, but it is much nicer that you can actually run it so you can copy and move files, etc. Or simply put: it's nice that it caches things so it becomes faster, but it's all about the core functionality of the application. That's what you are using it for, not because it caches things like icon previews nicely.

I chose Apple products almost exclusively based on their icon handling and performance (in 2006) and I almost switched to Windoze in 2008 because of the differences at that time between the two.
Biggest mistake a lot of people make: thinking performance is only one component. It's definitely not. Performance of a machine is a chain of components, both hardware and software. It's about your cpu, gpu, memory and disk as much as it is about the OS and how it handles something like memory. If something isn't right in that chain you'll probably end up with some kind of performance issue. You'll also see this in reality: people think they have performance issues due to not enough memory, so they fix that by putting more mem in the machine. Performance improves slightly but the problem remains. Some research is done and they find out the disk is now the bottleneck. Most times it's about the triangle of cpu, memory and disk (or I/O). Things are moved from disk to mem and back, and the cpu is used to do the calculations on those things.

Displays like the following, and using the OS to manipulate files and folders of files, are (for me) more efficient than using apps, and the performance of that is critical to me:
You are looking at the wrong component in that case. Working with files is something that is I/O bound so you need to look at your disk, not at your memory. Finder caches things in memory so the application itself has a nice smooth feeling to it when switching back and forth between apps. It's not about making I/O faster because for that you need to use a faster disk. If such performance is critical to you then buy something like a Velociraptor or ssd.

Messing around with memory (ramdisk and such) is like comforting someone when in fact they need to get stitches. It's nice to hear comforting words but it doesn't solve the problem. Stitches do, because they'll close the wound. Back in the day there was no such thing as fast I/O. Disks were slow so we turned to memory because it was similar but a lot faster. This is more like tending the wound by putting a bandage on it because that's all you have near you.

BTW, with the settings I posted earlier Finder does "still cache them" and it's wonderful!!! Truly wonderful!!! :) Nothing "erratic" about it.
It's erratic when it uses up memory without ever returning it. It's called a memory leak. In your case it does return it; using a ramdisk doesn't change Finder's behaviour in that. As for using a ramdisk: it might work all right, but most users see some strange behaviour with it over time. It is not to be trusted as it simply isn't reliable. A disk (hdd, ssd) is much more reliable. Also, using a ramdisk means there is less memory for applications. Another reason to look at the entire chain and fix the entire chain. That means that in most cases it is a better idea to upgrade to something like an ssd.

In the end a user shouldn't be messing around with these things. If an application needs memory to speed things up it should use it. It is that simple. If it doesn't, then file a bug with the developer. It's their job to deliver a properly functioning application, one that works smoothly. There are many other components that will make an app run smoothly. Most of them actually have nothing to do with system resources at all. It's about the code itself. If you press a button it should respond within a certain amount of time, for example. You'll understand this better if you've ever been taught something about microcontrollers and/or UNIX. Some commands/microcode take up more cpu cycles than others. The same thing goes for UNIX commands that you use in scripts. There are some combinations that will take only a few seconds, whereas others will take minutes, yet they do the same job with the same outcome. Sorting data, for example.

Btw, my computers don't have a problem with the caching of the icon previews at all due to the use of an ssd. Memory can be used for the really important things (like my VMs).

Let me pose a question... if you fill your Mac Pro 12 core with 128GB 8x 16GB sticks, OSX 10.6~10.8 can only address 96GB but is it possible to still harness any of the extra RAM into a RAMDrive?
Any of the 96GB can be used for that. You can't go beyond that because OS X can't address more than 96GB of memory. It doesn't do anything with the other 32GB. The only option is to upgrade to 10.9 because Mavericks will be able to address 128GB of memory iirc.
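For reference, OS X can carve a RAM disk out of that addressable memory with its built-in tools. A sketch - the 512 MB size and "RAMDisk" volume name are arbitrary choices, and the ram:// size is given in 512-byte sectors:

```shell
# Create and tear down a RAM disk with OS X's built-in tools.
# ram:// sizes are in 512-byte sectors, so MB * 2048 gives the sector count.
SIZE_MB=512
SECTORS=$((SIZE_MB * 2048))    # 512 MB -> 1048576 sectors
echo "ram://$SECTORS"

if [ "$(uname)" = "Darwin" ]; then    # hdiutil/diskutil are OS X-only
    DEV=$(hdiutil attach -nomount "ram://$SECTORS" | xargs)  # trim whitespace
    diskutil erasevolume HFS+ "RAMDisk" "$DEV"
    # ...use /Volumes/RAMDisk here...
    hdiutil detach "$DEV"    # hands the memory back to the OS
fi
```

Because this allocates from ordinary kernel-visible memory, it can never reach past the OS's 96GB addressing limit.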

Edit: on discussions.apple.com there's also a topic on this matter. You should click and read the links in the reply from kappy as they explain memory management in OS X: RAMdisk in OSX revisited.
 
Efficient to the OS and applications. Efficiency in this case means that memory is actual being used and not sitting still.
My opening questions were rhetorical in nature. ;)

Hehe, not even one of the points you make here matters to me as not a single one of them applies. I know you think they do, especially because this has been pounded into our heads since the days of the 64-kilobyte systems. Even with 4GB of memory some of these things are true and meaningful. But with 32GB or more almost all those rules and conventions fall on their faces and die. Tested, proven, experienced.

So it's a nice techy post but holds very little meaning for me, and probably the same for most other users with 32 to 256GB of memory.
 
@dyn:

In the end the only thing that matters is how fast a task gets done. E.g. think about compiling large programs. These consist of lots of files and each has to be read at least once during the compilation process. This creates random read and write access on the HDD.

If I copy the source code to a ram-disk first, this is sequential read from the HDD and write to the ram-disk. Then I build the program, which creates the random read and write access—only this time on the ram-disk. There, the difference between sequential and random access is negligible.

Thus I trade speed against available memory. Of course I still have to make sure there is enough memory to avoid page-out during the build process.
The point is, while the memory management of the OS tries to speed up things once the data has been loaded into the application or from a file, it sometimes is slow because it does not know in advance what I want next.
By putting all the files I want to be processed on the fastest storage possible (and since I know that processing these files involves some heavy disk access), I can create an environment which is optimized for a task.
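That staging workflow can be sketched in shell. It assumes a RAM disk is already mounted at /Volumes/RAMDisk; the project path and make-based build are hypothetical stand-ins:

```shell
# Stage a build on a RAM disk: one sequential copy in, random I/O in RAM,
# one sequential copy back out. Guarded so it is a no-op if paths don't exist.
SRC="$HOME/src/myproject"        # hypothetical project location
RAMDISK="/Volumes/RAMDisk"       # assumes a mounted RAM disk

if [ -d "$SRC" ] && [ -d "$RAMDISK" ]; then
    cp -R "$SRC" "$RAMDISK/"             # sequential read from the HDD
    ( cd "$RAMDISK/myproject" && make )  # compiler's random I/O now hits RAM
    cp -R "$RAMDISK/myproject" "$HOME/src/myproject-built"  # save results
fi
```

The copy back to non-volatile storage matters: everything on the RAM disk is gone after a crash or power loss.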

So, while a ram-disk is definitely no panacea, sometimes it can help to tweak performance (as in measured by the stopwatch).

----------

Let me pose a question... if you fill your Mac Pro 12 core with 128GB 8x 16GB sticks, OSX 10.6~10.8 can only address 96GB but is it possible to still harness any of the extra RAM into a RAMDrive?

I would think yes, once you configure the RAMDrive the OS then has the remaining available

Curious

A ram-disk (at least the built in ram-disk) does not make this last region of memory accessible. The memory for the ram-disk is taken from the memory which the system sees.

So you still can not use the last 32 GB of RAM (at least for now).
 
A ram-disk (at least the built in ram-disk) does not make this last region of memory accessible. The memory for the ram-disk is taken from the memory which the system sees.

So you still can not use the last 32 GB of RAM (at least for now).

Thanks for the definitive answer.
 
I know you think they do, especially because this has been pounded into our heads since the days of the 64-kilobyte systems. Even with 4GB of memory some of these things are true and meaningful. But with 32GB or more almost all those rules and conventions fall on their faces and die. Tested, proven, experienced.
That's not what I think, it's how things work. Period. You can't change the way operating systems and applications use memory unless you create them yourself ;) It is what it is no matter how much memory you use. And it's a good thing because it allows applications to run smoothly on low-memory systems as well as on those with heaps of memory. No need to upgrade your system to be able to run something.

Using a ramdisk might help but you are still not solving the problem, only masking it. You'll also create other problems since ram is volatile. Reboots, system hang-ups, power outages and so on will cause data loss. If you do not have enough ram in your system you'll also cause out-of-memory errors. Thus you'll need more memory, which is a problem because operating systems have limits (for OS X this is 96GB).

Mind you, people who use 32GB or more of memory do so because they need it. A machine that can use those amounts of memory isn't cheap and neither is the memory. Also something like ECC is going to be a requirement for such large amounts of memory.

Ramdisks should only be used as a scratchdisk. They are risky, which has indeed been tested, proven and experienced by many since the day they came into existence.

So it's a nice techy post but holds very little meaning for me, and probably the same for most other users with 32 to 256GB of memory.
You can't use more than 96GB on a Mac running OS X. People with such amounts of memory do other things with it. Hardly anybody uses ramdisks any more.

If you want a smooth and good performing computer you'll need more than just ram. It requires investing in cpu, disk, memory and probably some more components.

In the end the only thing that matters is how fast a task gets done.
Exactly. That's all users care about and probably should care about.

However, for people who want to use something like a ramdisk it is more beneficial to actually know how a computer works and what a ramdisk is and, most of all, what it isn't. What I see here is people trying to solve a problem the wrong way. They are digging a hole so they can use the sand to fill another hole. They forget that there still is a hole, it's just elsewhere now. If you have any kind of performance problem it is a wise thing to look at the entire picture, not just one part. Try to find out what is slow and why. Then you'll know what to do to fix it.

If I copy the source code to a ram-disk first, this is sequential read from the HDD and write to the ram-disk. Then I build the program, which creates the random read and write access—only this time on the ram-disk. There, the difference between sequential and random access is negligible.
There are quite a lot of applications that do something like that automatically. Firefox is one of them. Mozilla advises against setting limits on this, as well as against using it on a ramdisk. Firefox will use memory when needed and use disk cache when needed. They let the OS handle it so it can balance the load of the machine. You want this so you can have other applications running smoothly at the same time as well. In some cases you do need to do this yourself, but that's more for the experienced. As for compiling, you could use something like memcache. Not in all cases, because it can cause problems (which is similar to a ramdisk; not everything likes being put in memory); MacPorts is an example of that (some ports simply do not compile, they'll throw an error).

So, while a ram-disk is definitely no panacea, sometimes it can help to tweak performance (as in measured by the stopwatch).
Exactly. It has its uses but they are not as big and general as Tesselator is making us believe. This is why you should know what a ramdisk is and when you should use it.

A ram-disk (at least the built in ram-disk) does not make this last region of memory accessible. The memory for the ram-disk is taken from the memory which the system sees.

So you still can not use the last 32 GB of RAM (at least for now).
The link I posted has some details as to why it can't do this. It also explains why a ramdisk requires you to have a certain amount of ram; applications and such are not placed into ram, they are given a virtual address that is either in the active or inactive part of ram or in the pagefile/swap. If you don't have enough of those virtual addresses you'll run into memory problems. Although it's a good read it is quite technical.
 
That's not what I think, it's how things work. Period.
See!?! Told ya you thought so. :)

You can't change the way operating systems and applications use memory unless you create them yourself ;)
Speak for yourself. I changed the way mine works. For the better too. ;) And I can get even more aggressive with the mods if I choose to. :)

Using a ramdisk might help but you are still not solving the problem, only masking it.
Fine by me, I don't care. 10x speed increase is an acceptable "mask" IMHO! I'll take it! :D
And that's on MP1,1. On my newer systems or for someone with a MP5,1 it's likely more like 50x speed-up. :)


Hardly anybody uses ramdisks any more.
About half the people in this thread seem to be disproof of that. :p
 
Bear with me here, as I intend to provoke:

Done that, but I hope the discussion will nevertheless stay civil.

That said, I made one error at the beginning: When you want to start people talking, you give them just enough to get started (but not too much, because that might restrict the discussion). My mistake was that I used the term "Ramdrive", which seemingly led some people down a specific path...

Just in case I can still influence the discussion, let me restate the original central premises:
- RAM is waays faster than any hard drive (whether stationary or spinning)
- A lot of RAM is standing around unused, especially in machines which have a lot of it.
- Most of the time, the SSD/HDD is doing nothing

==> There has got to be a way to manage these three premises so as to get a performance boost, especially if such a solution were implemented at the OS level...

RGDS,
 
I've run Windows XP from a RAID 0 Gigabyte i-RAM setup back in the day. Each card had a whopping 4GB of RAM, yielding an 8GB partition.

Today, RAM caching of the OS on Windows is also taking off. AMD has some cool tech: http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/58400-amd-radeon-ramdisk-review.html

One review showed 16k random file read speeds at 710MB/sec!! Mumbo Jumbo aside, that kind of access time will make an OS fly!
 
I've run Windows XP from a RAID 0 Gigabyte i-RAM setup back in the day. Each card had a whopping 4GB of RAM, yielding an 8GB partition.

Today, RAM caching of the OS on Windows is also taking off. AMD has some cool tech: http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/58400-amd-radeon-ramdisk-review.html

One review showed 16k random file read speeds at 710MB/sec!! Mumbo Jumbo aside, that kind of access time will make an OS fly!

I know a RAM drive will benchmark like nobody's business, but is it a smarter use of memory than the OS's own disk caching solution?... Especially for OS files?

Take Windows... (I'm not an expert on OS X) but Windows SuperFetch loads the most often used files into RAM before they are needed (just like a cache). In effect, SuperFetch is like an intelligent RAM disk managed by the OS, without the overhead of a file system, and without the need to move contents (it can execute in place). Using a RAM disk seems counterproductive... Whatever Windows thinks is accessed the most is going to be prefetched into RAM. If it's on a RAM drive, it's now in RAM twice: in the RAM disk area of memory and in the free part of memory that prefetch is utilizing. So the RAM disk is just stealing space from the available cache. Whether it started on a HD, SSD, or RAM drive, if it's used often, SuperFetch will have it in RAM before it's needed. What am I missing here? If nothing, caching the OS on a RAM drive seems like a total waste?

Is OS X similar in its disk to ram caching algorithms?
 
I've run Windows XP from a RAID 0 Gigabyte i-RAM setup back in the day. Each card had a whopping 4GB of RAM, yielding an 8GB partition.

Today, RAM caching of the OS on Windows is also taking off. AMD has some cool tech: http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/58400-amd-radeon-ramdisk-review.html

One review showed 16k random file read speeds at 710MB/sec!! Mumbo Jumbo aside, that kind of access time will make an OS fly!

Yep. The benchmarks they show pretty much support everything I've said in the past weeks here about RDU on OS X too - almost perfectly! Nice find.

I think they kinda missed the boat with some of their tests though. At least I can think of many more practical tests than loading a game or starting VMware. Just loading apps and stuff isn't where a RAM drive shines its brightest! At least they got one almost right with the Firefox 100-tab test. Still not a hit though. It kinda makes me wanna question the prowess of the reviewer. :p



I know a RAM drive will benchmark like nobody's business, but is it a smarter use of memory than the OS's own disk caching solution?...
In many cases yes, the differences are astounding!

Especially for OS files?
That depends on which OS files. You could essentially ask these same questions about an SSD, ya know. Right: with memory caching being so fast, who needs an SSD? :) The problem with putting OS X files in a RAM drive is figuring out which ones to place there. Some will show a huge benefit, others only marginally so, and yet others not at all - and unless you're running a bare skeleton of a system we don't have the ability to place the entirety of OS X in a RAM drive, so selectivity is needed. Personally I would just recommend not using the RAM drive for the OS at all. Use it for small (<8GB) yet ferocious projects where it's easy to implement and manage and has proven itself.
 
That depends on which OS files. You could essentially ask these same questions about an SSD, ya know. Right: with memory caching being so fast, who needs an SSD? :) The problem with putting OS X files in a RAM drive is figuring out which ones to place there. Some will show a huge benefit, others only marginally so, and yet others not at all - and unless you're running a bare skeleton of a system we don't have the ability to place the entirety of OS X in a RAM drive, so selectivity is needed. Personally I would just recommend not using the RAM drive for the OS at all. Use it for small (<8GB) yet ferocious projects where it's easy to implement and manage and has proven itself.

I can understand the benefits of a RAM drive for a dataset that would otherwise involve a lot of disk I/O. That's easy to understand. Where a RAM drive seems less valuable is in "caching" OS executables and libraries that are read-only and normally prefetched by the OS into RAM to improve performance anyway. Such a scheme results in most of the OS being in RAM twice (on the RAM drive and in the prefetch area) with no benefit to performance, and the obvious disadvantage of using RAM that would be better left to the OS to manage.
 
I can understand the benefits of a RAM drive for a dataset that would otherwise involve a lot of disk I/O. That's easy to understand. Where a RAM drive seems less valuable is in "caching" OS executables and libraries that are read-only and normally prefetched by the OS into RAM to improve performance anyway. Such a scheme results in most of the OS being in RAM twice (on the RAM drive and in the prefetch area) with no benefit to performance, and the obvious disadvantage of using RAM that would be better left to the OS to manage.

Yeah, I've never tried to profile it so I dunno what parts of the OS would be good or not to have in the RAM drive, but I've noticed some of this is perceptual. If you think of a RAM drive as stolen system RAM then some things seem redundant, whereas if you think of it as a different device - partitioned, repurposed, and separate from what RAM is actually needed by the system/apps/etc. - then, not so much. It's the Jedi thing again I suppose... This is not the RAM you're looking for. :)
 
Fond memories of my Commodore 64 RAM drive

I know you think they do, especially because this has been pounded into our heads since the days of the 64-kilobyte systems.

When I was a wee lad I ran a BBS on a Commodore 64. To use modern terms, the BBS was a website with forums, interactive games, blog posts, and file sharing.

The very limited C-64 could not keep the whole BBS application in RAM, so it loaded separate modules from the floppy drive. Unfortunately this meant delays of several seconds every time a user switched areas in the BBS.

I eventually bought an extended RAM module. The BBS application wasn't aware of anything outside the 64K so it still wanted to load modules separately from a drive. However, the OS could make use of the RAM module as a RAM drive, and the BBS application could be copied from floppy disk to RAM drive.

This meant, essentially, the complete elimination of the module load time. I was, AFAIK, the only local BBS to do this.

The moral of this story is:
1) RAM drives certainly can make a very noticeable difference.
2) They even made a difference back when computers only had 64K.
 
The moral of this story is:
1) RAM drives certainly can make a very noticeable difference.
2) They even made a difference back when computers only had 64K.

It's a pretty specific story though, not sure how relevant it is to using a RAM drive today on a modern OS and machine. For example, the fact that you only had 64K and a slow 5.25" floppy or, worse, cassette storage made the difference larger. Also, the C64 didn't really have an OS that's similar to anything we're used to today, just a ROM with some memory-mapped addresses and a BASIC interpreter.
 
It's a pretty specific story though, not sure how relevant it is to using a RAM drive today on a modern OS and machine. For example, the fact that you only had 64K and a slow 5.25" floppy or, worse, cassette storage made the difference larger. Also, the C64 didn't really have an OS that's similar to anything we're used to today, just a ROM with some memory-mapped addresses and a BASIC interpreter.

I agree with you.

I didn't mean for an example of a C-64 success to have any bearing on Mac Pros at all, and apologize if that's how I sounded. I mostly meant to reminisce about the early days and the topic of RAM drives in very general terms.

Fun times it was. I increased the capacity of the BBS by continually adding drives as I could afford them. Eventually I had 5 floppy drives, all in their own external boxes, their own power supplies, and daisy chained data cables. In other words, exactly what storage on the new MP will loo... you know what, I don't want to go there. :D
 
When I was a wee lad I ran a BBS on a Commodore 64.

Those were the days. I had one as well. At one point up to six 1541s were online. What was the theme of your board? Mine was about game content and code creation mostly - but there was a lot about languages in general too.

I moved it over to the Amiga when that came out but soon after I moved to Japan and never put it up again after that.
 
Those were the days. I had one as well. At one point up to six 1541s were online. What was the theme of your board? Mine was about game content and code creation mostly - but there was a lot about languages in general too.

I moved it over to the Amiga when that came out but soon after I moved to Japan and never put it up again after that.

I had two 1541 drives, a 1571, a 1581, and two drives that I cannot find on Wikipedia. They were 5.25" double density capable drives that read both sides of the floppy (no flipping required). So they had access to 4x the space of the 1541. I vaguely remember they were not connected in the usual daisy chain manner; they had their own power supply and used some sort of IEEE data connector. My memory fails me.

The forums were essentially a failure, perhaps because I didn't have a theme and I didn't register the board with any local SIGs. I would say 10% was forum use (mostly talking about code modification to the BBS software with other BBS operators), 10% of the use was online gaming, and about 80% was people using all that drive space for file sharing.
 