
MacAztec (original poster)
To all you technical people out there, I'm trying to conceptually work through the differences between SAN and NAS architectures.

I understand the basics: a NAS can be something like an Apple Time Capsule, a WD ethernet drive, etc. It has a dedicated IP and is viewed as storage on the network.

My understanding of NAS is as follows: a NAS server communicates (via fiber) with storage arrays to pull files extremely fast. The NAS server receives requests from users on the network (via ethernet, preferably a private NAS ethernet network, with a separate network for internet and other traffic). The client computers send a request to the host NAS server via ethernet, the host in turn transmits the request to the NAS array via fiber, and the NAS array sends the data back to the client via fiber.

Can someone please elaborate on these topics, if you have the knowledge? It would be much appreciated.

Thanks!
 
I believe the part where you give your brief description of NAS is actually describing a SAN.

SAN runs on fiber, and is a solution to a much larger problem than a NAS can handle. A SAN scales better, is decentralized, provides faster response times, and, I believe, is better at storing very large (terabyte/petabyte-scale) amounts of data.

A good start:
http://compnetworking.about.com/od/networkstorage/f/san-vs-nas.htm

Yeah, that second paragraph is pretty much the textbook definition of a SAN. Anyway, I'll try not to get too geeky. Basically, most NAS devices I have seen are just one box with a number of drives. A SAN is a self-contained network. The one at my office, for example, takes up an entire 42U server rack. It is broken up into shelves, most often with about 14 hard drives in each shelf. Each shelf is connected to the other shelves via an internal network. The drives in a shelf will be in a RAID array. A SAN will have a controller, which is a server dedicated to running the SAN. It does things like manage addresses, monitor the health of the SAN, create storage groups, etc.

In many NAS devices I have seen, the drives show up as one big drive. In a SAN, you can group disks together and present them as one storage unit. So, for example, you could pick 10 drives from anywhere in the SAN and present them as a network drive. Finally, a SAN will give you much more redundancy than a NAS. A SAN usually has at least dual connections to its internal network and your LAN. Furthermore, you can mirror your SAN: say you have 4 shelves with 14 drives each in a rack; you could put another 4 shelves in the rack and mirror all data from one to the other. That way, you get the redundancy of RAID plus a full mirror of the SAN.
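
To make the shelf/storage-group idea concrete, here's a toy Python sketch. None of this is a real vendor API; Drive, Shelf, and create_storage_group are made-up names, and real controllers do far more (balancing across shelves, parity, hot spares):

```python
# Toy model of shelves, storage groups, and mirroring as described above.
# All names here are made up for illustration, not any vendor's API.
from dataclasses import dataclass, field
from itertools import chain

@dataclass(frozen=True)
class Drive:
    shelf_id: int
    slot: int
    size_gb: int = 1000

@dataclass
class Shelf:
    shelf_id: int
    drives: list = field(default_factory=list)

def build_rack(num_shelves=4, drives_per_shelf=14):
    """Four shelves of 14 drives each, as in the example above."""
    return [
        Shelf(s, [Drive(s, slot) for slot in range(drives_per_shelf)])
        for s in range(num_shelves)
    ]

def create_storage_group(shelves, num_drives):
    """Pick drives from anywhere in the SAN and present them as one unit."""
    pool = list(chain.from_iterable(s.drives for s in shelves))
    chosen = pool[:num_drives]  # a real controller would balance across shelves
    capacity = sum(d.size_gb for d in chosen)
    return chosen, capacity

rack_a = build_rack()
rack_b = build_rack()  # the mirror: another 4 shelves in the same rack

group, capacity = create_storage_group(rack_a, 10)
print(f"storage group: {len(group)} drives, {capacity} GB raw")

# Mirroring pairs each drive in rack A with a counterpart in rack B,
# so you get RAID inside each shelf *and* a full second copy.
mirror_pairs = list(zip(group, create_storage_group(rack_b, 10)[0]))
print(f"mirrored across {len(mirror_pairs)} drive pairs")
```

The point is just that the grouping is logical: the ten drives can come from any shelf, and the mirror holds a full second copy on top of whatever RAID runs inside each shelf.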
 
You're describing storage arrays. Our storage arrays here take up five 42U racks, one for each site, but they are just a small part of our SAN, what with our redundant fabrics (2 completely independent sets of switches) over 2 sites. NAS units can have the same mirroring and multiple network drives; we actually have such a NAS too. It takes up 4 racks, has 3 different controller boxes, and tons of storage shelves which are separated into different RAID groups.

Also, usually, you don't present drives as network shares over a SAN to the servers; you present LUNs (logical units), which show up on the server as if they were local hard drives, which you can then partition, add to your volume manager, or use as raw disks. Otherwise, you're not really running a SAN, you're running a LAN dedicated to storage.
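
From the client side, that difference is easy to see. Here's a minimal sketch, assuming a Linux host (the filesystem list is illustrative, not exhaustive): a NAS share mounts as a network filesystem, while a SAN LUN shows up as an ordinary block device:

```python
# Rough sketch of the client-side difference: a LUN appears as a block
# device, a NAS share appears as a network filesystem mount. Linux-only.
NETWORK_FS = {"nfs", "nfs4", "cifs", "smbfs"}  # NAS-style share filesystems

def classify_mounts(mounts_path="/proc/mounts"):
    with open(mounts_path) as f:
        for line in f:
            device, mountpoint, fstype = line.split()[:3]
            if fstype in NETWORK_FS:
                print(f"{mountpoint}: network share ({fstype}) -> NAS-style")
            elif device.startswith("/dev/"):
                # Looks like a local disk to the OS; it could just as well
                # be a SAN LUN -- you'd need multipath/WWN info to tell.
                print(f"{mountpoint}: block device {device}")

classify_mounts()
```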

SAN and NAS are two terms describing very different things. SAN means Storage Area Network. It's basically a network linking up your storage and your servers. It will be composed of switches, bridges, cabling, adapter cards, and your actual storage arrays, which can be a server with the proper software to present LUNs to your servers or a real dedicated storage array.

SANs also don't have to run over fiber optics; that's just one medium. There are actually two protocols used in SANs these days: the older but powerful Fibre Channel, which can run over fiber optics or copper (and can be tunneled over IP networks as FCIP), and iSCSI, which runs over any kind of IP network (so basically, you're free to design the IP network however you want).
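
A rough way to picture the two: both carry the same SCSI commands, and only the wrapper and the wire differ. The byte layouts below are toy stand-ins, not the real FC frame or iSCSI PDU formats (though 0x28 is the real READ(10) opcode and 3260 is the real iSCSI port):

```python
# Both Fibre Channel and iSCSI ultimately carry the same SCSI commands;
# only the transport wrapper differs. These layouts are toy illustrations.
import struct

# A SCSI READ(10) CDB with all fields zeroed; 0x28 is the real opcode.
READ_10 = bytes([0x28]) + bytes(9)

def wrap_fibre_channel(cdb: bytes, source_id: int, dest_id: int) -> bytes:
    """Fibre Channel runs over its own fabric (optical or copper links)."""
    header = struct.pack(">II", source_id, dest_id)  # fake FC header
    return header + cdb

def wrap_iscsi(cdb: bytes, target_ip: str, port: int = 3260) -> bytes:
    """iSCSI just rides on TCP/IP, so any IP network design works."""
    header = f"{target_ip}:{port}".encode().ljust(16, b"\x00")  # fake PDU header
    return header + cdb

# Same CDB either way -- the target unwraps it and sees a plain SCSI read.
print(wrap_fibre_channel(READ_10, 0x010203, 0x040506).hex())
print(wrap_iscsi(READ_10, "192.0.2.10").hex())
```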

Now, NAS means Network Attached Storage. Yes, it's that generic: it's just your good old storage array, which may or may not be connected to a SAN. In the industry, however, it has been standardised to mean storage attached to your normal network that presents storage as network shares rather than LUNs. Of course, some NAS boxes also support Fibre Channel or iSCSI to present actual LUNs, which makes it all more confusing than it needs to be.

All other distinctions (RAID levels, RAID groups, etc.) have no bearing on whether something is a SAN or a NAS. Again, a SAN is a network; there's tons of equipment there, not just a controller box with drives. A NAS, on the other hand, is just a bunch of drives and a controller plugged into the network. Vendors that sell "SAN storage arrays" aren't selling you a SAN in a box; they're usually selling you a box to connect to a SAN, something that won't support things like CIFS/NFS/AFP to share drives but will require zoning controller ports and presenting LUNs.
 
I didn't want to get into presenting LUNs and all that. I figured that was TMI. :)

I forgot about iSCSI. Which is sad, really, because our new EqualLogic at work runs on iSCSI. I admit, I've never seen a NAS that complex.
 