SAN Storage is a mission critical part of your production or post production workflow. If it isn’t, then you don’t need one. This article is for the first-time SAN storage hunter who is looking to get up to speed quickly.
If you absolutely don’t know what a SAN (Storage Area Network) is, or whether you need a NAS (Network Attached Storage) instead, then please read SAN or NAS:
First, a disclaimer: I’m not an expert in SAN storage. My understanding is, to put it simply, my understanding. This could be wrong or inaccurate. Like you, my first priority is video, and I don’t have the time or brain power to figure out the complications of a SAN system. I want the bottom line, and this is my attempt to simplify things, nothing more. If you really want detailed advice, the person or organization you should be consulting is called a Systems Integrator.
Signs that you need SAN storage
Here are issues that a SAN is meant to solve:
- More than one computer needs the same video files at the same time.
- You have a team of five or more workstations.
- You can’t afford dropped frames or delays.
- You need centralized storage in a server room.
- You work with both Macs and PCs (Windows), and maybe even Linux, Android, iOS, etc.
- You are earning enough money to justify a SAN.
- Your data is mission critical.
If and only if all of the above conditions are met should you start thinking of investing in a SAN. E.g., if you have twenty workstations but they don’t need to access the same data, then a SAN is overkill. Any NAS or DAS will do.
On the other hand, if you meet all criteria but can’t afford a SAN, then there’s something wrong with your business model!
How to select the right Systems Integrator
If you’re an IT or networking expert, then you could build a SAN yourself. It isn’t that hard (not like building a camera or programming an NLE from scratch). Actually, in many cases if you’re halfway there (you have some networking experience, plus the time and inclination to make a determined effort) it might be worth the risk.
On the other hand, 99% of video professionals are not engineers, and don’t want to worry about the million problems that come with setting up a network storage architecture. The person or organization that gets a SAN working is called a systems integrator, or simply, integrator. Their job is to study your workflow needs for the foreseeable future and suggest the right vendors. If you pay them enough, they will oversee the installation and also take responsibility for maintenance.
Here are some things you should look for in a good systems integrator (not very different from finding a good plumber):
- Must be in the same area as you are, so they can be physically present whenever needed.
- Must have already installed SAN storage for several facilities similar to yours, hopefully ones that you can call and learn from.
- Must not be affiliated with any vendor, i.e., no hidden agenda. A hidden agenda is deadly.
- Must know networking inside out. If they can’t or won’t answer your questions, or if they speak down to you, avoid them.
- Must offer proactive suggestions. If you’re ‘just another client’ to them, then don’t become their client. Don’t fall for false promises.
- Must be willing to tell you how to build a DIY SAN. If they have ‘trade secrets’, then let them go. There are no secrets in networking that aren’t available for free online.
Only you can decide if you need a systems integrator or not. Here’s one way to know: If you had trouble reading the two articles I’ve linked to above, and don’t understand a word of what I’ve written in this article, then you’re going to find it hard without the help of a good systems integrator.
Should you build a SAN yourself?
You can’t understand how cold the water is without getting some part of your body wet. I strongly urge you to consider building a SAN yourself before you consult anybody. It’ll take a week or so of research, but it’ll be invaluable. Most of us will fail in this endeavor, but we will get the bigger picture. Three reasons for doing this:
- You know your workflow better than anyone else. You will be able to speak in a language you now understand.
- Nobody can screw around with you. The more you know, the more you can challenge your integrator.
- When your business grows further and the time comes to expand or replace your SAN, you’ll have learnt a whole lot more.
What do you need to build a SAN?
You can build a SAN with commonly available hardware. That’s the easy part. The hard part is software. Let’s break it down into four major divisions:
- The storage array
- The network
- The workstations
- The software
Let’s look at each division one by one. Remember, the reason why building a SAN is tough is because there are too many choices at every turn. It’s like chess, where one move might have multiple ramifications down the line that you can’t even see right now.
Important: I might give you examples of parts with model numbers, but for heaven’s sake don’t assume they will work well together as a SAN! I only mention specific models so you can see what they look like. Don’t treat them as a shopping list.
Many people incorrectly assume that a SAN is just storage. In fact, it’s everything in the network (Storage Area Network, anyone?). Storage is just one part of the SAN.
How to estimate storage requirements
You need drives to store data. Since data will be accessed simultaneously, you will need fast storage. The only cheap way to do this with redundancy is RAID.
Your SAN is a working beast, not an archive; size it for the projects in active use. E.g., if you work predominantly with Prores HQ footage, your data rate for one stream is 27.5 MB/s. If you have multiple editors (say three) working on a documentary with 200 hours of footage, you’ll need:
- A read speed of 82.5 MB/s for one stream per editor (three streams in total).
- 20 TB of storage (200 hours × 3,600 seconds × 27.5 MB/s ≈ 19.8 TB).
Let’s say you are using 4 TB 7,200 rpm drives in your SAN, in RAID 6. You’ll need 8 drives (32 TB in total), of which you’ll get 24 TB of usable storage, since RAID 6 gives up two drives’ worth of capacity to parity.
If each drive can sustain 100 MB/s, then this array will deliver a maximum read throughput of about 600 MB/s, or roughly 7 streams per editor. That’ll do fine for Prores HQ workflows.
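The arithmetic above is easy to sanity-check with a quick script. The figures (Prores HQ at 27.5 MB/s, three editors, 200 hours of footage, eight 4 TB drives at 100 MB/s each in RAID 6) come straight from the example; the variable names are my own:

```python
# Back-of-the-envelope SAN sizing for the Prores HQ example above.
# Uses decimal units (1 TB = 1,000,000 MB), matching drive marketing.

STREAM_MBPS = 27.5      # one Prores HQ stream, MB/s
EDITORS = 3             # concurrent editors, one stream each
FOOTAGE_HOURS = 200     # total documentary footage

# Storage needed to hold all the footage
storage_tb = STREAM_MBPS * 3600 * FOOTAGE_HOURS / 1_000_000
print(f"Footage storage: {storage_tb:.1f} TB")       # ~19.8 TB, call it 20 TB

# Read bandwidth needed for one stream per editor
required_mbps = STREAM_MBPS * EDITORS
print(f"Required read speed: {required_mbps} MB/s")  # 82.5 MB/s

# RAID 6 array: two drives' worth of capacity goes to parity
DRIVES, DRIVE_TB, DRIVE_MBPS = 8, 4, 100
usable_tb = (DRIVES - 2) * DRIVE_TB
array_mbps = (DRIVES - 2) * DRIVE_MBPS               # conservative read estimate
print(f"Usable capacity: {usable_tb} TB")            # 24 TB
print(f"Array read throughput: {array_mbps} MB/s")   # 600 MB/s

streams_per_editor = int(array_mbps / STREAM_MBPS / EDITORS)
print(f"Streams per editor: {streams_per_editor}")   # 7
```

Plug in your own codec data rate and footage count to see how quickly the numbers change; a 4K workflow can multiply both figures several times over.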
Welcome to the first big choice. We assumed 7,200 rpm drives, but it’s not that simple. You have the following choices:
- SATA III 6 Gbps
- SAS 6 Gbps
SAS (Serial Attached SCSI) is what is used in servers, because by design SAS drives are supposed to be the most reliable. They operate at higher signaling voltages, which is why you can run SAS cables up to 33 feet (10 m), while SATA can only go up to 3.3 feet (1 m).
On the other hand, SATA drives are cheaper, while the faster spindle speeds (10,000 rpm, 15,000 rpm, etc.) are generally found on SAS drives.
Which should you choose? If you’re building your own SAN, then you should be okay with SATA (If Backblaze can live with it, then so can you).
Then there’s the choice of platter drives vs SSDs. Of course, SSDs are still far more expensive per terabyte.
The RAID controller
To get the 8 drives working like clockwork, you’ll need a solid hardware RAID controller. The features you’ll need to look out for are:
- The RAID controller should take 8 drives.
- The RAID controller must support RAID 6.
- Whether it supports SAS, SATA, or both, as the case may be.
- Which PCIe interface it uses.
- Which operating systems it supports (a server will most likely run Linux).
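When you start comparing spec sheets, the checklist above is easy to mechanize. Here’s a minimal sketch; the candidate cards and their specs are invented for illustration, not real products or a shopping list:

```python
# Filter RAID controller candidates against our requirements.
# The candidate data below is invented for illustration only.

requirements = {
    "ports": 8,           # must take 8 drives
    "raid6": True,        # must support RAID 6
    "interface": "SATA",  # the drive interface we settled on
}

candidates = [
    {"model": "Card A", "ports": 8, "raid6": True,  "interface": "SATA/SAS", "pcie": "2.0 x8"},
    {"model": "Card B", "ports": 4, "raid6": True,  "interface": "SATA",     "pcie": "2.0 x4"},
    {"model": "Card C", "ports": 8, "raid6": False, "interface": "SAS",      "pcie": "3.0 x8"},
]

def fits(card):
    """True if a card meets every hard requirement."""
    return (card["ports"] >= requirements["ports"]
            and card["raid6"] == requirements["raid6"]
            and requirements["interface"] in card["interface"])

shortlist = [c["model"] for c in candidates if fits(c)]
print(shortlist)  # ['Card A']
```

Card B falls out on port count and Card C on RAID 6 support; only a card that clears every hard requirement should survive to the price-comparison stage.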
Some brands that make RAID controllers are Areca, LSI, Adaptec, and many more. One example that works in our case is the Areca ARC-1223-8* PCIe 2.0 x8 SATA/SAS RAID card.
The motherboard (backplane)
Now you have to buy a motherboard that supports PCIe 2.0 x8 (you get the idea). Supermicro, for example, is a well-known maker of server-grade motherboards.
You are already using RAID for data redundancy. But there are other kinds:
- Power supply – You will need a redundant power supply so that if one fails the SAN will still continue working. It will obviously be used in tandem with a UPS and surge protector.
- Battery Backup Module (BBM) for the RAID controller – prevents loss of data in cache if there’s a power failure.
- Extra hard drives.
- ECC RAM.
- Another server (called a redundant server).
Let’s go back to the first line of this article: if the SAN is not mission critical, it is pointless. Redundancy is how you ensure the system keeps running when something fails.
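A quick way to see why redundancy pays for itself: independent failures multiply. If one power supply fails with some probability over a year, two redundant supplies only take the system down if both fail. A rough sketch (the 5% annual failure rate is an assumed figure, purely for illustration):

```python
# Why redundant components help: independent failure probabilities multiply.
# The 5% annual failure rate is an assumption for illustration only.

p_fail = 0.05  # chance a single PSU fails within a year (assumed)

single = p_fail          # one PSU: system down whenever it fails
redundant = p_fail ** 2  # two PSUs: down only if BOTH fail in the same period

print(f"Single PSU downtime risk:    {single:.2%}")     # 5.00%
print(f"Redundant PSU downtime risk: {redundant:.4%}")  # 0.2500%
```

The same multiplication argument applies to hot-spare drives and to the redundant server itself; each extra layer trades money for another factor in that product.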
All the parts together form the server (a computer serving files over a network). This server is designed for storage, and is called a storage array. The case is usually rack-mounted, so it can sit neatly in a well-cooled machine room (server room). It will have large fans to keep it cool from within. It will have a Xeon processor that is designed to work 24/7. Every part must be chosen to fit into the chassis and run smoothly. The cabling is done neatly so as to allow for ease of maintenance when (not if) things fail.
In fact, all the components of a server are designed to run 24/7, and if the server itself fails, the redundant server takes over. This is how big websites stay online all the time. A busy post production facility that depends on its SAN needs no less. All it takes to bring a storage array to its knees is a faulty cable or dead processor, just one thing.
So far we’ve built our storage array, and it is time to go to more complex things. There are still a few parts that belong to the storage array that we have omitted, because they are easier to understand as part of their own divisions.
In Part Two we’ll look at the parts of a network and workstation and of course, the software. We’ll also see how much it costs to put all this together.