Storage Area Networks (SANs)

When you attach a disk directly to a server, you've married them. If the server dies, you're recovering data. If you need more storage, you're adding disks to that specific machine. If you want another server to access that data, you're copying files.

A Storage Area Network (SAN) is a divorce. It creates a separate network just for storage—a pool of disks that servers connect to over high-speed links. The server sees a disk. It doesn't know—or care—that this disk lives in a rack across the room, is mirrored to a second data center, and will still exist after the server is decommissioned.

Block Storage vs. File Storage

This distinction matters: SANs provide block storage, not files.

When you access a NAS, you're asking for /documents/report.pdf. The storage system finds the file, manages permissions, handles the filesystem. The NAS knows what a file is.

When you access a SAN, you're saying "give me blocks 4096 through 8192 on LUN 7." The SAN doesn't know what's in those blocks—it could be a database, a filesystem, raw application data. The server handles all the file logic. The SAN just delivers blocks, fast.
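The difference shows up even in application code. Below is a minimal Python sketch of the two models; the file path and the /dev/sdb device name are placeholders, and reading a raw block device assumes a Linux host and sufficient privileges.

```python
import os

BLOCK_SIZE = 512  # a common logical block size; many devices use 4096

# File storage (e.g. a NAS mount): ask for a named file, and the storage
# side resolves the path, permissions, and on-disk layout for you.
with open("/srv/share/report.pdf", "rb") as f:      # placeholder path
    header = f.read(1024)

# Block storage (e.g. a SAN LUN): address the device by block number.
# The array has no idea what these bytes mean.
fd = os.open("/dev/sdb", os.O_RDONLY)               # placeholder LUN device
try:
    os.lseek(fd, 4096 * BLOCK_SIZE, os.SEEK_SET)    # jump to block 4096
    raw = os.read(fd, (8192 - 4096 + 1) * BLOCK_SIZE)  # blocks 4096 through 8192
finally:
    os.close(fd)
```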

This is why databases love SANs. The database engine wants direct control over how data is laid out on disk, how writes are cached, how reads are optimized. It doesn't want a file server interpreting its requests. It wants raw blocks.

How It Works

Servers connect to the SAN through host bus adapters (HBAs)—specialized network cards designed for storage traffic. The SAN presents storage as Logical Unit Numbers (LUNs), which appear to the server as ordinary disks.

To the operating system, a LUN looks exactly like a local disk. You format it, mount it, store files on it. The OS doesn't know the disk is actually an allocation from a massive storage array connected over a dedicated network. The abstraction is complete.
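As a small illustration of that transparency, on a Linux host a SAN-backed LUN is described by the same sysfs files as any local disk; the sdb name below is a placeholder for whatever the SCSI or multipath layer assigns. A sketch:

```python
# Sketch only: "sdb" is a placeholder; the real name depends on how the LUN
# is presented to the host. Nothing in sysfs shouts "this is a SAN LUN".
def disk_size_bytes(device="sdb"):
    with open(f"/sys/block/{device}/size") as f:
        sectors = int(f.read())      # sysfs reports size in 512-byte sectors
    return sectors * 512

def disk_model(device="sdb"):
    with open(f"/sys/block/{device}/device/model") as f:
        return f.read().strip()      # may hint at the array vendor, nothing more

print(disk_size_bytes("sdb") / 2**40, "TiB", disk_model("sdb"))
```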

This abstraction enables everything else: you can expand a LUN without touching the server, snapshot it for backups, replicate it to another site, or reassign it to a different server entirely. The storage has its own life.

The Protocols

Fibre Channel is the traditional SAN protocol—a dedicated network technology built specifically for storage. It runs at 8, 16, or 32 Gbps with extremely low latency. The downside: you need separate Fibre Channel switches, separate cabling, separate expertise. It's a parallel infrastructure.

iSCSI wraps SCSI storage commands in TCP/IP packets and sends them over regular Ethernet. Your existing network switches work. Your existing network team can manage it. Performance is slightly lower than Fibre Channel, but 10 or 25 Gbps Ethernet is fast enough for most workloads. The cost savings are substantial.
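To make "SCSI commands in TCP/IP packets" concrete, here is a deliberately simplified Python sketch. The READ(16) command layout is real SCSI; the tiny wrapper header and the 192.0.2.10 target address are stand-ins, not the actual iSCSI PDU format defined in RFC 7143.

```python
import socket
import struct

def scsi_read16_cdb(lba, blocks):
    """Build a SCSI READ(16) command descriptor block (this layout is real)."""
    # opcode 0x88, flags, 64-bit LBA, 32-bit transfer length, group, control
    return struct.pack(">BBQIBB", 0x88, 0, lba, blocks, 0, 0)

def toy_pdu(cdb, lun):
    """Wrap the CDB in a made-up header standing in for the real iSCSI PDU."""
    return struct.pack(">BB", 0x01, lun) + cdb

# The "fabric" is just TCP on ordinary Ethernet. 3260 is the standard iSCSI
# port; 192.0.2.10 is a documentation address, not a real target.
payload = toy_pdu(scsi_read16_cdb(lba=4096, blocks=4096), lun=7)
# with socket.create_connection(("192.0.2.10", 3260)) as s:
#     s.sendall(payload)
```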

Most organizations today choose iSCSI unless they need maximum possible performance. The infrastructure simplification outweighs the performance gap.

What SANs Enable

Storage pooling: Instead of buying disks for each server, you buy a storage array and allocate from a shared pool. A server needs 2TB today and 10TB next month? Expand the LUN. No hardware changes to the server.

Server independence: Replace a failed server, connect it to the same LUN, and you're running again. The data never moved.

Live migration: Move a running virtual machine from one host to another. Both hosts see the same storage—the VM's disk doesn't need to be copied.

Snapshots and replication: The storage array can snapshot a LUN in milliseconds—a point-in-time copy for backups or testing. It can replicate LUNs to a remote site for disaster recovery. The servers don't participate; they don't even know it's happening.

High availability: Redundant controllers, redundant paths, redundant switches. If a component fails, traffic routes around it automatically. Enterprise SANs are designed for zero downtime.
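To ground the snapshot point above: arrays typically use copy-on-write (or redirect-on-write), so taking a snapshot is just bookkeeping until blocks actually change. The toy Python model below illustrates the copy-on-write idea only; it is not how any particular vendor's array is implemented.

```python
class ToyLun:
    """Toy block device: a dict of block number -> data."""
    def __init__(self):
        self.blocks = {}
        self.snapshots = []

    def snapshot(self):
        # Taking a snapshot is instant: no data is copied up front.
        snap = {}
        self.snapshots.append(snap)
        return snap

    def write(self, block_no, data):
        # Copy-on-write: before overwriting, preserve the old data in every
        # snapshot that hasn't captured this block yet.
        for snap in self.snapshots:
            snap.setdefault(block_no, self.blocks.get(block_no))
        self.blocks[block_no] = data

    def read_snapshot(self, snap, block_no):
        # A snapshot holds only blocks changed since it was taken;
        # everything else still lives in the live LUN.
        return snap[block_no] if block_no in snap else self.blocks.get(block_no)

lun = ToyLun()
lun.write(0, b"v1")
snap = lun.snapshot()            # "milliseconds": just bookkeeping
lun.write(0, b"v2")              # the old b"v1" is preserved for the snapshot
assert lun.read_snapshot(snap, 0) == b"v1"
assert lun.blocks[0] == b"v2"
```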

When SANs Make Sense

High-performance databases: Oracle, SQL Server, PostgreSQL—when latency matters and you need the storage array's caching and optimization.

Virtual machine infrastructure: VMware, Hyper-V, and other hypervisors expect shared storage for features like live migration and high availability.

Mission-critical applications: When the data must survive server failures, when you need instant snapshots, when disaster recovery means replicating to another site in real time.

When SANs Don't Make Sense

SANs are expensive and complex. You need specialized hardware, specialized knowledge, and ongoing management. For file sharing, backups, or departmental storage, NAS is simpler and cheaper. For a single server that doesn't need shared storage, direct-attached disks are fine.

Cloud block storage (AWS EBS, Azure Disks) provides SAN-like capabilities without the infrastructure—you get block storage that's independent of your compute instances, with snapshots and replication, but someone else manages the array.
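A rough sketch of that model using the AWS SDK for Python (boto3); the region, instance ID, and device name are placeholders, the calls assume credentials are already configured, and a real script would wait for the volume to become available before attaching it.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

# "Provision a LUN": carve a 100 GiB volume out of someone else's array.
vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp3")

# "Map the LUN to a host": attach it to a compute instance.
# (In practice, wait for the volume to reach the 'available' state first.)
ec2.attach_volume(VolumeId=vol["VolumeId"],
                  InstanceId="i-0123456789abcdef0",   # placeholder instance
                  Device="/dev/sdf")

# "Array-side snapshot": point-in-time copy, taken without the instance's help.
ec2.create_snapshot(VolumeId=vol["VolumeId"], Description="pre-upgrade")
```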

Hyper-converged infrastructure bundles compute and storage together, trading some SAN flexibility for dramatically simpler deployment.

The Evolution

Traditional SANs used spinning disks. All-flash SANs replaced them with SSDs, delivering performance that spinning disks could never match—microsecond latency instead of milliseconds.

NVMe over Fabrics pushes further, using modern storage protocols that were designed for flash from the ground up, rather than adapting protocols designed for slower media.

Software-defined storage abstracts the hardware entirely—SAN-like features running on commodity servers and disks, without proprietary arrays.

The destination is clear: storage as a service, whether from a cloud provider or an on-premises software layer, with the hardware becoming invisible. But the core concept remains unchanged: storage that exists independently of the servers that use it.
