FreeNAS Server Project

How I started

Going into this project, I only had experience building desktop computers and didn’t know what it meant to build a server. This is the “server” I built from the ground up; I even built the case from sheet metal. This project is where I found my passion and the desire to learn more about infrastructure environments.

Hardware

The project started when I was gifted a few hard drives and an AMD A10 6800K. To put them to use, I purchased a motherboard from the NCIX outlet on eBay for ~$70 and used a power supply that was lying around.

Motherboard/CPU

The very base of this project was the capabilities of the A85X chipset. With support for 8 SATA3 ports, it allowed me to use the odd collection of hard drives I had lying around, and it also supports the A10 6800K APU I was given.

Memory/Hard drives

Since FreeNAS recommends 1GB of RAM per terabyte of storage, the amount of memory depended on how much space I intended to use, in order to get decent performance out of ZFS. The odd array of 2x500GB, 3x250GB, and 2x1TB drives occupied 7 SATA ports and amounted to roughly 5.75TB of raw storage. Since the project was still early, I put them in a striped array. On top of the ~6GB recommended for that raw storage, FreeNAS itself needed RAM, so I settled on 8GB of 1333MHz Crucial Ballistix.
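As a minimal sketch of that sizing rule (the rounding-up step is my own assumption, not part of the FreeNAS guideline):

```python
import math

# Rule-of-thumb RAM sizing for ZFS, per the FreeNAS guideline above.
raw_storage_tb = 5.75              # raw capacity of the striped pool
zfs_ram_gb = raw_storage_tb * 1.0  # ~1 GB of RAM per TB of storage

# Round up to the next power-of-two DIMM total; the OS itself needs
# headroom on top of the ZFS read cache (ARC).
installed_gb = max(8, 2 ** math.ceil(math.log2(zfs_ram_gb)))
print(f"~{zfs_ram_gb:.1f} GB for ZFS -> {installed_gb} GB installed")
```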

Power Supply

The power supply is an OCZ ZS 550W unit. It was originally purchased to replace a unit that died in a Q6600 system, but since that system isn’t in use, I pulled the PSU and repurposed it. It was perfect: 8 SATA power connectors to power all those hard drives. Moderate in wattage, it was more than enough to feed hard drives rated at 12V, 0.8A and drawing roughly 6 watts each.

Software

I chose FreeNAS simply because it was free. I didn’t know what it was or how to use it, but the internet was filled with resources. My main goal at the time was to run a file server accessible over the network. Windows didn’t make the cut because I didn’t want to use HomeGroup sharing (I didn’t know about regular CIFS sharing via Windows at the time). I would have had to join every client to the HomeGroup, and that wouldn’t scale.

The needs were simple: user-based permissions supported on every platform – macOS, Windows, Android, iOS. The issue with regular Windows NTFS sharing was that macOS would have required a domain account, and I had no knowledge of Active Directory, nor did I want to spend the money on a Windows Server license (I was still in high school). UNIX-based permissions were perfect since they were simple – I could just assign users to a specific group. The only limitation was that each share or folder can be owned by only one group, so assigning multiple groups ownership of particular shares and folders wasn’t possible.
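A minimal sketch of that POSIX model (the path and group ID below are hypothetical, not my actual share):

```python
import os
import stat

# One owning user, ONE owning group, and a permission mode -- which is
# exactly the limitation mentioned above: a share can't be owned by two
# groups without resorting to ACLs.
share = "/mnt/tank/documents"  # hypothetical dataset mount point
os.chown(share, 0, 1001)       # owner: root (uid 0), group: gid 1001
os.chmod(share, stat.S_IRWXU | stat.S_IRWXG)  # 0770: group gets rwx, others nothing
```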

The file server was ready with cross-platform support. But since it only stored system images and backups of my documents, there was no need to have it running 24/7. I would regularly turn it off at night and boot it back up when I needed to sync the backup folders (I used Synkron for this).

One of the first issues was very slow transfer speeds. I thought striped volumes were going to improve transfer performance, and I was told that a gigabit switch was needed to see decent throughput. My original setup had the server connected to a TP-Link WDR-3500. I wanted to use that router since it was the first one I had bought, and I kept it in my room because it had 5GHz. Its ports were only Fast Ethernet, though, so real-world performance was 11MB/s. I then bought a 5-port D-Link gigabit switch and performance jumped to about 120MB/s. Fast storage, large capacity – this was just the beginning.

It was a rather specific use case at the time – turning on the server whenever I needed it and syncing my document folders as a way to back up my data. That went on for about 2 years, and it became my main storage volume. The 5.75TB was almost full, so I added a 3TB drive to increase the amount of storage, but because of conversion it only really amounted to ~7.8TB. The server had no cooling system in place as it was installed in the case I made from sheet metal, so I migrated everything into an actual case that had 8 3.5″ bays.

SMART errors started to come up, and it was really risky to have all this data stored on a striped volume, so I ordered 3 more 3TB Seagate drives during Black Friday – about $330 in cost, but it allowed me to have redundant storage. There was an issue though: I had to find a way to offload all the data and recreate a full RAIDZ array.

So what I did was use all the extra hard drives I had (on top of what was in the server and the extra 3x3TB drives) and copy the data over via my local machine. I also had a 3TB and a borrowed 4TB NAS to use. While it did take a long time to copy over >6TB of data, I was then able to decommission the striped array.

I set up the new 4x3TB array in RAIDZ (effectively RAID5, with 1-drive fault tolerance) and put the 4 largest disks from the old striped volume in a separate volume. This in effect gave me one large, resilient storage volume and a smaller, high-performance volume to act as a scratch disk. Temporary files that I don’t need protection for can reside on the striped disks, and everything that is important can reside on the RAIDZ array. With 4x3TB disks in RAIDZ, this gives me 7.65TB of storage after conversion; the other array had about 2TB of usable space.
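As a rough sketch of where the “after conversion” figure comes from (plain parity counting plus the TB-to-TiB conversion; the remaining gap is ZFS overhead):

```python
# Back-of-the-envelope check of the "after conversion" numbers.
# RAIDZ keeps one drive's worth of parity, and drive makers count in
# TB (10^12 bytes) while FreeNAS reports TiB (2^40 bytes).
n_drives, drive_tb = 4, 3

usable_tb = (n_drives - 1) * drive_tb  # 9 TB after parity
usable_tib = usable_tb * 1e12 / 2**40  # ~8.19 TiB as the GUI counts it

print(f"{usable_tib:.2f} TiB usable before ZFS overhead")
# FreeNAS shows a bit less (7.65) once ZFS metadata and reservations
# are taken off the top.
```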

IP Cameras

Another project was coming along – IP cameras. The scratch disk was perfect as the recording medium for them. The cameras were recording at 1080p@25fps, which equates to about 8Mbps of bandwidth per stream, but real-world throughput was about 3MB/s per camera due to h.264 encoding overhead. While it added up to about ~10MB/s (80Mbps), it didn’t affect the regular transfer speeds of other clients. If anything, the network was always under 10% load.
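A rough storage-budget estimate for continuous recording (the 3MB/s per-camera rate is the real-world figure above; the camera count is illustrative, chosen to land near the quoted ~10MB/s aggregate):

```python
# Convert the per-camera write rate into aggregate bandwidth and
# footage volume per day.
cameras = 3      # illustrative count, not the actual installation
mb_per_sec = 3   # observed real-world rate per camera

aggregate_mbps = cameras * mb_per_sec * 8         # ~72 Mbps on the wire
gb_per_day = cameras * mb_per_sec * 86400 / 1000  # ~778 GB of footage/day

print(f"~{aggregate_mbps} Mbps aggregate, ~{gb_per_day:.0f} GB per day")
# At that rate a ~2 TB scratch volume only holds a few days of
# continuous footage, so older recordings have to be rotated out.
```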


Our college capstone project came around in the last year of school, and we decided to use this FreeNAS box to host virtual machines (more on this later). However, our limiting factor was the amount of RAM installed: with all 4 slots occupied, we had a total of 8GB. We borrowed 16GB (2x8GB) and ordered another 2x8GB set, for a total of 32GB. That solved our problems, and once the project was completed and the borrowed RAM was returned, the configuration was changed to 2x2GB + 2x8GB, totaling 24GB. Still really beefy for a system that is underutilized.

Since money was also a limiting factor in terms of upgrading this build further, I had to wait quite a while. Eventually some pulled enterprise drives were sold for a really low price, so I was able to grab 4 more 3TB drives. I had two options: create a single 8-drive volume with 2-drive fault tolerance, or keep the current RAIDZ volume and create a second one alongside it. Although the first option would offer the same usable space as two separate RAIDZ volumes while tolerating any 2 drive failures within the volume, I had already used up over half of the new 7.65TB RAIDZ volume and wasn’t going to offload the data again to create a larger, more protected volume.
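A quick sketch of the trade-off between the two layouts (plain parity counting, ignoring ZFS overhead and TB-to-TiB conversion):

```python
# Capacity vs. resilience for the two layouts considered above.
drive_tb = 3

# Option 1: one 8-drive RAIDZ2 vdev (two parity drives)
raidz2_usable = (8 - 2) * drive_tb  # 18 TB, survives ANY two failures

# Option 2: two 4-drive RAIDZ vdevs (one parity drive each)
raidz1_usable = 2 * (4 - 1) * drive_tb  # 18 TB, survives one failure per vdev

print(raidz2_usable, raidz1_usable)  # same space, different protection
```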

Future Hardware Plans

So now it had 2 sets of 7.65TB of resilient storage – technically allowing for 2 drive failures (1 per volume). In effect, I have 24TB of raw (unconverted, of course) storage space, with a matching 24GB of RAM for FreeNAS to use. The only bottleneck at this point is network speed. Limited by gigabit, transfer speeds max out at around 120MB/s. This is what I plan on improving, with plans for copper 10Gbit NICs and switches. But since the home is currently wired with Cat5e, the only option is a single peer-to-peer link between my computer and the server using two 10Gb Ethernet NICs.

Unfortunately, with that many enterprise disks spinning, it was loud and hot. My room would go over 30 degrees Celsius during hot summer days. As a result, I moved the server to the basement, since it needed to be on 24/7 to be of any use for IP camera surveillance. There are a few articles on running 10GbE over Cat5e across short distances, so that may be an option when it comes time to test. Otherwise, gigabit will have to do.

At this point in the game, I’ve realized the potential of what FreeNAS can do for me – especially since it is based on FreeBSD. Aside from being a file server for backups and CCTV footage, I’ve managed to install a FAMP stack (FreeBSD, Apache, MySQL, PHP) to work as a web server. The content you are reading is being served off the FreeNAS server right now.

Future Software Plans

The first future plan for this project is to get the hypervisor working again. Upgrading from FreeNAS 9.10 to 11 removed the phpVirtualBox jail template, and as a result I wasn’t able to run the virtual machines anymore. I would have to install the bhyve hypervisor and see how the process goes from there.

I would also like it to become a firewall appliance – using a 4-port gigabit NIC to manage inbound and outbound connections, especially with QoS enabled, as we currently experience bursts of latency when someone sends a Snapchat. I was looking into pfSense and Sophos’ UTM home appliance ISOs. I’ve played around with Sophos’ firewall solution, but I would like to see what pfSense is like – more on that in a later post.

With hardware and software upgrade plans down the road, I’m excited to see what is possible in a larger-scale environment in terms of performance and function.
