Tuesday 12 May 2015

Installation of a Linux RAID1 Server

A couple of years back, in 2010, the PC I was working with at work was replaced. I asked around: what happens with the hardware? They told me the guy in charge of decommissioning the PCs gave them away in exchange for a donation to the local children's hospital, around 20 Euros usually. He said he didn't want to charge any more than that; it ensured that most of the PCs ended up being sold. That seemed like a pretty good deal to me for an HP desktop manufactured around 2005, with an 80 GB hard drive, 1 or 2 GB of RAM and a CD drive.

At the time, I'd been looking to get a NAS drive, and I wasn't sure what to do: get something new out of the box, or give it a go with software RAID. In casual conversation with my then boss Uwe (thanks for the idea man!) we brought up the subject, and he said: "Well, ready-made systems are all nice and good, easy to set up, in no time you're ready to go, and they're not even expensive anymore. But should the RAID controller fail, if it's a proprietary file system, you might end up with all your data lost anyway." No point then in having a proprietary system. But that's OK, I had my new desktop now; I'd install Ubuntu Server on it and configure a software RAID.

I bought a pair of 640 GB SATA drives, a switch, and about 20 meters of cable, and sat down to install the drives with the laptop beside me to check for instructions.

I installed both the server OS and the data partition on the mirrored disks, and used the 80 GB disk as a shared folder. That configuration lasted until January 2015, something over four years, four years in which my wife and I managed to fill up the drives.

I didn't realize my mistake back then: it turned out that if I wanted to increase the size of the array, it'd be a pain, because the operating system was on the mirrored array too. I tried. It was. It didn't work for me. Last Friday I was sitting in front of a stripped-down server with two empty 2 TB drives, two full 640 GB drives, and no clue.

What do I do? Avoid the last mistake: install the OS on the single drive (sda) and create the RAID1 array on the two 2 TB drives (sdb and sdc). Bad news though: I'd have to transfer the data manually. The good news... the data was still there, and readable!

After four years I did the ultimate test: remove the RAID array drives, put one of them in a Linux machine, mount the disk and read the data. And in the end, I managed to install a brand new RAID1 array on the same hardware as five years before. In case I have to go through the same again, I'm writing it down here. If you want to try this on your own, go ahead, by all means, but let me tell you already: it takes time, you need at least a basic understanding of Linux commands and some networking skills, and it's really nerve-racking!

I'll also post links to the useful sites I got the information from; it's up to you to decide which steps you follow (they are all more or less the same anyway). Just in case: if you lose your data, it's not my fault! A RAID1 array provides protection in case one of your drives fails. It's up to you to replace a failed drive as soon as possible. It's a backup of sorts, but if something happens to the server, you never know... at least, I don't know what can happen.

After this preamble, read on, and I will try to guide you through what I did, and if I can, tell you which pitfalls to avoid along the way.

Server installation

The Ubuntu Server installation is really straightforward. Ubuntu Server is practically an Ubuntu system with no graphical user interface. Download the image, burn it to a CD or USB stick, restart the computer, enter the BIOS and select CD or USB as your preferred boot medium. There's tons of information on how to download and burn an ISO image to CD or USB, so I'm not going to stress that any more.

I will point out a few things that happened to me though. I had been using Ubuntu 10.04 on the former server, and it was working quite well. For whatever reason, I decided to download and install Ubuntu Server 12.04.5. It turns out, when I wanted to install the NFS server, it didn't let me! I never found out why; I went and upgraded the system to 14.04, and that worked for me. Choose either Ubuntu Server 10.04 or 14.04.

Other important things: have only the smaller drive plugged into your PC while installing, and try to find out which is your first (SATA0 or SATA1) connection. In my case the installer assigned sda to that drive. You don't need the RAID array drives just yet, and it helps avoid confusion about which partitions or devices to choose. Keep it simple.

During the installation, you'll have to enter information about the root account, keyboard, and which services should be installed. There's a list to choose from; choose at least:
  • OpenSSH
  • SAMBA
OpenSSH allows you to connect remotely to your server. That's a good thing. Mine resided in a closet for a number of years, and there's nothing better than logging in remotely when necessary.

SAMBA is the service that provides MS Windows shares from the Linux machine. Mac and Windows PCs usually connect to Windows shares.

Once the system is installed, you'll want to configure the network. I chose a static IP address for the server on my home network. All of the other devices will always request connections to the server, and since it has to be reachable at the same address every time, it's good to have it static. It makes some things a bit faster too.

To set up a static IP address you'll have to modify the /etc/network/interfaces file. I use vi as my editor. I find it OK, mainly because I don't use it too often, and because I don't use it too often, I haven't found the motivation to learn it properly.

Edit the network interfaces file as root:
user@server:~$sudo vi /etc/network/interfaces

Replace the lines
auto eth0
   iface eth0 inet dhcp

with
auto eth0  
iface eth0 inet static  
    address 192.168.1.100  
    netmask 255.255.255.0  
    network 192.168.1.0  
    broadcast 192.168.1.255  
    gateway 192.168.1.1  
    dns-nameservers 192.168.1.1  

and restart the network interface.
user@server:~$sudo /etc/init.d/networking restart

Ping your favourite test website, and if that works, move on to the next step.
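
For example (any host you know is up will do; ubuntu.com is just a handy one):
user@server:~$ping -c 4 ubuntu.com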

At this point, you're ready to install both RAID disks. Turn off the server
user@server:~$sudo shutdown -h now

and plug in the disks.

Creating the RAID arrays

At this point, you've probably gone to the shop and bought two disks of the same size and make, and you want to make sure they're recognized by the server. fdisk will show you information about the disks on the server:
user@server:~$sudo fdisk -l

Granted, it's a lot of information, and you will want to have a closer look at each device. There are other tools for checking disks and sizes; I'll not go into them, but you'll find some links at the end. We also need to create the partition tables on each device. Again we'll use fdisk, in this example with the sdb drive.
user@server:~$sudo fdisk /dev/sdb

fdisk will display the following menu:

  Command (m for help): m 
  Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

  Command (m for help):
We'll add a new partition -> 'n'. fdisk displays
  Command action
   e   extended
   p   primary partition (1-4)

We'll create a primary partition, partition number 1. Enter 'p'
  Partition number (1-4):

and '1'.
  Command (m for help):


In case the prompt asks for a first cylinder, type '1' and Enter, and accept the defaults for any remaining prompts (such as the last cylinder), so the partition spans the whole disk. Finally, we'll write the partition table to the disk. Enter 'w'. This should appear
  The partition table has been altered!

Repeat the steps for your second drive, sdc.
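
It doesn't hurt to double-check that both new partitions actually exist. Assuming your drives came up as sdb and sdc like mine, a listing of both devices should now show sdb1 and sdc1:
user@server:~$sudo fdisk -l /dev/sdb /dev/sdc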

Before we create the array, we need to install mdadm. mdadm is the tool that lets us create, configure and monitor our RAID array. If you're running Ubuntu, it's as easy as
user@server:~$sudo apt-get install mdadm

We'll create the array by combining the partitions sdb1 and sdc1 that we just created into a new device, /dev/md0. /dev/md0 will be our RAID1 device.
user@server:~$sudo mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1

Great! Our device has been created! But don't start copying data or reboot your server just yet. At this point, the configuration has not been saved, nor has the array been formatted. Let's format the array
user@server:~$sudo mkfs.ext4 /dev/md0

and copy the details of the array into the mdadm.conf configuration file. Enter
user@server:~$sudo mdadm --detail --scan

You should receive the array's information in a format similar to this:
ARRAY /dev/md0 metadata=1.2 name=hostname:0 UUID=3326abcd:1234fa87:12340f15:abcd4ae4
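
If you'd rather not copy it by hand, the same command can append its output straight to the configuration file (check the file afterwards, so you don't end up with duplicate ARRAY lines):
user@server:~$sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf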

Paste that ARRAY line as the last line of /etc/mdadm/mdadm.conf (by hand or with the one-liner above), and before rebooting the system execute
user@server:~$sudo update-initramfs -u

If you don't update the initramfs, the new array md0 you just created disappears after a reboot; instead you get a RAID array named md127.


Mounting the array

The array is ready to use, but at this point it hasn't been mounted yet. Mounting the array is no big deal, but we don't want to mount it manually every time we start our server! We'll create a folder where the server will automatically mount the RAID array at each startup (you can use a name that suits you better)
user@server:~$sudo mkdir /media/raidArray

and we modify the /etc/fstab file adding our md0 array to the previously created mountpoint. Just add the following line into /etc/fstab:
/dev/md0 /media/raidArray ext4 noatime,rw 0 0 

with no blank space or line at the end of the file. The fields are separated by whitespace; I used a tabulator between each entry. You can reboot the system, and after logging in you should be able to write data to your array. You can check that the array has been correctly mounted by running
user@server:~$cat /proc/mounts 
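
You can also test the new fstab entry without rebooting at all: mount -a mounts everything in /etc/fstab that isn't already mounted, and df then shows the array with its size and mountpoint:
user@server:~$sudo mount -a
user@server:~$df -h /media/raidArray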

But let's also check that the array has been created properly and it's running. Two useful commands:
user@server:~$sudo mdadm --detail /dev/md0 
/dev/md0:
        Version : 1.2
  Creation Time : Sat Feb 21 11:18:20 2015
     Raid Level : raid1
     Array Size : 1953382336 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953382336 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat May  9 20:30:23 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : dagobah:0  (local to host dagobah)
           UUID : 3326d607:a531fa87:54b80f15:c6f94ae4
         Events : 142

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

and
user@server:~$cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[0] sdc1[1]
      1953382336 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

You might not get the same; in fact, just after you create the array, the above command might show a syncing status and a percentage progress bar. For a 2 TB array it took a good hour or two. As far as I read, you can still work with the disks during the sync, but I'd suggest letting the array sync first and then start meddling.
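
If you're impatient like me, you can keep an eye on the progress: watch re-runs the command every two seconds, and Ctrl+C stops watching (the sync itself carries on regardless):
user@server:~$watch cat /proc/mdstat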

Sharing data over the network

Slowly we get to the point where we can share the space over the network. I'll differentiate between two types, Linux (NFS) and Windows. Technically, a Linux machine will be able to access a Windows share, but a Windows machine won't access an NFS share out of the box. Furthermore, a Linux share using NFS is faster than a Windows share (also known as a Samba share).

Let's start with the Linux share.

What a pain it was. Remember I mentioned my old server was running top notch, not a care in the world, it just needed extra space? Well, I got the space, but it became slower! Slower to connect, didn't want to connect at startup... and that was because the NFS installation was using version 4. After a lengthy search, I found out the problem was solved by telling the server to use version 3, and then all was well! But let's focus on the NFS installation first.

Server side

First of all, we install NFS on the server:
user@server:~$sudo apt-get install nfs-kernel-server nfs-common 

and we tell the server which folders we'll share with whom by editing the file /etc/exports. In this example we'll keep it simple, using the standard options and sharing just the root folder of the array. The lines starting with '#' are comments; the important line is the last one:

# /etc/exports: the access control list for filesystems which may be exported
#  to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/media/raidArray 192.168.1.0/255.255.255.0(rw,sync,no_root_squash,no_subtree_check)

Basically it means: share the folder /media/raidArray with the 192.168.1.x network (set by 192.168.1.0/255.255.255.0) with read and write permissions (the rw option). I'll be honest, I don't know and/or don't remember the other options, but as far as I read they're pretty standard.
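
Just as a sketch of what else is possible: if you'd rather share with a single machine instead of the whole network, an entry like the following should work, where 192.168.1.50 is a made-up client address (note there's no space between the address and the options):
/media/raidArray 192.168.1.50(rw,sync,no_root_squash,no_subtree_check)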

There's one last step before we go to our client: disable version 4! Edit the file /etc/default/nfs-kernel-server and add the --no-nfs-version 4 flag:

RPCNFSDCOUNT='8 --no-nfs-version 4' 

Update the NFS server's export information by running:
user@server:~$sudo exportfs -ra

This step has to be run every time the /etc/exports file is updated or modified. It's best to reboot the server, or at least restart the NFS service:
user@server:~$sudo service nfs-kernel-server restart
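
To double-check what the server is actually exporting, exportfs can also list the current exports with their options:
user@server:~$sudo exportfs -v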


Linux client side

Install the NFS client on the machine:
user@client:~$sudo apt-get install nfs-common 

Create a folder to mount the NFS share on:
user@client:~$sudo mkdir /media/nfsMount

You could mount the NFS share manually, but I'm not going to bother with that (there's enough good information about that on the Internet). Instead, we'll add an entry to our client's /etc/fstab file, so the NFS share is always available at startup but not mounted automatically. Add the following line at the end of your client's /etc/fstab (192.168.1.100 being the static address we gave the server earlier):
192.168.1.100:/media/raidArray /media/nfsMount nfs rw,noauto,async,users 0 0

Meaning:
192.168.1.100:/media/raidArray     -> The remote folder we'd like to access 
/media/nfsMount                    -> The client's mount point
nfs                                -> The mount type
rw,noauto,async,users 0 0          -> Options

Again, I'm not familiar with all the options, but the noauto flag prevents the client from mounting the NFS share at startup. Instead, it will be visible in your desktop or file manager, grayed out, until you want to access it.
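
Thanks to the users option, you don't even need root on the client to mount the share by hand. Once the fstab entry is in place, something like this should work:
user@client:~$mount /media/nfsMount
user@client:~$ls /media/nfsMount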

And now for the Windows side

I'll just describe how to configure the Samba server. There's heaps of information around on how to connect a Windows PC to a shared folder over the network, but I'll give some useful hints.

First of all, if samba is not installed, just type:
user@server:~$sudo apt-get install samba  

Samba shares are defined in the file /etc/samba/smb.conf. This file is created when you install Samba, and it comes with a lot of useful information as well as configuration parameters. First of all, we need to decide which workgroup we're going to use. As far as I recall, Windows machines use either "MSHOME" or "WORKGROUP" as standard. To make things easy, we can change the workgroup on our server by modifying smb.conf, near the beginning:
#
# Sample configuration file for the Samba suite for Debian GNU/Linux.
#
#
# This is the main Samba configuration file. You should read the
# smb.conf(5) manual page in order to understand the options listed
# here. Samba has a huge number of configurable options most of which 
# are not shown in this example
#
# Some options that are often worth tuning have been included as
# commented-out examples in this file.
#  - When such options are commented with ";", the proposed setting
#    differs from the default Samba behaviour
#  - When commented with "#", the proposed setting is the default
#    behaviour of Samba but the option is considered important
#    enough to be mentioned here
#
# NOTE: Whenever you modify this file you should run the command
# "testparm" to check that you have not made any basic syntactic 
# errors. 

#======================= Global Settings =======================

[global]

## Browsing/Identification ###

# Change this to the workgroup/NT-domain name your Samba server will part of
   workgroup = WORKGROUP
.......

Now we'll set up the folder we'd like to share. Scroll way down to the end of the file, and you can add your own entries, such as:

[RaidArray]
path = /media/raidArray
available = yes
valid users = myuser,seconduser
read only = no
browseable = yes
public = no
writable = yes


Remember to leave an empty line at the end of the file. The options are mostly quite obvious, and you can tweak them to your heart's content!
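
As the comments at the top of smb.conf already suggest, it's worth running testparm after editing to catch syntax errors, and then restarting the service so the new share shows up (on my Ubuntu 14.04 the service is called smbd):
user@server:~$testparm
user@server:~$sudo service smbd restart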

One thing about users and passwords: when you're on the Windows machine and want to log in to the Samba share, use the Linux server credentials, otherwise you might be able to read from the folder, but you won't have write permissions. In the worst case, you won't be able to log in at all!
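
One more hint from my own fumbling around: Samba keeps its own password database, separate from the Linux one, so if the login is refused outright, adding the user to Samba explicitly may be the missing step (myuser being the example user from the share definition above):
user@server:~$sudo smbpasswd -a myuser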

Conclusion

Lots of information, I know, but I wanted to have a repository where I'd keep it all together: links, problems, hints, in case I ever do it again.

At this point in time my server is running pretty much 24/7 as a test, to see how robust it is. About once a week I check the RAID array status, and all seems OK. Let's see the electricity bill when it arrives. As for speed, I can't complain: I get about 40 to 50 MByte per second on a Gbit network. Streaming to peripheral devices is absolutely no problem, and I have enough space for the next couple of years.

I still have the old 640 GB drives lying around my desk. After I started this project (with initially disastrous consequences) I was happy that I could read the data from my Linux PC. I don't know if that would have been possible with a commercial system. Possibly yes, but at what expense?

Finally, this is not a 100% reliable, nothing-is-ever-gonna-happen system, but it's a bit better than just having a single drive. If something fails while you're trying this, it's not my fault! It's a fun and nerve-racking project; it doesn't take too long, but it takes some effort.

Useful links
