Connecting Multiple Hard Drives
#1

I have been on Linux for about 2 weeks now, and I am still a GIANT n00b. That's why I am on my Windows partition to write this topic. Anyway, I am looking for a way to link up multiple hard drives. I am looking at starting a server, so I have been researching tutorials on how to install Apache, PHP, nginx, and Java. I was on a webpage recently and saw a complete server rack with multiple hard drive cases, and I wondered how they link all those RAID drives together. As usual I used Google, and I got no results. So here's my question:



How do I link up multiple RAID drives together to form 1 main, networked drive?





Reply
#2

Quote:I have been on Linux for about 2 weeks now, and I am still a GIANT n00b. That's why I am on my Windows partition to write this topic. Anyway, I am looking for a way to link up multiple hard drives. I am looking at starting a server, so I have been researching tutorials on how to install Apache, PHP, nginx, and Java. I was on a webpage recently and saw a complete server rack with multiple hard drive cases, and I wondered how they link all those RAID drives together. As usual I used Google, and I got no results. So here's my question:

 

How do I link up multiple RAID drives together to form 1 main, networked drive?
 

Welcome to the forums! :) I have no experience with RAID, but I did find a howto that may help you. Hybrid or Dave may also be able to help you with this. Cheers!!

Reply
#3

Welcome to the forums, domefavor95! :)

 

Quote:How do I link up multiple RAID drives together to form 1 main, networked drive?
 

I'm a little confused as to what you are asking here. Are the drives already in a RAID array, or is that just something you want to set up? When you say 'networked drive', do you mean something like a Windows share?

 

If you haven't got RAID set up already, you have a few options. Higher-end servers come with hardware RAID cards: the linking up of multiple drives, the redundancy and so on are all handled by the card, which presents the whole array to the OS as if it were a single drive. Alternatively, many motherboards offer so-called 'fake RAID' (firmware/BIOS-assisted RAID), which Linux can also use, though I haven't got any experience with that.
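
If you're curious whether a machine has a dedicated RAID card at all, one quick way to look -- just a sketch, assuming a typical PCI-attached controller -- is:

Code:
lspci | grep -i raid

If nothing comes back, any RAID you set up will be the software (or 'fake') variety.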

 

Finally, you can set up Linux software RAID. The performance isn't as good as a 'proper' RAID card, but it works quite well. My server has two drives in a Linux software RAID 1 (a mirror), so that if either drive fails, the server stays running until I can replace the bad drive.
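
If you ever want to peek at the health of a software array like mine, this is the sort of thing to run (a minimal sketch -- the array name md0 is just an example, yours may differ):

Code:
cat /proc/mdstat
sudo mdadm --detail /dev/md0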

 

Let us know exactly what it is you're looking to do. :)

Reply
#4

As others have mentioned, if you have onboard RAID (H/W) then all the disks can be bound together in a RAID array and will just present themselves to the underlying OS as a single disk.

 

The alternative is to use mdadm (multiple-device admin) for S/W RAID - I use this to mirror two 320GB disks.
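
For illustration, creating a mirror like mine looks roughly like this (a sketch only -- /dev/sdb1 and /dev/sdc1 are assumed partition names, and mdadm --create will destroy whatever is on them):

Code:
# build a two-disk RAID 1 (mirror) array as /dev/md0
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# the array then behaves like one disk: format and mount it as normal
sudo mkfs.ext3 /dev/md0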

 

What RAID level are you looking at? Linux S/W RAID supports levels 0, 1, 4, 5, 6 and 10.

Reply
#5

Thanks for the replies. :)

 

I am going to start up a website that will be handling large files, and I have hard drives that I am not using at the moment. I am looking for a way to link the hard drives together to function as one giant drive.

Reply
#6
You could also work with Logical Volumes and Volume Groups. You can set this up from most Linux installers: create LVM partitions first (be sure to leave 200MB or so for /boot), and then make the rest LVM partitions. Then you can make volume groups, and within a group you can assign /, swap, /home, etc. But it seems like you want to use RAID in case of a hard disk failing? I've never used RAID, though -- it may be fun to try once I buy the new PC I'm planning to use as a server.
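
(For reference, here's a rough sketch of the manual equivalent of what the installer does -- the device name /dev/sda2 and the sizes are assumptions, adjust them for your own disks:

Code:
sudo pvcreate /dev/sda2                 # mark the partition as an LVM physical volume
sudo vgcreate vg0 /dev/sda2             # gather it into a volume group
sudo lvcreate -L 20G -n root vg0        # carve out logical volumes for / ...
sudo lvcreate -L 2G -n swap vg0         # ... swap ...
sudo lvcreate -l 100%FREE -n home vg0   # ... and give /home the rest

)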
Reply
#7

There are a couple of ways you could achieve this -- software RAID at level 0, and LVM.

 

I think LVM is probably more flexible in this particular case, so let's look at that.

 

A word of warning -- when you combine multiple disks together to form one great big volume like this, you multiply the likelihood of losing data to a drive failure by roughly the number of drives, because if any one drive fails, it brings down the whole ship. (With two drives, for example, the volume is about twice as likely to be lost as one kept on a single drive.) You definitely need a backup of the whole volume if you're not willing to lose everything on it.

 

And, as with any operation involving formatting and repartitioning disks and such, you should have a backup before trying any of this anyway!

 

Ubuntu's Alternate installer offers a way to create LVM at install time, but by the sounds of it, you're already installed.

 

If you're already installed, you can still set it up, although a fair bit of command line/terminal action will be required. This is a more advanced setup than most people's Linux installs.

 

I created a little virtual machine test environment to have a play with LVM, and I managed to create one big volume from two virtual disks.

 

(A quick introduction to the way Linux talks about disks and partitioning.

 

Notice that Linux labels your physical disks like this:

 

sda -- first disk

sdb -- second disk

sdc -- third disk

 

and so on.

 

Partitions on individual disks are labelled with a number after the letter of the disk:

 

sda1 -- first primary partition on first disk

sda2 -- second partition on first disk

and so on...

 

However, if you have an extended partition with any number of logical drives in it, the first logical drive is labelled '5', regardless of whether there is or isn't a '4'!

 

sda5 -- first logical disk inside extended partition on first disk

sda6 -- second logical disk inside extended partition on first disk

 

These disk identifiers, when combined with the prefix /dev/, are the 'device nodes'.

)
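
You can see how your own disks and partitions are labelled by listing the device nodes:

Code:
ls -l /dev/sd*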

 

So, I have three disks in my test VM -- sda, the disk where Ubuntu is installed. I don't want to touch that at all. sdb and sdc are the two disks I want to combine into one big LVM volume.

 

These aren't instructions specifically for your setup, so don't follow them just yet. You'll likely have bigger disks and other differences, and we need to understand more about your specific environment. This should give you a taste of how it should be done, so here is what I did:

 

1.

 

Created a partition on sdb, and a partition on sdc, each of which filled the whole disk. You should be able to use GParted Live CD for this. I now have sdb1 and sdc1, where I want the volume group to be created.

 

I used Linux fdisk to set the partition type of each of these to '8e', which is the partition type number for Linux LVM. This is so they are detected properly.

 



Code:
sudo fdisk /dev/sdb
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e

Command (m for help): w

Calling ioctl() to re-read partition table




 



Code:
sudo fdisk /dev/sdc
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e

Command (m for help): w

Calling ioctl() to re-read partition table




 

Now, this is what the disks on my system look like:

 



Code:
$ sudo fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b44e6

  Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    39987199    19992576   83  Linux
/dev/sda2        39989246    41940991      975873    5  Extended
/dev/sda5        39989248    41940991      975872   82  Linux swap / Solaris

Disk /dev/sdb: 12.9 GB, 12884901888 bytes
128 heads, 33 sectors/track, 5957 cylinders, total 25165824 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa93a8c3d

  Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    25165823    12581888   8e  Linux LVM

Disk /dev/sdc: 40.8 GB, 40802189312 bytes
149 heads, 52 sectors/track, 10285 cylinders, total 79691776 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x04ff00e1

  Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    79691775    39844864   8e  Linux LVM




 

2.

 

Back in Ubuntu, I installed LVM2:

 



Code:
sudo apt-get install lvm2




 

Now, we create a new Volume Group with the two partitions on the two physical disks we want to use. The Volume Group is just a collection of disks that we want to use with LVM; creating it doesn't do anything but gather the disks together, ready for the next stage.

 



Code:
sudo vgcreate vg1 /dev/sdb1 /dev/sdc1




 

We ask 'vgcreate' to create a volume group named 'vg1', using /dev/sdb1 and /dev/sdc1.
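
If you want to double-check what you have at this point, LVM's reporting commands are handy (these only read state, they change nothing):

Code:
sudo pvs    # the physical volumes LVM knows about
sudo vgs    # the volume groups, with total and free sizes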

 

3.

 

With the Volume Group created from our two disks, we now need to create a volume inside it:

 



Code:
sudo lvcreate -L 45G -n data vg1




 

We ask 'lvcreate' to make a 45 GB logical volume named 'data' on volume group 'vg1'.

 

That ends up as 45 GB or so. I probably could have made it slightly larger, but as you can see, it is larger than the 40 GB sdc disk, so we must be using the combined size of both disks for the volume!
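
Incidentally, if you don't want to do the size arithmetic yourself, lvcreate can be told to use all of the free space left in the group:

Code:
sudo lvcreate -l 100%FREE -n data vg1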

 

4.

 

We now need to format the new logical volume with a filesystem, so we can actually write files to it (the -j flag asks mke2fs for a journalled filesystem, i.e. ext3).

 



Code:
sudo mke2fs -j /dev/vg1/data




 

As you can see, the 'device node' for the new LVM logical volume that we made is accessed through /dev/, followed by vg1 (the Volume Group), followed by data (the Logical Volume name itself).
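
You can confirm the device node is there (on most systems it's actually a symlink into /dev/mapper):

Code:
ls -l /dev/vg1/data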

 

5.

 

Finally, we need to create a mount point, which is where the files will actually appear and be accessible from. We must also mount the volume at that mount point.

 

A mount point is just a directory somewhere. Let's say /data.

 



Code:
sudo mkdir /data




 

Now, we will edit the file /etc/fstab, and add our new logical volume to the list of filesystems that we want mounted and ready when the computer starts up.

 



Code:
sudo gedit /etc/fstab




 

At the bottom, add a new line with the device node, mount point and a few other options (which aren't too important right now).

 



Code:
/dev/vg1/data       /data      auto     rw      0     0




 

Save the file and quit. Finally, let's mount that now so we don't have to reboot to have access to it:

 



Code:
sudo mount -a




 

Now, you can access /data, drop files in there and they will be saved onto these disks. How exciting!
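
You can verify the mount and see the combined size with:

Code:
df -h /data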

 

-----

 

If you do want to proceed further with this, it would be helpful to know exactly which version of Ubuntu you're running, and also to see the output of:

 



Code:
sudo fdisk -l




 

in a terminal, when you have your extra disks connected. This will show all the information about your disks.

 

Let us know if you want to go further! This is more advanced stuff, but it is quite possible to do. Just make sure you have your stuff backed up. :)

Reply
#8

Quote:I am looking for a way to link up the hard drives together to function as one giant drive.
Again, if you're looking to use RAID then you need to consider the RAID level you require, and weigh the benefits of that level against its impact (eg: mirroring = less usable space but greater availability and uptime).
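
To make that trade-off concrete with some quick arithmetic: with four 1TB drives, RAID 0 gives you 4TB usable but no redundancy at all, RAID 1/RAID 10 gives 2TB with everything mirrored, and RAID 5 gives 3TB while surviving a single drive failure.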

 

Try this site: http://www.acnc.com/raid

 

Once you've ascertained which RAID level, we can proceed from there.

 

(I use RAID + LVM in my server for max uptime and flexibility of dynamically shuffling data around whilst in use)

Reply
#9

I went to http://mus1c4ever.bl...ora-core-6.html and, under the section where it says,

 



Code:
[mirandam@charon ~]$ sudo mount /dev/hda1 /media/c_drive -t ntfs-3g -r -o umask=0222
[mirandam@charon ~]$ sudo mount /dev/hda2 /media/d_drive -t ntfs-3g -r -o umask=0222
[mirandam@charon ~]$ sudo mount /dev/hda3 /media/e_drive -t ntfs-3g -r -o umask=0222




 

I modified it per the instructions to sudo mount /dev/hdb2 /media/LinuxSwitch -t vfat -rw -o umask=0000

 

I opened a terminal, ran su -, entered my password, and copied and pasted

Code:
sudo mount /dev/hdb2 /media/LinuxSwitch -t vfat -rw -o umask=0000


in. I received the following response:

Quote:mount: mount point /media/LinuxSwitch does not exist
Then I tried,

Code:
sudo mount /dev/hdb2 /LinuxSwitch -t vfat -rw -o umask=0000


with this response,

Quote:mount: mount point /LinuxSwitch does not exist.
 

What do I need to do to mount a FAT32 partition?

 

Thanks in advance,

 

Bakshara the noob!

Reply
#10

You need to create that mount point as a directory. The error message is informing you that the mount point (the directory) doesn't exist.

eg:

 



Code:
sudo mkdir /media/LinuxSwitch
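# then re-run the mount command from your earlier post:
sudo mount /dev/hdb2 /media/LinuxSwitch -t vfat -rw -o umask=0000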




Reply

