System installation
Sunday, Jul 7, 2013
Overview: A Real Challenge
Well, it took me a whole week to fully understand the boot process, the different partition tables, and their partition requirements. As the Intel Z87 motherboard is still shiny and new, there is almost no documentation to be found, nor any useful blog articles out on the net. Therefore, I tried every combination I could to get the system installed and the disks partitioned the way I need. In a word, the road to this goal was not easy at all.
Here is the result:
- OS: Arch Linux x86_64 installed on a 4*750G, 2.5” HDD RAID 10
- Partition table and BIOS: MBR and Legacy BIOS
- Bootloader: Grub2 installed on a RAID1
- Extra data storage: 2*2T, 3.5” HDD RAID 1
Here are some time-saving tips that may help you. These are all wrong turns I took while building the system, and avoiding them will save you a lot of time, especially if you are a novice user who just bought the latest hardware to test your skills:
Asus Z87 Motherboard’s Linux Support
- The Asus TUF SABERTOOTH Z87 will NOT support CentOS 6.4 or Fedora 18, in either BIOS or EFI mode. From what I have tried, the CentOS 6.4 installation media (DVD) gives a kernel panic when loading the installer, and Fedora 18 goes to a black screen after the Fedora logo.
- The motherboard will NOT boot Linux in EFI mode; it recognizes the EFI partition but simply will not boot from it. I have also checked the MSI motherboard support list for the Intel Z87 chipset, and so far Windows is the only system that will boot on those boards in EFI mode.
- The Arch Linux installation media I used was released in July 2013. I did not try older ones, but I think the June release will work as well.
- If you are using a software array like me, using an Uninterruptible Power Supply (UPS) to prevent data loss is necessary. If you cannot use one, turn off the disks' write cache.
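If you do have to run without a UPS, the drives' write cache can be turned off with `hdparm`. This is only a sketch: it assumes the array members are `/dev/sda` through `/dev/sdd`, so substitute your own device names.

```shell
# Disable the on-disk write cache on each array member, so a power cut
# cannot lose data the drive claimed to have already written.
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    hdparm -W0 "$disk"
done

# Verify the current write-cache setting on one of the drives:
hdparm -W /dev/sda
```

Note this trades write performance for safety, and the setting may need to be reapplied on boot depending on your distribution.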
Partition and Bootloader Tips
- When you install a boot loader on RAID, RAID 1 is your only option. And if you are using an MBR, you can run `grub-install /dev/sda` and `grub-install /dev/sdb` to install the boot sector to both disks, in case the first disk fails. Please pay attention that it is `/dev/sdX`, not `/dev/mdX`, because the command installs the boot sector to the MBR, not to your partitions or RAID arrays.
- When building a large RAID array, raise the values in `/proc/sys/dev/raid/speed_limit_max` and `/proc/sys/dev/raid/speed_limit_min` to increase the sync speed, otherwise it will take ages, yes, literally ages.
- When building a RAID 10, do not build RAID 1 first and then layer RAID 0 on top. It will work, but it will be extremely slow (about 1x single-drive write speed for a 4-drive RAID 10). Instead, create the array in one step with a command similar to `mdadm -v --create /dev/md0 --level=raid10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1`.
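For reference, the resync speed limits mentioned above can be raised like this. The numbers are illustrative (the values are in KiB/s), so tune them for your hardware:

```shell
# Raise the md resync speed limits; these reset on reboot.
echo 50000  > /proc/sys/dev/raid/speed_limit_min
echo 500000 > /proc/sys/dev/raid/speed_limit_max

# Watch the rebuild speed up:
cat /proc/mdstat
```

You need to be root to write to these files, and the actual speed you get is still bounded by the drives themselves.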
Detailed Boot Process
It is extremely important to understand how the system works if you want to correctly install everything by yourself on a system like Arch Linux. So here is a summary of the boot process; hopefully it will help you debug yours.
Overall environment (please note that the environment is really important, because a different environment may have a completely different process):
- Legacy BIOS (not EFI)
- MBR (not GPT)
- RAID 1 mounted as `/boot`
- RAID 10 mounted as `/`
- Grub2 installed on `/boot`
- MBR boot information installed on `/dev/sda` and `/dev/sdb`

How it works:
- The system boots up, and the Legacy BIOS looks for the first boot device's boot information. This goes to `/dev/sda` to load Grub2. If this step goes wrong, your system will either tell you no bootable device was found, or keep looking for the next device in the boot order.
- Grub is loaded; at this step you will see the Grub2 menu with different options. If you don't see this menu, check whether you have the boot device set correctly.
- Choose the OS in the menu; this loads the Linux image from the `/boot` folder, which is generated by the `mkinitcpio -p linux` command if you are using Arch Linux. If you cannot load the Linux kernel, make sure that Grub is looking for the right file in the right directory. Checking `/boot/grub/grub.cfg` may help, and pay attention to the UUIDs. If you changed anything on the RAID system, especially your `/boot` or `/` file system's information, use `grub-mkconfig -o /boot/grub/grub.cfg` to regenerate the configuration file.
- The loaded Linux kernel tries to start the RAID arrays in order to reach the file system. This process uses the configuration file `/etc/mdadm.conf`. However, this file must be in place before the initramfs image is generated, not updated afterwards. In other words, the file is baked into the image at build time, not read from disk at boot, because if the RAID does not start, you will not even have a file system to read it from. To make `mkinitcpio` work correctly, edit `/etc/mkinitcpio.conf` to add the correct HOOKS (normally `mdadm` or `mdadm_udev`) and MODULES (normally `ext4` and `raid456`), generate a correct `mdadm.conf` file using `mdadm --examine --scan`, and then run the `mkinitcpio -p linux` command.
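The RAID-related regeneration steps above can be sketched as the following commands. This is only a sketch assuming the Arch defaults of that era; the HOOKS and MODULES lines shown in the comments are examples to adapt, not drop-in values for every setup.

```shell
# Rebuild the initramfs with RAID support (Arch Linux, circa 2013).

# 1. In /etc/mkinitcpio.conf, make sure the arrays can be assembled
#    before / is mounted, for example:
#      MODULES="ext4 raid456"
#      HOOKS="base udev autodetect block mdadm_udev filesystems"

# 2. Record the current arrays so the initramfs knows their UUIDs:
mdadm --examine --scan >> /etc/mdadm.conf

# 3. Regenerate the initramfs image so it contains the new configuration:
mkinitcpio -p linux

# 4. Regenerate the Grub configuration so its UUIDs match the arrays:
grub-mkconfig -o /boot/grub/grub.cfg
```

The order matters: `mdadm.conf` has to be correct before `mkinitcpio` runs, for exactly the reason described above.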
After all these steps, your system should boot successfully and be up and running.
Quick Summary
Things change fast in computing. By the time I did this installation, a lot had changed: Grub is now version 2, Linux is now 3.9.x, and many things no longer work the way people discussed and relied on for a long time. So when you are setting up a new system on new hardware, do read the documentation and man pages of the packages you are using, and when you do a Google search, pay attention to the date of the article you are reading as well as the versions it covers.
Finally my system is up and running; here is some information taken from it:
~ » df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        1.4T  1.1G  1.4T   1% /
dev              16G     0   16G   0% /dev
run              16G  656K   16G   1% /run
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G     0   16G   0% /sys/fs/cgroup
tmpfs            16G     0   16G   0% /tmp
/dev/md3        1.8T   68M  1.8T   1% /data
/dev/md0        233M   33M  200M  15% /boot
~ » mdadm --examine --scan -v
ARRAY /dev/md/0 level=raid1 metadata=1.2 num-devices=2 UUID=39004b2c:8630acc3:63c806a6:09559ed3 name=archiso:0
   devices=/dev/sdb1,/dev/sda1
ARRAY /dev/md/1 level=raid0 metadata=1.2 num-devices=2 UUID=3938e56f:fa9d31ec:d546f9a5:49b761cd name=archiso:1
   devices=/dev/sdd1,/dev/sdc1
ARRAY /dev/md/2 level=raid10 metadata=1.2 num-devices=4 UUID=abf737ef:9226b908:69348382:896a498d name=archiso:2
   devices=/dev/sdd2,/dev/sdc2,/dev/sdb2,/dev/sda2
ARRAY /dev/md/3 level=raid1 metadata=1.2 num-devices=2 UUID=2ebf13f4:07f0cfb3:c684198b:f87bdc24 name=archiso:3
   devices=/dev/sdf1,/dev/sde1
~ » cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1] [raid0] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
      249792 blocks super 1.2 [2/2] [UU]

md2 : active raid10 sda2[0] sdc2[2] sdb2[1] sdd2[3]
      1464386560 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

md1 : active raid0 sdc1[0] sdd1[1]
      499840 blocks super 1.2 64k chunks

md3 : active raid1 sde1[1] sdf1[0]
      1953382336 blocks super 1.2 [2/2] [UU]

unused devices: