I’m installing new 4TB disks in an older Dell server, a PowerEdge T110 with a G6950 CPU. It’s not really old, but it’s a couple of generations behind the latest Dell servers.
I tried to enable UEFI booting, but when I turned that option on the system locked up during the BIOS boot process (it wouldn’t boot from the CD or take keyboard input). So I had to make it boot via legacy BIOS with a BIOS-compatible MBR and a GPT partition table.
Number  Start (sector)  End (sector)  Size        Code  Name
   1    2048            4095          1024.0 KiB  EF02  BIOS boot partition
   2    4096            25169919      12.0 GiB    FD00  Linux RAID
   3    25169920        7814037134    3.6 TiB     8300  Linux filesystem
After spending way too much time reading various web pages I discovered that the above partition table works. The 1MB partition holds the GRUB core image and needs to be flagged with a parted command such as the following:
parted /dev/sda set 1 bios_grub on
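
For reference, here is a rough sketch of how the same layout could be created from scratch with sgdisk. The device name /dev/sda and the exact sector boundaries are just examples matching the table above, so adjust them for your own disks (and note that setting type EF02 with sgdisk should make the separate parted bios_grub step unnecessary):

# wipe any existing partition table and create a fresh GPT (example only)
sgdisk --zap-all /dev/sda
# BIOS boot partition for the GRUB core image
sgdisk -n 1:2048:4095 -t 1:EF02 -c 1:"BIOS boot partition" /dev/sda
# small partition for the RAID-1 root filesystem
sgdisk -n 2:4096:25169919 -t 2:FD00 -c 2:"Linux RAID" /dev/sda
# the rest of the disk (end sector 0 means "to the end")
sgdisk -n 3:25169920:0 -t 3:8300 -c 3:"Linux filesystem" /dev/sda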
/dev/sda2 is part of a RAID-1 array used for the root filesystem. If I were installing a non-RAID system I’d use the same partition table but with a type of 8300 instead of FD00. I have a RAID-1 array over sda2 and sdb2 for the root filesystem, and sda3, sdb3, sdc3, sdd3, and sde3 are used for a RAID-Z array. I’m reserving space for the root filesystem on all 5 disks because it seems like a good idea to use the same partition table everywhere, and the 12G per disk that goes unused on sdc, sdd, and sde isn’t worth worrying about when dealing with 4TB disks.
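
As a sketch, arrays like the ones described above could be created with commands along these lines. The md device number and the pool name “tank” are assumptions for illustration, not necessarily what I used:

# RAID-1 over the second partitions for the root filesystem (example only)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# RAID-Z over the large third partitions on all 5 disks (pool name is just an example)
zpool create tank raidz /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3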
For you or your readers’ information, that “BIOS boot partition” is necessary because with GPT there is no interstitial space between the partition table and the first partition (as there is with MBR) in which GRUB could install its core image, so a small partition has to be dedicated to that instead.
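
With the bios_grub partition in place, installing GRUB should then work as usual. A minimal sketch, assuming GRUB 2 and that sda and sdb are the disks you want to boot from:

# embed the GRUB core image in the BIOS boot partition of each boot disk
grub-install /dev/sda
grub-install /dev/sdb
# regenerate the GRUB configuration
grub-mkconfig -o /boot/grub/grub.cfg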