A non-profit organisation I support has just bought a Dell PowerEdge T410 server, to be used mainly as a file server. We need a reasonable amount of space and really good reliability features because the system may have periods without being actively monitored, and it also has to be relatively cheap.
Dell servers are quite cheap, but disks are not cheap at all when Dell sells them. I get the impression that disks and RAM are major profit centers for Dell and that the profit margins on the basic servers are quite small. So naturally we decided to buy some SATA disks from a local store. One advantage of this is that Dell sells nothing bigger than 2TB, while 3TB disks are available cheaply everywhere else.
So we bought 4 cheap 3TB (2.7TiB) SATA disks, connected them to the server, and found that only 2TiB of each was accessible. The Dell Tech Center says that some of the RAID controllers don’t support anything larger than 2TiB [1], so obviously we have one of the older models. There are plenty of SATA sockets on the motherboard that could be used instead, but there is one problem.
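The 2TiB figure is presumably the usual 32-bit sector-addressing limit of older controllers; that’s an assumption on my part, but the arithmetic fits. A rough sketch (assuming 512-byte sectors):

    # Rough arithmetic for the 2TiB limit, assuming the controller uses
    # 32-bit sector addresses and 512-byte sectors (an assumption, not
    # something documented for this particular card).
    SECTOR_SIZE = 512                  # bytes per sector
    max_sectors = 2 ** 32              # largest count a 32-bit LBA can express
    max_bytes = max_sectors * SECTOR_SIZE
    print(max_bytes / 2 ** 40)         # 2.0 TiB
    print(max_bytes / 10 ** 12)        # ~2.2 TB as disk vendors count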
The above picture is the side view of the T410. It was taken with a Samsung Galaxy S so the quality is a little poor (click for the original picture). The server is quite neat: not many cables for a system with 6 disks, and not the 12 separate cables you would get in a typical white-box system.
The above picture shows the disk enclosure. You can see that each disk has a single connector for power and data. The disks also aren’t cabled separately: multiple disks share the same power wires and the data cables are paired.
Above you can see the SAS controller. It has two large connectors that can each handle the data cables for 4 disks, nothing like the standard cables.
It’s easy to buy SATA data cables and connect them, but there are no spare power cables in the box. The connector that supplies power to all the disks appears to be something proprietary to Dell, going straight to the double connectors on each disk that supply power and data. This setup makes cabling very neat but also provides no good option for cabling regular disks. I’m sure I could make my own cables, and if I hunted around the net enough I could probably buy some matching power cables, but it would be a hassle and the result wouldn’t be neat.
So the question was whether to go to more effort and expense (and possibly risk the warranty) to get the full 3TB per disk, or to just use the SAS controller and get 2TiB (2.2TB) per disk. One factor we considered is that the higher sector numbers typically give much slower access times due to being on shorter tracks (see my ZCAV results page for test results from past disks [2]). We decided that 2.2TB (2TiB) out of 3TB (2.7TiB) was adequate capacity and that losing some of the slowest parts of the disk wasn’t a big deal.
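For anyone confused by the mixture of TB and TiB above, the conversions are just decimal vs binary units, nothing controller-specific:

    TB = 10 ** 12    # what disk vendors call a terabyte
    TiB = 2 ** 40    # what most software calls a terabyte

    disk_size = 3 * TB          # the size printed on the box
    usable = 2 * TiB            # what the old controller can address

    print(disk_size / TiB)      # ~2.73 TiB per disk as software sees it
    print(usable / TB)          # ~2.20 TB usable per disk
    print(usable / disk_size)   # ~0.73, so we lose about 27% of each disk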
I’ve now set up a RAID-Z2 array on the disks, and ZFS reports 3.78TiB of available capacity, which isn’t a lot considering that we have 4*3TB disks in the array. But the old server had only 200G of storage, so it’s a good improvement in capacity and performance, and RAID-Z2 should beat the hell out of RAID-6 for reliability.
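The 3.78TiB figure is roughly what you’d expect: with 4 disks and two disks’ worth of parity in RAID-Z2, only two disks’ worth of space holds data, and ZFS keeps a little back for metadata (the pool itself was created with something like “zpool create <pool> raidz2 <disk1> <disk2> <disk3> <disk4>”). A back-of-envelope check:

    TiB = 2 ** 40

    disks = 4
    parity = 2                 # RAID-Z2 keeps two disks' worth of parity
    per_disk = 2 * TiB         # each 3TB disk is limited to 2TiB by the controller

    raw_data = (disks - parity) * per_disk
    print(raw_data / TiB)      # 4.0 TiB of raw data capacity
    # ZFS reserves some space for metadata and internal overhead, so the
    # ~3.78TiB it reports is in line with this estimate.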
Those don’t look like proprietary connectors (they’re not technically SATA connectors though).
The SAS connector going to the card looks like SFF-8484. The connectors going to the drives are probably SFF-8482.
Welcome to the world of SAS.
Alexander: Thanks for that information. So all I need is a cable to go from multiple SATA connectors to an SFF-8484 connector. While such a cable would probably violate some standards, it should work, and there’s probably someone in China manufacturing them. But it would still be a PITA, and the users have accepted a reduction in disk space for the moment.
Hopefully the current hardware configuration will run for at least 3 years without running out of space. Then once the purchase price of the current hardware is forgotten we can consider what to do next.