The News Tribune published an article in 2004 about the “Dove of Oneness”, a mentally ill woman who got thousands of people to believe her crazy ideas about NESARA [1]. In recent times the QANON conspiracy theory has drawn on the NESARA cult and encouraged its believers to borrow money and spend it in the belief that all debts will be forgiven (something which was not part of NESARA). The Wikipedia page about NESARA (proposed US legislation that was never considered by the US Congress) notes that the second edition of the book about it was titled “Draining the Swamp: The NESARA Story – Monetary and Fiscal Policy Reform”. It seems like the Trump cult has been following that for a long time.
David Brin (best-selling SciFi author and NASA consultant) wrote an insightful blog post about the “Tytler Calumny” [2], which is the false claim that democracy inevitably fails because poor people vote themselves money, when really the failure is corrupt rich people subverting government processes to enrich themselves at the expense of their country. It’s worth reading, and his entire blog is also worth reading.
Cory Doctorow has an insightful article about his own battle with tobacco addiction and the methods that tobacco companies and other horrible organisations use to prevent honest discussion about legislation [3].
Cory Doctorow has an insightful article about “consent theater”, which describes how “consent” in most agreements between corporations and people is a fraud [4]. The new GDPR sounds good.
The forum for the War Thunder game had a discussion on the accuracy of the Challenger 2 tank which ended up with a man who claims to be a UK tank commander posting part of a classified repair manual [5]. That’s pretty amusing, and also good advertising for War Thunder. After reading about this I discovered that it’s free on Steam and runs on Linux! Unfortunately it whinged about my video drivers and refused to run.
Cory Doctorow has an insightful and well researched article about the way the housing market works in the US [6]. For house prices to increase, conditions for renters need to get worse; that may work for home owners in the short term, but in the long term their children and grandchildren will end up renting.
My first Linux system in 1992 was a 386 with 4MB of RAM and a 120MB hard drive, of which (for some reason I have forgotten) Linux could only use about 90MB. My first hard drive was 70MB and could do 500KB/s for contiguous IO, my first Linux hard drive was probably a bit faster, maybe 1MB/s. My current Linux workstation has 64G of RAM and 2*1TB NVMe devices that can sustain about 1.1GB/s. The laptop I’m using right now has 8GB of RAM and a 180GB SSD that can do 380MB/s.
My laptop has 2000* the RAM of my first Linux system and maybe 400* the contiguous IO speed. Currently I don’t even run a VM with less than 4GB of RAM (NB I’m not saying that smaller VMs aren’t useful, merely that I don’t happen to be using them now). Modern AMD64 CPUs support 2MB “huge pages”. If I used 2MB pages everywhere they would be a smaller proportion of system RAM than the 4KB pages were on my first Linux system!
I am not suggesting using 2MB pages for general systems. On my workstations the majority of processes are using less than 10MB of resident memory, and given the different uses for memory mapped shared objects, memory mapped file IO, malloc(), stack, heap, etc there would be a lot of inefficiency in having 2MB as the minimum unit for all allocation. But as systems worked with 4MB of RAM or less and 4K pages, it would surely work to have only 2MB pages with 64GB or more of RAM.
Back in the 90s it seemed ridiculous to me to have 256 byte pages on a 68030 CPU, but 4K pages on a modern AMD64 system is even more ridiculous. Apparently AMD64 supports 1GB pages on some CPUs, which seems ridiculously large, but when run on a system with 1TB of RAM that’s comparable to 4K pages on my first Linux system. Currently AWS offers 24TB EC2 instances and the Google Cloud Platform offers 12TB virtual machines. It might even make sense to have the entire OS using 1GB pages for some usage scenarios on such systems, wasting tens of GB of RAM to save TLB thrashing might be a good trade-off.
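If you want to see how a current system is set up, the standard procfs/sysfs paths show the huge page configuration. Here is a minimal sketch (the 512 page count is just an example that reserves 1GB of 2MB pages):
# show the configured huge page size and how many are reserved
grep Huge /proc/meminfo
# check whether transparent huge pages are enabled
cat /sys/kernel/mm/transparent_hugepage/enabled
# reserve 512 huge pages (1GB with the default 2MB size) for testing, needs root
echo 512 > /proc/sys/vm/nr_hugepages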
My personal laptop has 2000* the RAM of my first Linux system and maybe 400* the contiguous IO speed. An employer recently assigned me a Thinkpad Carbon X1 Gen6 with an NVMe device that could sustain 5GB/s until the CPU overheated, that’s 5000* the contiguous IO speed of my first Linux hard drive. My first hard drive had a 28ms average access time and my first Linux hard drive was probably a little better, let’s call it 20ms for the sake of discussion. It’s generally quoted that access times for NVMe are at best 10us, that’s 2000* better than my first Linux hard drive. As seek times are the main factor for swap performance, a laptop with 8GB of RAM and a fast NVMe device could be expected to give adequate performance with 2000* the swap of my first Linux system. For the work laptop in question I had 8G of swap and my personal laptop has 6G of swap, which is comparable to the 4MB of swap on my first Linux system in that swap is about equal to RAM size, so I guess my personal laptop is performing better than could be expected.
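If you want to check the access time claim on your own hardware, a 4K random read test at queue depth 1 approximates the pattern of swapping single pages in. Below is a sketch using fio; the device name is an example and the test only reads so it won’t damage data:
# 4K random reads at queue depth 1 to measure access latency (read-only)
fio --readonly --name=randread --filename=/dev/nvme0n1 --direct=1 --rw=randread --bs=4k --iodepth=1 --ioengine=psync --runtime=30 --time_based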
These are just some idle thoughts about hardware changes over the years. Don’t take it as advice for purchasing hardware and don’t take it too seriously in general. Also when writing comments don’t restrict yourself to being overly serious, feel free to run the numbers on what systems with petabytes of Optane might be like, speculate on what NUMA systems in laptops might be like, etc. Go wild.
OS security features and server class systems are things that surely belong together. If a program is important enough to buy expensive servers to run it then it’s important enough that you want to have all the OS security features enabled. For such an important program you will also want to have all possible monitoring systems running so you can predict hardware failures etc. Therefore you would expect that you could buy a server, setup the vendor’s management software, configure your Linux kernel with security features such as “lockdown” (a LSM that restricts access to /dev/mem, the iopl() system call, and other dangerous things [1]), and have it run nicely! You will be disappointed if you try doing that on a HP or Dell server though.
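For reference, the lockdown state can be queried (and made stricter, but never relaxed) at runtime via securityfs, in addition to the lockdown= kernel command line option. A quick sketch:
# show the current lockdown mode, the active one is in brackets
cat /sys/kernel/security/lockdown
# lockdown can be increased at runtime but can't be relaxed without a reboot
echo integrity > /sys/kernel/security/lockdown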
HP Problems
[370742.622525] Lockdown: hpasmlited: raw io port access is restricted; see man kernel_lockdown.7
The above message is logged when trying to INSTALL (not even run) the hp-health package from the official HP repository (as documented in my previous blog post about the HP ML-110 Gen9 [2]) with “lockdown=integrity” (the less restrictive lockdown option). The HP package in question is in their repository for Debian/Stretch (released in 2017) and the Lockdown LSM was documented by LWN as being released in 2019, so not supporting a Debian/Bullseye feature in a Debian/Stretch package isn’t inherently a bad thing, apart from the fact that they haven’t released a new version of that package since (the Stretch package I am testing was released in 2019). Also it’s been regarded as best practice to have device drivers for this sort of thing since long before 2017.
# hplog -v
ERROR: Could not open /dev/cpqhealth/cdt.
Please make sure the Health Monitor is started.
Attempting to run the “hplog -v” command (to view the HP hardware log) gives the above error. Strace reveals that it could and did open /dev/cpqhealth/cdt but had problems talking to something (presumably the Health Monitor daemon) over a Unix domain socket. It would be nice if they could at least get the error message right!
Dell Problems
[ 13.811165] Lockdown: smbios-sys-info: /dev/mem,kmem,port is restricted; see man kernel_lockdown.7
[ 13.820935] Lockdown: smbios-sys-info: raw io port access is restricted; see man kernel_lockdown.7
[ 18.118118] Lockdown: dchcfg: raw io port access is restricted; see man kernel_lockdown.7
[ 18.127621] Lockdown: dchcfg: /dev/mem,kmem,port is restricted; see man kernel_lockdown.7
[ 19.371391] Lockdown: dsm_sa_datamgrd: raw io port access is restricted; see man kernel_lockdown.7
[ 19.382147] Lockdown: dsm_sa_datamgrd: /dev/mem,kmem,port is restricted; see man kernel_lockdown.7
Above is a sample of the messages when booting a Dell PowerEdge R710 with “lockdown=integrity” with the srvadmin-omacore package installed from the official Dell repository I describe in my blog post about the Dell PowerEdge R710 [3]. Now that repository is for Ubuntu/Xenial which was released in 2015, but again it was best practice to have device drivers for this many years ago. Also the newest Debian based releases that Dell apparently supports are Ubuntu/Xenial and Debian/Jessie which were both released in 2015.
# omreport system esmlog
Error! No Embedded System Management (ESM) log found on this system.
Above is the result when I try to view the ESM log (the Dell hardware log).
How Long Should Server Support Last?
The Wikipedia List of Dell PowerEdge Servers shows that the R710 is a Generation 11 system. Generation 11 was first released in 2010 and Generation 12 was first released in 2012. Generation 13 was the latest hardware Dell sold in 2015 when they apparently ceased providing newer OS support for Generation 11. Dell currently sells Generation 15 systems and provides more recent support for Generation 14 and Generation 15. I think it’s reasonable to debate whether Dell should support servers for 4 generations. But given that a major selling point of server class systems is that they have long term support I think it would make sense to give better support for this and not drop support when it’s only 2 versions from the latest release! The support for Dell Generation 11 hardware only seems to have lasted for 3 years after Generation 12 was first released. Also it appears that software support for Dell Generation 13 ceased before Generation 14 was released, that sucks for the people who bought Generation 13 when they were new!
HP is currently selling “Gen 10” servers which were first released at the end of 2017. So it appears that HP stopped properly supporting Gen 9 servers as soon as Gen 10 servers were released!
One thing to note about these support times, when the new generation of hardware was officially released the previous generation was still on sale. So while HP Gen 10 servers officially came out in 2017 that doesn’t necessarily mean that someone who wanted to buy a ML-110 Gen10 could actually have done so.
For comparison Red Hat Enterprise Linux has been supported for 4-6 years for every release they made since 2005 and Ubuntu has always had a 5 year LTS support for servers.
How To Do It Properly
The correct way of interfacing with hardware is via a device driver that is supported in the kernel.org tree. That means it goes through the usual kernel source code quality checks which are really good at finding bugs and gives users an assurance that the code won’t cause security problems. Generally nothing about the code from Dell or HP gives me confidence that it should be directly accessing /dev/kmem or raw IO ports without risk of problems.
Once a driver is in the kernel.org tree it will usually stay there forever and not require further effort from the people who submit it. Then it just works for everyone and tends to work with any other kernel features that people use, like LSMs.
If they released the source code to the management programs then it would save them even more effort as they could be maintained by the community.
MIT Technology Review has an interesting article about Google Project Zero shutting down a “western” intelligence operation [1].
There’s an Internet trend of people eating rotten meat, which they call “high meat” [2]. This is up there with people setting themselves on fire and “nut shot” videos.
A young woman who was making popular Twitter posts about motorbikes turned out to be a 50yo man using deep fake technology [3]. He has long hair IRL and just needed to replace his face. Since being revealed he has continued making such videos and remains popular.
FYHTECH has an informative blog post about using sgdisk to backup and restore GPT partition tables [4]. This is in the Debian package gdisk along with several other tools for managing partition tables. One interesting thing to note is that you can backup a partition table and restore to a smaller device (with a bunch of warnings that you can ignore if you know what you are doing). This is the only way I’ve discovered to cleanly truncate a GPT partitioned disk, which is sometimes necessary when running VMs.
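For anyone who hasn’t used it, the backup and restore operations are one command each. A minimal sketch with example device names:
# save the GPT (including the backup header) of the source disk to a file
sgdisk --backup=sda-gpt.bak /dev/sda
# write that table to another (possibly smaller) disk, ignoring the size warnings if you know what you are doing
sgdisk --load-backup=sda-gpt.bak /dev/sdb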
Insightful blog post about PCIe bifurcation and how PCIe lanes are assigned to sockets [5]. This explains why many motherboards have sockets with unused PCIe lanes, EG *8 sockets that are wired for *4. The PCIe slots all go back to the CPU which has a limited number of *16 PCIe connections that are bifurcated to make the larger number of PCIe slots on the motherboard.
New Republic has an interesting article on the infamous transphobe Jordan Peterson’s battle with tranquiliser dependency [6].
Wired has an interesting article about the hack of RSA infrastructure related to the SecurID keys 10 years ago [7]. Apparently some 10 year NDAs had delayed the story.
There are many posts about the situation with Freenode, I think that this one best captures the problems in the shortest amount of text [8]. You could spend a few hours reading about it as I have just done, but just reading this gives you the basics that you need to know to avoid Freenode. That blog post has links to articles about Andrew Lee’s involvement with Mt Gox and his claim to be the heir to the throne of Korea (which is not a monarchy).
Nicholas Wade wrote an insightful and informative article about the origin of Covid19, which leads to the conclusion that it was made in a Chinese laboratory [9]. I first saw this in David Brin’s Facebook feed. I would be hesitant to share this sort of thing if it wasn’t reviewed by a reliable source; I think David Brin has the skill to analyse this sort of article and the contacts to allow him to seek verification of any scientific issues that are outside his field. I believe that this article is reliable and its conclusion is most likely to be correct.
Interesting Wired article about an art project using display computers at Apple stores to photograph people [10]. Ends with a visit from the FBI.
I recently bought a couple of PowerEdge T320 servers, so now to learn about setting them up. They are a little newer than the R710 I recently setup (which had iDRAC version 6), they have iDRAC version 7.
RAM Speed
One system has an E5-2440 CPU with 2*16G DDR3 DIMMs and a Memtest86+ speed of 13,043MB/s, the other is essentially identical but with an E5-2430 CPU and 4*16G DDR3 DIMMs and a Memtest86+ speed of 8,270MB/s. I had expected that more DIMMs would mean better RAM performance, but this isn’t what happened. I firstly upgraded the BIOS; as I expected it didn’t make a difference, but it’s a good thing to try first.
On the E5-2430 I tried removing a DIMM after it was pointed out on Facebook that the CPU has 3 memory channels (here’s a link to a great site with information on that CPU and many others [1]). When I did that I was prompted to disable advanced ECC (which treats pairs of DIMMs as a single unit for ECC, allowing correction of more than 1 bit errors) and I had to move the 3 remaining DIMMs to different slots. That improved the performance to 13,497MB/s. I then put the spare DIMM into the E5-2440 system and the performance increased to 13,793MB/s; when I installed 4 DIMMs in the E5-2440 system the performance remained at 13,793MB/s and the E5-2430 went down to 12,643MB/s.
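To check which DIMM slots are populated and at what speed without opening the case or rebooting into Memtest86+, dmidecode will dump the information from the SMBIOS tables (needs root):
# list populated memory slots with size, speed, and slot names
dmidecode --type memory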
This is a good result for me, I now have the most RAM and fastest RAM configuration in the system with the fastest CPU. I’ll sell the other one to someone who doesn’t need so much RAM or performance (it will be really good for a small office mail server and NAS).
Firmware Update
BIOS
The first issue is updating the BIOS, unfortunately the first link I found to the Dell web site didn’t have a link to download the Linux installer. It offered a Windows binary, an EFI program, and a DOS binary. I’m not about to install Windows if there is any other option and EFI is somewhat annoying, so that leaves DOS. The first Google result for installing FreeDOS advised using “unetbootin”, that didn’t work at all for me (created a USB image that the Dell BIOS didn’t recognise as bootable) and even if it did it wouldn’t have been a good solution.
I went to the FreeDOS download page [2] and got the “Lite USB” zip file. That contained “FD12LITE.img” which I could just dd to a USB stick. I then used fdisk to create a second 32MB partition, used mkfs.fat to format it, and then copied the BIOS image file to it. I booted the USB stick and then ran the BIOS update program from drive D:. After the BIOS update this became the first system I’ve seen get a totally green result from “spectre-meltdown-checker“!
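For anyone doing the same thing, the steps were roughly as follows. This is a sketch: the USB stick device name and the BIOS update filename are examples, substitute your own:
# write the FreeDOS image to the USB stick (assuming the stick is /dev/sdX)
dd if=FD12LITE.img of=/dev/sdX bs=4M conv=fsync
# then use fdisk /dev/sdX to create a second 32MB partition and format it
mkfs.fat /dev/sdX2
# copy the BIOS update program (filename is an example) to the new partition
mount /dev/sdX2 /mnt && cp BIOSUPDATE.EXE /mnt/ && umount /mnt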
I found the link to the Linux installer for the new Dell BIOS afterwards, but it was still good to play with FreeDOS.
PERC Driver
I probably didn’t really need to update the PERC (PowerEdge Raid Controller) firmware as I’m just going to run it in JBOD mode. But it was easy to do, a simple bash shell script to update it.
Here are the perccli commands needed to access disks, it’s all hot-plug so you can insert disks and do all this without a reboot:
# show overview
perccli show
# show controller 0 details
perccli /c0 show all
# show controller 0 info with less detail
perccli /c0 show
# clear all "foreign" RAID members
perccli /c0 /fall delete
# add a vd (RAID) of level RAID0 (r0) with the drive 32:0 (enclosure:slot from above command)
perccli /c0 add vd r0 drives=32:0
The “perccli /c0 show” command gives the following summary of disk (“PD” in perccli terminology) information amongst other information. The EID is the enclosure, Slt is the “slot” (IE the bay you plug the disk into) and the DID is the disk identifier (not sure what happens if you have multiple enclosures). The allocation of device names (sda, sdb, etc) will be in order of EID:Slt or DID at boot time, and any drives added at run time will get the next letters available.
----------------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp
----------------------------------------------------------------------------------
32:0 0 Onln 0 465.25 GB SATA SSD Y N 512B Samsung SSD 850 EVO 500GB U
32:1 1 Onln 1 465.25 GB SATA SSD Y N 512B Samsung SSD 850 EVO 500GB U
32:3 3 Onln 2 3.637 TB SATA HDD N N 512B ST4000DM000-1F2168 U
32:4 4 Onln 3 3.637 TB SATA HDD N N 512B WDC WD40EURX-64WRWY0 U
32:5 5 Onln 5 278.875 GB SAS HDD Y N 512B ST300MM0026 U
32:6 6 Onln 6 558.375 GB SAS HDD N N 512B AL13SXL600N U
32:7 7 Onln 4 3.637 TB SATA HDD N N 512B ST4000DM000-1F2168 U
----------------------------------------------------------------------------------
The PERC controller is a MegaRAID with possibly some minor changes, there are reports of Linux MegaRAID management utilities working on it for similar functionality to perccli. The version of MegaRAID utilities I tried didn’t work on my PERC hardware. The smartctl utility works on those disks if you tell it you have a MegaRAID controller (so obviously there’s enough similarity that some MegaRAID utilities will work). Here are example smartctl commands for the first and last disks on my system. Note that the disk device node doesn’t matter as all device nodes associated with the PERC/MegaRAID are equal for smartctl.
# get model number etc on DID 0 (Samsung SSD)
smartctl -d megaraid,0 -i /dev/sda
# get all the basic information on DID 0
smartctl -d megaraid,0 -a /dev/sda
# get model number etc on DID 7 (Seagate 4TB disk)
smartctl -d megaraid,7 -i /dev/sda
# exactly the same output as the previous command
smartctl -d megaraid,7 -i /dev/sdc
I have uploaded etbemon version 1.3.5-6 to Debian which has support for monitoring smartctl status of MegaRAID devices and NVMe devices.
IDRAC
To update IDRAC on Linux there’s a bash script with the firmware in the same file (binary stuff at the end of a shell script). To make things a little more exciting the script insists that rpm be available (running “apt install rpm” fixes that for a Debian system). It also creates and runs other shell scripts which start with “#!/bin/sh” but depend on bash syntax. So I had to make /bin/sh a symlink to /bin/bash. You know you need this if you see errors like “typeset: not found” and “[: -eq: unexpected operator” and then the system reboots. Dell people, please test your scripts on dash (the Debian /bin/sh) or just specify #!/bin/bash.
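As a reminder, here are a couple of ways of doing the /bin/sh switch on Debian (and switching it back afterwards); a quick sketch:
# check what /bin/sh currently points to
ls -l /bin/sh
# the Debian way: answer "No" to point /bin/sh at bash, re-run and answer "Yes" to restore dash
dpkg-reconfigure dash
# or do it by hand, remembering to change it back afterwards
ln -sf bash /bin/sh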
If the IDRAC update works it will take about 8 minutes.
Lifecycle Controller
The Lifecycle Controller is apparently for installing OS and firmware updates. I use Linux tools to update Linux and I generally don’t plan to update the firmware after deployment (although I could do so from Linux if needed). So it doesn’t seem to offer anything useful to me.
Setting Up IDRAC
For extra excitement I decided to try to setup IDRAC from the Linux command-line. To install the RAC setup tool you run “apt install srvadmin-idracadm7 libargtable2-0” (because srvadmin-idracadm7 doesn’t have the right dependencies).
# srvadmin-idracadm7 is missing a dependency
apt install srvadmin-idracadm7 libargtable2-0
# set the IP address, netmask, and gateway for IDRAC
idracadm7 setniccfg -s 192.168.0.2 255.255.255.0 192.168.0.1
# put my name on the front panel LCD
idracadm7 set System.LCD.UserDefinedString "Russell Coker"
Conclusion
This is a very nice deskside workstation/server. It’s extremely quiet with hardly any fan noise and the case is strong enough to contain the noise of hard drives. When running with 3* 3.5″ SATA disks and 2*10k 2.5″ SAS disks on a wooden floor it wasn’t annoyingly loud. Without the SAS disks it was as quiet as you can expect any PC to be, definitely not the volume you expect from a serious server! I bought the T320 systems loaded with SAS disks which made them quite loud; I immediately put the disks on ebay and installed SATA SSDs and hard drives, which gives me more performance and more space than the SAS disks with less cost and almost no noise.
8*3.5″ drive bays gives room for expansion. I currently have 2*SATA SSDs and 3*SATA disks, the SSDs are for the root filesystem (including /home) and the disks are for a separate filesystem for large files.
It seems that Netflix has an ongoing issue of not working well with IPv6, apparently they have some sort of region checking code that doesn’t correctly identify IPv6 prefixes. To fix this I wrote the following script to make a small zone file with only A records for Netflix and no AAAA records. The $OUT.header file just has the SOA record for my fake netflix.com domain.
#!/bin/bash
OUT=/etc/bind/data/netflix.com
HEAD=$OUT.header
cp "$HEAD" "$OUT"
dig -t a www.netflix.com @8.8.8.8|sed -n -e "s/^.*IN/www IN/p"|grep '[0-9]$' >> "$OUT"
dig -t a android.prod.cloud.netflix.com @8.8.8.8|sed -n -e "s/^.*IN/android.prod.cloud IN/p"|grep '[0-9]$' >> "$OUT"
/usr/sbin/rndc reload > /dev/null
Update
I updated this post to add a line for android.prod.cloud.netflix.com which is the address used by Android devices.
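As the A records for Netflix change over time the script needs to be run regularly. A root crontab entry like the following works; the hourly schedule and the path are my assumptions, the post doesn’t specify where the script is saved:
# refresh the fake netflix.com zone every hour
0 * * * * /usr/local/sbin/update-netflix-zone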
I’ve recently signed up for Internode NBN while using the Arris CM8200 device supplied by Optus (previously used for a regular phone service). I took the configuration mostly from Dean’s great blog post on the topic [1]. One thing I changed was the /etc/network/interfaces configuration, I used the following:
# VLAN ID 2 for Internode's NBN HFC.
auto eth1.2
iface eth1.2 inet manual
vlan-raw-device eth1
auto nbn
iface nbn inet ppp
pre-up /bin/ip link set eth1.2 up
provider nbn
There is no need to have a section for eth1 when you have a section for eth1.2.
IPv6
IPv6 for only one system
With a line in /etc/ppp/options containing only “ipv6 ,” you get an IPv6 address automatically for the ppp0 interface after starting pppd.
IPv6 for your lan
Internode has documented how to configure the WIDE DHCPv6 client to get an IPv6 “prefix” (subnet) [2]. Just install the wide-dhcpv6-client package and put your interface names in a copy of the Internode example config and that works. That gets you a /64 assigned to your local Ethernet. Here’s an example of /etc/wide-dhcpv6/dhcp6c.conf:
interface ppp0 {
send ia-pd 0;
script "/etc/wide-dhcpv6/dhcp6c-script";
};
id-assoc pd {
prefix-interface br0 {
sla-id 0;
sla-len 8;
};
};
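To verify that the delegated prefix has actually been assigned you can just inspect the interface named in the config above:
# show the IPv6 addresses assigned to the LAN interface by the wide-dhcpv6 client
ip -6 addr show dev br0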
For providing addresses to other systems on your LAN they recommend radvd version 1.1 or greater; Debian/Bullseye will ship with version 2.18. Here is an example /etc/radvd.conf that will work with it. It seems that you have to manually (or with a script) set the value to use in place of “xxxx:xxxx:xxxx:xxxx” from the prefix that is assigned to eth0 (or whichever interface you are using) by the wide-dhcpv6-client.
interface eth0 {
AdvSendAdvert on;
MinRtrAdvInterval 3;
MaxRtrAdvInterval 10;
prefix xxxx:xxxx:xxxx:xxxx::/64 {
AdvOnLink on;
AdvAutonomous on;
AdvRouterAddr on;
};
};
Either the configuration of the wide dhcp client or radvd removes the default route from ppp0, so you need to run a command like
“ip -6 route add default dev ppp0” to put it back. Probably having “ipv6 ,” is the wrong thing to do when using wide-dhcp-client and radvd.
On a client machine with bridging I needed to have “net.ipv6.conf.br0.accept_ra=2” in /etc/sysctl.conf to allow it to accept route advisory messages on the interface (in this case eth0), for machines without bridging I didn’t need that.
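To apply that setting without a reboot (interface name as above):
# accept router advertisements on the bridge even with forwarding enabled
sysctl -w net.ipv6.conf.br0.accept_ra=2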
Firewalling
The default model for firewalling nowadays seems to be using NAT and only configuring specific ports to be forwarded to machines on the LAN. With IPv6 on the LAN every system can directly communicate with the rest of the world which may be a bad thing. The following lines in a firewall script will drop all inbound packets that aren’t in response to packets that are sent out. This will give an equivalent result to the NAT firewall people are used to and you can always add more rules to allow specific ports in.
ip6tables -A FORWARD -i ppp+ -m state --state ESTABLISHED,RELATED -j ACCEPT
ip6tables -A FORWARD -i ppp+ -j DROP
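As an example of allowing a specific port in, a rule like the following (with a documentation-range address standing in for the real host) placed before the final DROP rule will allow inbound ssh to one machine on the LAN:
# allow inbound ssh to one LAN host (address is an example)
ip6tables -A FORWARD -i ppp+ -p tcp -d 2001:db8::2 --dport 22 -j ACCEPT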
Hard Drive Brands
When people ask for advice about what storage to use they often get answers like “use brand X, it works well for me and brand Y had a heap of returns a few years ago”. I’m not convinced there is any difference between the small number of manufacturers that are still in business.
One problem we face with reliability of computer systems is that the rate of change is significant, so every year there will be new technological developments to improve things and every company will take advantage of them. Storage devices are unique among computer parts for their requirement for long-term reliability. For most other parts in a computer system a fault that involves total failure is usually easy to fix and even a fault that causes unreliable operation usually won’t spread its damage too far before being noticed (except in corner cases like RAM corruption causing corrupted data on disk).
Every year each manufacturer will bring out newer disks that are bigger, cheaper, faster, or all three. Those disks will be expected to remain in service for 3 years in most cases, and for consumer disks often 5 years or more. The manufacturers can’t test the new storage technology for even 3 years before releasing it so their ability to prove the reliability is limited. Maybe you could buy some 8TB disks now that were manufactured to the same design as used 3 years ago, but if you buy 12TB consumer grade disks, the 20TB+ data center disks, or any other device that is pushing the limits of new technology then you know that the manufacturer never tested it running for as long as you plan to run it. Generally the engineering is done well and they don’t have many problems in the field. Sometimes a new range of disks has a significant number of defects, but that doesn’t mean the next series of disks from the same manufacturer will have problems.
The issues with SSDs are similar to the issues with hard drives but a little different. I’m not sure how much of the improvements in SSDs recently have been due to new technology and how much is due to new manufacturing processes. I had a bad experience with a nameless brand SSD a couple of years ago and now stick to the better known brands. So for SSDs I don’t expect a great quality difference between devices that have the names of major computer companies on them, but stuff that comes from China with the name of the discount web store stamped on it is always a risk.
Hard Drive vs SSD
A few years ago some people were still avoiding SSDs due to the perceived risk of new technology. The first problem with this is that hard drives have lots of new technology in them. The next issue is that hard drives often have some sort of flash storage built in; presumably an “SSHD” or “Hybrid Drive” gets all the potential failure modes of both hard drives and SSDs.
One theoretical issue with SSDs is that filesystems have been (in theory at least) designed to cope with hard drive failure modes not SSD failure modes. The problem with that theory is that most filesystems don’t cope with data corruption at all. If you want to avoid losing data when a disk returns bad data and claims it to be good then you need to use ZFS, BTRFS, the NetApp WAFL filesystem, Microsoft ReFS (with the optional file data checksum feature enabled), or Hammer2 (which wasn’t production ready last time I tested it).
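With BTRFS (and similarly with ZFS) you can actively check for this failure mode rather than waiting for it to be noticed. A sketch, assuming the filesystem in question is mounted at /:
# read all data and metadata and verify checksums, repairing from the other copy in a RAID-1
btrfs scrub start -B /
# show per-device counts of read, write, and corruption errors
btrfs device stats /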
Some people are concerned that their filesystem won’t support “wear levelling” for SSD use. When a flash storage device is exposed to the OS via a block interface like SATA there isn’t much possibility of wear levelling. If flash storage exposes that level of hardware detail to the OS then you need a filesystem like JFFS2 to use it. I believe that most SSDs have something like JFFS2 inside the firmware and use it to expose what looks like a regular block device.
Another common concern about SSDs is that they will wear out from too many writes. Lots of people are using SSDs for the ZIL (ZFS Intent Log) on the ZFS filesystem, that means that the SSD devices become the write bottleneck for the system and in some cases are run that way 24*7. If there was a problem with SSDs wearing out I expect that ZFS users would be complaining about it. Back in 2014 I wrote a blog post about whether swap would break SSD [1] (conclusion – it won’t). Apart from the nameless brand SSD I mentioned previously all of the SSDs in question are still in service. I have recently had a single Samsung 500G SSD give me 25 read errors (which BTRFS recovered from the other Samsung SSD in the RAID-1), I have yet to determine if this is an ongoing issue with the SSD in question or a transient thing. I also had a 256G SSD in a Hetzner DC give 23 read errors a few months after it gave a SMART alert about “Wear_Leveling_Count” (old age).
Hard drives have moving parts and are therefore inherently more susceptible to vibration than SSDs, they are also more likely to cause vibration related problems in other disks. I will probably write a future blog post about disks that work in small arrays but not in big arrays.
My personal experience is that SSDs are at least as reliable as hard drives even when run in situations where vibration and heat aren’t issues. Vibration or a warm environment can cause data loss from hard drives in situations where SSDs will work reliably.
NVMe
I think that NVMe isn’t very different from other SSDs in terms of the actual storage. But the different interface gives some interesting possibilities for data loss. OS, filesystem, and motherboard bugs are all potential causes of data loss when using a newer technology.
Future Technology
The latest thing for high end servers is Optane Persistent memory [2] also known as DCPMM. This is NVRAM that fits in a regular DDR4 DIMM socket that gives performance somewhere between NVMe and RAM and capacity similar to NVMe. One of the ways of using this is “Memory Mode” where the DCPMM is seen by the OS as RAM and the actual RAM caches the DCPMM (essentially this is swap space at the hardware level), this could make multiple terabytes of “RAM” not ridiculously expensive. Another way of using it is “App Direct Mode” where the DCPMM can either be a simulated block device for regular filesystems or a byte addressable device for application use. The final option is “Mixed Memory Mode” which has some DCPMM in “Memory Mode” and some in “App Direct Mode”.
This has a lot of potential for backups, and to make things extra exciting “App Direct Mode” supports RAID-0 but no other form of RAID.
Conclusion
I think that the best things to do for storage reliability are to have ECC RAM to avoid corruption before the data gets written, use reasonable quality hardware (buy stuff with a brand that someone will want to protect), and avoid new technology. New hardware and new software needed to talk to new hardware interfaces will have bugs and sometimes those bugs will lose data.
Filesystems like BTRFS and ZFS are needed to cope with storage devices returning bad data and claiming it to be good, this is a very common failure mode.
Backups are a good thing.
Wifi usually just works. In the past I haven’t had to worry much about performance as for home use things have always been bearable and at work it’s never been my job so I just file a bug report with the relevant people when things go wrong. But a few years ago I had some problems.
For my home network I got a free Wifi AP which wasn’t performing well.
My AP supported 802.11 modes b/g or g/n (b, g, and n are slow, medium, and fast speeds). I initially had the AP running in b/g mode because I had an 802.11b USB wifi device that I used. When I replaced that with one that did 802.11g I tried changing the AP to g/n mode but performance was even worse on my laptop (although quite good on phones) so I switched back.
For phones it appeared to work well giving 54Mb/s while on my laptop (a second hand Thinkpad X1 Carbon) it was giving 11Mb/s at best and often much less than that. The best demonstration of problems was to start transferring a large file while pinging a system on the LAN the AP was connected to. Usually it would give ping times of 1s or more, sometimes 5s+ ping times. While this was happening the “Invalid misc” count increased rapidly, often by more than 100 per second.
The results of Google searches suggest that “Invalid misc” is due to interference and recommend changing the channel. My AP had been on channel 1 which had performed poorly, channels 2-8 were ok, and channel 9 seemed reasonably good. As an aside trying all channels manually is not a good idea, it takes a lot of time and gives little useful data. After changing to channel 9 it still only gave about 500KB/s when transferring large files with ping times of about 100ms, but that’s a big improvement. I tried running “iwlist scanning” to scan the Wifi network for other APs, that showed that channel 1 was used a lot but didn’t make it clear what I should do other than that.
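A quicker way of getting useful data from a scan is to filter the output down to the channels and network names of the neighbouring APs. A sketch, with wlan0 as an example interface name:
# channel and network name of every AP in range (needs root for a full scan)
iwlist wlan0 scanning | grep -E 'Channel|ESSID'
# the same sort of data with the newer iw tool
iw dev wlan0 scan | grep -E 'freq|signal|SSID'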
The next thing I tried was the Wifi Analyser app on Android [1] (which doesn’t work on my latest phone; I don’t know if it’s still being actively maintained, but it will definitely work on older phones). That has a nice graph mode that shows which channels are used and how the frequencies spread and interfere with other channels. One thing I hadn’t realised before I looked at the graphs is that 802.11n uses 4 channels and interferes past that. If you have two 802.11n devices you don’t have much space left out of the 14 channels available. To make more space I configured the Wifi AP in my ADSL modem to 802.11b/g mode and assigned it a channel away from the others, making 4 channels available with no interference.
After that iwconfig reported between 60 and 120Mb/s and I got consistent transfer rates over 1.5MB/s while ping times remained below 100ms.
The 5GHz frequency range is less congested. But at the time I didn’t feel like buying 5GHz equipment.
Since then I have signed up with an ISP that had a good deal on a Wifi AP with 5GHz support. Now I have all my devices configured to use 5GHz or 2.4GHz depending on which they think is best. So there are fewer devices on 2.4GHz and the AP is configured for “20MHz channel width” in the 2.4GHz range (which means 802.11b/g).
Conclusion
802.11n seems to be a bad idea unless you run the only AP in an area. In a suburban area you will have 3 other houses broadcasting in your area and 802.11n is bad for everyone. The worst case scenario would be one person using 802.11n and interfering with everyone else’s 802.11g and then having everyone else turn on 802.11n to try and make things faster.
5GHz is less congested as most people run old hardware. It also has a shorter range which has the upside of getting less interference from other people. I’m considering installing 5GHz APs at both ends of my house and configuring all my new devices to not use 2.4GHz.
Wifi spectrum analysis software is much better than manual testing of channels or trying to deduce things from the output of “iwlist scanning“.
This page has summaries of some USB limits [1]. USB 2.0 has the longest cable segment limit of 5M (1.x, 3.x, and USB-C are all shorter), so USB 2.0 is what you want for long runs. The USB limit for daisy chained devices is 7 (including host and device), which means a maximum of 5 hubs between PC and device for a total cable distance of 30M. There are lots of other ways of getting longer distances, the cheapest seems to be putting an old PC at the far end of an Ethernet cable.
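To see how many hub tiers are already in use on a system the USB topology can be printed as a tree:
# show the USB bus topology with hub and device tiers
lsusb -t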
Some (many? most?) laptops have USB for the interface to the built in camera, and these are sold from scrapped laptops. You could probably setup a home monitoring system in a typical home by having a centrally located PC with USB hubs fanning out to the corners. But old Android phones on a Wifi network seems like an easier option if you can prevent the phones from crashing all the time.