Release Years

In 2008 I wrote about the idea of having a scheduled release for Debian and other distributions as Mark Shuttleworth had proposed [1]. I still believe that Mark’s original idea for synchronised release dates of Linux distributions (or at least synchronised feature sets) is a good one but unfortunately it didn’t take off.

Having been using Ubuntu a bit recently I’ve found the version numbering system to be really good. Ubuntu version 16.04 was released in April 2016 and its support ended 5 years later in about April 2021, so any commonly available computers from 2020 should run it well and versions of applications released in about 2017 should run on it. If I have to support a Debian 10 (Buster) system I need to start with a web search to discover when it was released (July 2019). That suggests that applications packaged for Ubuntu 18.04 are likely to run on it.

If we had the same numbering system for Debian and Ubuntu then it would be easier to compare versions. Debian 19.06 would be obviously similar to Ubuntu 18.04, or we could plan for the future and call it Debian 2019.

Then it would be ideal if hardware vendors did the same thing (as car manufacturers have been doing for a long time). Which versions of Ubuntu and Debian would run well on a Dell PowerEdge R750? It takes a little searching to discover that the R750 was released in 2021, but if they called it a PowerEdge 2021R70 then it would be quite obvious that Ubuntu 2022.04 would run well on it and that Ubuntu 2020.04 probably has a kernel update with all the hardware supported.

One of the benefits for the car industry in naming model years is that it drives the purchase of a new car. A 2015 car probably isn’t going to impress anyone and you will know that it is missing some of the features in more recent models. It would be easier to get management to sign off on replacing old computers if they had 2015 on the front. Trying to estimate the hidden costs of support and the lost productivity of using old computers is hard, but saying “it’s a 2015 model and way out of date” is easy.

There is a history of using dates as software versions. The “Reference Policy” for SE Linux [2] (which is used for Debian) has releases based on date. During the Debian development process I upload policy to Debian based on the upstream Git and use the same version numbering scheme, which is more convenient than the “append git date to last full release” system that some maintainers are forced to use. The users can get an idea of how much the Debian/Unstable policy has diverged from the last full release by looking at the dates. Also an observer might see the short difference between release dates of SE Linux policy and Debian release freeze dates as an indication that I beg the upstream maintainers to make a new release just before each Debian freeze – which is exactly what I do.

When I took over the Portslave [3] program I made releases based on date because there were several forks with different version numbering schemes, so my options were to just choose a higher number (which is OK initially but doesn’t scale if there are more forks) or use a date so that people know the version with the most recent date is the most recent one. The fact that the latest release of Portslave is version 2010.04.19 shows that I have not been maintaining it in recent years (due to lack of hardware as well as lack of interest), so if someone wants to take over the project I would be happy to help them do so!

I don’t expect people at Dell and other hardware vendors to take much notice of my ideas (I have tweeted them photographic evidence of a problem with no good response). But hopefully this will start a discussion in the free software community.

Thinkpad X1 Yoga Gen3

I just bought myself a Thinkpad X1 Yoga Gen3 for $359.10. I have been quite happy with the Thinkpad X1 Carbon Gen5 I’ve had for just over a year (apart from my mistake in buying one with a lost password) [1] and I normally try to get more use out of a computer than that. If I divide total cost by the time that I’ve had it working that comes out to about $1.30 per day. I would pay more than that for a laptop and I have paid much more than that for laptops in the past, but I prefer not to. I was initially tempted to buy a new Thinkpad by the prices of high end X1 devices dropping, this new Yoga has 16G of RAM and a 2560*1440 screen – that’s a good upgrade from 8G with 1920*1080. The CPU of my new Thinkpad is a quad core i5-8350U that rates 6226 [2] and is a decent upgrade from the dual core i5-6300U that rates 3239 [3] although that wasn’t a factor as I found the old CPU fast enough.

The Yoga Gen3 has a minimum weight of 1.4Kg and mine might not be the lightest model in the range while the old Carbon weighs 1.14Kg. I can really feel the difference. It’s also slightly larger but fortunately still fits in the pocket of my Scottware jacket.

The higher resolution screen and more RAM were not sufficient to make me want to spend some money. The deciding factor is that as I’m working on phones with touch screens it is a benefit to use a laptop with a touch screen so I can do more testing. The Yoga I bought was going cheap because the touch part of the touch screen is broken but the stylus still works; this is apparently a common failure mode of the Yoga.

The Yoga has a brighter screen than the Carbon and seems to have better contrast. I think Lenovo had some newer technology for that generation of laptops or maybe my Carbon is slightly defective in that regard. It’s a hazard of buying second hand that if something basically works but isn’t quite as good as it should be then you will never know.

I’m happy with this purchase and I recommend that everyone who buys laptops secondhand the way I do only get 1440p or better displays. I’ve currently got the Kitty terminal emulator [4] set up with 9 windows that each have 103 or 104 columns and 26 or 28 rows of text. That’s a lot of terminals on a laptop screen!
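
For anyone who wants to reproduce that sort of layout, kitty can create a set of windows from a session file; this is a minimal sketch (the file location and the count of 9 windows are my choices, grid is one of kitty’s standard layouts):

# ~/.config/kitty/grid.session - 9 shell windows in a grid layout
layout grid
launch
launch
launch
launch
launch
launch
launch
launch
launch

Then start it with:

kitty --session ~/.config/kitty/grid.session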

Links January 2024

Long Now has an insightful article about domestication that considers whether humans have evolved to want to control nature [1].

The OMG Elite hacker cable is an interesting device [2]. A Wifi device in a USB cable to allow remote control and monitoring of data transfer, including remote keyboard control and sniffing. Pity that USB-C cables have chips in them so you can’t use a spark to remove unwanted chips from modern cables.

David Brin’s blog post The core goal of tyrants: The “Red-Caesar” Cult and a restored era of The Great Man has some insightful points about authoritarianism [3].

Ron Garret wrote an interesting argument against Christianity [4], and a follow-up titled Why I Don’t Believe in Jesus [5]. He has a link to a well written article about the different theologies of Jesus and Paul [6].

Dimitri John Ledkov wrote an interesting blog post about how they reduced disk space for Ubuntu kernel packages and RAM for the initramfs phase of boot [7]. I hope this gets copied to Debian soon.

Joey Hess wrote an interesting blog post about trying to make LLM systems produce bad code if trained on his code without permission [8].

Arstechnica has an interesting summary of research into the security of fingerprint sensors [9]. Not surprising that the products of the 3 vendors that supply almost all PC fingerprint readers are easy to compromise.

Bruce Schneier wrote an insightful blog post about how AI will allow mass spying (as opposed to mass surveillance) [10].

ZDNet has an informative article How to Write Better ChatGPT Prompts in 5 Steps [11]. I sent this to a bunch of my relatives.

AbortRetryFail has an interesting article about the Itanic Saga [12]. Erberus sounds interesting; maybe VLIW designs could give a good ratio of instructions to power – unlike the Itanium which was notorious for being power hungry.

Bruce Schneier wrote an insightful article about AI and Trust [13]. We really need laws controlling these things!

David Brin wrote an interesting blog post on the obsession with historical cycles [14].

Storage Trends 2024

It has been less than a year since my last post about storage trends [1] and enough has changed to make it worth writing again. My previous analysis was that for <2TB only SSD made sense, for 4TB SSD made sense for business use while hard drives were still a good option for home use, and for 8TB+ hard drives were clearly the best choice for most uses. I will start by looking at MSY prices; they aren't the cheapest (you can get cheaper online) but they are competitive and they make it easy to compare the different options. I'll also compare the cheapest options in each size; there are more expensive options, but usually if you want to pay more then the performance benefits of SSD (both SATA and NVMe) are even more appealing. All prices are in Australian dollars and of parts that are readily available in Australia, but the relative prices of the parts are probably similar in most countries. The main issue here is when to use SSD and when to use hard disks, and then if SSD is chosen which variety to use.

Small Storage

For my last post the cheapest storage devices from MSY were $19 for a 128G SSD; now it’s $24 for a 128G SSD or NVMe device. I don’t think the Australian dollar has dropped much against foreign currencies, so I guess this is partly companies wanting more profits and partly due to the demand for more storage. Items that can’t sell in quantity need higher profit margins if retailers are to keep them in stock. 500G SSDs are around $33 and 500G NVMe devices around $36, so for most use cases it wouldn’t make sense to buy anything smaller than 500G.

The cheapest hard drive is $45 for a 1TB disk. A 1TB SATA SSD costs $61 and a 1TB NVMe costs $79. So 1TB disks aren’t a good option for any use case.

A 2TB hard drive is $89. A 2TB SATA SSD is $118 and a 2TB NVMe is $145. I don’t think the small savings you can get from using hard drives makes them worth using for 2TB.

For most people if you have a system that’s important to you then $145 on storage isn’t a lot to spend. It seems hardly worth buying less than 2TB of storage, even for a laptop. Even if you don’t use all the space, larger storage devices tend to support more writes before wearing out, so you still gain from it. A 2TB NVMe device you buy for a laptop now could be used in every replacement laptop for the next 10 years. I only have 512G of storage in my laptop because I have a collection of SSD/NVMe devices that have been replaced in larger systems, so the 512G is essentially free for my laptop as I bought a larger device for a server.

For small business use it doesn’t make sense to buy anything smaller than 2TB for any system other than a router. If you buy smaller devices then you will sometimes have to pay people to install bigger ones and when the price is $145 it’s best to just pay that up front and be done with it.

Medium Storage

A 4TB hard drive is $135. A 4TB SATA SSD is $319 and a 4TB NVMe is $299. The prices haven’t changed a lot since last year, but a small increase in hard drive prices and a small decrease in SSD prices make SSD more appealing for this market segment.

A common size range for home servers and small business servers is 4TB or 8TB of storage. To do that on SSD means about $600 for 4TB of RAID-1 or $900 for 8TB of RAID-5/RAID-Z. That’s quite affordable for that use.

For 8TB of less important storage an 8TB hard drive costs $239 and an 8TB SATA SSD costs $899, so a hard drive clearly wins for the specific case of non-RAID single device storage. Note that the U.2 devices are more competitive for 8TB than SATA but I included them in the next section because they are more difficult to install.

Serious Storage

With 8TB being an uncommon and expensive option for consumer SSDs, the cheapest option is multiple 4TB devices. To have multiple NVMe devices in one PCIe slot you need PCIe bifurcation (treating the PCIe slot as multiple slots). Most of the machines I use don’t support bifurcation and most affordable systems with ECC RAM don’t have it. For cheap NVMe type storage there are U.2 devices (the “enterprise” form of NVMe). Until recently they were too expensive to use for desktop systems, but now there are PCIe cards for internal U.2 devices; $14 for a card that takes a single U.2 is a common price on AliExpress, and prices below $600 for a 7.68TB U.2 device are common – that’s cheaper on a per-TB basis than SATA SSD and NVMe! There are PCIe cards that take up to 4*U.2 devices (which probably require bifurcation), which means you could have 8+ U.2 devices in one not particularly high end PC for 56TB of RAID-Z NVMe storage. Admittedly $4200 for 56TB is moderately expensive, but it’s in the price range for a small business server or a high end home server. A more common configuration might be 2*7.68TB U.2 on a single PCIe card (or 2 cards if you don’t have bifurcation) for 7.68TB of RAID-1 storage.

For SATA SSD AliExpress has a 6*2.5″ hot-swap device that fits in a 5.25″ bay for $63, so if you have 2*5.25″ bays you could have 12*4TB SSDs for 44TB of RAID-Z storage. That wouldn’t be much cheaper than 8*7.68TB U.2 devices and would be slower and have less space. But it would be a good option if PCIe bifurcation isn’t possible.

16TB SATA hard drives cost $559, which is almost exactly half the price per TB of U.2 storage. That doesn’t seem like a big enough saving. If you want 16TB of RAID storage then 3*7.68TB U.2 devices only cost about 50% more than 2*16TB SATA disks. In most cases paying 50% more to get NVMe instead of hard disks is a good option. As sizes go above 16TB prices go up in a more than linear manner; I guess they don’t sell much volume of larger drives.
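
A quick bc check of the per-TB numbers, using the prices above:

$ echo "scale=2; 559/16; 600/7.68" | bc
34.93
78.12

So roughly $35 per TB for the disks and $78 per TB for the U.2 devices.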

15.36TB U.2 devices are on sale for about $1300, slightly more than twice the price of a 16TB disk. It’s within the price range of small businesses and serious home users. Also it should be noted that the U.2 devices are designed for “enterprise” levels of reliability and the hard disk prices I’m comparing to are the cheapest available. If “NAS” hard disks were compared then the price benefit of hard disks would be smaller.

Probably the biggest problem with U.2 for most people is that it’s an uncommon technology that few people have much experience with or spare parts for testing. Also you can’t buy U.2 gear at your local computer store, which might mean that you want to have spare parts on hand, and that is an extra expense.

For enterprise use I’ve recently been involved in discussions with a vendor that sells multiple petabyte arrays of NVMe. Apparently NVMe is cheap enough that there’s no need to use anything else if you want a well performing file server.

Do Hard Disks Make Sense?

There are specific cases like comparing an 8TB hard disk to an 8TB SATA SSD or a 16TB hard disk to a 15.36TB U.2 device where hard disks have an apparent advantage. But when comparing RAID storage and counting the performance benefits of SSD the savings of using hard disks don’t seem to be that great.

Is now the time that hard disks are going to die in the market? If they can’t get volume sales then prices will go up due to lack of economy of scale in manufacture and increased stock time for retailers. 8TB hard drives are now more expensive than they were 9 months ago when I wrote my previous post; has a hard drive price death spiral already started?

SSDs are cheaper than hard disks at the smallest sizes, faster (apart from some corner cases with contiguous IO), take less space in a computer, and make less noise. At worst they are a bit over twice the cost per TB. But the most common requirements for storage are small enough and cheap enough that being twice as expensive as hard drives isn’t a problem for most people.

I predict that hard disks will become less popular in future and offer less of a price advantage. The vendors are talking about 50TB hard disks being available in future but right now you can fit more than 50TB of NVMe or U.2 devices in a volume less than that of a 3.5″ hard disk so for storage density SSD can clearly win. Maybe in future hard disks will be used in arrays of 100TB devices for large scale enterprise storage. But for home users and small businesses the current sizes of SSD cover most uses.

At the moment it seems that the one case where hard disks can really compare well is for backup devices. For backups you want large storage, good contiguous write speeds, and low prices so you can buy plenty of them.

Further Issues

The prices I’ve compared for SATA SSD and NVMe devices are all based on the cheapest devices available. I think it’s a bit of a market for lemons [2] as devices often don’t perform as well as expected and the incidence of fake products purporting to be from reputable companies is high on the cheaper sites. So you might as well buy the cheaper devices. An advantage of the U.2 devices is that you know that they will be reliable and perform well.

One thing that concerns me about SSDs is the lack of knowledge of their failure cases. Filesystems like ZFS were specifically designed to cope with common failure cases of hard disks and I don’t think we have that much knowledge about how SSDs fail. But with 3 copies of metadata BTRFS or ZFS should survive unexpected SSD failure modes.
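
As a sketch of the kind of options involved (the device, pool, and filesystem names here are examples): BTRFS can duplicate metadata even on a single device, and ZFS already keeps multiple “ditto” copies of important metadata and can be told to keep extra copies of data too:

# BTRFS: 2 copies of metadata on a single device
mkfs.btrfs -m dup -d single /dev/nvme0n1
# ZFS: extra copies of data blocks for a given filesystem
zfs set copies=2 tank/important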

I still have some hard drives in my home server; they keep working well enough and the prices on SSDs keep dropping. But if I was buying new storage for such a server now I’d get U.2.

I wonder if tape will make a comeback for backup.

Does anyone know of other good storage options that I missed?

2.5Gbit Ethernet

I just decided to upgrade the core of my home network from 1Gbit to 2.5Gbit. I didn’t really need to do this, as it was only about 5 years ago that I upgraded from 100Mbit to 1Gbit, but it’s cheap and seemed interesting.

I decided to do it because a 2.5Gbit switch was listed as cheap on Ozbargain Computing [1], at $40.94 delivered. If you are in Australia and like computers then Ozbargain is a site worth polling; every day there are interesting things at low prices. The seller of the switch is KeeplinkStore [2], who distinguished themselves by phoning me from China to inform me that I had ordered a switch with a UK plug for delivery to Australia and suggesting that I cancel the order and make a new order with an Australian plug. It wouldn’t have been a big deal if I had received a UK plug as I’ve got a collection of adaptors, but it was still nice of them to make it convenient for me. The switch basically does what it’s expected to do and has no fan so it’s quiet.

I got a single port 2.5Gbit PCIe card for $18.77 and a dual port card for $34.07. Those cards are a little expensive when compared to 1Gbit cards but very cheap when compared to the computers they are installed in. These cards use the Realtek RTL8125 chipset and work well.

I got a USB-3 2.5Gbit device for $17.43. I deliberately didn’t get USB-C because I still use laptops without USB-C and most of the laptops with USB-C only have a single USB-C port which is used for power. I don’t plan to stop using my 100Mbit USB ethernet device because most of the time I don’t need a lot of speed. But sometimes I do things like testing auto-install on laptops and then having something faster than Gigabit is good. This device worked at 1Gbit speed on a 1Gbit network when used with a system running Debian/Bookworm with kernel 6.1 and worked at 2.5Gbit speed when connected to my LicheePi RISC-V system running Linux 5.10, but it would only do 100Mbit on my laptop running Debian/Unstable with kernel 6.6 (Debian Bug #1061095) [3]. It’s a little disappointing but not many people have such hardware so it probably doesn’t get a lot of testing. For the moment I plan to just use a 1Gbit USB Ethernet device most of the time and if I really need the speed I’ll just use an older kernel.
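
A quick way to check what speed a device has actually negotiated (the interface name is an example; USB NICs often get enx* names):

# list interfaces, then check the negotiated link speed
ip -br link
ethtool eth0 | grep -i speed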

I did some tests with wget and curl to see if I could get decent speeds. When using wget 1.21.3 on Debian/Bookworm I got transfer speeds of 103MB/s and 18.8s of system CPU time out of 23.6s of elapsed time. Curl on Debian/Bookworm did 203MB/s and took 10.7s of system CPU time out of 11.8s elapsed time. The difference is that curl was using 100KB read buffers and a mix of 12K and 4K write buffers while wget was using 8KB read buffers and 4KB write buffers. On Debian/Unstable wget 1.21.4 uses 64K read buffers and a mix of 4K and 60K write buffers and gets a speed of 208MB/s. As an experiment I changed the read buffer size for wget to 256K and that got the speed up to around 220MB/s, but it was difficult to measure as the occasional packet loss slowed things down. The pattern of writing 4K and then writing the rest continued; it seemed related to fwrite() buffering. For anyone else who wants to experiment with the code, the wget code is simpler (due to having fewer features) and the package builds a lot faster (due to fewer tests) so that’s the one to work on.
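
For anyone who wants to check the buffering behaviour themselves, tracing the read and write system calls shows the buffer sizes; a sketch (the URL is a placeholder):

# log read/write calls, the sizes show the buffering behaviour
strace -f -e trace=read,write -o wget.trace wget http://server/bigfile
tail wget.trace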

The client machine for these tests has an E5-2696 v3 CPU; this doesn’t compare well to some of the recent AMD CPUs on single-core performance but is still a decently powerful system. Getting good performance at Gigabit speeds on an ARM or RISC-V system is probably going to be a lot harder than getting good performance at 2.5Gbit speeds on this system.

In conclusion 2.5Gbit basically works apart from a problem with new kernels and a problem with the old version of wget. I expect that when Debian/Trixie is released (probably mid 2025) things will work well. For good transfer rates use wget version 1.21.4 or newer or use curl.

As an aside I use a 1500 byte MTU because I have some 100baseT systems on my LAN, and the settings regarding TCP acceleration etc are all the defaults.

LicheePi 4A (RISC-V) First Look

I just bought a LicheePi 4A RISC-V embedded computer (like a RaspberryPi but with a RISC-V CPU) for $322.68 from Aliexpress (the official site for buying LicheePi devices). Here is the Sipeed web page about it and their other recent offerings [1]. I got the version with 16G of RAM and 128G of storage; I probably don’t need that much storage (I can use NFS or USB) but 16G of RAM is good for VMs. Here is the Wiki about this board [2].

Configuration

When you get one of these devices you should make setting up an ssh server your first priority. I found the HDMI output to be very unreliable. The first monitor I tried was a Samsung 4K monitor dating from when 4K was a new thing; the LicheePi initially refused to operate at a resolution higher than 1024*768 but later on switched to 4K resolution when resuming from screen-blank for no apparent reason (and the window manager didn’t support this properly). With the Dell 4K monitor I use on my main workstation it sometimes refused to display anything and occasionally worked. I got it running at 1920*1080 without problems and then switched it to 4K and it lost video sync and never talked to that monitor again. On my Desklab portable 4K monitor I got it to display in 4K resolution but only the top left 1/4 of the screen displayed.
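
The usual way to test modes on an X system is with xrandr; a sketch (HDMI-1 is an example output name, check what xrandr lists):

# list outputs and supported modes, then try specific modes
xrandr
xrandr --output HDMI-1 --mode 1920x1080
xrandr --output HDMI-1 --mode 3840x2160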

The issues with HDMI monitor support greatly limit the immediate potential for using this as a workstation. It doesn’t make it impossible but would be fiddly at best. It’s quite likely that a future OS update will fix this. But at the moment it’s best used as a server.

The LicheePi has a custom Linux distribution based on Ubuntu so you want to put something like the following in /etc/network/interfaces to make it automatically connect to the ethernet when plugged in:

auto end0
iface end0 inet dhcp

Then to get sshd to start you have to run the following commands to generate ssh host keys that aren’t zero bytes long:

rm /etc/ssh/ssh_host_*
systemctl restart ssh.service
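
If the restart doesn’t regenerate the keys on a given image (I haven’t verified exactly how this distribution recreates them), ssh-keygen can create any missing host keys directly:

# generate any host keys that are missing (standard OpenSSH behaviour)
ssh-keygen -A
systemctl restart ssh.service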

It appears to have wifi hardware but the OS doesn’t recognise it. This isn’t a priority for me as I mostly want to use it as a server.

Performance

For the first test of performance I created a 100MB file from /dev/urandom and then tried compressing it on various systems. With zstd -9 it took 16.893 user seconds on the LicheePi4A, 0.428s on my Thinkpad X1 Carbon Gen5 with an i5-6300U CPU (Debian/Unstable), 1.288s on my E5-2696 v3 workstation (Debian/Bookworm), 0.467s on the E5-2696 v3 running Debian/Unstable, 2.067s on an E3-1271 v3 server, and 7.179s on the E3-1271 v3 system emulating a RISC-V system via QEMU running Debian/Unstable.
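
This sort of test is easy to reproduce with something like the following (the file name is arbitrary):

# create 100MB of random data and time the compression
dd if=/dev/urandom of=test.dat bs=1M count=100
time zstd -9 test.dat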

It’s very impressive that the QEMU emulation is fast enough that emulating a different CPU architecture is only 3.5* slower for this test (or maybe 10* slower if it was running Debian/Unstable on the AMD64 code)! The emulated RISC-V is also more than twice as fast as real RISC-V hardware and probably of comparable speed to real RISC-V hardware when running the same versions (and might be slightly slower if running the same version of zstd) which is a tribute to the quality of emulation.

One performance issue that most people don’t notice is the time taken to negotiate ssh sessions. It’s usually not noticed because the common CPUs have got faster at about the same rate as the algorithms for encryption and authentication have become more complex. On my i5-6300U laptop it takes 0m0.384s to run “ssh -i ~/.ssh/id_ed25519 localhost id” with the below server settings (taken from advice on ssh-audit.com [3] for a secure ssh configuration). On the E3-1271 v3 server it is 0.336s, on the QEMU system it is 28.022s, and on the LicheePi it is 0.592s. By this metric the LicheePi is about 80% slower than decent x86 systems and the QEMU emulation of RISC-V is 73* slower than the x86 system it runs on. Does crypto depend on instructions that are difficult to emulate?

HostKey /etc/ssh/ssh_host_ed25519_key
KexAlgorithms -ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group14-sha256
MACs -umac-64-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1

I haven’t yet tested the performance of Ethernet (what routing speed can you get through the 2 Gigabit ports?), eMMC storage, and USB. At the moment I’ve been focused on using RISC-V as a test and development platform. My conclusion is that I’m glad I don’t plan to compile many kernels or anything large like LibreOffice, but for the typical development work that I do it will be quite adequate.

The speed of Chromium seems adequate in basic tests, but the video output hasn’t worked reliably enough to do advanced tests.

Hardware Features

Having two Gigabit Ethernet ports, 4 USB-3 ports, and Wifi on board gives some great options for using this as a router. It’s disappointing that they didn’t go with 2.5Gbit as everyone seems to be doing that nowadays but Gigabit is enough for most things.

Having only a single HDMI port and not supporting USB-C docks (the USB-C port appears to be power only) limits what can be done for workstation use and for controlling displays. I know of people using small ARM computers attached to the back of large TVs for advertising purposes, and this device isn’t going to be a great option for that sort of use.

The CPU and RAM apparently use a lot of power (which is relative – the entire system draws up to 2A at 5V so the CPU would be something below 5W). To get this working a cooling fan has to be stuck to the CPU and RAM chips via a layer of thermal stuff that resembles a fine sheet of blu-tack in both color and stickiness. I am disappointed that there isn’t a more solid form of construction; to mount this on a wall or ceiling some extra hardware would be needed to secure it. Also if they just had a really big copper heatsink I think that would be better. 80386 CPUs with similar TDP were able to run without a fan.

I wonder how things would work with all USB ports in use. It’s expected that a USB port can supply a minimum of 2.5W which means that all the ports could require 10W if they were active. Presumably something significantly less than 5W is available for the USB ports.

Other Devices

Sipeed has a range of other devices in the works. They currently sell the LicheeCluster4A which supports 7 compute modules for a cluster in a box. This has some interesting potential for testing and demonstrating cluster software but you could probably buy an AMD64 system with more compute power for less money. The Lichee Console 4A is a tiny laptop which could be useful for people who like the 7″ laptop form factor; unfortunately it only has a 1280*800 display – if it had the same resolution display as a typical 7″ phone I would have bought one.

The next device that appeals to me is the soon to be released Lichee Pad 4A which is a 10.1″ tablet with 1920*1200 display, Wifi6, Bluetooth 5.4, and 16G of RAM. It also has 1 USB-C connection, 2*USB-3 sockets, and support for an external card with 2*Gigabit ethernet. It’s a tablet as a “laptop without keyboard” instead of the more common “larger phone” design model.

They are also about to release the LicheePadMax4A which is similar to the other tablet but with a 14″ 2240*1400 display and which ships with a keyboard to make it essentially a laptop with detachable keyboard.

Conclusion

At this time I wouldn’t recommend that this device be used as a workstation or laptop, although the people who want to do such things will probably do it anyway regardless of my recommendations. I think it will be very useful as a test system for RISC-V development. I have some friends who are interested in this sort of thing and I can give them VMs.

It is a bit expensive. The Sipeed web site boasts about the LicheePi4 being faster than the RaspberryPi4, but it’s not a lot faster and the RaspberryPi4 is much cheaper ($127 or $129 for one with 8G of RAM). The RaspberryPi4 has two HDMI ports but a limit of 8G of RAM while the LicheePi has up to 16G of RAM and two Gigabit Ethernet ports but only a single HDMI port. It seems that the RaspberryPi4 might win if you want a cheap low power desktop system.

At this time I think the reason for this device is testing out RISC-V as an alternative to the AMD64 and ARM64 architectures. An open CPU architecture goes well with free software, but it isn’t just people who are into FOSS who are testing such things. I know some corporations are trying out RISC-V as a way of getting other options for embedded systems that don’t involve paying monopolists.

The Lichee Console 4A is probably a usable tiny laptop if the resolution is sufficient for your needs. As an aside I predict that the tiny laptop or pocket computer segment will take off in the near future. There are some AMD64 systems the size of a phone but thicker that run Windows and go for reasonable prices on AliExpress. Hopefully in the near future this device will have better video drivers and be usable as a small and quiet workstation.

I won’t rule out the possibility of making this my main workstation in the not too distant future; all it needs is reliable 4K display and the ability to decode 4K video. Its performance for web browsing and as an ssh client seems adequate, and that’s what matters for my workstation use. But for the moment it’s just for server use.

SAS vs SATA and Recovery

SAS and SATA are electrically compatible to a degree that allows connecting a SATA storage device to a SAS controller. The SAS controller understands the SATA protocol so this works. A SAS device can’t be physically connected to a SATA controller and if you did manage to connect it then it wouldn’t work.

Some SAS RAID controllers don’t permit mixing SAS and SATA devices in the same array, this is a software issue and could be changed. I know that the PERC controllers used by Dell (at least the older versions) do this and it might affect many/most MegaRAID controllers (which is what PERC is).

If you have a hardware RAID array of SAS disks and one fails then you need a spare SAS disk and as the local computer store won’t have any you need some on hand.

The Linux kernel has support for the MegaRAID/PERC superblocks so for at least some of the RAID types supported by MegaRAID/PERC you can just connect the disks to a Linux system and have it work (I’ve only tested on JBOD AKA a single-disk RAID-0). So if a server from Dell or IBM or any other company that uses MegaRAID fails, you can probably just put the disks into a non-RAID SAS controller and have them work. As Linux doesn’t care about the difference between SAS and SATA at the RAID level you could then add a SATA disk to an array of SAS disks. If you want to move an array from a dead Dell to a working IBM server or the other way around then you need it to be all SATA or all SAS. You can use a Linux system to mount an array used by Windows or any other OS and then migrate the data to a different array.
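
As a sketch of how to inspect such disks on a plain Linux system (the device name is an example, and whether the metadata is recognised depends on the controller generation):

# look for RAID metadata on a disk from a hardware RAID controller
mdadm --examine /dev/sdb
# try to assemble any arrays that were found
mdadm --assemble --scan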

If you have an old array of SAS disks and one fails then it might be a reasonable option to just migrate the data to a new array of SATA SSDs. EG if you had 6*600G SAS disks you could move to 2*4TB SATA SSDs and get more storage, much higher performance, less power use, and less noise for a cost of $800 or so (you can spend more to get better performance) and some migration time.

Having a spare SAS controller for data recovery is convenient. Having a spare SAS disk for any RAID-5/RAID-6 is a good thing. Having lots of spare SAS disks probably isn’t useful as migrating to SATA is a better choice. SATA SSDs are bigger and faster than most SAS disks that are in production. I’m sure that someone uses SAS SSDs but I haven’t yet seen them in production; if you have a SAS system and need the performance that SSDs can give then a new server with U.2 (the SAS equivalent of NVMe) is the way to go. SATA hard drives are also the solution for seriously large storage; 16TB SATA hard drives are cheap and work in all the 3.5″ SAS systems.

It’s hard to sell old SAS disks as there isn’t much use for them.

Links December 2023

David Brin wrote an insightful blog post about the latest round of UFO delusion [1]. There aren’t a heap of scientists secretly working on UFOs.

David Brin wrote an informative and insightful blog post about rich doomsday preppers who want to destroy democracy [2].

Cory Doctorow wrote an interesting article about how ChatGPT helps people write letters and how that decreases the value of the letter [3]. What can we do to show that letters mean something? Hand deliver them? Pay someone to hand deliver them? Cory concentrates on legal letters and petitions but this can apply to other things too.

David Brin wrote an informative blog post about billionaires prepping for disaster – and causing the disaster [4].

David Brin wrote an insightful Wired article about ways of dealing with potential rogue AIs [5].

David Brin has an interesting take on government funded science [6].

Bruce Schneier wrote an insightful article about AI Risks which is worth reading [7].

Ximion wrote a great blog post about how to use AppStream metadata to indicate what type of hardware/environment is required to use an app [8]. This is great for the recent use of Debian on phones and can provide real benefits for more traditional uses (like all those servers that accidentally got LibreOffice etc installed). Also for Convergence it will be good to have the app launcher take note of this; when your phone isn’t connected to a dock there’s no point offering to launch apps that require a full desktop screen.

Russ Albery wrote an interesting summary of the book Going Infinite about the Sam Bankman-Fried FTX fiasco [9]. That summary really makes Sam sound Autistic.

Cory Doctorow wrote an insightful article Microincentives and Enshittification explaining why Google search has to suck [10].

Charles Stross posted the text of a lecture he gave titled “We’re Sorry We Created the Torment Nexus” [11] about sci-fi ideas that shouldn’t be implemented.

The Daily WTF has many stories of corporate computer stupidity, but The White Appliphant is one of the most epic [12].

The Verge has an informative article on new laws in the US and the EU to give a “right to repair” and how this explains the sudden change to 7 year support for Pixel phones [13].

Abuse and Free Software

People in positions of power can get away with mistreating other people. For any organisation to operate effectively there have to be mechanisms to address bad behaviour, both to help the organisation achieve its goals and to protect the people who work for it.

When an organisation operates in the public interest there is a greater reason to try to prevent bad behaviour as hurting people is not in the public interest.

There are many forms of power, in the free software community a reputation for doing good technical work or work related to supporting software development gives some power and influence. We have seen examples of technical contributions used to excuse mistreatment of other people.

The latest example of using a professional reputation to cover for abuse is Eben Moglen, who has done some good legal work in the past while also treating members of the community badly (as documented by Matthew Garrett) [1]. Matthew has also documented how since 2016 Eben has not been doing good work for the free software community [2]. When news comes out about people who did good work while abusing other people they are usually defended with claims such as “we can’t lose the great contributions of this one person so it’s worth losing the contributions of everyone who can’t work with them”, but in such situations it’s very common to discover that they haven’t been doing great work. This might be partly because abusive people are better at self-promotion than at actually doing good work, and partly because people who are afraid to speak out while a person is doing good work may feel ready to go public once that person’s work (their main defence) declines.

Bradley Kuhn’s article about this situation is worth reading [3].

I don’t have as much knowledge of the people involved in these disputes as Matthew, but I know enough about what is happening to be confident that Matthew’s summary is accurate.

Fat Finger Shell

I’ve been trying out the Fat Finger Shell which is a terminal emulator for Linux on touch screen devices where the keyboard is overlaid on the terminal output. This means that instead of having a tiny keyboard and a tiny terminal output you have the full screen for both. There is a YouTube video showing how the Fat Finger Shell works [1].

Here is a link to the Github page [2], which hasn’t changed much in the last 11 years.

Currently the shell is hard-coded to a 80*24 terminal and a 640*480 screen which doesn’t match any modern hardware. Some parts of this are easy to change but then there’s the comment “I ran once XGetGeometry and I am harcoded (bad) values for x, y, etc..” which is followed by some magic numbers that are not easy to change which are hacked into the source of xvt.

The configuration of this is almost great. It has a plain text file where each line has 4 numbers representing the X and Y coordinates of opposite corners of a rectangle and additional information on what the key is, which is relatively easy to edit. But then it has an image which has to match that; the obvious improvement would be to not have an image but to just display rectangles for each pair of corner coordinates and display the glyph of the character in question inside it.
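
To illustrate the described format, here are a couple of hypothetical lines (the exact field layout should be checked against the file shipped with the source):

# x1 y1 x2 y2 key (hypothetical values)
0   416 64  480 q
64  416 128 480 w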

I think there is a real need for a terminal like this for use on devices like the PinePhonePro, it won’t be to everyone’s taste but the people who like it will really like it. The features that such a shell needs for modern use are being based on Wayland, supporting a variety of screen resolutions and particularly the commonly used ones like 720*1440 and 1920*1080 (with terminal resolution matching the combination of screen resolution and font), and having code derived from a newer terminal emulator. As a final note it would be good for such a terminal to also take input from a regular keyboard so when you plug your Linux phone into a dock you don’t need to close your existing terminal sessions.

There is a Debian RFP/ITP bug for this [3] which I think should be closed due to nothing happening for 11 years and the fact that so much work is required to make this usable.

The current Fat Finger Shell code is a good demonstration of the concept, but I don’t think it makes sense to move on with this code base. One of the many possible ways of addressing this with modern graphics technology might be to have a semi transparent window overlaying the screen and generating virtual keyboard events for whichever window happens to be below it. Then instead of being limited to one terminal program by the choice of input method, that input would work for any terminal the user chooses as well as any other text based program (email, IM, etc).