Links August 2025

Dimitri John Ledkov wrote an informative blog post about self encrypting disks and UEFI with Linux [1].

This Coffeezilla video highlights an interesting scam: run a brokerage for day traders but don’t execute the trades, just act as the counterparty for every trade and rely on the day traders losing [2].

First Sight is a Dust SciFi short film that’s worthy of Black Mirror [3].

Apple published an interesting article about the operation of the Secure Enclave in iPhone, Mac, and all other significant Apple hardware [4].

Reese Waters made an amusing video about a conservative catfight that’s going on; it’s nice to see horrible people attacking each other [5].

Chengyuan Ma wrote an informative summary of the history of the Great Firewall of China [6]. The V2Ray proxy has a nice feature set!

Interesting article about the JP Morgan Workplace Activity Data Utility (WADU) AI spyware system [7]. Corporate work is going to become even more horrible.

Veritasium has a great video about the history of vulcanised rubber and the potential for significant problems if rubber leaf blight spreads to other countries [8].

Jalopnik has an interesting article about how Reagan killed the safest car ever built [9].

David Brin wrote an interesting article “Tolkien Enemy of Progress” about fiction that justifies autocratic rule [10].

ZRAM and VMs

I’ve just started using zram for swap on VMs. The use of compression for swap in Linux isn’t new, it’s been in the kernel since version 3.2 (released in 2012), but until recent years I hadn’t used it. When I started using Mobian (the Debian distribution for phones) zram was part of the default setup. It basically works and I never needed to bother with it, which is exactly what you want from such a technology. After seeing its benefits in Mobian I started using it on my laptops, where it also worked well.

Benefits of ZRAM

ZRAM means that instead of paging data out to storage it is compressed into another part of RAM. That means no access to storage, which is a significant benefit if storage is slow (typical for phones) or if storage wearing out is a problem.
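As a minimal sketch of setting this up by hand (assuming the zram kernel module and the util-linux zramctl tool are available; the size, algorithm, and device name below are examples and these commands need root):

```shell
# Load the zram module and allocate a compressed RAM block device.
modprobe zram
# Create a 4G device using the zstd algorithm; zramctl prints the
# device it allocated, typically /dev/zram0.
zramctl --find --size 4G --algorithm zstd
# Format it as swap and enable it with a higher priority than any
# disk based swap so it is used first.
mkswap /dev/zram0
swapon --priority 100 /dev/zram0
```

Debian’s zram-tools package can do the equivalent of this automatically at boot.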

For servers you typically have SSDs that are fast and last through significant write volumes. For example, the Intel 120G 2.5″ DC grade SATA SSDs referenced in my blog post about swap (not) breaking SSDs [1] are still running well in my parents’ PC; they have outlasted all the other hardware connected to them, and 120G isn’t usable for anything more demanding than what my parents do nowadays. For most servers ZRAM isn’t a good choice as you can just keep doing IO to the SSDs for years.

A server that runs multiple VMs is a special case because you want to isolate the VMs from each other. Quotas for storage IO aren’t easy to configure in Linux, while limiting the number of CPU cores is very easy. If a system or VM using ZRAM for swap starts paging excessively the bottleneck will be CPU; that probably isn’t great on a phone with a slow CPU, but on a server class CPU it will be less of a limit. Whether compression is slower or faster than SSD IO is a complex issue, but it will definitely be a limit only for that VM. When I set up a VM server I want to have some confidence that a DoS attack or configuration error on one VM isn’t going to destroy the performance of the other VMs. If the VM server has 4 cores (the smallest VM server I run) and no VM has more than 2 cores then I know that the system can still run adequately even if half the CPU performance is being wasted.

Some servers I run have storage limits that make saving the disk space for swap useful. For servers I run in Hetzner (currently only one, but I have run up to 6 at various times in the past) the storage is often limited; Hetzner seems to typically provide storage that is 8* the size of RAM. If you have many VMs, each configured with the swap it might need in the expectation that usually at most one of them will actually be swapping, then it can make a real difference to usable storage. 5% of storage used for swap files isn’t uncommon or unreasonable.
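As a hypothetical worked example of the saving (the RAM size, VM count, and swap sizes are made up, the 8* storage ratio is from above):

```shell
# Hypothetical server in the Hetzner style: 64G of RAM, 8*64=512G
# of storage, 8 VMs each configured with a 4G swap file.
awk 'BEGIN {
  storage = 64 * 8
  swap = 8 * 4
  printf "%dG of %dG storage = %.2f%% used for swap\n", swap, storage, 100 * swap / storage
}'
```

Using zram instead of swap files gets that storage back for the VMs.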

Big Servers

I am still considering the implications of zram on larger systems. If I have a ML server with 512G of RAM would it make sense to use it? It seems plausible that a system might need 550G of RAM and zram could make the difference between jobs being killed with OOM and the jobs just completing. The CPU overhead of compression shouldn’t be an issue as when you have dozens of cores in the system having one or two used for compression is no big deal. If a system is doing strictly ML work there will be a lot of data that can’t be compressed, so the question is how much of the memory is raw input data and the weights used for calculations and how much is arrays with zeros and other things that are easy to compress.

With a big server nothing less than 32G of swap will make much difference to the way things work, and if you have 32G of data being actively paged then even the fastest NVMe devices probably won’t give usable performance. As zram uses one compression “stream” per CPU core, a system with 44 cores gets 44 compression streams, which should handle greater throughput. I’ll write another blog post if I get a chance to test this.
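On a running system the compression ratio actually achieved can be checked from /sys/block/zram0/mm_stat, where the first field is the original data size and the second is the compressed size, both in bytes (the numbers piped in below are made up for illustration; on a real system you would read the file directly):

```shell
# mm_stat starts with the fields orig_data_size and compr_data_size.
echo "1073741824 268435456 301989888 0 301989888 0 0 0" | \
  awk '{ printf "compression ratio %.1f:1\n", $1 / $2 }'
```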

Dell T320 H310 RAID and IT Mode

The Problem

Just over 2 years ago my Dell T320 server had a motherboard failure [1]. I recently bought another T320 that had been gutted (no drives, PSUs, or RAM) and moved the parts from my old one into it.

I installed Debian but the resulting installation wouldn’t boot; I tried installing in both UEFI and BIOS modes with the same result. Then I realised that the disks I had installed were visible even though I hadn’t gone through the RAID configuration (I usually make a separate RAID-0 for each disk to work best with BTRFS or ZFS). I tried changing the BIOS setting for SATA disks between “RAID” and “AHCI” modes, which didn’t change anything, and realised that the BIOS setting in question probably applies only to the SATA connectors on the motherboard and that the RAID card was in “IT” mode, which means that each disk is presented separately.

If you are using ZFS or BTRFS you don’t want RAID-1, RAID-5, or RAID-6 on the hardware RAID controller; if there are different versions of the data on disks in the stripe then you want the filesystem to be able to work out which one is correct. To use “IT” mode you have to flash a different, unsupported firmware onto the RAID controller, and then you either have to go to some extra effort to make it bootable or boot from a different device.

The Root Causes

Dell has no reason to support unusual firmware on their RAID controllers. Installing different firmware on a device that is designed for high availability is going to have some probability of data loss and perhaps more importantly for Dell some probability of customers returning hardware during the support period and acting innocent about why it doesn’t work. Dell has a great financial incentive to make it difficult to install Dell firmware on LSI cards from other vendors which have equivalent hardware as they don’t want customers to get all the benefits of iDRAC integration etc without paying the Dell price premium.

All the other vendors have similar financial incentives so there is no official documentation or support on converting between different firmware images. Dell’s support for upgrading the Dell version is pretty good, but it aborts if it sees something different.

The Attempts

I tried following the instructions in this document to flash back to Dell firmware [2]. The document is about the H310 RAID card in my Dell T320, AKA a “LSI SAS 9211-8i”. The sas2flash.efi program didn’t seem to do anything; it returned immediately and didn’t give an error message.

This page gives a start on how to get inside the Dell firmware package, but the procedure doesn’t work [3]. It doesn’t cover the case where sasdupie aborts with an error because it detects the current version as “00.00.00.00”, which isn’t something the upgrade program is prepared to upgrade from. But it’s a place to start for anyone who wants to try harder at this.

This forum post has some interesting information, I gave up before trying it, but it may be useful for someone else [4].

The Solution

Dell tower servers have, as a standard feature, an internal USB port for a boot device. So I created a boot image on a spare USB stick installed in that port; it loads the kernel and then mounts the root filesystem from a SATA hard drive. Once I got that working everything was fine. The Debian/Trixie installer would probably have let me put the EFI partition on the internal USB stick as part of the install if I had known what was going to happen.
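For reference, a sketch of the kind of setup I mean, assuming the internal USB stick appears as /dev/sdX and the installed system is already mounted with an empty /boot/efi (the device names are examples and this needs adjusting and root access on any real system):

```shell
# Create a single EFI system partition covering the start of the USB stick.
parted /dev/sdX --script mklabel gpt mkpart ESP fat32 1MiB 512MiB set 1 esp on
mkfs.vfat /dev/sdX1
# Mount it as the EFI system partition and install GRUB there.
# --removable puts GRUB at the fallback path EFI/BOOT/BOOTX64.EFI so
# no NVRAM boot entry is needed.
mount /dev/sdX1 /boot/efi
grub-install --target=x86_64-efi --efi-directory=/boot/efi --removable
update-grub
```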

The system is now fully working and ready to sell. Now I just need to find someone who wants “IT” mode on the RAID controller and hopefully is willing to pay extra for it.

Whatever I sell the system for it seems unlikely to cover the hours I spent working on this. But I learned some interesting things about RAID firmware and hopefully this blog post will be useful to other people, even if only to discourage them from trying to change firmware.

Colmi P80 SmartWatch First Look

I just bought a Colmi P80 SmartWatch from Aliexpress for $26.11 based on this blog post reviewing it [1]. The main thing I was after was a larger, higher resolution screen, because my vision has apparently deteriorated during the time I’ve been wearing a Pinetime [2] and I now can’t read messages on it without my reading glasses.

The watch hardware is quite OK. It has a larger and higher resolution screen and looks good. The review said that GadgetBridge (the FOSS SmartWatch software in the F-Droid repository) connected when told that the watch was a P79, and in a recent release got support for sending notifications. In my tests with GadgetBridge it doesn’t set the time, can’t seem to send notifications, can’t read the battery level, and seems to do nothing other than say “connected”. So I installed the proprietary app. As an aside, it’s a neat feature to have the watch display a QR code for installing the app; maybe InfiniTime should have a similar QR code for getting GadgetBridge from the F-Droid repository.

The proprietary app is quite OK for basic functionality, and a less technical relative who is using one is happy. For my use the proprietary app is utterly broken. One of my main uses is getting notifications of Jabber messages from the Conversations app (which is in F-Droid). I have Conversations configured to always show a notification of how many accounts are connected, which prevents Android from killing it. With GadgetBridge that notification isn’t reported but the actual message contents are (I don’t know how or why that happens), while with the Colmi app I get repeated notification messages on the watch about the accounts being connected. Also the proprietary app has on/off settings for messages to go to the watch for a hard coded list of 16 common apps, plus an “Others” setting for the rest. GadgetBridge lists the applications that are actually installed, so I can configure it not to notify me about Reddit, connecting to my car audio, and many other less useful notifications. I prefer the GadgetBridge option of an allow-list of apps that I want notifications from, but it also has a configuration option to use a deny list, so you could have everything other than the apps that give lots of low value notifications. The proprietary app has a wide range of watch faces that it can send to the watch, which is a nice feature that would be good to have in InfiniTime and GadgetBridge.

The P80 doesn’t display a code on screen when it is paired via Bluetooth so if you have multiple smart watches then you are at risk of connecting to the wrong one and there doesn’t seem to be anything stopping a hostile party from connecting to one. Note that hostile parties are not restricted to the normal maximum transmission power and can use a high gain antenna for reception so they can connect from longer distances than normal Bluetooth devices.

Conclusion

The Colmi P80 hardware is quite decent, the only downside is that the vibration has an annoying “tinny” feel. Strangely it has a rotation sensor for a rotating button (similar to analogue watches) but doesn’t seem to have a use for it as the touch screen does everything.

The watch firmware is quite OK (not great but adequate), but the lack of a pairing password is a significant omission.

The Colmi Android app has some serious issues that make it unusable for what I do and the release version of GadgetBridge doesn’t work with it, so I have gone back to the PineTime for actual use.

The PineTime cost twice as much and has fewer features (no sensor for blood O2 level), but seems more solidly constructed.

I plan to continue using the P80 with GadgetBridge and Debian based SmartWatch software to help develop the Debian Mobile project. I expect that at some future time GadgetBridge and the programs written for non-Android Linux distributions will support the P80 and I will transition to it. I am confident that it will work well for me at some future time and that I will get $26.11 of value from it. At this time I recommend that people who do the sort of things I do get one of each and that less technical people get a Colmi P80.

Server CPU Sockets

I am always looking for ways of increasing the compute power I have at a reasonable price. I am very happy with my HP z840 dual CPU workstation [1] that I’m using as a server and my HP z640 single CPU workstation [2]. Both of them were available second hand at quite reasonable prices and could be cheaply upgraded to faster CPUs. But if I can get something a lot faster for a reasonable price then I’ll definitely get it.

Socket LGA2011-v3

The home server and home workstation I currently use have socket LGA2011-v3 [3], which supports the E5-2699A v4 CPU with a Passmark rating of 26,939 [4]. That score is quite decent; you can get CPUs using DDR4 RAM that go up to almost double that, but it’s a reasonable speed and it works in systems that are readily available at low prices. The z640 is regularly on sale for less than $400AU and the z840 is occasionally below $600.

The Dell PowerEdge T430 is an OK dual-CPU tower server using the same socket. One thing that’s not well known is that it is limited to something like 135W per CPU when run with two CPUs. So it will work correctly with a single E5-2697A v4 with a 145W TDP (I’ve tested that) but will refuse to boot with two of them. In my test system I tried replacing the 495W PSUs with 750W PSUs and it made no difference; the limit is in the motherboard. With only a single CPU you get only 8 of the 12 DIMM sockets and not all PCIe slots work. There are many second hand T430s on sale with only a single CPU, presumably because the T330 sucks. My T430 works fine with a pair of E5-2683 v4 CPUs.

The Dell PowerEdge T630 also takes the same CPUs but supports higher TDP than the T430. They also support 18*3.5″ disks or 32*2.5″ but they are noisy. I wouldn’t buy one for home use.

AMD

There are some nice AMD CPUs manufactured around the same time and AMD has done a better job of making multiple CPUs that fit the same socket. The reason I don’t generally use AMD CPUs is that they are used in a minority of the server grade systems so as I want ECC RAM and other server features I generally can’t find AMD systems at a reasonable price on ebay etc. There are people who really want second hand server grade systems with AMD CPUs and outbid me. This is probably a region dependent issue, maybe if I was buying in the US I could get some nice workstations with AMD CPUs at low prices.

Socket LGA1151

Socket LGA1151 [5] is used in the Dell PowerEdge T330. It only supports 2 memory channels and 4 DIMMs compared to the 4 channels and 8 DIMMs of LGA2011, and it has a limit of 64G total RAM for most systems and 128G for some. By today’s standards even 128G is a real limit for server use. DDR4 RDIMMs are about $1/GB, and when spending $600+ on a system and CPU upgrade you wouldn’t want to spend less than $130 on RAM. The CPUs with decent performance for that socket, like the i9-9900K, aren’t supported by the T330 (possibly they don’t support ECC RAM). The CPUs that Dell supports perform very poorly. I suspect that Dell deliberately nerfed the T330 to drive sales of the T430.

The Lenovo P330 uses socket LGA1151-2 but has the same issues of taking slow CPUs in addition to using UDIMMs which are significantly more expensive on the second hand market.

Socket LGA2066

The next Intel socket after LGA2011-v3 is LGA2066 [6]. It is used in the Dell Precision 5820 and HP Z4 G4. It takes an i9-10980XE for 32,404 on Passmark or a W-2295 for 30,906. The variant of the Dell 5820 that supports the i9 CPUs doesn’t seem to support ECC RAM, so it’s not a proper workstation. The single thread performance difference between the W-2295 and the E5-2699A v4 is 2,640 to 2,055, a 28% increase for the W-2295. There are “High Frequency Optimized” CPUs for socket LGA2011-v3, but they all deliver less than 2,300 on the Passmark single-thread test, which is much less than what you can get from socket LGA2066. The W-2295 costs $1000 on ebay while the E5-2699A v4 is readily available for under $400; a few months ago I got a matched pair for a bit over $400. Note that getting a matched pair of Intel CPUs is a major pain [7].

Comparing sockets LGA2011-v3 and LGA2066 for a single-CPU system is a $300 system (HP z640) + $400 CPU (E5-2699A v4) vs a $500 system (Dell Precision 5820) + $1000 CPU (W-2295), so more than twice the price for a 30% performance benefit on some tasks. LGA2011-v3 and USB-C both launched in 2014, so LGA2011-v3 systems don’t have USB-C sockets, but a $20 USB-C PCIe card doesn’t change the economics.
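The price/performance comparison can be made explicit (the prices are the ebay prices mentioned above and the scores are the Passmark multi-thread results in the benchmark table at the end of this post):

```shell
# Passmark multi-thread score per dollar for system + CPU.
awk 'BEGIN {
  printf "LGA2011-v3 (z640 + E5-2699A v4): %.1f marks/$\n", 26939 / (300 + 400)
  printf "LGA2066 (5820 + W-2295):         %.1f marks/$\n", 30924 / (500 + 1000)
}'
```

So the older socket delivers nearly twice the multithreaded performance per dollar.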

Socket LGA3647

Socket LGA3647 [8] is used in the Dell PowerEdge T440. It supports 6 channels of DDR4 RAM, which is a very nice feature for bigger systems. According to one Dell web page the best CPU Dell officially supports for this is the Xeon Gold 5120, which gives performance only slightly better than the E5-2683 v4 (which has a low enough TDP that a T430 can run two of them). But according to another Dell web page they support 16 core CPUs, which means performance better than a T430 but less than a HP z840. The T440 doesn’t seem like a great system; if I got one cheap I could find a use for it, but I wouldn’t pay the prices they go for on ebay. The Dell PowerEdge T640 has the same socket and is described as supporting up to 28 core CPUs, but I anticipate that it would be as loud as the T630 and it’s also expensive.

This socket is also used in the HP Z6 G4, which takes a W-3265 or Xeon Gold 6258R CPU for the high end options. The HP Z6 G4 systems on ebay are all above $1500 and the Xeon Gold 6258R is also over $1000, so while the Xeon Gold 6258R in a Z6 G4 would give 50% better performance on multithreaded operations than the systems I currently have, it costs almost 3* as much. It has 6 DIMM sockets, which is a nice improvement over the 4 in the z640. The Z6 G4 takes a maximum of 768G of RAM with the optional extra CPU board (which is very expensive both new and on ebay), compared to my z840 which has 512G with half its DIMM slots empty. The HP Z8 G4 has the same socket and takes up to 3TB of RAM if used with CPUs that support it (most CPUs only support 768G and you need an “M” variant for more). The higher performance CPUs supported in the Z6 G4 and Z8 G4 don’t have enough entries in the Passmark database to be accurate, but going from 22 cores in the E5-2699A v4 to 28 in the Xeon Platinum 8180 when using the same RAM technology doesn’t seem like a huge benefit. The Z6 and Z8 G4 systems run DDR4 RAM at up to 2666 speed while the z640 and z840 only go up to 2400; a 10% increase in RAM speed is nice but not a huge difference.

I don’t think that any socket LGA3647 systems will ever be ones I want to buy. They don’t offer much over LGA2011-v3 but are in newer and fancier systems that will go for significantly higher prices.

DDR5

I think that DDR5 systems will be my next step up in tower server and workstation performance after the socket LGA2011-v3 systems. I don’t think anything less will offer me enough of a benefit to justify a change. I also don’t think that they will be in the price range I am willing to pay until well after DDR6 is released, some people are hoping for DDR6 to be released late this year but next year seems more likely. So maybe in 2027 there will be some nice DDR5 systems going cheap.

CPU Benchmark Results

Here are the benchmark results of CPUs I mentioned in this post according to passmark.com [9]. I didn’t reference results of CPUs that only had 1 or 2 results posted as they aren’t likely to be accurate.

CPU               Single Thread   Multi Thread   TDP
E5-2683 v4        1,713           17,591         120W
Xeon Gold 5120    1,755           18,251         105W
i9-9900K          2,919           18,152         95W
E5-2697A v4       2,106           21,610         145W
E5-2699A v4       2,055           26,939         145W
W-3265            2,572           30,105         205W
W-2295            2,642           30,924         165W
i9-10980XE        2,662           32,397         165W
Xeon Gold 6258R   2,080           40,252         205W

Links July 2025

Louis Rossman made an informative YouTube video about right to repair and the US military [1]. This is really important as it helps promote free software and open standards.

The ACM has an insightful article about hidden controls [2]. We need EU regulations about hidden controls in safety critical systems like cars.

This Daily WTF article has some interesting security implications for Windows [3].

Earth.com has an interesting article about the “rubber hand illusion” and how it works on octopuses [4]. For a long time I have been opposed to eating octopus because I think they are too intelligent.

The Washington Post has an insightful article about the future of spies when everything is tracked by technology [5].

Micah Lee wrote an informative guide to using Signal groups for activism [6].

David Brin wrote an insightful blog post about the phases of the ongoing US civil war [7].

Christian Kastner wrote an interesting blog post about using Glibc hardware capabilities to use different builds of a shared library for a range of CPU features [8].

David Brin wrote an insightful and interesting blog post comparing President Carter with the criminals in the Republican party [9].

Annoying Wrongness on TV

One thing that annoys me on TV shows and movies is getting the details wrong. Yes it’s fiction and yes some things can’t be done correctly and in some situations correctly portraying things goes against the plot. But otherwise I think they should try to make it accurate.

I was just watching The Americans (a generally good show that I recommend) and in Season 4 Episode 9 there’s a close up of a glass of wine which clearly shows that the Tears of Wine effect is missing; the liquid in the glass obviously has the surface tension of water, not of wine. When you run a show about spies you have to expect that the core audience will be the type of detail oriented people who notice these things. Having actors not actually drink alcohol on set is standard practice; if they have to do 10 takes of someone drinking a glass of wine then actually drinking real wine would be a problem. But they could substitute real wine for the close up shots, and of course just getting it right the first time is a good option.

Some ridiculous inaccuracies we just have to deal with, like knives making a schwing sound when pulled out of scabbards and “silenced” guns usually still being quite loud (so many people are used to them being wrong). Organisations like the KGB had guns that were actually silent, but they generally looked obviously different from regular guns and had a much lower effective range.

The gold coins shown on TV are another ridiculous thing. The sound of metal hitting something depends on how hard and how dense it is. Surely most people have heard the sounds of dropping steel nuts and ball bearings and of dropping lead sinkers, and know that the sounds of items of similar size and shape differ greatly based on density and hardness. A modern coin made of copper, cupro-nickel (the current “silver” coins), or copper-aluminium (the current “gold” coins) sounds very different from a gold coin when dropped on a bench. For a show like The Witcher it wouldn’t be difficult to make actual gold coins of a similar quality to iron age coin production; any jeweller could make the blanks, and making stamps hard enough to press gold isn’t an engineering challenge (stamping copper coins would be much more difficult). The coins used for the show could be sold to fans afterwards.

Once coins are made they can’t be just heaped up. Even if you are a sorcerer you probably couldn’t fill a barrel a meter high with gold coins and not have it break from the weight and/or have the coins at the bottom cold welded. Gold coins are supposed to have a precise amount of gold and if you pile them up too high then the cold welding process will transfer gold between coins changing the value. If someone was going to have a significant quantity of gold stored then it would be in gold ingots with separators between layers to prevent cold welding.
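A rough calculation supports this; the barrel size and packing fraction below are my guesses, and gold’s density is about 19,300kg per cubic meter:

```shell
# Barrel 1m tall with a 0.3m radius, coins packing to ~60% of the volume.
awk 'BEGIN {
  pi = 3.14159
  volume = pi * 0.3 * 0.3 * 1      # cubic meters
  mass = volume * 0.6 * 19300      # kg of gold
  printf "about %d kg of gold\n", mass
}'
```

Over 3 tonnes, which no wooden barrel is going to hold.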

Movies tend not to show coins close up, I presume that’s because they considered it too difficult to make coins and they just use some random coins from their own country.

Another annoying thing is shows that don’t match up the build dates of objects used. It’s nice when they get it right, like the movie Titanic featuring a M1911 pistol, which is something that a rich person in 1912 would likely have. The series Carnival Row (which I recommend) has weapons that mostly match our WW1 era, and everything that doesn’t involve magic seems legit. One of the worst examples of this is the movie Anna (by Luc Besson, which is mostly a recreation of his film Nikita but in the early 90s and with the KGB). That film features laptops with color screens and USB ports before USB was invented and when color screens weren’t common on laptops. As an aside, military spec laptops tend to have older designs than consumer spec ones.

I’ve mostly given up on hoping that movies will get “hacking” scenes that are any more accurate than knives making a “schwing” sound. But it shouldn’t be that hard for them to find computer gear that was manufactured in the right year to use for the film.

Why can’t they hire experts on technology to check everything?

Bad Product Comparisons and EVs

When companies design products a major concern seems to be what the reviewers will have to say about it. For any product of significant value the users are unable to perform any reasonable test before buying, for a casual user some problems may only be apparent after weeks of use so professional reviews are important to many people. The market apparently doesn’t want reviews of the form “here’s a list of products that are quite similar and all do the job well, you can buy any of them, it’s no big deal” which would be the most technically accurate way of doing it.

So the reviewers compare the products on the criteria that are easiest to measure, which has led to phones being compared by how light and thin they are. I think it’s often the case that users would be better served by thicker, heavier phones with larger batteries, but instead they are being sold phones that have good battery life in a fresh installation but don’t last a day with a full load of apps installed.

The latest issue with bad reviews driving poor product design is electric cars. For a while the advocates of old fashioned cars have touted the range of petrol cars which has become an issue for comparing EVs. I have been driving cars for 35 years and so far I have never driven anywhere that’s out of range of the current electric charging network, even with the range of the LEAF (which is smaller than many other EVs). If I ever felt the need to drive across the Nullarbor Plain then I could rent a car to do that and the costs of such car rental would be small compared to the money I’m saving by driving an EV and also small when compared to the premium I would have to pay for an EV with a larger range.

Some of the recent articles I’ve seen about EVs have covered vehicles with a battery range over 700Km, which is greater than the legal distance a commercial driver can drive without a break. I’ve also seen articles about plans to have a small petrol or Diesel motor in an EV to recharge the battery without directly driving the wheels. A 9KW Diesel motor could on average provide enough electricity to keep a LEAF battery charged, and according to the specs of Diesel generators would take about 55Kg of fuel to provide the charge a LEAF needs to drive 1000Km. The idea of a mostly electric hybrid car that can do 1000Km on one tank of fuel is interesting as a thought experiment but doesn’t seem to have much actual use. Apparently a Chinese company is planning to release a car that can do 1400Km on one tank of fuel using such technology, which is impressive but not particularly useful.
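The fuel figure can be sanity checked; the LEAF consumption and generator fuel consumption numbers below are rough assumptions, not measurements:

```shell
# Assume ~15KWh/100Km for a LEAF and ~0.37Kg of Diesel per KWh
# generated by a small generator.
awk 'BEGIN {
  kwh = 1000 / 100 * 15
  kg = kwh * 0.37
  printf "%dKWh for 1000Km, about %.1fKg of Diesel\n", kwh, kg
}'
```

That lands in the region of the 55Kg mentioned above.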

The next issue of unreasonable competition is in charge speed. Charging a car at 2KW from a regular power socket is a real limit to what you can do with a car. It’s a limit that hasn’t bothered me so far because the most driving I typically do in a week is less than one full charge, so at most I have to charge overnight twice in a week. But if I was going to drive to another city without hiring a car that has better range I’d need a fast charger. Most current models of the Nissan LEAF support charging speeds up to 50KW which means fully charging the battery in under an hour (or slightly over an hour for the long range version). If I was to drive from Melbourne to Canberra in my LEAF I’d have to charge twice which would be an annoyance at those speeds. There are a variety of EVs that can charge at 100KW and some as high as 350KW. 350KW is enough to fully charge the largest EV batteries in half an hour which seems to be as much as anyone would need. But there are apparently plans for 1MW car chargers which would theoretically be able to charge a Hummer (the EV with the largest battery) in 12 minutes. One obvious part of the solution to EV charging times is to not drive a Hummer! Another thing to note is that batteries can’t be charged at a high rate for all charge levels, this is why advertising for fast chargers makes claims like “80% charge in half an hour” which definitely doesn’t mean “100% charge in 37.5 minutes”!
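Those charge time claims are simple arithmetic, ignoring the charge rate taper at high charge levels (the battery sizes below are approximations, not figures I have checked):

```shell
# Minutes to charge a battery of a given size at a given power.
awk 'BEGIN {
  printf "175KWh at 350KW: %.0f minutes\n", 175 / 350 * 60
  printf "200KWh at 1MW:   %.0f minutes\n", 200 / 1000 * 60
}'
```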

There are significant engineering issues with high power applications. A 1MW cable is not just a bigger version of a regular power cable, there are additional safety issues, user training is required and cooling of the connector is probably required. That’s a lot to just get a better number in the table at the end of a review. There is research in progress on the Megawatt Charging System which is designed to charge heavy vehicles (presumably trucks and buses) at up to 3.75MW. Charging a truck at that rate is reasonable as the process of obtaining and maintaining a heavy vehicle license requires a significant amount of effort and some extra training in 3.75MW charging probably doesn’t make much difference.

A final issue with fast charging is the capacity of the grid. A few years ago I attended a lecture by an electrical engineer who works for the Victorian railway system which was very interesting. The Vic rail power setup involved about 100MW of grid connectivity with special contracts with the grid operators due to the fact that 1MW trains suddenly starting and stopping causes engineering problems that aren’t trivial to solve. They were also working on battery packs and super capacitors to deal with regenerative braking and to avoid brownouts in long sections of track. For a medium size petrol station 14 bays for fuelling cars is common. If 6 such petrol stations were replaced with fast charging stations that can charge cars at 1MW each that would draw the same power as the train network for the entire state! There is a need for significant engineering work to allow most cars to be electric no matter how it’s done, but we don’t need to make that worse just for benchmarks.
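The arithmetic behind that comparison, using the bay count from above and assuming the worst case of every bay charging at once:

```shell
# 6 petrol station sized charging sites, 14 bays each, 1MW per bay.
awk 'BEGIN {
  mw = 6 * 14 * 1
  printf "%dMW peak draw vs about 100MW for the Victorian rail system\n", mw
}'
```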

Function Keys

For at least 12 years laptops have defaulted away from the traditional PC 101-key keyboard function key behaviour; instead the function keys control things like volume, with a key labelled Fn to toggle between the two modes. It has been a BIOS option to choose whether traditional function keys or controls for volume etc are the default, and for at least 12 years I've configured all my laptops to have the traditional function keys as the default.

Recently I’ve been working in corporate IT and having exposure to many laptops with the default BIOS settings for those keys to change volume etc and no reasonable option for addressing it. This has made me reconsider the options for configuring these things.

Here’s a page listing the standard uses of function keys [1]. Here is a summary of the relevant part of that page:

  • F1 launches help, which doesn't seem to get much use. The main help option in practice is Google (I anticipate controversy about this and welcome comments) and all the software vendors are investigating LLM options for help, which probably won't involve F1.
  • F2 is for renaming files but doesn’t get much use. Probably most people who use graphical file managers use the right mouse button for it. I use it when sorting a selection of photos.
  • F3 is for launching a search (which is CTRL-F in most programs).
  • ALT-F4 is for closing a window which gets some use, although for me the windows I close are web browsers (via CTRL-W) and terminals (via CTRL-D).
  • F5 is for reloading a page which is used a lot in web browsers.
  • F6 moves the input focus to the URL field of a web browser.
  • F8 is for moving a file which in the degenerate case covers the rename functionality of F2.
  • F11 is for full-screen mode in browsers which is sometimes handy.

The keys F1, F3, F4, F7, F9, F10, and F12 don’t get much use for me and for the people I observe. The F2 and F8 keys aren’t useful in most programs, F6 is only really used in web browsers – but the web browser counts as “most programs” nowadays.

Here’s the description of Thinkpad Fn keys [2]. I use Thinkpads for fun and Dell laptops for work, so it would be nice if they both worked in similar ways but of course they don’t. Dell doesn’t document how their Fn keys are laid out, but the relevant bit is that F1 to F4 are the same as on Thinkpads which is convenient as they are the ones that are likely to be commonly used and needed in a hurry.

I have used the KDE settings on my Thinkpad to map the F1 to F3 keys to their Fn equivalents (F1 for mute-audio, F2 for vol-down, and F3 for vol-up) so I can use them without holding down the Fn key, while other function keys such as F5 and F6 keep their usual GUI functionality. Now I will have to train myself to use F8 in situations where I usually use F2, at least when using a laptop.

The only other Fn combinations I use are F5 and F6 for controlling screen brightness, but that’s not something I use much.

It’s annoying that the laptop manufacturers forced me to this. Having a Fn key to get extra functions and not need 101+ keys on a laptop size device is a reasonable design choice. But they could have done away with the PrintScreen key to make space for something else. Also for Thinkpads a touch pad is something that could obviously be removed to gain some extra space as the Trackpoint does all that’s needed in that regard.

The Fuss About “AI”

There are many negative articles about "AI" (which is not about actual Artificial Intelligence, also known as "AGI"), most of which I think are overblown and often ridiculous.

Resource Usage

Complaints about resource usage are common; training Llama 3.1 could apparently produce as much pollution as "10,000 round trips by car between Los Angeles and New York City". That's not great, but when you compare it to the actual number of people doing such drives in the US and the number of people taking commercial flights on that route it doesn't seem like such a big deal. Apparently commercial passenger jets cause CO2 emissions per passenger about equal to a car carrying 2 people. Why is it relevant whether pollution comes from running servers, driving cars, or steel mills? People in the US make a big deal about not being communist, so why not have a capitalist solution: tax polluters for the damage they do, making it more expensive to do undesirable things, and let the market sort it out?

ML systems are a less bad use of compute resources than Bitcoin, at least ML systems give some useful results while Bitcoin has nothing good going for it.

The Dot-Com Comparison

People often complain about the apparent impossibility of “AI” companies doing what investors think they will do. But this isn’t anything new, that all happened before with the “dot com boom”. I’m not the first person to make this comparison, The Daily WTF (a high quality site about IT mistakes) has an interesting article making this comparison [1]. But my conclusions are quite different.

The result of that was a lot of Internet companies going bankrupt, the investors in those companies losing money, and other companies then buying up their assets and making profitable companies. The cheap Internet we now have was built on hardware from bankrupt companies that was sold for far less than the manufacturing price. That allowed it to scale up from modem speeds to ADSL without users paying enough to cover the purchase of the infrastructure. In the early 2000s I worked for two major Dutch ISPs that went bankrupt (not my fault), and one of them continued operations in an identical manner after the stock price went to zero (I didn't get to witness what happened with the other one). As far as I'm aware random Dutch citizens and residents didn't suffer from this and employees just got jobs elsewhere.

There are good things being done with ML systems and when companies like OpenAI go bankrupt other companies will buy the hardware and do good things.

NVidia isn’t ever going to have the future sales that would justify a market capitalisation of almost 4 Trillion US dollars. This market cap can support paying for new research and purchasing rights to patented technology in a similar way to the high stock price of Google supported buying YouTube, DoubleClick, and Motorola Mobility which are the keys to Google’s profits now.

The Real Upsides of ML

Until recently I worked for a company that used ML systems to analyse drivers for signs of fatigue, distraction, or other inappropriate things (smoking, which is illegal in China, using a mobile phone, etc). That work was directly aimed at saving human lives, with a significant secondary aim of saving wear on vehicles (in the mining industry drowsy drivers damage truck tires and that's a huge business expense).

There are many applications of ML in medical research such as recognising cancer cells in tissue samples.

There are many less important uses for ML systems, such as recognising different types of pastries to correctly bill bakery customers – technology that was apparently repurposed for recognising cancer cells.

The ability to recognise objects in photos is useful. It can be used by people who want to learn about random objects they see and could help young children learn about their environment. It also has some potential to assist visually impaired people; it wouldn't be good for safety critical systems (don't cross a road because an ML system says there are no cars coming) but could be useful for identifying objects (is this a lemon or a lime). The Humane AI Pin had some real potential to do good things but there wasn't a suitable business model [2], I think that someone will develop similar technology in a useful way eventually.

Even without trying to do what the Humane AI Pin attempted, there are many ways for ML based systems to assist phone and PC use.

ML systems allow analysing large quantities of data and giving information that may be correct. When used by a human who knows how to recognise good answers this can be an efficient way of solving problems. I personally have solved many computer problems with the help of LLM systems while skipping over many results that were obviously wrong to me. I believe that any expert in any field that is covered in the LLM input data could find some benefits from getting suggestions from an LLM. It won’t necessarily allow them to solve problems that they couldn’t solve without it but it can provide them with a set of obviously wrong answers mixed in with some useful tips about where to look for the right answers.

Jobs and Politics

Noema Magazine has an insightful article about how “AI” can allow different models of work which can enlarge the middle class [3].

I don’t think it’s reasonable to expect ML systems to make as much impact on society as the industrial revolution, and the agricultural revolutions which took society from more than 90% farm workers to less than 5%. That doesn’t mean everything will be fine but it is something that can seem OK after the changes have happened. I’m not saying “apart from the death and destruction everything will be good”, the death and destruction are optional. Improvements in manufacturing and farming didn’t have to involve poverty and death for many people, improvements to agriculture didn’t have to involve overcrowding and death from disease. This was an issue of political decisions that were made.

The Real Problems of ML

Political decisions that are being made now have the aim of making the rich even richer and leaving more people in poverty and in many cases dying due to being unable to afford healthcare. The ML systems that aim to facilitate such things haven’t been as successful as evil people have hoped but it will happen and we need appropriate legislation if we aren’t going to have revolutions.

There are documented cases of suicide being inspired by ChatGPT systems [4]. There have been people inspired towards murder by ChatGPT systems, but AFAIK no-one has actually succeeded in such a crime yet. There are serious issues that need to be addressed with the technology and with legal constraints on how people may use it. It's interesting to consider the possible uses of ChatGPT systems for providing suggestions to a psychologist; maybe ChatGPT systems could be used to alleviate mental health problems.

The use of LLM systems for cheating on assignments etc isn't a real issue. People have been cheating on assignments since organised education was invented.

There is a real problem of ML systems based on biased input data issuing decisions that are the average of the bigotry of the people who provided the input. That isn't going to be worse than the current situation of bigoted humans making decisions based on hate and preconceptions, but it will be more insidious. It is possible to test for this: for example a bank could test its mortgage approval ML system by changing one factor at a time (name, gender, age, address, etc) and seeing if the answer changes. If it turns out that the ML system is biased on names then the input data could have names removed. If it turns out to be biased about address then weights could be put in to oppose that.
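The one-factor-at-a-time test could be sketched like this. The approval model, the threshold, and the suburb bias here are all hypothetical stand-ins for a real ML system, just to show the shape of the probe:

```python
# Probe a decision model for sensitivity to a single applicant attribute.
def approve(applicant):
    # Hypothetical biased model standing in for a real mortgage approval system.
    score = applicant["income"] / 10_000
    if applicant["suburb"] == "Oldtown":  # assumed learned bias
        score -= 2
    return score >= 5

def probe(base, factor, alternatives):
    # Flip one factor at a time and report values that change the decision.
    baseline = approve(base)
    return [alt for alt in alternatives
            if approve({**base, factor: alt}) != baseline]

applicant = {"income": 60_000, "suburb": "Newtown", "name": "Alice"}
print(probe(applicant, "suburb", ["Oldtown", "Hilltop"]))  # ['Oldtown']
print(probe(applicant, "name", ["Bob", "Ahmed"]))          # [] - not name-sensitive
```

A non-empty result for a factor like name or address is the signal that the mitigation described above (dropping the field or adding opposing weights) is needed.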

For a long time there has been excessive trust in computers. Computers aren't magic, they just do maths really fast and implement choices based on the work of programmers, who have all the failings of other humans. Excessive trust in a rule based system is less risky than excessive trust in an ML system where no-one really knows why it makes the decisions it makes.

Self driving cars kill people, this is the truth that Tesla stock holders don’t want people to know.

Companies that try to automate everything with "AI" are going to be in for some nasty surprises. Getting computers to do everything that humans do in any job would require a large portion of an actually intelligent computer, and if that is achieved it will raise an entirely different set of problems.

I’ve previously blogged about ML Security [5]. I don’t think this will be any worse than all the other computer security problems in the long term, although it will be more insidious.

How Will It Go?

Companies spending billions of dollars without firm plans for how to make money are going to go bankrupt no matter what business they are in. Companies like Google and Microsoft can waste some billions of dollars on AI chat systems and still keep going as successful businesses. Companies like OpenAI that do nothing other than such chat systems won't do well. But their assets can be used by new companies when sold at less than 10% of the purchase price.

Companies like NVidia that have high stock prices based on the supposed ongoing growth in use of their hardware will have their stock prices crash. But the new technology they develop will be used by other people for other purposes. If hospitals can get cheap diagnostic ML systems because of unreasonable investment into “AI” then that could be a win for humanity.

Companies that bet their entire business on AI even when it’s not necessarily their core business (as Tesla has done with self driving) will have their stock price crash dramatically at a minimum and have the possibility of bankruptcy. Having Tesla go bankrupt is definitely better than having people try to use them as self driving cars.