Server CPU Sockets

I am always looking for ways to increase my compute power at a reasonable price. I am very happy with my HP z840 dual CPU workstation [1] that I’m using as a server and my HP z640 single CPU workstation [2]. Both were available second hand at quite reasonable prices and could be cheaply upgraded to faster CPUs. But if I can get something a lot faster for a reasonable price then I’ll definitely get it.

Socket LGA2011-v3

The home server and home workstation I currently use have socket LGA2011-v3 [3], which supports the E5-2699A v4 CPU with a rating of 26,939 according to Passmark [4]. That score is quite decent; CPUs using DDR4 RAM go up to almost double that, but it’s a reasonable speed and it works in systems that are readily available at low prices. The z640 is regularly on sale for less than $400AU and the z840 is occasionally below $600.

The Dell PowerEdge T430 is an OK dual-CPU tower server using the same socket. One thing that’s not well known is that it is limited to something like 135W per CPU when run with two CPUs. So it will work correctly with a single E5-2697A v4 with 145W TDP (I’ve tested that) but will refuse to boot with two of them. In my test system I tried replacing the 495W PSUs with 750W PSUs and it made no difference; the limit is in the motherboard. With only a single CPU you get only 8 of the 12 DIMM sockets and not all PCIe slots work. There are many second hand T430s on sale with only a single CPU, presumably because the T330 sucks. My T430 works fine with a pair of E5-2683 v4 CPUs.

The Dell PowerEdge T630 also takes the same CPUs but supports higher TDP than the T430. It also supports 18*3.5″ or 32*2.5″ disks, but it is noisy. I wouldn’t buy one for home use.

AMD

There are some nice AMD CPUs manufactured around the same time, and AMD has done a better job of making multiple CPUs fit the same socket. The reason I don’t generally use AMD CPUs is that they are used in a minority of server grade systems, and as I want ECC RAM and other server features I generally can’t find suitable AMD systems at a reasonable price on ebay etc. There are people who really want second hand server grade systems with AMD CPUs and outbid me. This is probably a region dependent issue; maybe if I was buying in the US I could get some nice workstations with AMD CPUs at low prices.

Socket LGA1151

Socket LGA1151 [5] is used in the Dell PowerEdge T330. It only supports 2 memory channels and 4 DIMMs compared to the 4 channels and 8 DIMMs of LGA2011, and it has a limit of 64G total RAM for most systems and 128G for some. By today’s standards even 128G is a real limit for server use; DDR4 RDIMMs are about $1/GB, so when spending $600+ on a system and CPU upgrade it makes no sense to be capped at about $130 worth of RAM. The CPUs with decent performance for that socket like the i9-9900K aren’t supported by the T330 (possibly because they don’t support ECC RAM). The CPUs that Dell supports perform very poorly. I suspect that Dell deliberately nerfed the T330 to drive sales of the T430.

The Lenovo P330 uses socket LGA1151-2 but has the same issue of taking slow CPUs, in addition to using UDIMMs which are significantly more expensive on the second hand market.

Socket LGA2066

The next Intel socket after LGA2011-v3 is LGA2066 [6], used in the Dell Precision 5820 and HP Z4 G4. It takes an i9-10980XE for 32,397 on Passmark or a W-2295 for 30,924. The variant of the Dell 5820 that supports the i9 CPUs doesn’t seem to support ECC RAM so it’s not a proper workstation. The single thread performance difference between the W-2295 and the E5-2699A v4 is 2,642 to 2,055, a 29% increase for the W-2295. There are “High Frequency Optimized” CPUs for socket LGA2011-v3 but they all deliver less than 2,300 on the Passmark single-thread test, which is much less than what you can get from socket LGA2066. The W-2295 costs $1000 on ebay while the E5-2699A v4 is readily available for under $400, and a few months ago I got a matched pair for a bit over $400. Note that getting a matched pair of Intel CPUs is a major pain [7].

Comparing sockets LGA2011-v3 and LGA2066 for a single-CPU system means a $300 system (HP z640) + $400 CPU (E5-2699A v4) vs a $500 system (Dell Precision 5820) + $1000 CPU (W-2295): more than twice the price for about a 30% performance benefit on some tasks. LGA2011-v3 and USB-C both launched in 2014 so LGA2011-v3 systems don’t have USB-C sockets, but a $20 USB-C PCIe card doesn’t change the economics.
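
To make that price/performance comparison concrete, here’s a trivial cost-per-Passmark-point calculation using the approximate prices and multi-thread scores quoted in this post (the exact prices vary on ebay, so treat the inputs as illustrative):

# Cost per 1,000 Passmark multi-thread points for the two options,
# using the approximate prices and scores quoted in this post.
options = {
    "z640 + E5-2699A v4 (LGA2011-v3)": (300 + 400, 26_939),
    "Precision 5820 + W-2295 (LGA2066)": (500 + 1000, 30_924),
}

for name, (price, score) in options.items():
    print(f"{name}: ${price}, ${price / score * 1000:.0f} per 1,000 points")

That works out to roughly $26 per 1,000 points for the LGA2011-v3 option vs roughly $49 for LGA2066.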

Socket LGA3647

Socket LGA3647 [8] is used in the Dell PowerEdge T440. It supports 6 channels of DDR4 RAM, which is a very nice feature for bigger systems. According to one Dell web page the best CPU Dell officially supports for it is the Xeon Gold 5120, which gives performance only slightly better than the E5-2683 v4 (which has a low enough TDP that a T430 can run two of them). But according to another Dell web page they support 16 core CPUs, which would mean performance better than a T430 but less than a HP z840. The T440 doesn’t seem like a great system; if I got one cheap I could find a use for it, but I wouldn’t pay the prices they go for on ebay. The Dell PowerEdge T640 has the same socket and is described as supporting up to 28 core CPUs, but I anticipate that it would be as loud as the T630 and it’s also expensive.

This socket is also used in the HP Z6 G4, which takes a W-3265 or Xeon Gold 6258R CPU for the high end options. The HP Z6 G4 systems on ebay are all above $1500 and the Xeon Gold 6258R is also over $1000, so while the Xeon Gold 6258R in a Z6 G4 would give 50% better performance on multithreaded operations than the systems I currently have, it costs almost 3* as much. It has 6 DIMM sockets, a nice improvement over the 4 in the z640. The Z6 G4 takes a maximum of 768G of RAM with the optional extra CPU board (which is very expensive both new and on ebay), compared to my z840 which has 512G and half its DIMM slots empty. The HP Z8 G4 has the same socket and takes up to 3TB of RAM if used with CPUs that support it (most CPUs only support 768G and you need an “M” variant for more). The higher performance CPUs supported in the Z6 G4 and Z8 G4 don’t have enough entries in the Passmark database to give accurate results, but going from 22 cores in the E5-2699A v4 to 28 in the Xeon Platinum 8180 when using the same RAM technology doesn’t seem like a huge benefit. The Z6 and Z8 G4 systems run DDR4 RAM at up to 2666 speed while the z640 and z840 only go up to 2400; a 10% increase in RAM speed is nice but not a huge difference.

I don’t think that any socket LGA3647 systems will ever be ones I want to buy. They don’t offer much over LGA2011-v3 but are in newer and fancier systems that will go for significantly higher prices.

DDR5

I think that DDR5 systems will be my next step up in tower server and workstation performance after the socket LGA2011-v3 systems. I don’t think anything less will offer me enough of a benefit to justify a change. I also don’t think that they will be in the price range I am willing to pay until well after DDR6 is released, some people are hoping for DDR6 to be released late this year but next year seems more likely. So maybe in 2027 there will be some nice DDR5 systems going cheap.

CPU Benchmark Results

Here are the benchmark results of CPUs I mentioned in this post according to passmark.com [9]. I didn’t reference results of CPUs that only had 1 or 2 results posted as they aren’t likely to be accurate.

CPU               Single Thread   Multi Thread   TDP
E5-2683 v4                1,713         17,591   120W
Xeon Gold 5120            1,755         18,251   105W
i9-9900K                  2,919         18,152    95W
E5-2697A v4               2,106         21,610   145W
E5-2699A v4               2,055         26,939   145W
W-3265                    2,572         30,105   205W
W-2295                    2,642         30,924   165W
i9-10980XE                2,662         32,397   165W
Xeon Gold 6258R           2,080         40,252   205W

Links July 2025

Louis Rossman made an informative YouTube video about right to repair and the US military [1]. This is really important as it helps promote free software and open standards.

The ACM has an insightful article about hidden controls [2]. We need EU regulations about hidden controls in safety critical systems like cars.

This Daily WTF article has some interesting security implications for Windows [3].

Earth.com has an interesting article about the “rubber hand illusion” and how it works on octopuses [4]. For a long time I have been opposed to eating octopus because I think they are too intelligent.

The Washington Post has an insightful article about the future of spies when everything is tracked by technology [5].

Micah Lee wrote an informative guide to using Signal groups for activism [6].

David Brin wrote an insightful blog post about the phases of the ongoing US civil war [7].

Christian Kastner wrote an interesting blog post about using Glibc hardware capabilities to use different builds of a shared library for a range of CPU features [8].

David Brin wrote an insightful and interesting blog post comparing President Carter with the criminals in the Republican party [9].

Annoying Wrongness on TV

One thing that annoys me in TV shows and movies is getting the details wrong. Yes, it’s fiction; yes, some things can’t be done correctly; and in some situations portraying things correctly goes against the plot. But otherwise I think they should try to make it accurate.

I was just watching The Americans (a generally good show that I recommend watching) and in Season 4 Episode 9 there’s a close up of a glass of wine which clearly shows that the Tears of Wine effect is missing; the liquid in the glass obviously has the surface tension of water, not of wine. When they make a show about spies they have to expect that the core audience will be the type of detail oriented people who notice these things. Having actors not actually drink alcohol on set is standard practice; if they have to do 10 takes of someone drinking a glass of wine then that would be a problem with real wine. But they could use real wine for the close up shots, and of course just getting it right the first time is a good option.

Some ridiculous inaccuracies we just need to accept, like knives making a schwing sound when pulled out of scabbards and “silenced” guns usually still being quite loud (so many people are used to it being wrong). Organisations like the KGB had guns that were actually silent, but they generally looked obviously different to regular guns and had a much lower effective range.

The gold coins shown on TV are another ridiculous thing. The sound of metal hitting something depends on how hard it is and how dense it is. Surely most people have heard the sounds of dropping steel nuts and ball bearings and the sound of dropping lead sinkers, and know that the sounds of items of similar size and shape differ greatly based on density and hardness. A modern coin made of copper, cupro-nickel (the current “silver” coins), or copper-aluminium (the current “gold” coins) sounds very different to a gold coin when dropped on a bench. For a show like The Witcher it wouldn’t be difficult to make actual gold coins of a similar quality to iron age coin production; any jeweller could make the blanks, and making stamps hard enough to press gold isn’t an engineering challenge (stamping copper coins would be much more difficult). The coins used for the show could be sold to fans afterwards.

Once coins are made they can’t just be heaped up. Even if you are a sorcerer you probably couldn’t fill a barrel a meter high with gold coins without having it break from the weight and/or having the coins at the bottom cold welded. Gold coins are supposed to have a precise amount of gold, and if you pile them up too high the cold welding process will transfer gold between coins, changing their value. If someone was going to store a significant quantity of gold it would be in gold ingots with separators between layers to prevent cold welding.

Movies tend not to show coins close up; I presume that’s because they consider it too difficult to make coins and just use some random coins from their own country.

Another annoying thing is shows that don’t match up the build dates of objects used. It’s nice when they get it right, like the movie Titanic featuring a M1911 pistol, which is something that a rich person in 1912 would likely have. The series Carnival Row (which I recommend) has weapons that mostly match our WW1 era; everything that doesn’t involve magic seems legit. One of the worst examples is the movie Anna (by Luc Besson, mostly a recreation of his film Nikita but set in the early 90s and with the KGB). That film features laptops with color screens and USB ports before USB was invented and when color screens weren’t common on laptops. As an aside, military spec laptops tend to have older designs than consumer spec ones.

I’ve mostly given up on hoping that movies will get “hacking” scenes that are any more accurate than knives making a “schwing” sound. But it shouldn’t be that hard for them to find computer gear that was manufactured in the right year to use for the film.

Why can’t they hire experts on technology to check everything?

Bad Product Comparisons and EVs

When companies design products a major concern seems to be what the reviewers will say about them. For any product of significant value the users are unable to perform any reasonable test before buying, and for a casual user some problems may only become apparent after weeks of use, so professional reviews are important to many people. The market apparently doesn’t want reviews of the form “here’s a list of products that are quite similar and all do the job well, you can buy any of them, it’s no big deal”, which would be the most technically accurate way of doing it.

So the reviewers compare the products on the criteria that are easiest to measure, which led to phones being compared by how light and thin they are. I think users would often be better served by thicker, heavier phones with larger batteries, but instead they are sold phones that have good battery life on a fresh installation but don’t last a day with a full load of apps installed.

The latest issue with bad reviews driving poor product design is electric cars. Advocates of old fashioned cars have long touted the range of petrol cars, and that has become a benchmark for comparing EVs. I have been driving cars for 35 years and so far I have never driven anywhere that’s out of range of the current electric charging network, even with the range of the LEAF (which is smaller than many other EVs). If I ever felt the need to drive across the Nullarbor Plain then I could rent a car to do it, and the cost of such a rental would be small compared to the money I’m saving by driving an EV, and also small compared to the premium I would have to pay for an EV with a larger range.

Some of the recent articles I’ve seen about EVs have covered vehicles with a battery range over 700Km, which is greater than the legal distance a commercial driver can drive without a break. I’ve also seen articles about plans to have a small petrol or Diesel motor in an EV to recharge the battery without directly driving the wheels. A 9kW Diesel motor could provide enough electricity on average to keep the charge maintained in a LEAF battery, and according to the specs of Diesel generators it would take about 55kg of fuel to provide the charge a LEAF needs to drive 1000Km. The idea of a mostly electric hybrid car that can do 1000Km on one tank of fuel is interesting as a thought experiment but doesn’t seem to have much actual use. Apparently a Chinese company is planning to release a car that can do 1400Km on one tank of fuel using such technology, which is impressive but not particularly useful.
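
Those figures can be sanity checked with a quick back-of-envelope calculation. The numbers below are assumptions for illustration, not measured values: roughly 155Wh/Km for a LEAF and roughly 0.35kg of Diesel per kWh generated by a small generator.

# Rough check of the Diesel range extender arithmetic.
# Assumptions (illustrative, not measured): a LEAF uses about
# 155 Wh/km and a small Diesel generator burns about 0.35 kg
# of fuel per kWh generated.
WH_PER_KM = 155
KG_FUEL_PER_KWH = 0.35

distance_km = 1000
energy_kwh = distance_km * WH_PER_KM / 1000      # ~155 kWh for 1000 km
fuel_kg = energy_kwh * KG_FUEL_PER_KWH           # ~54 kg of fuel

avg_speed_kmh = 60
avg_power_kw = WH_PER_KM * avg_speed_kmh / 1000  # ~9.3 kW average draw

print(f"{energy_kwh:.0f} kWh, {fuel_kg:.0f} kg fuel, {avg_power_kw:.1f} kW average")

With those assumptions the result is about 54kg of fuel and an average draw of about 9kW at highway speeds, consistent with the figures above.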

The next issue of unreasonable competition is charge speed. Charging a car at 2kW from a regular power socket is a real limit to what you can do with a car. It’s a limit that hasn’t bothered me so far because the most driving I typically do in a week is less than one full charge, so at most I have to charge overnight twice in a week. But if I was going to drive to another city without hiring a car with better range I’d need a fast charger. Most current models of the Nissan LEAF support charging speeds up to 50kW, which means fully charging the battery in under an hour (or slightly over an hour for the long range version). If I was to drive from Melbourne to Canberra in my LEAF I’d have to charge twice, which would be an annoyance at those speeds. There are a variety of EVs that can charge at 100kW and some as high as 350kW. 350kW is enough to fully charge the largest EV batteries in half an hour, which seems to be as much as anyone would need. But there are apparently plans for 1MW car chargers which would theoretically be able to charge a Hummer (the EV with the largest battery) in 12 minutes. One obvious part of the solution to EV charging times is to not drive a Hummer! Another thing to note is that batteries can’t be charged at a high rate at all charge levels; this is why advertising for fast chargers makes claims like “80% charge in half an hour”, which definitely doesn’t mean “100% charge in 37.5 minutes”!
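
As an illustration, here are idealised full-charge times at those rates for a 40kWh battery (the common LEAF size; an assumption for this example), ignoring the taper at high charge levels that makes the last 20% much slower:

# Idealised full-charge times for an assumed 40 kWh battery,
# ignoring the charge-rate taper near full charge.
BATTERY_KWH = 40

for charger_kw in (2, 50, 100, 350):
    minutes = BATTERY_KWH / charger_kw * 60
    print(f"{charger_kw:>4} kW: {minutes:6.0f} minutes")

That gives 20 hours at 2kW, 48 minutes at 50kW, 24 minutes at 100kW, and about 7 minutes at 350kW, which is why the real-world claims are phrased in terms of 80% charge.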

There are significant engineering issues with high power applications. A 1MW cable is not just a bigger version of a regular power cable; there are additional safety issues, user training is needed, and cooling of the connector is probably required. That’s a lot to do just to get a better number in the table at the end of a review. There is research in progress on the Megawatt Charging System which is designed to charge heavy vehicles (presumably trucks and buses) at up to 3.75MW. Charging a truck at that rate is reasonable, as the process of obtaining and maintaining a heavy vehicle license already requires a significant amount of effort, and some extra training in 3.75MW charging probably doesn’t make much difference.

A final issue with fast charging is the capacity of the grid. A few years ago I attended a very interesting lecture by an electrical engineer who works for the Victorian railway system. The Vic rail power setup involves about 100MW of grid connectivity with special contracts with the grid operators, due to the fact that 1MW trains suddenly starting and stopping causes engineering problems that aren’t trivial to solve. They were also working on battery packs and super capacitors to deal with regenerative braking and to avoid brownouts in long sections of track. For a medium size petrol station 14 bays for fuelling cars is common. If 6 such petrol stations were replaced with fast charging stations that charge cars at 1MW per bay, that would draw about the same power as the train network for the entire state! There is a need for significant engineering work to allow most cars to be electric no matter how it’s done, but we don’t need to make that worse just for benchmarks.
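
The arithmetic behind that comparison is simple:

# 6 medium petrol stations, 14 bays each, every bay charging at 1 MW.
stations, bays_per_station, mw_per_bay = 6, 14, 1
total_mw = stations * bays_per_station * mw_per_bay
print(f"{total_mw} MW")  # 84 MW, comparable to the ~100 MW rail supply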

Function Keys

For at least 12 years laptops have defaulted to not giving the function keys their traditional PC 101-key keyboard functions, instead using them for other things like controlling the volume, with a key labelled Fn to toggle between the two sets of functions. It’s been a BIOS option to control whether traditional function keys or controls for volume etc are the default, and for at least 12 years I’ve configured all my laptops to have the traditional function keys as the default.

Recently I’ve been working in corporate IT and having exposure to many laptops with the default BIOS settings making those keys change volume etc, with no reasonable option for changing it. This has made me reconsider how I configure these things.

Here’s a page listing the standard uses of function keys [1]. Here is a summary of the relevant part of that page:

  • F1 launches help, which doesn’t seem to get much use. The main help option in practice is Google (I anticipate controversy about this and welcome comments) and all the software vendors are investigating LLM options for help, which probably won’t involve F1.
  • F2 is for renaming files but doesn’t get much use. Probably most people who use graphical file managers use the right mouse button for it. I use it when sorting a selection of photos.
  • F3 is for launching a search (which is CTRL-F in most programs).
  • ALT-F4 is for closing a window which gets some use, although for me the windows I close are web browsers (via CTRL-W) and terminals (via CTRL-D).
  • F5 is for reloading a page which is used a lot in web browsers.
  • F6 moves the input focus to the URL field of a web browser.
  • F8 is for moving a file which in the degenerate case covers the rename functionality of F2.
  • F11 is for full-screen mode in browsers which is sometimes handy.

The keys F1, F3, F4, F7, F9, F10, and F12 don’t get much use for me or for the people I observe. The F2 and F8 keys aren’t useful in most programs, and F6 is only really used in web browsers – but the web browser counts as “most programs” nowadays.

Here’s the description of Thinkpad Fn keys [2]. I use Thinkpads for fun and Dell laptops for work, so it would be nice if they both worked in similar ways but of course they don’t. Dell doesn’t document how their Fn keys are laid out, but the relevant bit is that F1 to F4 are the same as on Thinkpads which is convenient as they are the ones that are likely to be commonly used and needed in a hurry.

I have used the KDE settings on my Thinkpad to map the F1 to F3 keys to their Fn equivalents – F1 to mute-audio, F2 for vol-down, and F3 for vol-up – to allow using them without holding down the Fn key, while having other function keys such as F5 and F6 keep their usual GUI functionality. Now I have to train myself to use F8 in situations where I usually use F2, at least when using a laptop.
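
For environments other than KDE, here is a minimal sketch of one way to get the same F1 to F3 mapping, assuming an X11 session with the xmodmap utility installed (Wayland sessions need the compositor’s own settings instead):

# Map F1-F3 to the media functions normally reached via Fn,
# leaving the other function keys with their usual meanings.
# Assumes X11 with xmodmap available.
import subprocess

MAPPINGS = {
    "F1": "XF86AudioMute",
    "F2": "XF86AudioLowerVolume",
    "F3": "XF86AudioRaiseVolume",
}

for key, keysym in MAPPINGS.items():
    subprocess.run(["xmodmap", "-e", f"keysym {key} = {keysym}"], check=True)

The same expressions can of course be run directly with xmodmap from a shell or an .xsession script.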

The only other Fn combinations I use are F5 and F6 for controlling screen brightness, but that’s not something I use much.

It’s annoying that the laptop manufacturers forced this on me. Having a Fn key to provide extra functions without needing 101+ keys on a laptop size device is a reasonable design choice. But they could have done away with the PrintScreen key to make space for something else. Also for Thinkpads the touch pad is something that could obviously be removed to gain some extra space, as the Trackpoint does all that’s needed in that regard.

The Fuss About “AI”

There are many negative articles about “AI” (which is not about actual Artificial Intelligence, also known as “AGI”), most of which I think are overblown and often ridiculous.

Resource Usage

Complaints about resource usage are common; training Llama 3.1 could apparently produce as much pollution as “10,000 round trips by car between Los Angeles and New York City”. That’s not great, but when you compare it to the actual number of people doing such drives in the US and the number of people taking commercial flights on that route it doesn’t seem like such a big deal. Apparently commercial passenger jets cause CO2 emissions per passenger about equal to a car with 2 people in it. Why is it relevant whether pollution comes from running servers, driving cars, or steel mills? Why not just tax polluters for the damage they do and let the market sort it out? People in the US make a big deal about not being communist, so why not have a capitalist solution: make it more expensive to do undesirable things and let the market sort it out?
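
For scale, here is a rough back-of-envelope calculation (the distance and emission figures are assumptions for illustration, not sourced data):

# Scale check for the "10,000 round trips" comparison.
# Assumptions (rough): LA to NYC is about 4,500 km by road and an
# average car emits about 0.25 kg of CO2 per km.
km_one_way = 4500
kg_co2_per_km = 0.25

tonnes_per_round_trip = 2 * km_one_way * kg_co2_per_km / 1000  # ~2.25 t
total_tonnes = 10_000 * tonnes_per_round_trip                  # ~22,500 t
print(f"{total_tonnes:,.0f} tonnes of CO2")

Tens of thousands of tonnes of CO2 sounds like a lot, but it’s a one-off cost that is tiny compared to the ongoing road and air traffic on that route.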

ML systems are a less bad use of compute resources than Bitcoin; at least ML systems give some useful results while Bitcoin has nothing good going for it.

The Dot-Com Comparison

People often complain about the apparent impossibility of “AI” companies doing what investors think they will do. But this isn’t anything new; it all happened before in the “dot com boom”. I’m not the first person to make this comparison; The Daily WTF (a high quality site about IT mistakes) has an interesting article making it [1]. But my conclusions are quite different.

The result of that boom was a lot of Internet companies going bankrupt and their investors losing money, after which other companies bought up the assets and made profitable businesses. The cheap Internet we now have was built on hardware from bankrupt companies that was sold for far less than the manufacture price. That allowed it to scale up from modem speeds to ADSL without the users paying enough to cover the purchase of the infrastructure. In the early 2000s I worked for two major Dutch ISPs that went bankrupt (not my fault), and one of them continued operations in the identical manner after having its stock price go to zero (I didn’t get to witness what happened with the other one). As far as I’m aware random Dutch citizens and residents didn’t suffer from this and the employees just got jobs elsewhere.

There are good things being done with ML systems and when companies like OpenAI go bankrupt other companies will buy the hardware and do good things.

NVidia isn’t ever going to have the future sales that would justify a market capitalisation of almost 4 trillion US dollars. But that market cap can support paying for new research and purchasing rights to patented technology, in a similar way to how Google’s high stock price supported buying YouTube, DoubleClick, and Motorola Mobility, which are the keys to Google’s profits now.

The Real Upsides of ML

Until recently I worked for a company that used ML systems to analyse drivers for signs of fatigue, distraction, or other inappropriate things (smoking while driving, which is illegal in China, using a mobile phone, etc). That work was directly aimed at saving human lives, with a significant secondary aim of saving wear on vehicles (in the mining industry drowsy drivers damage truck tires and that’s a huge business expense).

There are many applications of ML in medical research such as recognising cancer cells in tissue samples.

There are many less important uses for ML systems, such as recognising different types of pastries to correctly bill bakery customers – technology that was apparently repurposed for recognising cancer cells.

The ability to recognise objects in photos is useful. It can be used by people who want to learn about random objects they see and could be used to help young children learn about their environment. It also has some potential for assisting visually impaired people; it wouldn’t be good for safety critical systems (don’t cross a road because an ML system says there are no cars coming) but could be useful for identifying objects (is this a lemon or a lime). The Humane AI Pin had some real potential to do good things but there wasn’t a suitable business model [2]; I think that someone will develop similar technology in a useful way eventually.

Even without trying to do what the Humane AI Pin attempted, there are many ways for ML based systems to assist phone and PC use.

ML systems allow analysing large quantities of data and giving information that may be correct. When used by a human who knows how to recognise good answers this can be an efficient way of solving problems. I personally have solved many computer problems with the help of LLM systems while skipping over many results that were obviously wrong to me. I believe that any expert in any field that is covered in the LLM input data could find some benefits from getting suggestions from an LLM. It won’t necessarily allow them to solve problems that they couldn’t solve without it but it can provide them with a set of obviously wrong answers mixed in with some useful tips about where to look for the right answers.

Jobs and Politics

Noema Magazine has an insightful article about how “AI” can allow different models of work which can enlarge the middle class [3].

I don’t think it’s reasonable to expect ML systems to have as much impact on society as the industrial revolution, or the agricultural revolutions which took society from more than 90% farm workers to less than 5%. That doesn’t mean everything will be fine, but it is the kind of thing that can seem OK after the changes have happened. I’m not saying “apart from the death and destruction everything will be good”; the death and destruction are optional. Improvements in manufacturing and farming didn’t have to involve poverty and death for many people, and improvements to agriculture didn’t have to involve overcrowding and death from disease. Those were the results of political decisions.

The Real Problems of ML

Political decisions being made now have the aim of making the rich even richer and leaving more people in poverty, in many cases dying because they can’t afford healthcare. The ML systems that aim to facilitate such things haven’t been as successful as the evil people hoped, but it will happen, and we need appropriate legislation if we aren’t going to have revolutions.

There are documented cases of suicide being inspired by ChatGPT systems [4]. There have been people inspired towards murder by ChatGPT systems, but AFAIK no-one has actually succeeded in such a crime yet. There are serious issues that need to be addressed with the technology and with legal constraints on how people may use it. It’s interesting to consider the possible uses of ChatGPT systems for providing suggestions to a psychologist; maybe they could be used to help alleviate mental health problems.

The cases of LLM systems being used for cheating on assignments etc aren’t a real issue. People have been cheating on assignments since organised education was invented.

There is a real problem of ML systems based on biased input data issuing decisions that are the average of the bigotry of the people who provided the input. That isn’t going to be worse than the current situation of bigoted humans making decisions based on hate and preconceptions, but it will be more insidious. It is possible to test for this: for example a bank could test its mortgage approval ML system by changing one factor at a time (name, gender, age, address, etc) and seeing if it changes the answer. If it turns out that the ML system is biased on names then the input data could have names removed. If it turns out to be biased about address then weights could be put in to oppose that.
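
Here is a minimal sketch of that one-factor-at-a-time test. The approve() function stands in for the bank’s model, and all the field names and values are hypothetical placeholders:

# Counterfactual test: change one field at a time and report any
# values that flip the model's decision. approve() and the fields
# are hypothetical stand-ins for a real mortgage approval model.
from copy import deepcopy

def counterfactual_test(approve, application, field, alternatives):
    baseline = approve(application)
    flips = []
    for value in alternatives:
        variant = deepcopy(application)
        variant[field] = value
        if approve(variant) != baseline:
            flips.append(value)
    return flips

# A deliberately biased toy model to show the test catching bias:
toy_model = lambda app: app["income"] > 50_000 and app["name"] != "Mohammed"
app = {"name": "John", "income": 60_000, "address": "Melbourne"}
print(counterfactual_test(toy_model, app, "name", ["Mohammed", "Maria", "Wei"]))
# prints ['Mohammed'] - the decision changed based on the name alone

A real test would run this over a large set of applications and all the factors of concern, not a single example.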

For a long time there has been excessive trust in computers. Computers aren’t magic; they just do maths really fast and implement choices based on the work of programmers, who have all the failings of other humans. Excessive trust in a rule based system is less risky than excessive trust in an ML system where no-one really knows why it makes the decisions it makes.

Self driving cars kill people; this is the truth that Tesla stock holders don’t want people to know.

Companies that try to automate everything with “AI” are going to be in for some nasty surprises. Getting computers to do everything that humans do in any job amounts to a large portion of actual machine intelligence, which if achieved will raise an entirely different set of problems.

I’ve previously blogged about ML Security [5]. I don’t think this will be any worse than all the other computer security problems in the long term, although it will be more insidious.

How Will It Go?

Companies spending billions of dollars without firm plans for how to make money are going to go bankrupt no matter what business they are in. Companies like Google and Microsoft can waste some billions of dollars on AI chat systems and still keep going as successful businesses. Companies like OpenAI that do nothing other than such chat systems won’t do well. But their assets can be used by new companies when sold at less than 10% of the purchase price.

Companies like NVidia that have high stock prices based on the supposed ongoing growth in use of their hardware will have their stock prices crash. But the new technology they develop will be used by other people for other purposes. If hospitals can get cheap diagnostic ML systems because of unreasonable investment into “AI” then that could be a win for humanity.

Companies that bet their entire business on AI even when it’s not their core business (as Tesla has done with self driving) will have their stock price crash dramatically at a minimum, with the possibility of bankruptcy. Having Tesla go bankrupt is definitely better than having people try to use their cars as self driving cars.

Links June 2025

Jonathan McDowell wrote part 2 of his blog series about setting up a voice assistant on Debian; I look forward to reading further posts [1]. I’m working on some related things for Debian that will hopefully work with this.

I’m testing out OpenSnitch on Trixie, inspired by this blog post; it’s an interesting package [2].

Valerie wrote an informative article about creating mesh networks using LoRa for emergency use [3].

Interesting article about Signal and Windows Recall. That gives us some things to consider regarding ML features on Linux systems [4].

Insightful article about AI and the end of prestige [5]. We should all learn about LLMs.

Jonathan Dowland wrote an informative blog post about how to manage namespaces on Linux [6].

The Consumer Rights wiki is a great resource for raising awareness of corporations exploiting their customers for computer related goods and services [7].

Interesting article about Schizophrenia and the cliff-edge function of evolution [8].

PFAS

For some time I’ve been noticing news reports about PFAS [1]. I hadn’t thought much about the issue; I grew up when leaded petrol was standard, when almost all thermometers had mercury, and when all small batteries had mercury, and I had generally considered that I already had so many nasty chemicals in my body that as long as I don’t often eat bottom feeding seafood I didn’t have much to worry about. I already have a higher risk of a large number of medical issues than I’d like due to decisions made before I was born, and there’s not much to do about it given that there are now regulations restricting the emissions of lead, mercury, etc.

I just watched a Veritasium video about Teflon and the PFAS poisoning related to its production [2]. This made me realise that it’s more of a problem than I had thought and it’s a problem that’s getting worse. PFAS levels in the parts-per-trillion range in the environment can cause parts-per-billion levels in the body, which increases the risk of several cancers and causes other health problems. Fortunately there is some work being done on water filtering; you can get filters for home use now and they are working on filters that can work at sufficient scale for a city water plant.

There is a map showing PFAS in the environment in Australia which shows some sites with concerning levels that are near residential areas [3]. One of the major causes of that in Australia is fire retardant foam – Australia has never had much if any Teflon manufacturing AFAIK.

They also noted that donating blood regularly can decrease the levels of PFAS in the bloodstream. So presumably people who have medical conditions that require regularly receiving donated blood will have really high levels.

The Intel Arc B580 and PCIe Slot Size

A few months ago I bought an Intel Arc B580 for the main purpose of getting 8K video going [1]. I had briefly got it working in a test PC but then I wanted to deploy it on my HP z840 that I use as a build server and for playing with ML stuff [2]. I had only done brief tests of it previously and this was my first attempt at installing it in a system I use. My plan was to keep the NVidia RTX A2000 in place and run 2 GPUs; that’s not an uncommon desire among people who want to do ML stuff and it’s the type of thing the z840 is designed for – the machine has slots 2, 4, and 6 as PCIe*16, so it should be able to fit 3 cards that each take 2 slots. So having one full size GPU, the half-height A2000, and an NVMe controller that uses *16 to run four NVMe devices should be easy.

Intel designed the B580 to use every millimeter of space possible while still being able to claim to be a 2 slot card. On the circuit board side there is a plastic cover over the board that takes all the space before the next slot, so a 2 slot card can’t go on that side without having its airflow blocked. On the other side it takes all the available space, so that any card that wants to blow air through can’t fit, and also such that a medium size card (such as the card for 4 NVMe devices) would block its air flow. So it’s impossible to have a computer with 6 PCIe slots run the B580 as well as 2 other full size *16 cards.

Support for this type of GPU is something vendors like HP should consider when designing workstation class systems. For HP there is no issue of people installing the motherboard in random cases (the HP motherboard in question uses proprietary power connectors and won’t even boot with an ATX PSU without significant work), so they could easily design a motherboard and case with a few extra mm of space between pairs of PCIe slots. The cards that are double width are almost always *16, so you could pair up a *16 slot with another slot and have extra space on each side of the pair. I think that for most people a system with 6 PCIe slots and a bit of extra space for GPU cooling would be more useful than having 7 PCIe slots. But as HP has full design control they don’t even need to reduce the number of PCIe slots; they could just make the case taller. If they added another 4 slots and increased the case size accordingly it still wouldn’t be particularly tall by the standards of tower cases from the 90s! The z8 series of workstations are the biggest workstations that HP sells, so they should design them to do these things. At the time the z840 was new there was a lot of ML work being done and HP was selling them as ML workstations; they should have known how people would use them and designed them accordingly.

So I removed the NVidia card and decided to run the system with just the Arc card. Things should have been fine, but Intel designed the card to be as high as possible and put the power connector on top. This prevented installing the baffle that directs air flow over the PCIe slots, and due to the design of the z840 (which is either ingenious or stupid depending on your point of view) the baffle is needed to secure the PCIe cards in place. So now all the PCIe cards are just secured by friction in the slots; this isn’t an unusual situation for machines I assemble but it’s not something I desired.

This is the first time I’ve felt compelled to write a blog post reviewing a product before even getting it working. But the physical design of the B580 is outrageously impractical unless you are designing your entire computer around the GPU.

As an aside the B580 does look very nice. The plastic surround is very fancy; it’s a pity that it interferes with the operation of the rest of the system.

Matching Intel CPUs

To run an SMP system with multiple CPUs you need CPUs that are “identical”; the question is what “identical” means. In this case I’m interested in Intel CPUs because SMP motherboards and server systems for Intel CPUs are readily available and affordable. There are people selling matched pairs of CPUs on ebay which tend to be more expensive than randomly buying 2 of the same CPU model, so if you can identify 2 CPUs that are “identical” but sold separately then you can save some money. Also if you own a two CPU system with only one CPU installed then buying a second CPU to match the first is cheaper and easier than buying two more CPUs and removing a perfectly working CPU.

E5-2640 v4 CPUs

Intel (R) Xeon (R)
E5-2640V4
SR2NZ 2.40GHZ
J717B324 (e4)
7758S4100843

Above is a pic of 2 E5-2640v4 CPUs that were in an SMP system I purchased, along with a plain ASCII representation of the text on one of them. The bottom code (starting with “77”) is apparently the serial number; one of the two codes above it is what determines how “identical” those CPUs are.

The code on the same line as the nominal clock speed (in this case SR2NZ) is the “spec number” which is sometimes referred to as “sspec” [1].

The line below the sspec and above the serial number has J717B324, which doesn’t get any Google hits. I looked at more than 20 pics of E5-2640v4 CPUs on ebay; they all had the code SR2NZ but had different numbers on the line below. I conclude that the number on the line below probably indicates the model AND stepping, while SR2NZ just means E5-2640v4 regardless of stepping. As I wasn’t able to find another CPU on ebay with the same number below the sspec, I believe it would be unreasonably difficult to get a match for an existing CPU.

For the purpose of matching CPUs I believe that if the line above the serial number matches then the CPUs can be used together. I am not certain that CPUs with slightly mismatched numbers won’t work, but I definitely wouldn’t want to spend money on CPUs where this number differs.

smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2699A v4 @ 2.40GHz (family: 0x6, model: 0x4f, stepping: 0x1)

When you boot Linux the kernel identifies the CPU in a manner like the above; the combination of family and model seems to map to one spec number. The combination of family, model, and stepping should be all that’s required for CPUs to work together.
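
Based on that, here is a minimal sketch of checking whether all the CPUs in a running Linux system report the same family/model/stepping combination, using /proc/cpuinfo (this checks an assembled system; it obviously can’t help when shopping for a second CPU):

# Check that every CPU in the system reports the same
# family/model/stepping combination via /proc/cpuinfo.
def cpu_signatures(path="/proc/cpuinfo"):
    sigs, block = set(), {}
    with open(path) as f:
        for line in f:
            if ":" in line:
                key, _, value = line.partition(":")
                block[key.strip()] = value.strip()
            elif block:  # a blank line ends one processor's entry
                sigs.add((block.get("cpu family"), block.get("model"),
                          block.get("stepping")))
                block = {}
    if block:
        sigs.add((block.get("cpu family"), block.get("model"),
                  block.get("stepping")))
    return sigs

sigs = cpu_signatures()
print("CPUs match" if len(sigs) == 1 else f"CPUs differ: {sigs}")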

I think that Intel did the wrong thing in not making this clearer. It would have been very easy to print the stepping on the CPU case next to the sspec or the CPU model name. It also wouldn’t have been too hard to have the CPU provide to the OS whatever magic number is apparently required to match for SMP. Having the Intel web site provide a mapping of those numbers to CPU steppings also shouldn’t be difficult for them.

If anyone knows more about these issues please let me know.