More About the HP ML110 Gen9 and z640

In May 2021 I bought a ML110 Gen9 to use as a deskside workstation [1]. I started writing this post in April 2022 when it had been my main workstation for almost a year. While this post was in a draft state in Feb 2023 I upgraded it to an 18 core E5-2696 v3 CPU [2]. It’s now March 2025 and I have replaced it.

Hardware Issues

When I last wrote about this system it didn’t have adequate cooling to boot reliably and didn’t have a PCIe power cable for a video card. As an experiment I connected the CPU fan to the PCIe fan power and discovered that all power and monitoring wires for the CPU and PCIe fans are identical. This allowed me to buy a CPU fan, which was cheaper ($26.09 including postage) and easier to obtain than a PCIe fan (presumably because CPU fans are more commonly used and manufactured in larger quantities). I had to be creative in attaching the CPU fan as its cable wasn’t long enough to reach the usual location for a PCIe fan. The PCIe fan also required a baffle to direct the air to the right place, which HP annoyingly doesn’t ship with the low end servers, so I made one from a Corn Flakes packet and duct tape.

The Wikipedia page listing AMD GPUs lists many newer ones that draw less than 80W and don’t need a PCIe power cable. I ordered a Radeon RX560 4G video card which cost $246.75. It only uses 8 lanes of PCIe but that’s enough for me; the only 3D game I play is Warzone 2100, which works well at 4K resolution on that card. It would have been really annoying to spend $246.75 just to get the system working, but I had another system in need of a better video card which had a PCIe power cable, so the effective cost was small. I think of it as upgrading 2 systems for $123 each.

The operation of the PCIe video card was a little different from non-server systems. The built in VGA card displayed the hardware status at the start and then kept displaying it after the system had transitioned to PCIe video. This could be handy in some situations if you know what it’s doing, but was confusing initially.

Booting

One insidious problem is that when booting in “legacy” mode the boot process takes an unreasonably long time and often hangs. The UEFI implementation on this system seems much more reliable and also supports booting from NVMe.

Even with UEFI the boot process on this system was slow. Also the early stage of the power on process involves the fans being off and the power light flickering, which leads you to think that it’s not booting and needs to have the power button pressed again – which turns it off. The Dell power on sequence of turning most LEDs on and instantly running the fans at high speed leaves no room for misunderstanding. This is also something that companies making electric cars could address. When turning on a machine you should never be left wondering if it is actually on.

Noise

This was always a noisy system. When I upgraded the CPU from an 8 core with 85W “typical TDP” to an 18 core with 145W “typical TDP” it became even louder. Then over time as dust accumulated inside the machine it became louder still until it was annoyingly loud outside the room when all 18 cores were busy.

Replacement

I recently blogged about options for getting 8K video to work on Linux [3]. This requires PCIe power, which the z640s have (all the ones I have seen have it, though I don’t know whether every one HP made does) and which the cheaper models in the ML110 line don’t. Since then I have ordered an Intel Arc card which apparently has a 190W TDP. There are adaptors to provide PCIe power from SATA or SAS power connectors which I could have used, but having an E5-2696 v3 CPU that draws 145W [4] and a GPU that draws 190W in a system with a 350W PSU doesn’t seem viable.

I replaced it with one of the HP z640 workstations I got in 2023 [5].

The current configuration of the z640 has 3*32G RDIMMs compared to the ML110’s 8*32G; going from 256G to 96G is a significant decrease but most tasks run well enough with that. A limitation of the z640 is that when run with a single CPU it only has 4 DIMM slots, which gives a maximum of 512G if you get 128G LRDIMMs, but as all DDR4 DIMMs larger than 32G are unreasonably expensive at this time the practical limit is 128G (which costs about $120 AUD). In this case I have 96G because the system I’m using has a motherboard problem which makes the fourth DIMM slot unusable. Currently my desire to get more than 96G of RAM is less than my desire to avoid swapping CPUs.

At this time I’m not certain that I will make my main workstation the one that talks to an 8K display. But I really want to keep my options open and there are other benefits.

The z640 boots faster. It supports PCIe bifurcation (with a recent BIOS), so I now have 4 NVMe devices in a single PCIe slot. It is very quiet; the difference is shocking, and I initially found it disconcerting.

The biggest problem with the z640 is having only 4 DIMM sockets, and the particular one I’m using has a fault limiting it to 3. Another problem compared to the ML110 Gen9 is that it runs the RAM at 2133 while the ML110 runs it at 2400, which is a significant performance reduction. But the benefits outweigh the disadvantages.

Conclusion

I have no regrets about buying the ML110. It was the only DDR4 ECC system in the price range I wanted at the time. If I had known that the z640 systems would run so quietly then I might have replaced it earlier. But it was only late last year that 32G DIMMs became affordable; before then I had 8*16G DIMMs giving 128G, because I had some issues with programs running out of memory when I had less.

Links March 2025

Anarcat’s review of Fish is interesting and shows some benefits I hadn’t previously realised; I’ll have to try it out [1].

Longnow has an insightful article about religion and magic mushrooms [2].

Brian Krebs wrote an informative article about DOGE and the many security problems that it has caused to the US government [3].

Techdirt has an insightful article about why they are forced to become a democracy blog after the attacks by Trump et al [4].

Antoine wrote an insightful blog post about the war for the Internet and how in many ways we are losing to fascists [5].

Interesting story about people working for free at Apple to develop a graphing calculator [6]. We need ways for FOSS people to associate to do such projects.

Interesting YouTube video about a wiki for building a cheap road legal car [7].

Interesting video about powering spacecraft with Plutonium-238 and how supplies are running out [8].

Interesting information about the search for MH370 [9]. I previously hadn’t been convinced that it was hijacked but I am now.

The EFF has an interesting article about the Rayhunter, a tool to detect cellular spying that can run with cheap hardware [10].

  • [1] https://anarc.at/blog/2025-02-28-fish/
  • [2] https://longnow.org/ideas/is-god-a-mushroom/
  • [3] https://tinyurl.com/27wbb5ec
  • [4] https://tinyurl.com/2cvo42ro
  • [5] https://anarc.at/blog/2025-03-21-losing-war-internet/
  • [6] https://www.pacifict.com/story/
  • [7] https://www.youtube.com/watch?v=x8jdx-lf2Dw
  • [8] https://www.youtube.com/watch?v=geIhl_VE0IA
  • [9] https://www.youtube.com/watch?v=HIuXEU4H-XE
  • [10] https://tinyurl.com/28psvpx7
    Article Recommendations via FOSS

    Google tracking everything we read is bad, particularly since Google abandoned the “don’t be evil” plan and are presumably open to being somewhat evil.

    The article recommendations on Chrome on Android are useful and I’d like to be able to get the same quality of recommendations without Google knowing about everything I read. Ideally without anything other than the device I use knowing what interests me.

    An ML system to map between sources of news that are of interest should be easy to develop and run on end user devices. The model could be published and, when given articles you like as input, give as output sites that contain other articles you like. Then an agent on the end user system could spider the sites in question and run a local model to determine which articles to present to the user.

    Mapping for hate-following is also possible for such a system (Google doesn’t do that); the user could have 2 separate model runs for regular reading and hate-following and determine how much of each type of content to recommend. It could also give negative weight to entries that match the hate criteria.
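As a minimal sketch of the idea (the profiles, articles and weighting here are all invented for illustration, and a real system would use proper embeddings rather than bag-of-words), an on-device scorer could rank articles by similarity to a liked profile minus a weighted similarity to a hate profile:

```python
from collections import Counter
from math import sqrt

def vectorise(text):
    """Bag-of-words vector (lower-cased whitespace tokens)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def score(article, liked_profile, hated_profile, hate_weight=0.5):
    """Similarity to the liked profile minus weighted similarity to the hate profile."""
    v = vectorise(article)
    return (cosine(v, vectorise(liked_profile))
            - hate_weight * cosine(v, vectorise(hated_profile)))

# Hypothetical user profiles and candidate articles
liked = "linux debian server hardware upgrade"
hated = "celebrity gossip scandal"
articles = [
    "new debian release improves server hardware support",
    "celebrity scandal shocks gossip columnists",
]
ranked = sorted(articles, key=lambda a: score(a, liked, hated), reverse=True)
```

The negative term implements the hate-following weighting described above: matching articles can still be surfaced, just demoted rather than recommended.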

    Some sites with articles (like Medium) give an estimate of reading time. An article recommendation system should have a fixed budget (both in number of articles and in total reading time) to support the “I spend half an hour reading during lunch” model rather than doom scrolling.

    For getting news using only FOSS it seems that the best option at the moment is to use the Lemmy FOSS social network which is like Reddit [1] to recommend articles etc.

    The Lemoa client for Lemmy uses GTK [2] but is no longer maintained. The Lemonade client for Lemmy is written in Rust [3]. It would be good if one of those were packaged for Debian, preferably one that’s maintained.

    8K Video Cards

    I previously blogged about getting an 8K TV [1]. Now I’m working on getting 8K video out of a computer that talks to it. I borrowed an NVidia RTX A2000 card which according to its specs can do 8K [2], with a mini-DisplayPort to HDMI cable rated at 8K, but on both Windows and Linux the two highest resolutions on offer are 3840*2160 (regular 4K) and 4096*2160, which is strange and not useful.

    The various documents on the A2000 differ on whether it has DisplayPort version 1.4 or 1.4a. According to the DisplayPort Wikipedia page [3] both versions 1.4 and 1.4a have a maximum of HBR3 speed; the difference is what version of DSC (Display Stream Compression [4]) is in use. DSC apparently causes no noticeable loss of quality for movies or games but can be bad for text. According to the same page, version 1.4 can do 8K uncompressed at 30Hz or 24Hz with high dynamic range. So this should be able to work.
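The bandwidth arithmetic behind that claim can be checked with the standard HBR3 figures (8.1 Gbit/s per lane over 4 lanes, 8b/10b encoding; blanking intervals are ignored for simplicity):

```python
def data_rate_gbps(width, height, hz, bits_per_pixel):
    """Uncompressed pixel data rate in Gbit/s (blanking intervals ignored)."""
    return width * height * hz * bits_per_pixel / 1e9

# HBR3: 4 lanes at 8.1 Gbit/s each, 8b/10b encoding leaves 80% for data
hbr3_effective = 4 * 8.1 * 8 / 10                # 25.92 Gbit/s

rate_8k30 = data_rate_gbps(7680, 4320, 30, 24)   # about 23.9 Gbit/s
rate_8k60 = data_rate_gbps(7680, 4320, 60, 24)   # about 47.8 Gbit/s
```

So 8K@30Hz with 24bit color is just inside what HBR3 can carry, which matches the Wikipedia claim, while 8K@60Hz is roughly double the available bandwidth and needs DSC or chroma subsampling.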

    My theories as to why it doesn’t work are:

    • NVidia specs lie
    • My 8K cable isn’t really an 8K cable
    • Something weird happens converting DisplayPort to HDMI
    • The video card can only handle refresh rates for 8K that don’t match supported input for the TV

    To get some more input on this issue I posted on Lemmy, here is the Lemmy post [5]. I signed up to lemmy.ml because it was the first one I found that seemed reasonable and was giving away free accounts; I haven’t tried any others and can’t compare, but it seems to work well enough and it’s free. It’s described as “A community of privacy and FOSS enthusiasts, run by Lemmy’s developers”, which is positive. I recommend that everyone who’s into FOSS create an account there or on some other Lemmy server.

    My Lemmy post was about what video cards to buy. I was looking at the Gigabyte RX 6400 Eagle 4G as a cheap card from a local store that does 8K. It also does DisplayPort 1.4 so it might have the same issues, and apparently FOSS drivers don’t support 8K on HDMI because the people who manage HDMI specs are jerks. It’s a $200 card at MSY and a bit less on eBay, so it’s an amount I can afford to risk on a product that might not do what I want, but it seems to have a high probability of getting the same result. The NVidia cards have the option of proprietary drivers which allow using HDMI, and there are cards with DisplayPort 1.4 (which can do 8K@30Hz) and HDMI 2.1 (which can do 8K@50Hz). So HDMI is a better option for some cards just based on card output and has the additional benefit of not needing DisplayPort to HDMI conversion.

    The best option apparently is the Intel cards, which do DisplayPort internally and convert to HDMI in hardware, avoiding the issue of FOSS drivers for HDMI at 8K. The Intel Arc B580 has nice specs [6]: HDMI 2.1a and DisplayPort 2.1 output, 12G of RAM, and it’s faster than low end cards like the RX 6400. But the local computer store price is $470 and the eBay price is a bit over $400. If it turns out not to do what I need it will still be a long way from the worst way I’ve wasted money on computer gear. But I’m still hesitating about this.

    Any suggestions?

    Links February 2025

    Oliver Lindberg wrote an interesting article about Designing for Crisis [1].

    Bruce Schneier blogged about how to cryptographically identify other humans in advance of AI technology allowing faking of people you know [2].

    Anarcat has an interesting review of qalc, which is a really good calculator; I’ll install it on all my workstations [3]. It even does furlongs per fortnight! It would be good for an LLM system to call when someone asks about mathematical things.

    Krebs has an informative article about a criminal employed by Elon’s DOGE [4]. Conservatives tend to be criminals.

    Krebs wrote an interesting article about the security of the iOS (and presumably Android) apps for DeepSeek [5]. It seems that the DeepSeek people did everything wrong.

    Bruce Schneier and Davi Ottenheimer wrote an insightful article DOGE as a National Cyberattack [6].

    Bruce Schneier and Barath Raghavan wrote an insightful article about why and how computer generated voices should sound “robotic” [7].

    Cory Doctorow has an interesting approach to the trade war between the US and Canada: instead of putting tariffs on imports from the US, the Canadian government should make it legal for Canadians to unlock their own property [8].

    This YouTube video about designing a compressed air engine for a model plane is interesting [9].

    Krebs has an interesting article on phishing and mobile phone wallets, Google and Apple need to restrict the number of wallets per phone [10].

    The Daily WTF has a good summary of why Elon’s DOGE organisation is badly designed and run and a brief mention of how it damages the US [11].

    ArsTechnica has an informative article about device code phishing [12]. The increased use of single-sign-on is going to make this more of a problem.

    Shrivu wrote an insightful and informative article on how to backdoor LLMs [13].

    Cory Doctorow wrote an informative post about MLMs and how they are the mirror world version of community organising [14].

    Browser Choice

    Browser Choice and Security Support

    Google seems to be more into tracking web users and generally becoming hostile to users [1]. So using a browser other than Chrome seems like a good idea. The problem is the lack of browsers with security support. It seems that the only browser engines with the quality of security support we expect in Debian are Firefox and the Chrome engine. The Chrome engine is used in Chrome, Chromium, and Microsoft Edge. Edge of course isn’t an option and Chromium still has some of the Google anti-features built in.

    Firefox

    So I tried to use Firefox for the things I do. One feature of Chrome based browsers that I really like is the ability to set a custom page for the new tab. This feature was removed from Firefox because it was apparently being constantly attacked by malware [2]. There are addons to restore it, but I prefer to have a minimal number of addons and not have any that just replace deliberately broken settings in the browser. Also those addons can’t use a file for the URL, so I could set up a web server for it, but it’s annoying to have to run a web server to work around a browser limitation.

    Another thing that annoyed me was YouTube videos opened in new tabs not starting to play when I changed to the tab. There’s a Firefox setting for allowing web sites to autoplay but there doesn’t seem to be a way to add sites to the list.

    Firefox is getting vertical tabs which is a really nice feature for wide displays [3].

    Firefox has a Mozilla service for syncing passwords etc. It is possible to run your own server for this, but the server is written in Rust, which is difficult to package and run [4]. There are Docker images for it, but I prefer to avoid Docker; generally I think that Docker is a sign of failure in software development. If you can’t develop software that can be deployed without Docker then you aren’t developing it well.

    Chromium

    The Ungoogled Chromium project has a lot to offer for safer web browsing [5]. But the changes are invasive and it’s not included in Debian. Some of the changes like “replacing many Google web domains in the source code with non-existent alternatives ending in qjz9zk” are things that could be considered controversial. It definitely isn’t a candidate to replace the current Chromium package in Debian but might be a possibility to have as an extra browser.

    What Next?

    The Falkon browser that is part of the KDE project looks good, but QtWebEngine doesn’t have security support in Debian. Would it be possible to provide security support for it?

    Ungoogled Chromium is available in Flatpak, so I’ll test that out. But ideally it would be packaged for Debian. I’ll try building a package of it and see how that goes.

    The Iridium Browser is another option [6], it seems similar in design to Ungoogled-Chromium but by different people.

    Links January 2025

    Aaron Quigley’s Everything Open lecture about Intelligent Interfaces is one of the most interesting research reports I’ve seen in a long time [1]. This one can be understood and appreciated by people who don’t have a strong background in computer science.

    Statites (satellites that don’t orbit the sun but use solar sails to hover in place) could be used to catch up to interstellar objects [2].

    Slashgear has an interesting article about an AI piloted F16 beating a human piloted F16 [3]. Given the serious handicaps of flying a plane designed for humans and flying to minimise risk to itself and other crewed aircraft this is a serious victory. Hopefully crewed military aircraft will be obsolete soon.

    Amusing video about the performance of cats with MMORPG style descriptions [4].

    John Goerzen wrote an interesting blog post about censorship and the changes to Facebook [5].

    Ron Garret wrote an interesting blog post 15 years ago when going through what he now describes as an existential crisis [6].

    A comment on Ron’s post references Alan Crowe’s blog post about whether the “self” exists, which is an interesting philosophical post [7]. But I’m still going to think of myself as a person.

    Another comment on Ron’s post references Aaron Swartz’ blog post about Noam Chomsky etc [8]. I have to watch Manufacturing Consent: Noam Chomsky and the Media.

    Ron Garret wrote an interesting blog post about his failed attempts to start a company and how it all worked out well for him anyway [9].

    Amusing video about a failed crowdfunded e-bike [10].

    Cory Doctorow wrote an insightful article about how Enshittification is not caused by VCs but by lack of controls [11].

    Systemd Hardening and Sending Mail

    A feature of systemd is the ability to reduce the access that daemons have to the system. The restrictions include access to certain directories, system calls, capabilities, and more. The systemd.exec(5) man page describes them all [1]. To see an overview of the security of daemons run “systemd-analyze security” and to get details of one particular daemon run a command like “systemd-analyze security mon.service”.

    I created a Debian wiki page for a systemd-analyze security goal [2]. At this time release goals aren’t a serious thing for Debian so this won’t result in release critical bug reports, but it is still something we can aim for.

    For a simple daemon (e.g. BIND, dhcpd, or syslogd) this isn’t difficult to do. It might be difficult to understand the implications of some changes (especially when restricting system calls) but you can do some quick tests. The functionality of such programs has a limited scope and once you get it basically working it’s done.
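As a sketch of what this looks like in practice (the unit name is a placeholder and the exact set of directives is illustrative — the right set depends on what the daemon actually does), a drop-in file such as /etc/systemd/system/foo.service.d/hardening.conf might contain:

```ini
[Service]
# Prevent the daemon and its children from gaining new privileges
NoNewPrivileges=yes
# Mount /usr, /boot and /etc read-only; everything else needs ReadWritePaths=
ProtectSystem=strict
# Hide /home, /root and /run/user from the daemon
ProtectHome=yes
# Private /tmp and no access to physical device nodes
PrivateTmp=yes
PrivateDevices=yes
# Only allow the socket families the daemon actually needs
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
# Allow only the broad "system service" group of system calls
SystemCallFilter=@system-service
```

Re-running “systemd-analyze security foo.service” after each change shows how the exposure score improves, and a quick functional test shows whether the daemon still works.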

    For some daemons it’s harder. NetworkManager is one of the well known slightly more difficult cases as it can do things like starting a VPN connection. The larger scope and the use of plugins make it difficult to test the combinations. The systemd restrictions apply to child processes too, unlike restrictions by SE Linux and AppArmor which permit a child process to run in a different security context.

    The messages when a daemon fails due to systemd restrictions are usually unclear which makes things harder to setup and makes it more important to get it right.

    My “mon” package (which I forked upstream as etbe-mon [3]) is one of the difficult daemons, as local tests can involve probing large parts of the system. But I have got that working reasonably well for most cases.

    I have a bug report about running mon with Exim [4]. The problem with this is that Exim has a single process model which means that the process doing local delivery can be a child of the process that initially received the message. So the main mon process needs all the access for delivering mail (writing to /home etc). This also means that every other child of mon will get such access including programs that receive untrusted data from the Internet. Most of the extra access needed by Exim is not a problem, but /home access is a potential risk. It also means that more effort is needed when reviewing the access control.

    The problem with this Exim design is that it applies to many daemons. Every daemon that sends email or that potentially could send email in some configuration needs extra access to be granted.

    Can Exim be configured to have its “sendmail -T” type operation just write a file in a spool directory for another program to process? Do we need to grant permissions to most of the system just for Exim?

    Links December 2024

    Interesting video about the hack of Andrew Tate’s The Real World site [1].

    Informative video about Nick Fuentes covering the racism, anti-semitism, misogyny, and how he is clearly in denial about being gay [2]. It ends with his arrest. Hopefully the first of many arrests. This is what conservatives support.

    Insightful article covering the history of bus-mastering attacks on computer security and ending with pwning via CF cards [3].

    Interesting lecture at the seL4 symposium about attestation of a running Linux kernel [4]. I’m not a fan of most attestation systems but using a separate isolated seL4 process to monitor a Linux VM offers some real benefits.

    Interesting seL4 symposium lecture about CPU drivers and the fact that a modern SoC is a distributed computing environment with lots of untrusted firmware [5]. I like the way he slipped and called it “unworthy firmware” instead of “untrustworthy firmware”, I think I’ll copy that.

    Hisense 65U80G 65″ Inch 8K ULED Android TV (2021)

    The Aim

    I just bought a Hisense 65U80G 65″ Inch 8K ULED Android TV (2021 model) for $1,568 including delivery. I got that deal by googling refurbished 8K TVs and finding the cheapest one I could buy. Amazon and eBay didn’t have any good prices on second hand 8K TVs, and new ones start at $3,000 on special. I didn’t assess how Hisense compares to other TVs; as far as I could determine there was only one model of 8K TV on sale in Australia in the price range I was prepared to pay. So I won’t review how this TV compares to other models but how refurbished TVs compare to other display options.

    I bought this because the highest resolution monitor in my price range is 5120*2160 [1]. While I could get a 5120*2880 monitor for around $1,500, paying 3* the money for 33% more pixels is bad value for money. Getting 4* the pixels for under 3* the price is good value even when it’s a TV with the lower display quality that involves.
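Checking the pixel arithmetic (the 4* figure works out relative to a regular 4K panel; relative to the 5120*2160 monitor the 8K TV has 3* the pixels):

```python
mon_5120x2160 = 5120 * 2160   # current monitor: 11,059,200 pixels
mon_5120x2880 = 5120 * 2880   # candidate monitor: 14,745,600 pixels
uhd_4k        = 3840 * 2160   # regular 4K: 8,294,400 pixels
tv_8k         = 7680 * 4320   # 8K TV: 33,177,600 pixels

more_pixels  = mon_5120x2880 / mon_5120x2160 - 1   # 0.333..., i.e. 33% more
ratio_vs_4k  = tv_8k / uhd_4k                      # exactly 4.0
ratio_vs_mon = tv_8k / mon_5120x2160               # exactly 3.0
```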

    Before buying this TV I read this blog post by Daniel Lawrence about using an 8K TV as a primary monitor [2]. While he has an interesting setup with a 65″ TV on a large desk it’s not what I plan to do at this time.

    My Plans for Use

    I don’t plan to make it a main monitor. While 5120*2160 isn’t as good as I’d like on my desk, it’s bearable and the quality of the display is high. High resolution isn’t needed for all tasks; for example I’m writing this blog post on my laptop while watching a movie on the 8K TV.

    One thing I’d like to do with the 8K TV when I get it working as a monitor is to share the screen for team programming projects. I don’t have any specific plans other than team coding projects at the moment. But it will be interesting to experiment with it when I get it working.

    Technical Issues with High Resolution Monitors

    Hardware Needed

    A lot of the graphics hardware out there doesn’t support resolutions higher than 5120*2880. It seems that most laptops don’t support resolutions even that high, and anything above 4K is difficult. Only quite recent and high end video cards will do 8K. Apparently the RTX 2080 is one of the oldest ones that does, and that’s $400 on eBay. Strangely the GPU chipset spec pages don’t list the maximum resolution, and there’s the additional complication that the other chips on a card might not support the resolutions that the GPU itself can support.

    As an aside I don’t use NVidia cards for regular workstations due to reliability problems. But they are good for ML work and for special purpose systems.

    Interface Versions

    To do 8K video it seems that you need HDMI 2.1 (or maybe 2.0 with 4:2:0 chroma subsampling), or DisplayPort 1.3 for 30Hz with 24bit color and 2.0 for higher refresh rates. But using a particular version of the interface doesn’t require supporting all the resolutions that it might support. This TV has HDMI 2.1 inputs, and I’ve bought an adaptor cable that does DisplayPort 1.4 to HDMI 2.1 at 8K resolution. So I need a video card that does DisplayPort 1.4 or HDMI 2.1 output. That doesn’t mean that the card will work, but it could work.
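The HDMI side of that can be sanity-checked with the standard link figures (18 Gbit/s with 8b/10b encoding for HDMI 2.0, 48 Gbit/s with 16b/18b encoding for HDMI 2.1; blanking intervals are ignored for simplicity):

```python
def data_rate_gbps(width, height, hz, bits_per_pixel):
    """Uncompressed video data rate in Gbit/s (blanking ignored)."""
    return width * height * hz * bits_per_pixel / 1e9

hdmi20_effective = 18 * 8 / 10    # 14.4 Gbit/s after 8b/10b encoding
hdmi21_effective = 48 * 16 / 18   # ~42.7 Gbit/s after 16b/18b encoding

r_8k30_420 = data_rate_gbps(7680, 4320, 30, 12)  # 4:2:0 halves bits/pixel
r_8k30_444 = data_rate_gbps(7680, 4320, 30, 24)
r_8k60_444 = data_rate_gbps(7680, 4320, 60, 24)
```

8K@30Hz with 4:2:0 (~11.9 Gbit/s) squeezes into HDMI 2.0, full color at 30Hz (~23.9 Gbit/s) needs HDMI 2.1, and 8K@60Hz uncompressed (~47.8 Gbit/s) exceeds even HDMI 2.1’s effective rate, so it relies on DSC or subsampling.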

    It’s a pity that no-one has made a USB-C video controller that has a basic frame-buffer supporting 8K and minimal GPU capabilities. The consensus is that no games will run well at 8K at this time, so anyone using 8K resolution doesn’t need GPU power unless it’s for ML stuff.

    I’m thinking of making a system that can be used as an ML server and X/Wayland server, so a GPU with a decent amount of RAM and compute power would be good. I’m not particularly interested in spending $1,500+ on a GPU to drive a $1,568 TV. I’m looking into getting an RTX A2000 with 12G of RAM, which should be adequate for ML experiments and can handle 8K@60Hz output.

    I’ve ordered a DisplayPort to HDMI converter cable so if I get a DisplayPort card it will work.

    Software Support

    When I first got started with 4K monitors I had significant problems adjusting the UI to be usable. The support for scaling is much better now than it was then, and an 8K 65″ display has a lower DPI than a 4K 32″ one. So I hope this won’t be an issue.
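The DPI comparison is easy to verify (16:9 panels and the diagonal sizes from the text assumed):

```python
from math import hypot

def dpi(width_px, height_px, diagonal_inches):
    """Pixels per inch: pixel diagonal divided by physical diagonal."""
    return hypot(width_px, height_px) / diagonal_inches

dpi_8k_65 = dpi(7680, 4320, 65)   # about 135.6 DPI
dpi_4k_32 = dpi(3840, 2160, 32)   # about 137.7 DPI
```

So the 8K 65″ panel is in fact slightly less dense than a 4K 32″ one, and UI scaling settings that work on the latter should carry over.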

    Progress So Far

    My first Hisense 8K TV stopped working properly. It would change to a mostly white screen after being used for some time. The screen would change in ways that correlated with changes in what should appear, but not in a way that was usable; it was just a different pattern of white blobs when I changed to a menu view, nothing that allowed using it. I presume that this was the problem that drove the need for refurbishment, as when I first got the TV it was still signed in to Google accounts for YouTube and to Netflix.

    Best Buy Electrical was good about providing a quick replacement; they took away the old TV and delivered a new one on the same visit, and it’s now working well.

    I’ve obtained an NVidia card that can allegedly do 8K output and a combination of cables that might be able to carry an 8K signal. Now I just need to get the NVidia drivers to stop causing a kernel panic so things can work.