
Video Conferencing (LCA)

I’ve just done a tech check for my LCA lecture. I had initially planned to do what I had done before: use my phone for recording audio and video and my PC for everything else. The problem is that I wanted to get an external microphone going, and plugging in a USB microphone turned off the speaker in the phone (it seemed to direct audio to a non-existent USB audio output). I tried using Bluetooth headphones with the USB microphone and that didn’t work. Eventually a viable option seemed to be using USB headphones on my PC with the phone for camera and microphone. Then it turned out that my phone (Huawei Mate 10 Pro) didn’t support resolutions higher than VGA in Chrome (it didn’t have the “advanced” settings menu to select resolution), which is probably an issue of Android build features. So the best option is to use a webcam on the PC. I was recommended a Logitech C922, but Officeworks only has a Logitech C920, which is apparently OK.

The free connection test from freeconference.com [1] is good for testing out how your browser works for videoconferencing. It tests each feature separately and is easy to run.

After buying the C920 webcam I found that it sometimes worked and sometimes caused a kernel panic like the following (partial panic log included for the benefit of people Googling this Logitech C920 problem):

[95457.805417] BUG: kernel NULL pointer dereference, address: 0000000000000000
[95457.805424] #PF: supervisor read access in kernel mode
[95457.805426] #PF: error_code(0x0000) - not-present page
[95457.805429] PGD 0 P4D 0 
[95457.805431] Oops: 0000 [#1] SMP PTI
[95457.805435] CPU: 2 PID: 75486 Comm: v4l2src0:src Not tainted 5.15.0-2-amd64 #1  Debian 5.15.5-2
[95457.805438] Hardware name: HP ProLiant ML110 Gen9/ProLiant ML110 Gen9, BIOS P99 02/17/2017
[95457.805440] RIP: 0010:usb_ifnum_to_if+0x3a/0x50 [usbcore]
...
[95457.805481] Call Trace:
[95457.805484]  
[95457.805485]  usb_hcd_alloc_bandwidth+0x23d/0x360 [usbcore]
[95457.805507]  usb_set_interface+0x127/0x350 [usbcore]
[95457.805525]  uvc_video_start_transfer+0x19c/0x4f0 [uvcvideo]
[95457.805532]  uvc_video_start_streaming+0x7b/0xd0 [uvcvideo]
[95457.805538]  uvc_start_streaming+0x2d/0xf0 [uvcvideo]
[95457.805543]  vb2_start_streaming+0x63/0x100 [videobuf2_common]
[95457.805550]  vb2_core_streamon+0x54/0xb0 [videobuf2_common]
[95457.805555]  uvc_queue_streamon+0x2a/0x40 [uvcvideo]
[95457.805560]  uvc_ioctl_streamon+0x3a/0x60 [uvcvideo]
[95457.805566]  __video_do_ioctl+0x39b/0x3d0 [videodev]

It turns out that Ubuntu Launchpad bug #1827452 has great information on this problem [2]. Apparently if the device decides it doesn’t have enough power then it will reconnect and get a different USB bus device number, and this often happens while the kernel is initialising it. There’s a race condition in the kernel code: the code that initialises the device doesn’t realise that the device has been detached, dereferences a NULL pointer, and then messes up other things in USB device management. The end result for me is that all USB devices become unusable in this situation, commands like “lsusb” hang, and a regular shutdown/reboot hangs because it can’t kill the user session because something is blocked on USB.

One of the comments on the Launchpad bug is that a powered USB hub can alleviate the problem while a USB extension cable (which I had been using) can exacerbate it. Officeworks currently advertises only one powered USB hub; it’s described as “USB 3” but also as “maximum speed 480 Mbps” (USB 2 speed). So basically they are selling a USB 2 hub for 4* the price that USB 2 hubs used to sell for.

When debugging this I used the “cheese” webcam utility program and ran it in a KVM virtual machine. The KVM parameters “-device qemu-xhci -usb -device usb-host,hostbus=1,hostaddr=2” (where 1 and 2 are replaced by the Bus and Device numbers from “lsusb”) allow the USB device to be passed through to the VM. Doing this meant that I didn’t have to reboot my PC every time a webcam test failed.
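As a sketch of what that looks like as a full invocation (the memory size, disk image name, and the bus/device numbers are all placeholders to be substituted for your own setup):

```shell
# Find the Bus and Device numbers of the webcam
lsusb
# e.g. "Bus 001 Device 002: ID 046d:082d Logitech, Inc. HD Pro Webcam C920"

# Boot a KVM guest with the webcam passed through via an emulated XHCI controller
qemu-system-x86_64 -enable-kvm -m 2048 \
  -drive file=test.qcow2,if=virtio \
  -device qemu-xhci -usb \
  -device usb-host,hostbus=1,hostaddr=2
```

Note that the hostaddr changes if the device reconnects and gets a new device number, so the lsusb check needs to be repeated after each failure.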

For audio I’m using the Sades Wand gaming headset I wrote about previously [3].

AS400

The IBM i operating system on the AS/400 is a system that runs on PPC for “midrange” systems. I did a bit of reading about it after seeing an AS/400 on ebay for $300; if I had a lot more spare time and energy I might have put in a bid for it, had it not looked like it had been left out in the rain. It seems that AS/400 is not dead, there are cloud services available. Here’s one that provides a VM with 2G of RAM for “only EUR 251 monthly” [1], wow. I’m not qualified to comment on whether that’s good value, but I think it’s worth noting that a Linux VM on an AMD64 CPU with similar storage and the same RAM can be expected to cost about $10 per month.

There is also a free AS/400 cloud named pub400 [2]; that’s the type of thing I’d do if I had my own AS/400.


Your Device Has Been Improved

I’ve just started a Samsung tablet downloading a 770MB update, the description says:

  • Overall stability of your device has been improved
  • The security of your device has been improved

Technically I have no doubt that both those claims are true and accurate. But according to common understanding of the English language I think they are both misleading.

By “stability improved” they mean “fixed some bugs that made it unstable”, and no technical person would imagine that after a certain number of such updates the number of bugs will ever reach zero and the tablet will be perfectly reliable. In fact you should consider yourself lucky if they fix more bugs than they add. It’s not THAT uncommon for phones and tablets to be bricked (rendered unusable by software) by an update. In the past I got a Huawei Mate 9 as a warranty replacement for a Nexus 6P because an update caused so many Nexus 6P phones to fail that they couldn’t be replaced with an identical phone [1].

By “security improved” they usually mean “fixed some security flaws that were recently discovered to make it almost as secure as it was designed to be”. Note that I deliberately say “almost as secure” because it’s sometimes impossible to fix a security flaw without making significant changes to interfaces which requires more work than desired for an old product and also gives a higher probability of things going wrong. So it’s sometimes better to aim for almost as secure or alternatively just as secure but with some features disabled.

Device manufacturers (and most companies in the Android space make the same claims while having the exact same bugs to deal with; Samsung is no different from the others in this regard) are not making devices more secure or more reliable than when they were initially released. They are aiming to make them almost as secure and reliable as when they were released. They don’t have much incentive to try too hard in this regard: Samsung won’t suffer if I decide my old tablet isn’t reliable enough and buy a new one, which will almost certainly be from Samsung because they make nice tablets.

As a thought experiment, consider if car repairers did the same thing: “getting us to service your car will improve fuel efficiency”. Great, how much more efficient will it be than when I purchased it?

As another thought experiment, consider if car companies stopped providing parts for car repair a few years after releasing a new model. This is effectively what phone and tablet manufacturers have been doing all along, software updates for “stability and security” are to devices what changing oil etc is for cars.


USB Microphones

The Situation

I bought myself some USB microphones on ebay. I couldn’t see any with USB type A connectors (the original USB connector) so I bought ones with USB-C connectors. I thought it would be good to have microphones that could work with recent mobile phones as well as with PCs, because surely it wouldn’t be difficult to get an adaptor. I tested one of the microphones and it worked well on a phone.

I bought a pair of adaptors for connecting USB-C devices to the USB A ports on a PC or laptop (here’s the link to where I bought them). I used one of the adaptors with a USB-C HDMI device, which gave the following line from lsusb. I didn’t try using a HDMI monitor on my laptop; having the device recognised was enough.

Bus 003 Device 002: ID 2109:0100 VIA Labs, Inc. USB 2.0 BILLBOARD

I tried connecting a USB-C microphone and Linux didn’t recognise the existence of a USB device. I tried that on a PC and a laptop, on multiple ports.

I wondered whether the description of the VIA “BILLBOARD” device as “USB 2.0” was relevant to my problem. According to Big Mess O’ Wires, USB-C has separate wires for USB 3.1 and USB 2 [1]. So someone could make an adaptor that converts USB-A to USB-C with only the USB 2 wires in place. I tested the USB-A to USB-C adaptor with the HDMI device in a USB “SuperSpeed” (IE 3.x) port and it still identified as USB 2.0. I suspect that the USB-C HDMI device is using all the high speed wires for DisplayPort data (with a conversion to HDMI) and therefore looks like a USB 2.0 device.

The Problem

I want to install a microphone in my workstation for long Zoom training sessions (7 hours in a day) that otherwise require me to use multiple Android devices, as I don’t have a single device that will do 7 hours of Zoom without running out of battery. A new workstation with USB-C is unreasonably expensive. A PCIe USB-C card would give me a port at the back of the machine, but I can’t have the back of the machine near the microphone because it’s too noisy.

If I could have a USB-C hub with reasonable length cables (the 1M cables typical for USB 2.0 hubs would be fine) connected to a USB-C port at the back of my workstation that would work. But there seems to be a great lack of USB-C hubs. NewBeDev has an informative post about the lack of USB-C hubs that have multiple USB-C ports [2]. There also seems to be a lack of USB-C hubs with cables longer than 20cm.

The Solution

I ended up ordering a Sades Wand gaming headset [3], that has over-ear headphones and an attached microphone which connects to the computer via USB 2.0. I gave the URL for the sades.com.au web site for reference but you will get a significantly better price by buying on ebay ($39+postage vs about $30 including postage).

I guess I won’t be using my new USB-C microphones for a while.

USB Cables and Cameras

This page has summaries of some USB limits [1]. USB 2.0 has the longest cable segment limit of 5M (1.x, 3.x, and USB-C are all shorter), so USB 2.0 is what you want for long runs. The USB limit for daisy chained devices is 7 (including host and device), so that means a maximum of 5 hubs or a total distance between PC and device of 30M. There are lots of other ways of getting longer distances, the cheapest seems to be putting an old PC at the far end of an Ethernet cable.
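As a quick sanity check of those numbers, here’s the arithmetic (the 7-tier limit and the 5M segment length are the USB 2.0 limits summarised above):

```python
# USB 2.0 topology limits: 7 tiers in a daisy chain, counting the host
# (tier 1) and the device (tier 7), with up to 5 metres per cable segment.
MAX_TIERS = 7
SEGMENT_LENGTH_M = 5

max_hubs = MAX_TIERS - 2          # tiers left over after host and device
max_segments = MAX_TIERS - 1      # one cable between each adjacent pair of tiers
max_distance_m = max_segments * SEGMENT_LENGTH_M

print(max_hubs)        # 5
print(max_distance_m)  # 30
```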

Some (many? most?) laptops use USB for the interface to the built in camera, and these cameras are sold from scrapped laptops. You could probably set up a home monitoring system in a typical home by having a centrally located PC with USB hubs fanning out to the corners. But old Android phones on a Wifi network seem like an easier option if you can prevent the phones from crashing all the time.


Censoring Images

A client asked me to develop a system for “censoring” images from an automatic camera. The situation is that we have a camera taking regular photos from a fixed location, and the view includes part of someone else’s property. So my client made a JPEG with black rectangles over the sections that need to be covered. The first thing I needed to do was convert the JPEG to a PNG with transparency for the sections that aren’t to be covered.

To convert it I loaded the JPEG in the GIMP and went to the Layer->Transparency->Add Alpha Channel menu to enable the Alpha channel. Then I selected the “Bucket Fill tool”, used “Mode Erase” and “Fill by Composite”, and clicked on the background (the part of the JPEG that was white) to make it transparent. Then I exported it to PNG.

If anyone knows of an easy way to convert the file then please let me know. It would be nice if there was a command-line program I could run to convert a specified color (default white) to transparent. I say this because I can imagine my client going through a dozen iterations of an overlay file that doesn’t quite fit.
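One candidate for this (I haven’t verified it against my client’s overlay, and the 5% fuzz value is a guess to cope with JPEG compression artifacts near white) is ImageMagick’s “-transparent” option:

```shell
# Convert near-white pixels to transparent, writing a PNG with an alpha channel.
# -fuzz allows pixels close to white (JPEG artifacts) to match as well;
# adjust the percentage to taste.
convert overlay.jpg -fuzz 5% -transparent white overlay.png
```

If that works it would let my client iterate on the overlay JPEG without any manual GIMP steps.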

To censor the image I ran the “composite” command from ImageMagick. The command I used was “composite -gravity center overlay.png in.jpg out.jpg”. If anyone knows a better way of doing this then please let me know.

The platform I’m using is an ARM926EJ-S rev 5 (v5l), which takes 8 minutes of CPU time to convert a single JPEG at full DSLR resolution (4 megapixels). It also required enabling swap on a SD card to avoid running out of RAM, and running “systemctl disable tmp.mount” to stop using tmpfs for /tmp, as the system only has 256M of RAM.


Video Decoding

I’ve had a saga of getting 4K monitors to work well. My latest issue has been video playing: the dreaded mplayer error about the system being too slow. My previous post about 4K was about using DisplayPort to get more than a 30Hz scan rate at 4K [1]. I now have a nice 60Hz scan rate which makes WW2 documentaries display nicely, among other things.

But when running a 4K monitor on a 3.3GHz i5-2500 quad-core CPU I can’t get a FullHD video to display properly. Part of the process of decoding the video and scaling it to 4K resolution is too slow, so action scenes in movies lag. When running a 2560*1440 monitor on a 2.4GHz E5-2440 hex-core CPU with the mplayer option “-lavdopts threads=3” everything is great (but it fails if mplayer is run with no parameters). In tests of apparent performance it seemed that the E5-2440 CPU gains more from the threaded mplayer code than the i5-2500; maybe the E5-2440 is more designed for server use (it’s in a Dell PowerEdge T320 while the i5-2500 is in a random white-box system) or maybe it’s just newer. I haven’t tested whether the i5-2500 system could perform adequately at 2560*1440 resolution.

The E5-2440 system has an ATI HD 6570 video card which is old, slow, and only does PCIe 2.1, which gives 5GT/s per lane or 8GB/s for a x16 slot. The i5-2500 system has a newer ATI video card that is capable of PCIe 3.0, but “lspci -vv” as root says “LnkCap: Port #0, Speed 8GT/s, Width x16” and “LnkSta: Speed 5GT/s (downgraded), Width x16 (ok)”. So for reasons unknown to me the system with the faster PCIe 3.0 video card is being downgraded to PCIe 2.1 speed. A quick check of the web site for my local computer store shows that all ATI video cards costing less than $300 have PCIe 3.0 interfaces and the sole ATI card with PCIe 4.0 (which gives double the PCIe speed if the motherboard supports it) costs almost $500. I’m not inclined to spend $500 on a new video card and then a greater amount of money on a motherboard supporting PCIe 4.0 and the CPU and RAM to go in it.

According to my calculations 3840*2160 resolution at 24bpp (probably 32bpp data transfers) at 30 frames/sec means 3840*2160*4*30/1024/1024=950MB/s. PCIe 2.1 can do 8GB/s so that probably isn’t a significant problem.
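That calculation as code, using the same assumption of 32bpp data transfers:

```python
# Estimated bus bandwidth for uncompressed 4K video frames.
width, height = 3840, 2160
bytes_per_pixel = 4        # 24bpp colour padded to 32bpp transfers
frames_per_sec = 30

bandwidth_mb_s = width * height * bytes_per_pixel * frames_per_sec / 1024 / 1024
print(round(bandwidth_mb_s))  # 949, i.e. roughly the 950MB/s in the text
```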

I’d been planning on buying a new video card for the E5-2440 system, but due to some combination of having a better CPU and lower screen resolution it is working well for video playing so I can save my money.

As an aside, the T320 is a server class system that had been running for years in a corporate DC. When I replaced the high speed SAS disks with SATA SSDs it became quiet enough for a home workstation. It works very well at that task, but the BIOS is quite determined to keep the motherboard video running due to the remote console support, so swapping monitors around was more pain than I felt like going through; I just got it working and left it. I ordered a special GPU power cable, but before it arrived I found that the older video card (which doesn’t need an extra power cable) performs adequately.

Here is a table comparing the systems.

                  2560*1440 works well    3840*2160 goes slow
System            Dell PowerEdge T320     White Box PC from rubbish
CPU               2.4GHz E5-2440          3.3GHz i5-2500
Video Card        ATI Radeon HD 6570      ATI Radeon R7 260X
PCIe Speed        PCIe 2.1 – 8GB/s        PCIe 3.0 downgraded to PCIe 2.1 – 8GB/s

Conclusion

The ATI Radeon HD 6570 video card is one that I had previously tested and found inadequate for 4K support; I can’t remember if it didn’t work at that resolution or didn’t support more than a 30Hz scan rate. If the 2560*1440 monitor dies then it wouldn’t make sense to buy anything less than a 4K monitor to replace it, which means that I’d need to get a new video card to match. But for the moment 2560*1440 is working well enough so I won’t upgrade any time soon. I’ve already got the special power cable (specified as being for a Dell PowerEdge R610 for anyone else who wants to buy one) so it will be easy to install a powerful video card in a hurry.


Comparing Compression

I just did a quick test of different compression options in Debian. The source file is a 1.1G MySQL dump file. The time is user CPU time on an i7-930 running under KVM; the compression programs may have different levels of optimisation for other CPU families.

Facebook people designed the zstd compression system (here’s a page giving an overview of it [1]). It has some interesting new features that can provide real differences at scale (like unusually large windows and pre-defined dictionaries), but I just tested the default mode and the -9 option for more compression. For the SQL file “zstd -9” provides significantly better compression than “gzip -9” while taking slightly less CPU time, and zstd with the default option (equivalent to “zstd -3”) gives much faster compression than “gzip -9” while also producing slightly smaller output. For this use case bzip2 is too slow for inline compression of a MySQL dump, as the dump process locks tables and can hang clients. The lzma and xz compression algorithms provide significant benefits in size but the time taken is grossly disproportionate.
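Python’s standard library has gzip/zlib, bzip2, and lzma/xz built in (zstd needs the third-party zstandard module), so a rough version of this kind of comparison can be scripted. This is only a sketch: the sample data below is a stand-in for a real SQL dump and the absolute ratios will differ.

```python
import bz2
import lzma
import time
import zlib

# Stand-in for a real SQL dump: highly repetitive text compresses in the
# same general way, though the absolute ratios will differ.
data = b"INSERT INTO t1 VALUES (42, 'some text', '2021-01-01');\n" * 100000

for name, compress in (
    ("gzip -9", lambda d: zlib.compress(d, 9)),
    ("bzip2 -9", lambda d: bz2.compress(d, 9)),
    ("xz", lzma.compress),
):
    start = time.process_time()
    out = compress(data)
    elapsed = time.process_time() - start
    print(f"{name}: {len(out)} bytes in {elapsed:.2f}s CPU")
```

Running this on the actual dump file (read in binary mode) would reproduce the table below, minus the zstd rows.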

In a quick check of my collection of files compressed with gzip I was only able to find one file that got less compression with zstd with default options, and that file got better compression with “zstd -9”. So zstd seems to beat gzip everywhere by every measure.

The bzip2 compression seems to be obsolete, “zstd -9” is much faster and has slightly smaller output.

Both xz and lzma seem to offer a combination of compression and time taken that zstd can’t beat (for this file type at least). The zstd ultra compression mode -22 gives 2% smaller output, but almost 28 minutes of CPU time for compression is a bit ridiculous. There is a threaded mode for zstd that could potentially give “zstd --ultra -22” a shorter wall clock time than lzma/xz while also giving better compression.

Compression         Time    Size
zstd                5.2s    130M
zstd -9             28.4s   114M
gzip -9             33.4s   141M
bzip2 -9            3m51    119M
lzma                6m20    97M
xz                  6m36    97M
zstd -19            9m57    99M
zstd --ultra -22    27m46   95M

Conclusion

For distributions like Debian which have large archives of files that are compressed once and transferred a lot the “zstd --ultra -22” compression might be useful with multi-threaded compression. But given that Debian already has xz in use it might not be worth changing until faster CPUs with lots of cores become more commonly available. One could argue that for Debian it doesn’t make sense to change from xz as hard drives seem to be getting larger capacity (and also smaller physical size) faster than the Debian archive is growing. One possible reason for adopting zstd in a distribution like Debian is that there are more tuning options for things like memory use. It would be possible to have packages for an architecture like ARM that tends to have less RAM compressed in a way that decreases memory use on decompression.

For general compression such as compressing log files and making backups it seems that zstd is the clear winner. Even bzip2 is far too slow, and in my tests zstd clearly beats gzip for every combination of compression and time taken. There may be some corner cases where gzip can compete on compression time due to CPU features, optimisation for particular CPUs, etc, but I expect that in almost all cases zstd will win for both compression size and time. As an aside, I once noticed the 32bit version of gzip compressing faster than the 64bit version on an Opteron system; the 32bit version had assembly optimisation and the 64bit version didn’t at that time.

To create a tar archive you can run “tar czf” or “tar cJf” to create an archive with gzip or xz compression. To create an archive with zstd compression you have to use “tar --zstd -cf”, that’s 7 extra characters to type. It’s likely that for most casual archive creation (EG for copying files around on a LAN or USB stick) saving 7 characters of typing is more of a benefit than saving a small amount of CPU time and storage space. It would be really good if tar got a single character option for zstd compression.

The external dictionary support in zstd would work really well with rsync for backups. Currently rsync only supports zlib, adding zstd support would be a good project for someone (unfortunately I don’t have enough spare time).

Now I will change my database backup scripts to use zstd.

Update:

The command “tar acvf a.zst filenames” will create a zstd compressed tar archive, the “a” option to GNU tar makes it autodetect the compression type from the file name. Thanks Enrico!


A Good Time to Upgrade PCs

PC hardware just keeps getting cheaper and faster. Now that so many people have been working from home the deficiencies of home PCs are becoming apparent. I’ll give Australian prices and URLs in this post, but I think that similar prices will be available everywhere that people read my blog.

From MSY (parts list PDF) [1] 120G SATA SSDs are under $50 each. 120G is more than enough for a basic workstation, so you are looking at $42 or so for fast quiet storage, or $84 or so for the same with RAID-1. Being quiet is a significant luxury feature and it’s also useful if you are going to be in video conferences.

For more serious storage NVMe starts at around $100 per unit; I think that $124 for a 500G Crucial NVMe is the best low end option (paying $95 for a 250G Kingston device doesn’t seem like enough of a saving to be worth it). So that’s $248 for 500G of very fast RAID-1 storage. There’s a Samsung 2TB NVMe device for $349 which is good if you need more storage; it’s interesting to note that this is significantly cheaper than the Samsung 2TB SATA SSD which costs $455. I wonder if SATA SSD devices will go away in the future; it might end up being SATA for slow/cheap spinning media and M.2 NVMe for solid state storage. The SATA SSD devices are only good for use in older systems that don’t have M.2 sockets on the motherboard.

It seems that most new motherboards have one M.2 socket with NVMe support, and presumably support for booting from NVMe. But dual M.2 sockets are rare, and the price difference is significantly greater than the $14 cost of a PCIe M.2 card to support NVMe. So for NVMe RAID-1 it seems that the best option is a motherboard with a single M.2 socket (starting at $89 for an AM4 socket motherboard – the current standard for AMD CPUs) and a PCIe M.2 card.

One thing to note about NVMe is that different drivers are required. On Linux this means building a new initrd before the migration (or afterwards when booted from a recovery image), and on Windows it probably means a fresh install from special installation media with NVMe drivers.
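On Debian and related systems the initrd rebuild might look like the following (a sketch only; recent initramfs-tools usually includes the nvme driver automatically, so forcing it is just a precaution):

```shell
# Make sure the nvme driver is included in the initrd
echo nvme >> /etc/initramfs-tools/modules
# Rebuild the initrd for all installed kernels
update-initramfs -u -k all
```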

All the AM4 motherboards seem to have RADEON Vega graphics built in which is capable of 4K resolution at a stated refresh of around 24Hz. The ones that give detail about the interfaces say that they have HDMI 1.4 which means a maximum of 30Hz at 4K resolution if you have the color encoding that suits text (IE for use other than just video). I covered this issue in detail in my blog post about DisplayPort and 4K resolution [2]. So a basic AM4 motherboard won’t give great 4K display support, but it will probably be good for a cheap start.

$89 for motherboard, $124 for 500G NVMe, $344 for a Ryzen 5 3600 CPU (not the cheapest AM4 but in the middle range and good value for money), and $99 for 16G of RAM (DDR4 RAM is cheaper than DDR3 RAM) gives the core of a very decent system for $656 (assuming you have a working system to upgrade and peripherals to go with it).

Currently Kogan has 4K resolution monitors starting at $329 [3]. They probably won’t be the greatest monitors, but my experience of a past cheap 4K monitor from Kogan was that it was quite OK. Samsung 4K monitors started at about $400 last time I could check (Kogan currently has no stock of them and doesn’t display the price); I’d pay an extra $70 for Samsung, but the Kogan branded product is probably good enough for most people. So you are looking at under $1000 for a new system with fast CPU, DDR4 RAM, NVMe storage, and a 4K monitor if you already have the case, PSU, keyboard, mouse, etc.

It seems quite likely that the 4K video hardware on a cheap AM4 motherboard won’t be that great for games and it will definitely be lacking for watching TV documentaries. Whether such deficiencies are worth spending money on a PCIe video card (starting at $50 for a low end card but costing significantly more for 3D gaming at 4K resolution) is a matter of opinion. I probably wouldn’t have spent extra for a PCIe video card if I had 4K video on the motherboard. Not only does using built in video save money it means one less fan running (less background noise) and probably less electricity use too.

My Plans

I currently have a workstation with 2*500G SATA SSDs in a RAID-1 array, 16G of RAM, and a i5-2500 CPU (just under 1/4 the speed of the Ryzen 5 3600). If I had hard drives then I would definitely buy a new system right now. But as I have SSDs that work nicely (quiet and fast enough for most things) and almost all machines I personally use have SSDs (so I can’t get a benefit from moving my current SSDs to another system) I would just get CPU, motherboard, and RAM. So the question is whether to spend $532 for more than 4* the CPU performance. At the moment I’ll wait because I’ll probably get a free system with DDR4 RAM in the near future, while it probably won’t be as fast as a Ryzen 5 3600, it should be at least twice as fast as what I currently have.

IT Asset Management

In my last full-time position I managed the asset tracking database for my employer. It was one of those things that “someone” needed to do, and it seemed that the only way “someone” wouldn’t equate to “no-one” was for me to do it – which was OK. We used Snipe IT [1] to track the assets. I don’t have enough experience with asset tracking to say that Snipe is better or worse than average, but it basically did the job. Asset serial numbers are stored, you can have asset types that allow you to just add one more of a particular item, purchase dates are stored which makes warranty tracking easier, and every asset is associated with a person or listed as available. While I can’t say that Snipe IT is better than other products I can say that it will do the job reasonably well.

One problem that I didn’t discover until way too late was that the finance people weren’t tracking serial numbers, and that some assets in the database had the same asset IDs as the finance department used while others had different ones. The best advice I can give to anyone who gets involved with asset tracking is to immediately chat to finance about how they track things; you need to know if the same asset IDs are used and if serial numbers are tracked by finance. I was pleased to discover that my colleagues were all honourable people, as there was no apparent evaporation of valuable assets even though there was little ability to discover who might have been the last person to use some of them.

One problem that I’ve seen at many places is treating small items like keyboards and mice as “assets”. I think that anything that is worth less than 1 hour’s pay at the minimum wage (the price of a typical PC keyboard or mouse) isn’t worth tracking, treat it as a disposable item. If you hire a programmer who requests an unusually expensive keyboard or mouse (as some do) it still won’t be a lot of money when compared to their salary. Some of the older keyboards and mice that companies have are nasty, months of people eating lunch over them leaves them greasy and sticky. I think that the best thing to do with the keyboards and mice is to give them away when people leave and when new people join the company buy new hardware for them. If a company can’t spend $25 on a new keyboard and mouse for each new employee then they either have a massive problem of staff turnover or a lack of priority on morale.