Amazon Prime and Netflix

I’ve been trying both Amazon Prime and Netflix. I signed up for the free month of Amazon Prime to watch “Good Omens” and “Picard”. “Good Omens” is definitely worth the effort of setting up the free month, and worth a month’s subscription if you have already used your free month in the past. “Picard” is OK.

Content

Amazon Prime has a medium amount of other content. I’m now paying for a month of Amazon Prime mainly because there are enough documentaries to fill a month; for reference there are plenty of good ones about war and about space exploration. There are also some really rubbish documentaries, for example a 2 part documentary about the Magna Carta where the second part opens with Grover Norquist claiming that the Magna Carta is justification for not having any taxes (the first part seemed OK).

Netflix has a lot of great content. A big problem with Netflix is that there aren’t good ways of searching and organising the content you want to watch. It would be really nice if Netflix used machine learning for recommendations, suggesting shows based on both what I’ve liked and what I’ve disliked.

On both Netflix and Amazon, when you view the details of a show it gives a short list of similar shows, which is nice. With Amazon I have no complaints about that. But the Netflix content library is so large that you get lost in a maze of links. The Android tablet interface for Netflix shows 12 similar shows in a grid and the web interface shows a row of 20 shows with looped scrolling. Then as you click a different show you get another list of 12/20 shows which will usually have some overlap with the previous one. It would be nice if you could easily swipe left on shows you don’t like to avoid having them repeatedly presented to you.

On Netflix I’ve really enjoyed the “Altered Carbon” series (which is significantly more violent than I anticipated), “Black Mirror” (the episode written by Trent Reznor and starring Miley Cyrus is particularly good), and “Love, Death & Robots”. Overall I currently rate “Love, Death & Robots” as in many ways the best series I’ve ever watched, because the episodes are all short and get straight to the point. One advantage of online video is that there’s no need to pad episodes out or cut them short to fit a TV time slot, they can use as much time as necessary to tell the story.

Watch List

Having a single row of shows to watch is fine for the amount of content that Amazon has, but with the Netflix content you can easily get 100 shows on your watch list. It would be good to be able to search my watch list by genre; it’s a drag to flick through dozens of icons of war documentaries when I’m in the mood for an action movie, as the icons are somewhat similar.

As well as the list of shows you selected to watch, Netflix has a separate list of recently watched shows with no way to edit it. So if you watch 5 minutes of a show and decide that it sucks, it stays on that list until you have partially watched 10 other shows. For my usage the recently watched list is the most important thing, as I’m watching some serial shows and wouldn’t want to go through the 100 shows on my watch list to find them. But if I’ve decided that a movie sucked after watching a bit of it I don’t want to be reminded of it by seeing its icon every time I use Netflix for the next month.

Amazon has only a single “watch next” list covering both shows that you have watched recently and shows that you selected as worth watching. It allows editing the list, which is nice, but Amazon also often keeps shows on the list after you have finished watching them and removed them from your “to watch” selection. Amazon’s watch list is also generally buggy; at one time it decided that a movie was no longer available in my region but didn’t let me remove it from the list.

Quality

Apparently the Netflix web interface on Linux is limited to 720p video, while the Amazon web interface is limited to 720p on all platforms. In any case my Internet connection is probably only good enough for 1080p at most. I haven’t noticed any quality differences between Netflix and Amazon Prime.

Multiple Users

Netflix allows you to create profiles for multiple users with separate watch lists which is very handy. They also don’t have IP address restrictions so it’s a common practice for people to share a Netflix account with relatives. If you try to use Netflix when the maximum number of sessions for your account is in use it will show a list of what the other people on your account are watching (so if you share with your parents be careful about that).

Amazon doesn’t allow creating multiple profiles, but its content range isn’t that great so this matters less. The trend in video streaming is for proprietary content to force users to subscribe to a service, so sharing an Amazon Prime account with a few people so you can watch the proprietary content would make sense.

Watching Patterns

Sometimes when I’m particularly distracted I can’t focus on one show for any length of time. Both Amazon and Netflix (and probably all other online streaming services) allow me to skip between shows easily. That’s always been a feature of YouTube, but with YouTube you get recommended increasingly viral content until you find yourself watching utter rubbish. At least with Amazon and Netflix there is a minimum quality level even if that is reality TV.

Conclusion

Amazon Prime has a smaller range of content and some really rubbish documentaries. I don’t mind the documentaries about UFOs and other fringe stuff, as it’s obvious what they are and you can avoid them. A documentary that has me watching for an hour before it’s revealed to be a promo for Grover Norquist is really bad; it makes me wonder whether the hour of it that I watched had good content or was just rubbish too.

Netflix has a huge range of content and the quality level is generally very high.

If you are going to watch TV then subscribing to Netflix is probably a good idea. It’s reasonably cheap, has a good (not great) interface, and has a lot of content including some great original content.

For Amazon maybe subscribe for 1 month every second year to binge watch the Amazon proprietary content that interests you.

Links February 2020

Truthout has an interesting summary of the US “Wars Without Victory and Weapons Without End” [1]. The Korean war seems mostly a win for the US though.

The Golden Age of White Collar Crime is an informative article about the epidemic of rich criminals in the US that are protected at the highest levels [2]. This disproves the claims about gun ownership preventing crime. AFAIK no-one has shot a corporate criminal in spite of so many deserving it.

Law and Political Economy has an insightful article “Privatizing Sovereignty, Socializing Property: What Economics Doesn’t Teach You About the Corporation” [3]. It makes sense of the corporation law system.

IDR labs has a communism test, I scored 56% [4].

Vice has an interesting article about companies providing free email programs and services and then selling private data [5]. The California Consumer Privacy Act is apparently helping, as companies that do business in the US can’t be sure which customers are in CA and need to comply with it for all users. Don’t trust corporations with your private data.

The Atlantic has an interesting article about Coronavirus and the Blindness of Authoritarianism [6]. It covers the usual problem of authoritarianism, with a specific example from China. The US is only just starting its experiment with authoritarianism and is making the same mistakes.

The Atlantic has an insightful article about Coronavirus and its effect on China’s leadership [7]. It won’t change things much.

On The Commons has an insightful article “We Now Have a Justice System Just for Corporations” [8]. In the US corporations can force people into arbitration for most legal disputes, and as the corporations pay the arbitration companies, arbitration almost always gives the company the result it pays for.

Boing Boing has an interesting article about conspiracy theories [9]. Their point is that some people have conspiracy theories (meaning belief in conspiracies that is not based in fact) due to having seen real conspiracies at close range. I think this only applies to a minority of people who believe conspiracy theories, and probably only to people who believe in a very small number of conspiracies. It seems that most people who believe in conspiracy theories believe in many of them.

Douglas Rushkoff wrote a good article about rich people who are making plans to escape after they destroy the environment [10]. Includes the idea of having shock-collars for security guards to stop them going rogue.

Boing Boing has an interesting article on the Brahmin Left and the Merchant Right [11]. It has some good points about the left side of politics representing the middle class more than the working class, especially the major left wing parties that are more centrist nowadays (like Democrats in the US and Labor in Australia).

DisplayPort and 4K

The Problem

Video playback looks better with a higher refresh rate. A lot of content that was designed for TV (EG almost all historical documentaries) is going to be 25Hz interlaced (UK and Australia) or 30Hz interlaced (US). If you view that on a low refresh rate progressive scan display (EG a modern display running at 30Hz) then my observation is that it looks a bit strange; things that move seem to jump a bit and it’s distracting.

Getting HDMI to work with 4K resolution at a refresh rate higher than 30Hz seems difficult.

What HDMI Can Do

According to the HDMI Wikipedia page [1], HDMI 1.3 (introduced in June 2006) through 1.4b (introduced in October 2011) supports a 30Hz refresh rate at 4K resolution, and if you use 4:2:0 chroma subsampling (see the Chroma Subsampling Wikipedia page [2]) you can do 60Hz or 75Hz on HDMI 1.3 to 1.4b. Basically for colour 4:2:0 means half the horizontal and half the vertical resolution while giving the same resolution for monochrome. For video that apparently works well (4:2:0 is standard for Blu-Ray) and for games it might be OK, but for text (my primary use of computers) it would suck.
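
To see why chroma subsampling makes the difference, here’s a rough back of envelope calculation. It’s a sketch that ignores blanking intervals and assumes 8 bits per colour component, so the real requirements are somewhat higher, but the comparison holds:

#!/bin/bash
# rough 4K 60Hz bandwidth calculation, ignoring blanking intervals and
# assuming 8 bits per colour component
PIXELS=$((3840*2160*60))                          # pixels per second
echo "4:4:4 needs $((PIXELS*24/1000000)) Mbit/s"  # 24 bits per pixel
echo "4:2:0 needs $((PIXELS*12/1000000)) Mbit/s"  # 12 bits per pixel on average
echo "HDMI 1.3-1.4b carries about 8160 Mbit/s of video data"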

So I need support for HDMI 2.0 (introduced in September 2013) on the video card and monitor to do 4K at 60Hz. Apparently none of the combinations of video card and HDMI cable I use for Linux support that.

HDMI Cables

The Wikipedia page alleges that you need either a “Premium High Speed HDMI Cable” or an “Ultra High Speed HDMI Cable” for 4K resolution at 60Hz refresh rate. My problems probably aren’t related to the cable, as my testing has shown that a cheap “High Speed HDMI Cable” can work at 60Hz with 4K resolution given the right combination of video card, monitor, and drivers. A Windows 10 system I maintain has a Samsung 4K monitor and an NVidia GT630 video card running 4K resolution at 60Hz (according to Windows). The NVidia GT630 is a card that I tried on two Linux systems at 4K resolution, and it causes random system crashes on both; it seems like a nice card for Windows but not for Linux.

Apparently the HDMI devices test the cable quality and use whatever speed seems to work (the cable isn’t identified to the devices). The prices at a local store are $3.98 for “high speed”, $19.88 for “premium high speed”, and $39.78 for “ultra high speed”. It seems that trying a “high speed” cable first before buying an expensive cable would make sense, especially for short cables which are likely to be less susceptible to noise.

What DisplayPort Can Do

According to the DisplayPort Wikipedia page [3] versions 1.2–1.2a (introduced in January 2010) support HBR2 which on a “Standard DisplayPort Cable” (which probably means almost all DisplayPort cables that are in use nowadays) allows 60Hz and 75Hz 4K resolution.
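
The same back of envelope arithmetic shows why HBR2 has the headroom that HDMI 1.3 to 1.4b lacks: 4 lanes at 5.4Gbit/s each, with 8b/10b encoding leaving 80% of that for data:

echo "DP HBR2 carries $((4*5400*8/10)) Mbit/s"  # 17280 Mbit/s, well above the ~11944 Mbit/s needed for 4K 60Hz 4:4:4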

Comparing HDMI and DisplayPort

In summary to get 4K at 60Hz you need 2010 era DisplayPort or 2013 era HDMI. Apparently some video cards that I currently run for 4K (which were all bought new within the last 2 years) are somewhere between a 2010 and 2013 level of technology.

Also my testing (and reading review sites) shows that it’s common for video cards sold in the last 5 years or so to not support HDMI resolutions above FullHD, which means they would be HDMI version 1.1 at best. HDMI 1.2 was introduced in August 2005 and supports 1440p at 30Hz. PCIe was introduced in 2003 so there really shouldn’t be many PCIe video cards that don’t support HDMI 1.2, yet I have about 8 different PCIe video cards in my spare parts pile that don’t support HDMI resolutions higher than FullHD, so it seems that such a limitation is common.

The End Result

For my own workstation I plugged a DisplayPort cable between the monitor and video card, and a Linux window appeared (from KDE I think) offering me some choices about what to do. I chose to switch to the “new monitor” on DisplayPort and that defaulted to 60Hz. After that change TV shows on Netflix and Amazon Prime both look better, so it’s a good result.
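
If you want to check or change the mode from a terminal rather than the KDE dialog, xrandr can do it. This is a sketch and the output name DP-1 is an assumption; run xrandr with no arguments to see the output names and supported modes on your system:

xrandr --output DP-1 --mode 3840x2160 --rate 60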

As an aside, DisplayPort cables are easier to scrounge, as the HDMI cables get taken by non-computer people for use with their TVs.

Self Assessment

Background Knowledge

The Dunning Kruger Effect [1] is something everyone should read about. It’s the effect where people who are bad at something rate themselves higher than they deserve because their inability to notice their own mistakes prevents improvement, while people who are good at something rate themselves lower than they deserve because noticing all their mistakes is what allows them to improve.

Noticing all your mistakes all the time isn’t great (see Impostor Syndrome [2] for where this leads).

Erik Dietrich wrote an insightful article “How Developers Stop Learning: Rise of the Expert Beginner” [3] which I recommend that everyone reads. It is about how some people get stuck at a medium level of proficiency and find it impossible to unlearn bad practices which prevent them from achieving higher levels of skill.

What I’m Concerned About

A significant problem in large parts of the computer industry is that it’s not easy to compare various skills. In the sport of bowling (which Erik uses as an example) it’s easy to compare your score against people anywhere in the world: if you score 250 and people in another city score 280 then they are more skilled than you. If I design an IT project that’s 2 months late on delivery and someone else designs a project that’s only 1 month late, are they more skilled than me? That isn’t enough information to know. I’m using the number of months late as an arbitrary metric for assessing projects; IT projects tend to run late, and while delivery time might not be the best metric it’s something that can be measured (note that I am slightly joking about measuring IT projects by how late they are).

If the last project I personally controlled was 2 months late and I’m about to finish a project 1 month late does that mean I’ve increased my skills? I probably can’t assess this accurately as there are so many variables. The Impostor Syndrome factor might lead me to think that the second project was easier, or I might get egotistical and think I’m really great, or maybe both at the same time.

This is one of many resources recommending timely feedback for education [4]; it says “Feedback needs to be timely” and “It needs to be given while there is still time for the learners to act on it and to monitor and adjust their own learning”. For basic programming tasks such as debugging a crashing program the feedback is reasonably quick. For longer term tasks like assessing whether the choice of technologies for a project was good, the feedback cycle is almost impossibly long. If I used product A for a year long project, does it seem easier than product B because it is easier or because I’ve just got used to its quirks? Did I make a mistake at the start of a year long project, and if so do I remember why I made that choice I now regret?

Skills that Should be Easy to Compare

One would imagine that martial arts is a field where people have very realistic understanding of their own skills, a few minutes of contest in a ring, octagon, or dojo should show how your skills compare to others. But a YouTube search for “no touch knockout” or “chi” shows that there are more than a few “martial artists” who think that they can knock someone out without physical contact – with just telepathy or something. George Dillman [5] is one example of someone who had some real fighting skills until he convinced himself that he could use mental powers to knock people out. From watching YouTube videos it appears that such people convince the members of their dojo of their powers, and those people then faint on demand “proving” their mental powers.

The process of converting an entire dojo into believers in chi seems similar to the process of converting a software development team into “expert beginners”, except that martial art skills should be much easier to assess.

Is it ever possible to assess any skills if people trying to compare martial art skills often do it so badly?

Conclusion

It seems that any situation where one person is the undisputed expert has a risk of the “chi” problem if the expert doesn’t regularly meet peers to learn new techniques. If someone like George Dillman or one of the “expert beginners” that Erik Dietrich refers to was to regularly meet other people with similar skills and accept feedback from them they would be much less likely to become a “chi” master or “expert beginner”. For the computer industry meetup.com seems the best solution to this, whatever your IT skills are you can find a meetup where you can meet people with more skills than you in some area.

Here’s one of many guides to overcoming Impostor Syndrome [5]. Actually succeeding in following the advice of such web pages is not going to be easy.

I wonder if getting a realistic appraisal of your own skills is even generally useful. Maybe the best thing is to just recognise enough things that you are doing wrong to be able to improve and to recognise enough things that you do well to have the confidence to do things without hesitation.

Deleted Mapped Files

On a Linux system if you upgrade a shared object that is in use any programs that have it mapped will list it as “(deleted)” in the /proc/PID/maps file for the process in question. When you have a system tracking the stable branch of a distribution it’s expected that most times a shared object is upgraded it will be due to a security issue. When that happens the reasonable options are to either restart all programs that use the shared object or to compare the attack surface of such programs to the nature of the security issue. In most cases restarting all programs that use the shared object is by far the easiest and least inconvenient option.
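
Checking for this manually is easy. Below is a minimal sketch of that sort of check (not the actual etbemon code mentioned below); it lists processes that still map a deleted shared object:

grep -l '\.so.* (deleted)' /proc/[0-9]*/maps 2>/dev/null |
while read f; do
  pid=${f#/proc/}; pid=${pid%/maps}
  echo "$pid $(cat /proc/$pid/comm)"
done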

Generally shared objects are used a lot in a typical Linux system. This can be good for performance (more cache efficiency and less RAM use) and is also good for security, as buggy code can be replaced for the entire system by replacing a single shared object. Sometimes it’s obvious which processes will be using a shared object (EG your web server using a PHP shared object) but other times many processes that you don’t expect will use it.

I recently wrote “deleted-mapped.monitor” for my etbemon project [1]. This checks for shared objects that are mapped and deleted and gives separate warning messages for root and non-root processes. If you have the unattended-upgrades package installed then your system can install security updates without your interaction and then the monitoring system will inform you if things need to be restarted.

The Debian package debian-goodies has a program checkrestart that will tell you what commands to use to restart daemons that have deleted shared objects mapped.

Now to solve the problem of security updates on a Debian system you can use unattended-upgrades to apply updates, deleted-mapped.monitor in etbemon to inform you that programs need to be restarted, and checkrestart to tell you the commands you need to run to restart the daemons in question.
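
For reference, the unattended-upgrades part needs nothing more than installing the package and having the standard periodic APT settings enabled, typically in a file like this:

# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";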

If anyone writes a blog post about how to do this on a non-Debian system please put the URL in a comment.

While writing the deleted-mapped.monitor I learned about the following common uses of deleted mapped files:

  • /memfd: is for memfd https://dvdhrm.wordpress.com/tag/memfd/ [2]
  • /[aio] is presumably for asynchronous IO; I haven’t found good documentation on it yet.
  • /home is used for a lot of harmless mapping and deleting.
  • /run/user is used for systemd dconf stuff.
  • /dev/zero is different for each map and thus looks deleted.
  • /tmp/ is used by Python (and probably other programs) which create temporary files there for mapping.
  • /var/lib is used for lots of temporary files.
  • /i915 is used by some X apps on systems with Intel video, I don’t know why.

Social Media Sharing on Blogs

My last post was read directly (as opposed to reading through Planet feeds) a lot more than usual due to someone sharing it on lobste.rs. Presumably the people who read it that way benefited from reading it and I got a couple of unusually insightful comments from people who don’t usually comment on my blog. The lobste.rs sharing was a win for everyone.

There are a variety of plugins for social media sharing, most of which allow organisations like Facebook to track people who read your blog which is why I haven’t been using them.

Are there good ways of allowing people to easily share your blog posts which work in a reasonable way by not allowing much tracking of users unless they actually want to share content?

Load Average Monitoring

For my ETBE-Mon [1] monitoring system I recently added a monitor for the Linux load average. The Unix load average isn’t a very good metric for monitoring system load, but it’s well known and easy to use. I’ve previously written about the Linux load average and how it’s apparently different from other Unix like OSs [2]. The monitor is still named loadavg but I’ve now made it also monitor memory usage, because excessive memory use and high load average are often correlated.

For issues that might be transient it’s good to have the monitoring system give a reasonable amount of information about the problem so it can be diagnosed later on. So when the load average monitor gives an alert I have it display a list of D state processes (if any), a list of the top 10 processes using the most CPU time if they are using more than 5%, and a list of the top 10 processes using the most RAM if they are using more than 2% of total virtual memory.
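
The commands below are a sketch of how that information can be gathered (the actual monitor applies the thresholds described above, which are omitted here for brevity):

ps -eo state,pid,comm | awk '$1 == "D"'        # processes in D (uninterruptible sleep) state
ps -eo pcpu,pid,comm --sort=-pcpu | head -11   # top 10 CPU using processes
ps -eo pmem,pid,comm --sort=-pmem | head -11   # top 10 RAM using processes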

For understanding the output of the free(1) command (or /proc/meminfo when writing a program to do the same thing) the best page I found was this StackExchange page [3]. So I compare MemAvailable+SwapFree to MemTotal+SwapTotal to determine the percentage of virtual memory used.
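
Here is a minimal sketch of that calculation (not the actual monitor code):

#!/bin/bash
# percentage of virtual memory (RAM+swap) in use; /proc/meminfo values are all in kB
get() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
total=$(( $(get MemTotal) + $(get SwapTotal) ))
avail=$(( $(get MemAvailable) + $(get SwapFree) ))
echo "virtual memory used: $(( 100 * (total - avail) / total ))%"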

Any suggestions on how I could improve this?

The code is in the recent releases of etbemon, it’s in Debian/Unstable, on the project page on my site, and here’s a link to the loadave.monitor script in the Debian Salsa Git repository [4].

Links January 2020

C is Not a Low Level Language [1] is an insightful article about the problems with C and the overall design of most current CPUs.

Interesting article about how the Boeing 737Max failure started with a takeover by MBA apparatchiks [2].

Interesting article about the risk of blood clots in space [3]. Widespread human spaceflight is further away than most people expect.

Wired has an insightful article about why rich people are so mean [4]. Also some suggestions for making them less mean.

Google published interesting information about their Titan security processor [5]. It’s currently used on the motherboards of GCP servers and internal Google servers. It would be nice if Google sold motherboards with a version of this.

Interesting research on how the alleged Supermicro motherboard backdoor could have worked [6]. It shows that while we may never know if the alleged attack took place, such things are proven to be possible. In security we should assume that every attack that is possible is carried out on occasion. It might not have happened when people claim it happened, but it probably happened to someone somewhere. Also we know that TAO carried out similar attacks.

Arstechnica has an interesting article about cracking old passwords used by Unix pioneers [7]. In the old days encrypted passwords weren’t treated as secrets (/etc/passwd is world readable and used to have the encrypted passwords) and some of the encrypted passwords were included in source archives and have now been cracked.

Jim Baker (former general counsel of the FBI) wrote an insightful article titled Rethinking Encryption [8]. Lots of interesting analysis of the issues related to privacy vs the ability of the government to track criminals.

The Atlantic has an interesting article The Coalition Out to Kill Tech as We Know It [9] about the attempts to crack down on the power of big tech companies. Seems like good news.

The General Counsel of the NSA wrote an article “I Work for N.S.A. We Cannot Afford to Lose the Digital Revolution” [10].

Thoughts and Prayers by Ken Liu is an insightful story about trolling and NRA types [11].

Cory Doctorow wrote an insightful Locus article about the lack of anti-trust enforcement in the tech industry and its free speech implications, titled “Inaction is a Form of Action” [12].

systemd-nspawn and Private Networking

Currently there are two things I want to do with my PC at the same time: watching streaming services like ABC iView (which won’t run from non-Australian IP addresses) and torrenting over a VPN. I had considered doing something ugly with iptables to try and get routing done on a per-UID basis but that seemed too difficult. At the time I wasn’t aware of the “ip rule add uidrange” [1] option. So setting up a private networking namespace with a systemd-nspawn container seemed like a good idea.
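
For reference, a uidrange based solution would look something like the following sketch, where the UID of 1001 for the torrent programs, the tun0 device, and the routing table number are all made up for the example:

# send traffic from UID 1001 out via the VPN, everything else uses the default route
ip rule add uidrange 1001-1001 table 100
ip route add default dev tun0 table 100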

Chroot Setup

For the chroot (which I use as a slang term for a copy of a Linux installation in a subdirectory) I used a btrfs subvol that’s a snapshot of the root subvol. The idea is that when I upgrade the root system I can just recreate the chroot with a new snapshot.
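
The snapshot itself is a single command, assuming the chroot lives under /subvols as in the launch command below:

btrfs subvolume snapshot / /subvols/torrent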

To get this working I created files in the root subvol which are used for the container.

I created a script like the following, named /usr/local/sbin/container-sshd, to launch the container. It sets up the networking and executes sshd. The systemd-nspawn program is designed to launch init but that’s not required; I prefer to just launch sshd so there’s only one running process in a container that’s not being actively used.

#!/bin/bash

# restorecon commands only needed for SE Linux
/sbin/restorecon -R /dev
# mount a fresh tmpfs on /run and create the directory sshd requires
/bin/mount none -t tmpfs /run
/bin/mkdir -p /run/sshd
/sbin/restorecon -R /run /tmp
# configure the container end of the private network link
/sbin/ifconfig host0 10.3.0.2 netmask 255.255.0.0
# the default gateway is the host side of the link and must be inside
# the 10.3.0.0/16 subnet for the route to be accepted
/sbin/route add default gw 10.3.0.1
exec /usr/sbin/sshd -D -f /etc/ssh/sshd_torrent_config

How to Launch It

To set up the container I used a command like “/usr/bin/systemd-nspawn -D /subvols/torrent -M torrent --bind=/home -n /usr/local/sbin/container-sshd”.

First I tried the --network-ipvlan option which creates a new IP address on the same MAC address. That gave me an interface iv-br0 in the container that I could use normally (br0 being the bridge used in my workstation as its primary network interface). The IP address I assigned to that was in the same subnet as br0, but for some reason that’s unknown to me (maybe an interaction between bridging and network namespaces) I couldn’t access it from the host, only from other hosts on the network. I then tried the --network-macvlan option (to create a new MAC address for virtual networking), but that had the same problem with accessing the IP address from the local host outside the container, as well as problems with MAC redirection to the primary MAC of the host (again maybe an interaction with bridging).

Then I tried just the “-n” option which gave it a private network interface. That created an interface named ve-torrent on the host side and one named host0 in the container. Using ifconfig and route to configure the interface in the container before launching sshd is easy. I haven’t yet determined a good way of configuring the host side of the private network interface automatically.
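
Doing it manually works in the meantime, something like the following on the host after launching the container. The 10.3.0.1 address matches the default gateway in the script above, and the masquerade rule is needed if the container is to reach the outside world through the host:

ip addr add 10.3.0.1/16 dev ve-torrent
ip link set ve-torrent up
iptables -t nat -A POSTROUTING -s 10.3.0.0/16 -j MASQUERADE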

I had to use a bind for /home because /home is a subvol and therefore doesn’t get included in the container by default.

How it Works

Now when it’s running I can just “ssh -X” to the container and then run graphical programs that use the VPN while at the same time running graphical programs on the main host that don’t use the VPN.
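
For example (the user and address being whatever was configured in the container and the script above):

ssh -X user@10.3.0.2
# then run the VPN client and whatever graphical programs should use it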

Things To Do

Find out why --network-ipvlan and --network-macvlan don’t work with communication from the same host.

Find out why --network-macvlan gives errors about MAC redirection when pinging.

Determine a good way of setting up the host side after the systemd-nspawn program has run.

Find out if there are better ways of solving this problem, this way works but might not be ideal. Comments welcome.

4K Monitors

A couple of years ago a relative who uses a Linux workstation I support bought a 4K (4096*2160 resolution) monitor. That meant that I had to get 4K working, which was 2 years of pain for me and probably not enough benefit for them to justify it. Recently I had the opportunity to buy some 4K monitors at a low enough price that it didn’t make sense to refuse so I got to experience it myself.

The Need for 4K

I’m getting older and my vision is decreasing as expected. I recently got new glasses, including a pair of reading glasses, as a reduced ability to change focus is common as you get older. Unfortunately I made a mistake when requesting the focus distance for the reading glasses: they work well for phones, tablets, and books but not for laptops and desktop computers. Now I have the option of either spending a moderate amount of money on a new pair of reading glasses or just accepting that laptop/desktop use isn’t going to be as good until the next time I need new glasses (sometime in 2021).

I like having lots of terminal windows on my desktop. For common tasks I might need a few terminals open at a time, and if I get interrupted in a task I like to leave its terminal windows open so I can easily go back to it. Having more 80*25 terminal windows on screen increases my productivity. My previous monitor was 2560*1440, which for years had allowed me to have a 4*4 array of non-overlapping terminal windows as well as another 8 or 9 overlapping ones if I needed more. 16 terminals allows me to ssh to lots of systems and edit lots of files in vi. Earlier this year I found it difficult to read the font size that had previously worked well for me, so I had to use a larger font that meant only 3*3 terminals would fit on my screen. Going from 16 non-overlapping windows and an optional 8 overlapping to 9 non-overlapping and an optional 6 overlapping is a significant difference. I could get a second monitor, and I won’t rule out doing so at some future time, but it’s not ideal.

When I got a 4K monitor working properly I found that I could go back to a smaller font that allowed 16 non overlapping windows. So I got a real benefit from a 4K monitor!

Video Hardware

Version 1.0 of HDMI, released in 2002, only supports 1920*1080 (FullHD) resolution. Version 1.3, released in 2006, supports 2560*1440. Most of my collection of PCIe video cards have a maximum HDMI resolution of 1920*1080, so it seems that they only support HDMI 1.2 or earlier. When investigating this I wondered what version of PCIe they were using; the command “dmidecode | grep PCI” gives that information. It seems that at least one PCIe video card supports PCIe 2 (released in 2007) but not HDMI 1.3 (released in 2006).

Many video cards in my collection support 2560*1440 with DVI but only 1920*1080 with HDMI. As 4K monitors don’t support DVI input that meant that when initially using a 4K monitor I was running in 1920*1080 instead of 2560*1440 with my old monitor.

I found that one of my old video cards supports 4K resolution; it has an NVidia GT630 chipset (here’s the page with specifications for that chipset [1]). It seems that because I have a video card with 2G of RAM I have the “Kepler” variant which supports 4K resolution. I got the video card in question because it uses PCIe*8 and I had a workstation that only had PCIe*8 slots and I didn’t feel like cutting a card down to size (which is apparently possible but not recommended). It is also fanless (quiet), which is handy if you don’t need a lot of GPU power.

A couple of months ago I checked the cheap video cards at my favourite computer store (MSY) and none of the cheap ones supported 4K resolution. Now it seems that all the video cards they sell could support 4K; by “could” I mean that a Google search of the chipset says that it’s possible, but of course some surrounding chips could fail to support it.

The GT630 card is great for text, but the combination of it with an i5-2500 CPU (rating 6353 according to cpubenchmark.net [3]) doesn’t allow playing Netflix full-screen, and playing 1920*1080 video scaled to full-screen sometimes gives mplayer messages about the CPU being too slow. I don’t know how much of this is due to the CPU and how much is due to the graphics hardware.

When trying the same system with an ATI Radeon R7 260X/360 graphics card (16* PCIe, and it draws enough power to need a separate connection to the PSU), the Netflix playback appears better but mplayer seems no better.

I guess I need a new PC to play 1920*1080 video scaled to full-screen on a 4K monitor. No idea what hardware will be needed to play actual 4K video. Comments offering suggestions in this regard will be appreciated.

Software Configuration

For GNOME apps (which you will probably run even if like me you use KDE for your desktop) you need to run commands like the following to scale menus etc:

gsettings set org.gnome.settings-daemon.plugins.xsettings overrides "{'Gdk/WindowScalingFactor': <2>}"
gsettings set org.gnome.desktop.interface scaling-factor 2

For KDE run the System Settings app, go to Display and Monitor, then go to Displays and Scale Display to scale things.

The Arch Linux Wiki page on HiDPI [2] is good for information on how to make apps work with high DPI (or regular screens for people with poor vision).

Conclusion

4K displays are still rather painful, both in hardware and software configuration. For serious computer use it’s worth the hassle, but it doesn’t seem to be good for general use yet. 2560*1440 is pretty good and works with much more hardware and requires hardly any software configuration.