I wrote a comment on a social media post where someone claimed that there have been no computer advances in the last 12 years. The comment got long, so it's worth a blog post.
In the last decade or so new laptops have become cheaper than new desktop PCs. USB-C has taken over for phones and for laptop charging, so all recent laptops support USB-C docks, and monitors with USB-C docks built in have become common. 4K monitors have become cheap and common, and resolutions higher than 4K are cheap for some use cases such as ultra-wide. 4K TVs are cheap, and TVs with built-in Android computers for playing internet content are now standard. For most use cases spinning-media hard drives are obsolete; SSDs large enough for all the content most people need to store are cheap. We have gone from gigabit Ethernet being expensive to 2.5 gigabit being cheap.
12 years ago smartphones were very limited and every couple of years there would be significant improvements. Since about 2018 phones have been capable of doing most things most people want. A 5 year old Android phone can run the latest apps and take high quality pictures. Any phone that supports VoLTE will be good for another 5+ years if it has security support. Phones without security support still work and are quite usable apart from being insecure. Google and Samsung have significantly increased the minimum security support periods for their phones, and the GKI project from Google makes it easier for smaller vendors to give longer security support. There are a variety of open Android projects like LineageOS which give longer security support on a variety of phones. If you deliberately choose a phone that is likely to be well supported by projects like LineageOS (which pretty much means just Pixel phones) then you can expect to be able to actually use it when it is 10 years old. Compare this to the Samsung Galaxy S3 released in 2012, which was a massive improvement over the original Galaxy S (the S2 felt closer to the S than to the S3). The Samsung Galaxy S4 released in 2013 was one of the first phones with FullHD resolution, which is high enough that most people can't easily see the benefit of anything higher. It wasn't until 2015 that phones with 4G of RAM became common, which is enough for most phone use even today.
Now that 16G of RAM is affordable in laptops, running more secure OSs like Qubes is viable for more people. Even without Qubes, OS security has been improving a lot with better compiler features, new languages like Rust, and changes to software design and testing. Containers are being used more, but we still aren't getting all the benefits of that. TPM has become usable in the last few years and we are only starting to take advantage of what it can offer.
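As a trivial illustration of the kind of benefit languages like Rust give (this is just an illustrative sketch I made up, not code from any particular project), an out of range array access is caught by the library and has to be handled explicitly, instead of silently reading past the end of a buffer the way C pointer arithmetic can:

// A minimal Rust sketch: bounds are checked and the error case must be
// handled explicitly, so a bad index can't silently corrupt memory.
fn lookup(data: &[u8], index: usize) -> Option<u8> {
    // get() returns None for an out of range index instead of reading
    // past the end of the slice.
    data.get(index).copied()
}

fn main() {
    let packet = [0x10u8, 0x20, 0x30];
    assert_eq!(lookup(&packet, 1), Some(0x20));
    assert_eq!(lookup(&packet, 42), None); // no overflow, no undefined behaviour
    println!("out of range lookup gave {:?}", lookup(&packet, 42));
}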
In 2012 BTRFS was still at an early stage of development and not many people wanted to use it in production. I was using it in production then, and while I didn't lose any data from bugs I did have some downtime because of BTRFS issues. Now BTRFS is quite solid for server use.
DDR4 was released in 2014 and gave significant improvements over DDR3 for performance and capacity. My home workstation now has 256G of DDR4, which wasn't particularly expensive, while the previous biggest system I owned had 96G of DDR3 RAM. Now DDR5 is available, again increasing performance and capacity while also making DDR4 cheap on the second hand market.
This isn’t a comprehensive list of all advances in the computer industry over the last 12 years or so, it’s just some things that seem particularly noteworthy to me.
Please comment about what you think are the most noteworthy advances I didn’t mention.
I note that this is almost exclusively hardware you’re talking about. Which seems to confirm the idea that outside of a couple select niches, software hasn’t progressed one bit. It just has more, better hardware to play with.
I will concede Rust is indeed an improvement over the state of the art. On the other hand, what compiler features are you thinking of? What changes to software design and testing are you talking about? Aren't containers a solution to a problem (dependency hell) that should have been avoided in the first place? Finally, I've worked with TPM 2, it's a bloody mess, and the software stack is even worse.
A 10yo Pixel is unusable, unfortunately, because the battery has long since died and cannot be replaced without extreme measures that are not feasible without replacement parts (like a new display …) that are no longer available.
Speaking from experience here.
USB-C is awesome, but support in desktop computers is highly disappointing.
I would question the advancements of smartphones. Longer lifecycles are great, but I do not consider them breakthroughs (I think they were possible, but people didn’t care about that. Apple made it a selling point and everyone tried to follow. I think it’s huge for environmental impact, though.)
Rust is a breakthrough IMHO, but I think Qubes is a bit niche. I'm not sure KVM is the way to go for security in *most* cases. (It's certainly great in a few.)
However, I would pose that enshittification has been greater than the technological improvements…
Qubes uses Xen, not KVM.
64 bit computing everywhere. In 2012 PCs were 64 bit, basically since Core 2 came out in 2006 (AMD Athlon64 was a little earlier, but niche), though adoption took a while. Apple introduced the 64 bit iPhone 5s in 2013, and Android phones followed about 18 months later. Today I have a $3 64 bit 64 MB RAM 1.0 GHz board running Linux (Milk-V Duo). It's got a double precision FPU and 128 bit vector processing (including 64 bit and 32 bit float as well as 8, 16, 32, and 64 bit int).
The rise of real multi-core. In 2012 we were just graduating from dual core to quad core. This year I bought a 24 core (32 thread) 16″ laptop with 32 GB RAM and 1 TB SSD for under $1500 before tax. Oh, and it can run one or two cores at 5.3 GHz up from the 3.46 GHz “turbo” of the i7-860 desktop I had in 2012 (my 2011 17″ MBP also turbo’d to 3.5 GHz, with a 2.4 GHz base). And we now have better IPC too. I can now build a defconfig Linux kernel in 1 minute on a freaking 2.5 kg laptop!
The rise of RISC-V. In 1991 Linus asked “Why do OSes have to be proprietary, so expensive, closed source, and keep disappearing?” And so he made his own, and it was good, and it’s taking over the world. In 2010 Krste, Andrew, and Yunsup asked the same question about instruction sets and also decided to make their own for use in experimental hardware and in teaching. A couple of years later they started getting external emails complaining that they’d changed everything (instructions, mnemonics, instruction encoding) from one semester to the next. What? Who are you and why do you care about an academic project? Turns out there was demand and people were building chips using their ISA and open source CPU core designs. So in 2015 they published the current draft spec, formed a non-profit to administer and develop the spec, and founded a startup (SiFive) to create and license CPU cores — just one of potentially many, but with a head-start. In December 2016 they released the first commercial 32 bit microcontroller RISC-V chip on the Arduino-compatible “HiFive1” board. By early 2018 they had a quad core 1.5 GHz 64 bit Linux chip (FU540) and board (HiFive Unleashed).
Skipping forward, in January 2024 a 64 core 2.0 GHz 128 GB RAM RISC-V computer (Milk-V Pioneer) shipped. The CPU core design is similar to the Arm A72 used in the Raspberry Pi 4 and Amazon Graviton 1 (2.3 GHz but only 16 cores, November 2018). Also shipping now are 8 core 1.6 GHz dual-issue chips and boards similar to the Arm A55: the Banana Pi BPI-F3 starting from $65, plus the Milk-V Jupiter MiniITX board and Roma II laptop, announced and shipping soon. At the end of the year we're expecting the SG2380 SoC with 16 2.4 GHz SiFive P670 cores, comparable to the Arm A78, leapfrogging the Raspberry Pi 5 and RK3588 boards such as the Rock 5 and Orange Pi 5 both in per-core performance and in number of cores. Milk-V say the Oasis board will start from $120 (that may be with zero RAM and storage). By 2026 or so we'll see RISC-V chips competitive with Apple and x86 CPUs from this decade, at least something similar to the new Snapdragon X Elite.
On the low end, the $0.10 CH32V003 32 bit RISC-V microcontroller (and more capable and only slightly more expensive versions) is rapidly killing off the use of remaining proprietary 8 and 16 bit microcontrollers in new projects. To be fair, there is also the Puya PY32F002A with an Arm Cortex-M0+ CPU core with about the same price and capability, but it’s not generating the same community and excitement.
I think those three are the biggies from the last 12 years.
This one might seem small, but LSP did not exist 12 years ago, and I would say that going from not having LSP to having LSP is a pretty unambiguous win.
Loup: I didn’t think much about software when writing that post because hardware is more obvious. I might write another post about software advances, although doing pretty much the same thing in 16G now that was done in 8G 10 years ago isn’t great. I would have a hard time making a case for software advances given the software bloat in recent times. I agree that containers are often misused. TPM is hard because security is always hard, but it offers some benefits.
Matthias: That sucks, hopefully new EU legislation will address that. Also it depends on which Pixel you have; about a year ago one of my relatives had a Pixel 1 or 2 whose battery had swollen, and I had two stores quote $100 to replace it. $100 is quite a lot for a phone with an eBay price of $140, but not THAT expensive for keeping a working phone working.
Alex: Yes, I’d like to see desktop video cards have USB-C/Thunderbolt output to connect to a laptop dock. I have a 5120*2160 monitor and need an 8K rated KVM switch, and the only good one I could find was for 2 laptops and switched 2*USB-C inputs. I expect this will happen in the near future, if only because DisplayPort and HDMI plugs are large and inconvenient. PCIe 5.0 video cards could use bifurcation to have x8 for video and some other lanes for USB-C/Thunderbolt etc.
Regarding whether longer lifecycles are an advancement, I’m not trying to list just advances in computer science and engineering but advances for the users. Sometimes trivial advances in science and engineering provide big benefits for users.
The way Qubes currently works isn’t the way it always has to work. Changing from Xen to seL4 as the virtual machine manager is a possibility.
I agree that enshittification is hurting us a lot, fortunately the EU is doing some things to mitigate that.
Bruce: Yes there have been significant advances in embedded devices. The devices that are below $10 in low quantities really change things. The PineTime smart watch has 64K of RAM and 4.5M of flash. The iPaQ systems ran Linux nicely with 64M of RAM and 16M of flash. We are getting close to the stage of running Linux on smart watches and other tiny devices. Arduino could become obsolete when Linux can run with full capabilities on the smallest devices.
https://milkv.io/pioneer
The Milk-V Pioneer has impressive specs, but $1500US for CPU and motherboard is a lot! But you are correct about RISC-V doing impressive things and changing the world. I have a Sipeed LicheePi RISC-V board which is a fun little machine. Framework has pre-announced a RISC-V motherboard for their modular laptops which is enticing.
Sarah: What do you mean by LSP? There are several definitions.