2.5Gbit Ethernet

I just decided to upgrade the core of my home network from 1Gbit to 2.5Gbit. I didn’t really need to do this, it was only about 5 years ago that I upgraded from 100Mbit to 1Gbit, but it’s cheap and seemed interesting.

I decided to do it because a 2.5Gbit switch was listed on Ozbargain Computing [1] at $40.94 delivered. If you are in Australia and like computers then Ozbargain is a site worth polling, every day there are interesting things at low prices. The seller of the switch is KeeplinkStore [2], who distinguished themselves by phoning me from China to inform me that I had ordered a switch with a UK plug for delivery to Australia and suggesting that I cancel the order and make a new order with an Australian plug. It wouldn’t have been a big deal if I had received a UK plug as I’ve got a collection of adaptors, but it was still nice of them to make it convenient for me. The switch basically does what it’s expected to do and has no fan so it’s quiet.

I got a single port 2.5Gbit PCIe card for $18.77 and a dual port card for $34.07. Those cards are a little expensive when compared to 1Gbit cards but very cheap when compared to the computers they are installed in. These cards use the Realtek RTL8125 chipset and work well.

I got a USB-3 2.5Gbit device for $17.43. I deliberately didn’t get USB-C because I still use laptops without USB-C, and most of the laptops with USB-C only have a single USB-C port which is used for power. I don’t plan to stop using my 100Mbit USB Ethernet device because most of the time I don’t need a lot of speed, but sometimes I do things like testing auto-install on laptops and then having something faster than Gigabit is good. This device worked at 1Gbit speed on a 1Gbit network when used with a system running Debian/Bookworm with kernel 6.1 and worked at 2.5Gbit speed when connected to my LicheePi RISC-V system running Linux 5.10, but it would only do 100Mbit on my laptop running Debian/Unstable with kernel 6.6 (Debian Bug #1061095) [3]. It’s a little disappointing, but not many people have such hardware so it probably doesn’t get a lot of testing. For the moment I plan to just use a 1Gbit USB Ethernet device most of the time, and if I really need the speed I’ll use an older kernel.
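
As an aside, ethtool will report the speed such a device has negotiated if you want to check for yourself; the interface name below is just an example (run “ip link” to find yours):

ethtool enx00e04c680001 | grep -i speed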

I did some tests with wget and curl to see if I could get decent speeds. When using wget 1.21.3 on Debian/Bookworm I got transfer speeds of 103MB/s and 18.8s of system CPU time out of 23.6s of elapsed time. curl on Debian/Bookworm did 203MB/s and took 10.7s of system CPU time out of 11.8s elapsed time. The difference is that curl was using 100KB read buffers and a mix of 12K and 4K write buffers while wget was using 8KB read buffers and 4KB write buffers. On Debian/Unstable wget 1.21.4 uses 64K read buffers and a mix of 4K and 60K write buffers and gets a speed of 208MB/s. As an experiment I changed the read buffer size for wget to 256K and that got the speed up to around 220MB/s, but it was difficult to measure as occasional packet loss slowed things down. The pattern of writing 4K and then writing the rest continued; it seems related to fwrite() buffering. For anyone else who wants to experiment with the code, the wget code is simpler (due to having fewer features) and the package builds a lot faster (due to fewer tests) so that’s the one to work on.
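
For anyone wanting to reproduce this sort of test, the method is just timing the download of a large file; the URL below is a placeholder for a large file on a fast local server:

time wget -O /dev/null http://server.local/1G.bin
time curl -o /dev/null http://server.local/1G.bin

The system CPU and elapsed times quoted above come from the “time” output.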

The client machine for these tests has an E5-2696 v3 CPU. This doesn’t compare well to some of the recent AMD CPUs on single-core performance, but it is still a decently powerful system. Getting good performance at Gigabit speeds on an ARM or RISC-V system is probably going to be a lot harder than getting good performance at 2.5Gbit speeds on this system.

In conclusion, 2.5Gbit basically works apart from a problem with new kernels and a problem with the old version of wget. I expect that when Debian/Trixie is released (probably mid 2025) things will work well. For good transfer rates use wget version 1.21.4 or newer, or use curl.

As an aside, I use a 1500 byte MTU because I have some 100baseT systems on my LAN, and the settings regarding TCP acceleration etc are all at their defaults.
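
For reference, checking and setting the MTU with iproute2 is simple; the interface name is an example and setting it requires root:

ip link show dev enp3s0 | grep mtu
ip link set dev enp3s0 mtu 1500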

LicheePi 4A (RISC-V) First Look

I just bought a LicheePi 4A RISC-V embedded computer (like a RaspberryPi but with a RISC-V CPU) for $322.68 from Aliexpress (the official site for buying LicheePi devices). Here is the Sipeed web page about it and their other recent offerings [1]. I got the version with 16G of RAM and 128G of storage; I probably don’t need that much storage (I can use NFS or USB) but 16G of RAM is good for VMs. Here is the Wiki about this board [2].

Configuration

When you get one of these devices you should make setting up an ssh server your first priority, as I found the HDMI output to be very unreliable. The first monitor I tried was a Samsung 4K monitor dating from when 4K was a new thing; the LicheePi initially refused to operate at a resolution higher than 1024*768 but later switched to 4K resolution when resuming from screen-blank for no apparent reason (and the window manager didn’t support this properly). With the Dell 4K monitor I use on my main workstation it sometimes refused to display anything and occasionally worked. I got it running at 1920*1080 without problems, then switched it to 4K, at which point it lost video sync and never talked to that monitor again. On my Desklab portable 4K monitor I got it to display in 4K resolution, but only the top left 1/4 of the screen displayed.

The issues with HDMI monitor support greatly limit the immediate potential for using this as a workstation. It doesn’t make it impossible but would be fiddly at best. It’s quite likely that a future OS update will fix this. But at the moment it’s best used as a server.

The LicheePi has a custom Linux distribution based on Ubuntu, so you will want to put something like the following in /etc/network/interfaces to make it automatically connect to the Ethernet when plugged in:

auto end0
iface end0 inet dhcp
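
Assuming the image uses ifupdown (which the use of /etc/network/interfaces implies) the interface can then be brought up without rebooting:

ifup end0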

Then to get sshd to start you have to run the following commands to generate ssh host keys that aren’t zero bytes long:

rm /etc/ssh/ssh_host_*
systemctl restart ssh.service
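
If a restart of the service doesn’t regenerate the keys on your version of the image then ssh-keygen can create any missing host keys of the default types before the restart:

ssh-keygen -A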

It appears to have Wifi hardware, but the OS doesn’t recognise it. This isn’t a priority for me as I mostly want to use it as a server.

Performance

For the first test of performance I created a 100MB file from /dev/urandom and then tried compressing it on various systems. With zstd -9 it took 16.893 user seconds on the LicheePi 4A, 0.428s on my Thinkpad X1 Carbon Gen5 with an i5-6300U CPU (Debian/Unstable), 1.288s on my E5-2696 v3 workstation (Debian/Bookworm), 0.467s on the E5-2696 v3 running Debian/Unstable, 2.067s on an E3-1271 v3 server, and 7.179s on the E3-1271 v3 system emulating a RISC-V system via QEMU running Debian/Unstable.
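
The test is trivial to reproduce if you want to compare your own hardware:

dd if=/dev/urandom of=/tmp/test.bin bs=1M count=100
time zstd -9 -c /tmp/test.bin > /dev/null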

It’s very impressive that the QEMU emulation is fast enough that emulating a different CPU architecture is only 3.5* slower for this test (or maybe 10* slower if it was running Debian/Unstable on the AMD64 code)! The emulated RISC-V is also more than twice as fast as the real RISC-V hardware, and would probably be of comparable speed if both were running the same versions of everything (it might be slightly slower if running the same version of zstd), which is a tribute to the quality of the emulation.

One performance issue that most people don’t notice is the time taken to negotiate ssh sessions. It’s usually not noticed because the common CPUs have got faster at about the same rate as the algorithms for encryption and authentication have become more complex. On my i5-6300U laptop it takes 0.384s to run “ssh -i ~/.ssh/id_ed25519 localhost id” with the below server settings (taken from advice on ssh-audit.com [3] for a secure ssh configuration). On the E3-1271 v3 server it is 0.336s, on the QEMU system it is 28.022s, and on the LicheePi it is 0.592s. By this metric the LicheePi is about 80% slower than decent x86 systems and the QEMU emulation of RISC-V is about 83* slower than the x86 system it runs on. Does crypto depend on instructions that are difficult to emulate?

# use only the ed25519 host key
HostKey /etc/ssh/ssh_host_ed25519_key
# a leading "-" removes the listed algorithms from the default set
KexAlgorithms -ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group14-sha256
MACs -umac-64-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
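
With those lines in sshd_config (and sshd restarted) the timing test is just the following, run a few times as the first connection can be slower:

time ssh -i ~/.ssh/id_ed25519 localhost id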

I haven’t yet tested the performance of Ethernet (what routing speed can you get through the 2 Gigabit ports?), eMMC storage, or USB. At the moment I’ve been focused on using RISC-V as a test and development platform. My conclusion is that I’m glad I don’t plan to compile many kernels or anything large like LibreOffice on it, but for the typical development work that I do it will be quite adequate.

The speed of Chromium seems adequate in basic tests, but the video output hasn’t worked reliably enough to do advanced tests.

Hardware Features

Having two Gigabit Ethernet ports, 4 USB-3 ports, and Wifi on board gives some great options for using this as a router. It’s disappointing that they didn’t go with 2.5Gbit as everyone seems to be doing that nowadays, but Gigabit is enough for most things.

Having only a single HDMI port and not supporting USB-C docks (the USB-C port appears to be power only) limits what can be done for workstation use and for controlling displays. I know of people using small ARM computers attached to the back of large TVs for advertising purposes, and this device isn’t going to be a great option for that.

The CPU and RAM apparently use a lot of power (which is relative – the entire system draws up to 2A at 5V so the CPU would be something below 5W). To get this working a cooling fan has to be stuck to the CPU and RAM chips via a layer of thermal stuff that resembles a fine sheet of blu-tack in both color and stickiness. I am disappointed that there isn’t a more solid form of construction; to mount this on a wall or ceiling some extra hardware would be needed to secure it. Also I think a really big copper heatsink would be better than a fan. 80386 CPUs with similar TDP were able to run without a fan.

I wonder how things would work with all USB ports in use. A USB port is expected to supply a minimum of 2.5W, which means that all the ports could require 10W if they were active. Presumably something significantly less than 5W is available for the USB ports.

Other Devices

Sipeed has a range of other devices in the works. They currently sell the LicheeCluster4A, which supports 7 compute modules for a cluster in a box. This has some interesting potential for testing and demonstrating cluster software, but you could probably buy an AMD64 system with more compute power for less money. The Lichee Console 4A is a tiny laptop which could be useful for people who like the 7″ laptop form factor; unfortunately it only has a 1280*800 display, and if it had the same resolution display as a typical 7″ phone I would have bought one.

The next device that appeals to me is the soon to be released Lichee Pad 4A which is a 10.1″ tablet with 1920*1200 display, Wifi6, Bluetooth 5.4, and 16G of RAM. It also has 1 USB-C connection, 2*USB-3 sockets, and support for an external card with 2*Gigabit ethernet. It’s a tablet as a “laptop without keyboard” instead of the more common “larger phone” design model.

They are also about to release the LicheePadMax4A which is similar to the other tablet but with a 14″ 2240*1400 display and which ships with a keyboard to make it essentially a laptop with detachable keyboard.

Conclusion

At this time I wouldn’t recommend that this device be used as a workstation or laptop, although the people who want to do such things will probably do it anyway regardless of my recommendations. I think it will be very useful as a test system for RISC-V development. I have some friends who are interested in this sort of thing and I can give them VMs.

It is a bit expensive. The Sipeed web site boasts about the LicheePi4 being faster than the RaspberryPi4, but it’s not a lot faster and the RaspberryPi4 is much cheaper ($127 or $129 for one with 8G of RAM). The RaspberryPi4 has two HDMI ports but a limit of 8G of RAM, while the LicheePi has up to 16G of RAM and two Gigabit Ethernet ports but only a single HDMI port. It seems that the RaspberryPi4 might win if you want a cheap low power desktop system.

At this time I think the reason for this device is testing out RISC-V as an alternative to the AMD64 and ARM64 architectures. An open CPU architecture goes well with free software, but it isn’t just people who are into FOSS who are testing such things. I know some corporations are trying out RISC-V as a way of getting other options for embedded systems that don’t involve paying monopolists.

The Lichee Console 4A is probably a usable tiny laptop if the resolution is sufficient for your needs. As an aside I predict that the tiny laptop or pocket computer segment will take off in the near future. There are some AMD64 systems the size of a phone but thicker that run Windows and go for reasonable prices on AliExpress. Hopefully in the near future this device will have better video drivers and be usable as a small and quiet workstation.

I won’t rule out the possibility of making this my main workstation in the not too distant future; all it needs is reliable 4K display and the ability to decode 4K video. Its performance for web browsing and as an ssh client seems adequate, and that’s what matters for my workstation use. But for the moment it’s just for server use.

SAS vs SATA and Recovery

SAS and SATA are electrically compatible to a degree that allows connecting a SATA storage device to a SAS controller. The SAS controller understands the SATA protocol so this works. A SAS device can’t be physically connected to a SATA controller and if you did manage to connect it then it wouldn’t work.

Some SAS RAID controllers don’t permit mixing SAS and SATA devices in the same array, this is a software issue and could be changed. I know that the PERC controllers used by Dell (at least the older versions) do this and it might affect many/most MegaRAID controllers (which is what PERC is).

If you have a hardware RAID array of SAS disks and one fails then you need a spare SAS disk, and as the local computer store won’t have any you need to keep some on hand.

The Linux kernel has support for the MegaRAID/PERC superblocks, so for at least some of the RAID types supported by MegaRAID/PERC you can just connect the disks to a Linux system and have them work (I’ve only tested this on JBOD AKA a single-disk RAID-0). So if you have a server from Dell or IBM or any other company that uses MegaRAID and it fails, you can probably just put the disks into a non-RAID SAS controller and have them work. As Linux doesn’t care about the difference between SAS and SATA at the RAID level you could then add a SATA disk to an array of SAS disks. If you want to move an array from a dead Dell to a working IBM server or the other way around then you need it to be all SATA or all SAS. You can use a Linux system to mount an array used by Windows or any other OS and then migrate the data to a different array.
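
As a sketch of how to check whether a Linux system recognises such disks (the device name is an example), mdadm can examine the metadata on a disk and then assemble whatever arrays it finds:

mdadm --examine /dev/sdb
mdadm --assemble --scan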

If you have an old array of SAS disks and one fails then it might be a reasonable option to just migrate the data to a new array of SATA SSDs. E.g. if you had 6*600G SAS disks you could move to 2*4TB SATA SSDs and get more storage, much higher performance, less power use, and less noise, for a cost of $800 or so (you can spend more to get better performance) and some migration time.

Having a spare SAS controller for data recovery is convenient. Having a spare SAS disk for any RAID-5/RAID-6 is a good thing. Having lots of spare SAS disks probably isn’t useful as migrating to SATA is a better choice. SATA SSDs are bigger and faster than most SAS disks that are in production. I’m sure that someone uses SAS SSDs but I haven’t yet seen them in production; if you have a SAS system and need the performance that SSDs can give then a new server with U.2 (NVMe in a hot-swap form factor like SAS) is the way to go. SATA hard drives are also the solution for seriously large storage, 16TB SATA hard drives are cheap and work in all the 3.5″ SAS systems.

It’s hard to sell old SAS disks as there isn’t much use for them.

Links December 2023

David Brin wrote an insightful blog post about the latest round of UFO delusion [1]. There aren’t a heap of scientists secretly working on UFOs.

David Brin wrote an informative and insightful blog post about rich doomsday preppers who want to destroy democracy [2].

Cory Doctorow wrote an interesting article about how ChatGPT helps people write letters and how that decreases the value of the letter [3]. What can we do to show that letters mean something? Hand deliver them? Pay someone to hand deliver them? Cory concentrates on legal letters and petitions but this can apply to other things too.

David Brin wrote an informative blog post about billionaires prepping for disaster – and causing the disaster [4].

David Brin wrote an insightful Wired article about ways of dealing with potential rogue AIs [5].

David Brin has an interesting take on government funded science [6].

Bruce Schneier wrote an insightful article about AI Risks which is worth reading [7].

Ximion wrote a great blog post about how to use AppStream metadata to indicate what type of hardware/environment is required to use an app [8]. This is great for the recent use of Debian on phones and can provide real benefits for more traditional uses (like all those servers that accidentally got LibreOffice etc installed). Also for Convergence it will be good to have the app launcher take note of this; when your phone isn’t connected to a dock there’s no point offering to launch apps that require a full desktop screen.

Russ Albery wrote an interesting summary of the book Going Infinite about the Sam Bankman-Fried FTX fiasco [9]. That summary really makes Sam sound Autistic.

Cory Doctorow wrote an insightful article Microincentives and Enshittification explaining why Google search has to suck [10].

Charles Stross posted the text of a lecture he gave titled “We’re Sorry We Created the Torment Nexus” [11] about sci-fi ideas that shouldn’t be implemented.

The Daily WTF has many stories of corporate computer stupidity, but The White Appliphant is one of the most epic [12].

The Verge has an informative article on new laws in the US and the EU to give a “right to repair” and how this explains the sudden change to 7 year support for Pixel phones [13].

Abuse and Free Software

People in positions of power can get away with mistreating other people. For any organisation to operate effectively there have to be mechanisms to address bad behaviour, both to help the organisation achieve its goals and to protect the people who work for it.

When an organisation operates in the public interest there is a greater reason to try to prevent bad behaviour as hurting people is not in the public interest.

There are many forms of power, in the free software community a reputation for doing good technical work or work related to supporting software development gives some power and influence. We have seen examples of technical contributions used to excuse mistreatment of other people.

The latest example of using a professional reputation to cover for abuse is Eben Moglen, who has done some good legal work in the past while also treating members of the community badly (as documented by Matthew Garrett) [1]. Matthew has also documented how since 2016 Eben has not been doing good work for the free software community [2]. When news comes out about people who did good work while abusing other people they are usually defended with claims such as “we can’t lose the great contributions of this one person so it’s worth losing the contributions of everyone who can’t work with them”, but in such situations it’s very common to discover that they haven’t been doing great work. This might be partly because abusive people are better at self-promotion than at actually doing good work, and partly because people who were afraid to speak out while the person was doing good work may feel ready to go public once the quality of that work (their main defence) declines.

Bradley Kuhn’s article about this situation is worth reading [3].

I don’t have as much knowledge of the people involved in these disputes as Matthew, but I know enough about what is happening to be confident that Matthew’s summary is accurate.

Fat Finger Shell

I’ve been trying out the Fat Finger Shell, which is a terminal emulator for Linux on touch screen devices where the keyboard is overlaid on the terminal output. This means that instead of having a tiny keyboard and a tiny terminal output you have the full screen for both. There is a YouTube video showing how the Fat Finger Shell works [1].

Here is a link to the GitHub page [2], which hasn’t changed much in the last 11 years.

Currently the shell is hard-coded to an 80*24 terminal and a 640*480 screen, which doesn’t match any modern hardware. Some parts of this are easy to change, but then there’s the comment “I ran once XGetGeometry and I am harcoded (bad) values for x, y, etc..” [sic], which is followed by some magic numbers hacked into the source of xvt that are not easy to change.

The configuration of this is almost great. It has a plain text file where each line has 4 numbers representing the X and Y coordinates of opposite corners of a rectangle, plus additional information on what the key is, which is relatively easy to edit. But then it has an image which has to match that; the obvious improvement would be to not have an image, but to just draw a rectangle for each pair of corner coordinates with the glyph of the character in question inside it.
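
To illustrate the idea (the field order here is my guess from the description above, check the sample files in the repository for the real format) a couple of layout lines might look like this:

0 0 64 48 q
64 0 128 48 w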

I think there is a real need for a terminal like this for use on devices like the PinePhonePro; it won’t be to everyone’s taste but the people who like it will really like it. The features that such a shell needs for modern use are being based on Wayland, supporting a variety of screen resolutions (particularly commonly used ones like 720*1440 and 1920*1080, with the terminal size matching the combination of screen resolution and font), and having code derived from a newer terminal emulator. As a final note, it would be good for such a terminal to also take input from a regular keyboard, so that when you plug your Linux phone into a dock you don’t need to close your existing terminal sessions.

There is a Debian RFP/ITP bug for this [3] which I think should be closed due to nothing happening for 11 years and the fact that so much work is required to make this usable.

The current Fat Finger Shell code is a good demonstration of the concept, but I don’t think it makes sense to continue with this code base. One of the many possible ways of addressing this with modern graphics technology might be to have a semi-transparent window overlaying the screen and generating virtual keyboard events for whichever window happens to be below it. Then instead of being limited to one terminal program by the choice of input method, that input would work for any terminal that the user may choose, as well as any other text based program (email, IM, etc).

Links November 2023

The Long Now has an insightful article about air quality [1].

Every country needs food labelling laws like Mexico has [2]. Also we need to abolish the investor state tribunals; companies should just accept local laws and obey them – or be treated in the same way as pirates on the high seas.

Ian Jackson wrote a good post about conference policies regarding Covid19 [3]. We really need to do more about this, conservatives like to imagine that it’s gone away but people are still getting sick and dying of it.

John Goerzen wrote an informative article about “air gaps” and ways they can be part of a useful and usable security system [4].

This YouTube video has a good introduction to LLMs (Large Language Models) for machine learning [5].

This eye tracker is interesting technology [6]. The video shows it being used for MS Flight Simulator but it can be used for other things. Unfortunately the price of about $550 Australian puts it out of range of a lot of free software work. I think this would be good for tracking the user FOR THEIR BENEFIT so that notifications won’t be delivered when the user is concentrating.

This ABC article about the risk of a past Covid19 infection exacerbating or accelerating Parkinson’s or Alzheimer’s is a worry [7].

Sam Hartman wrote an insightful blog post about AI safety, consent, and discussions of sex [8].

Links October 2023

The Daily Kos has an interesting article about a new more effective method of desalination [1].

Here is a video of a crazy guy zapping things with 100 car batteries [2]. This is something you should avoid if you want to die of natural causes. Does dying while making a science video count for a Darwin Award?

A Hacker News comment has an interesting explanation of Unix signals [3].

Interesting documentary on the rise of mega corporations [4]. We need to split up Google, Facebook, and Amazon ASAP. Also every phone platform should have competing app stores.

Dave Taht gave an interesting LCA lecture about Internet congestion control [5]. He also referenced a web site about projects to alleviate the buffer bloat problem [6].

This tiny event based sensor is an interesting product [7]. It could lead to some interesting (but possibly invasive) technological developments in phones.

Tara Barnett’s Everything Open lecture Swiss Army GLAM had some interesting ideas for community software development [8]. Having lots of small programs communicating with APIs is an interesting way to get people into development.

Actually Hardcore Overclocking has an interesting YouTube video about the differences between x8 and x16 DDR4 DIMMs [9].

Interesting YouTube video from someone who helped the Kurds defend against Turkey about how war tunnels work [10]. He makes a strong case that the Israeli invasion of the Gaza Strip won’t be easy or pleasant.

Hello Kitty

I’ve just discovered a new xterm replacement named Kitty [1]. It boasts about being faster due to threading and using the GPU, and it does appear faster on some of my systems, but that’s not why I like it.

A trend in terminal programs in recent years has been tabbed operation, so you can have multiple sessions in one OS window. This is something I’ve never liked, just as I’ve never liked using Screen to switch between sessions when I had the option of just having multiple sessions on screen. The feature that I like most about Kitty is the ability to have a grid based layout of sessions in one OS window. Instead of having 16 OS windows on my workstation or 4 OS windows on a laptop, with different entries in the window list and the possibility of them getting messed up if the OS momentarily gets confused about the screen size (a common issue with laptop use), I can just have 1 Kitty window that has all the sessions running.

Kitty has “kitten” processes that can do various things. One is icat, which displays an image file in the terminal and leaves it in the scroll-back buffer. I put the following shell code in one of the scripts called from .bashrc to set up an alias for icat.

# only define the alias when running under Kitty
if [ "$TERM" == "xterm-kitty" ]; then
  alias icat='kitty +kitten icat'
fi
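
With that alias in place, displaying an image inline is just the following (the filename is an example):

icat some-photo.jpg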

The kitten interface can be supported by other programs. The version of the mpv video player in Debian/Unstable has a --vo=kitty option which is an interesting feature. However playing a video in a Kitty window that takes up 1/4 of the screen on my laptop takes a bit over 100% of a CPU core for mpv and about 10% to 20% for Kitty which gives a total of about 120% CPU use on my i5-6300U compared to about 20% for mpv using wayland directly. The option to make it talk to Kitty via shared memory doesn’t improve things.

Using this effectively requires installing the kitty-terminfo package on every system you might ssh to. But you can set the term type to xterm-256color when logged in to a system without the kitty terminfo installed. The fact that icat and presumably other advanced terminal functions work over ssh by default is a security concern, but this also works with Konsole and will presumably be added to other terminal emulators so it’s a widespread problem that needs attention.
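
Overriding the terminal type for a single connection is easy, the hostname here is a placeholder:

TERM=xterm-256color ssh server.example.com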

There is support for desktop notifications in the Kitty terminal encoding [2]. One of the things I’m interested in at the moment is how to best manage notifications on converged systems (phone and desktop) so this is something I’ll have to investigate.

Overall Kitty has some great features and definitely has the potential to improve productivity for some work patterns. There are some security concerns that it raises through closer integration between systems and between programs, but many of them aren’t exclusive to Kitty.

Bluetooth Versions and PineTime

I’ve done some tests with the PineTime [1] on different Android phones. On a Huawei Mate 10 Pro (from 2017 with Bluetooth 4.2) it has very slow transfer speeds for updating the firmware (less than 1KB/s) and unreliable connection to the phone. On a Huawei Nova 7i (from 2020 with Bluetooth 4.2) it has slow transfer speeds (about 2KB/s) and a more reliable connection to the phone. On a Pixel 4 XL (from 2019 with Bluetooth 5.0) it has very fast speeds for updating the firmware and also a reliable link.

Version 5 of the Bluetooth standard [2] was released in 2016 so it’s a little disappointing that the Mate 10 Pro doesn’t support it and very disappointing that the Nova 7i doesn’t support it either. Bluetooth 5 adds higher speeds and longer range for LE (Low Energy) modes which are used for things like smart watches.

It’s extremely disappointing that the PinePhonePro [3] only supports Bluetooth 4.1. It’s a phone released in 2021 that doesn’t even have Bluetooth 4.2, which was released in 2014.

For laptops, the Thinkpad X1 Carbon 7th Gen released in 2019 [4] was the first in the X1 Carbon series to have Bluetooth 5. So I will probably be limited in my ability to use my personal laptop or PinePhone for testing Linux software that talks to the PineTime, and I’ll have to use a laptop borrowed from work.