SE Linux Play Machine Over Tor

I work on SE Linux to improve security for all computer users. I think that my work has gone reasonably well in that regard, in terms of directly improving the security of computers and helping developers find and fix certain types of security flaws in apps. But a large part of the security problems we have at the moment are related to subversion of Internet infrastructure, and the Tor project is a significant step towards addressing such problems. So to achieve my goals in improving computer security I have to support the Tor project, and I decided to put my latest SE Linux Play Machine online as a Tor hidden service. There is no real need for it to be hidden (for the record it's in my bedroom), but it's a learning experience for me and for everyone who logs in.

A Play Machine is what I call a system that offers root as the guest account, with only SE Linux to restrict access.

Running a Hidden Service

A Hidden Service in Tor is just a cryptographically protected address that forwards to a regular TCP port. It's not difficult to set up and the Tor project has good documentation [1]. On Debian the file to edit is /etc/tor/torrc.

I added the following 3 lines to my torrc to create a hidden service for SSH. I forwarded port 80 for test purposes because web browsers are easier to configure for SOCKS proxying than ssh.

HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 22 192.168.0.2:22
HiddenServicePort 80 192.168.0.2:22
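
After editing torrc you need to reload Tor, after which the generated .onion address can be read from the hostname file in the HiddenServiceDir. Something like the following should work on Debian (the directory matches the torrc lines above):

service tor reload
cat /var/lib/tor/hidden_service/hostname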

Generally when setting up a hidden service you want to avoid using an IP address that gives anything away. So it's a good idea to run a hidden service on a virtual machine that is well isolated from any public network. My Play Machine is hidden in that manner not for secrecy but to prevent it being used for attacking other systems.

SSH over Tor

Howtoforge has a good article on setting up SSH with Tor [2]. That has everything you need for setting up Tor for a regular ssh connection, but the tor-resolve program only works for connecting to services on the public Internet. By design the .onion addresses used by Hidden Services have no mapping to anything that resembles an IP address, so tor-resolve breaks them. I believe that tor-resolve breaking things in this situation is a bug, and I have filed Debian bug report #776454 requesting that tor-resolve allow such things to just work [3].

Host *.onion
ProxyCommand connect -5 -S localhost:9050 %h %p

I use the above ssh configuration (which can go in ~/.ssh/config or /etc/ssh/ssh_config) to tell the ssh client how to deal with .onion addresses. I also had to install the connect-proxy package which provides the connect program.
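
If you would rather not install connect-proxy, the OpenBSD version of netcat (the netcat-openbsd package on Debian) also supports SOCKS5, so an equivalent configuration should be something like the following (a sketch, not something I have tested):

Host *.onion
ProxyCommand nc -X 5 -x localhost:9050 %h %p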

ssh root@zp7zwyd5t3aju57m.onion
The authenticity of host 'zp7zwyd5t3aju57m.onion ()' can't be established.
ECDSA key fingerprint is 3c:17:2f:7b:e2:f6:c0:c2:66:f5:c9:ab:4e:02:45:74.
Are you sure you want to continue connecting (yes/no)?

I now get the above message when I connect; the ssh developers have dealt with connecting via a proxy that doesn't have an IP address.

Also see the general information page about my Play Machine, that information page has the root password [4].

DNSSEC

reason="verification failed; insecure key"

I’ve recently noticed OpenDKIM on systems I run giving the above message when trying to verify a DKIM message from my own domain. According to Google searches this is due to DNSSEC not being enabled. I’m not certain that I really need DNSSEC for this reason (I can probably make DKIM work without it), but the lack of it does decrease the utility of DKIM and DNSSEC is generally a good thing to have.

Client (Recursive) Configuration

The Debian Wiki page about DNSSEC is really good for setting up recursive resolvers [1]. Basically if you install the bind9 package on Debian/Wheezy (the current stable release) it will work by default. If you have upgraded from an older release then it might not work (i.e. if you modified the BIND configuration and didn't allow the upgrade to overwrite your changes). The Debian Wiki page is also quite useful if you aren't using Debian; most of it is more Linux specific than Debian specific.
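
For reference, the relevant part of the BIND configuration (/etc/bind/named.conf.options on Debian) should look something like the following, where "auto" makes BIND use its built-in root trust anchor:

options {
        dnssec-enable yes;
        dnssec-validation auto;
};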

dig +short test.dnssec-or-not.net TXT | tail -1

After you have enabled DNSSEC on a recursive resolver the above command should return "Yes, you are using DNSSEC".

dig +noall +comments dnssec-failed.org

The above command queries a zone that's deliberately misconfigured; it will fail if DNSSEC is working correctly.
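
Another quick check is to look for the "ad" (authenticated data) flag when querying a signed zone through the resolver, for example the root zone:

dig +noall +comments . SOA @localhost | grep flags

If the flags listed include "ad" then the answer was validated by DNSSEC.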

Signing a Zone

Digital Ocean has a reasonable tutorial on signing a zone [2].

dnssec-keygen -a NSEC3RSASHA1 -b 2048 -n ZONE example.com

The above command creates a Zone Signing Key.

dnssec-keygen -f KSK -a NSEC3RSASHA1 -b 4096 -n ZONE example.com

The above command creates a Key Signing Key. This will take a very long time if you don't have a good entropy source; on my systems it took a couple of days. Run this from screen or tmux.
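
You can watch the kernel entropy pool while the key generation runs; if the number stays low (a few hundred bits or less) then the generation will crawl:

cat /proc/sys/kernel/random/entropy_avail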

$INCLUDE ksk/Kexample.com.+123+12345.key
$INCLUDE zsk/Kexample.com.+123+34567.key

When you have created the ZSK and KSK you need to add something like the above to your zone file to include the DNSKEY records.
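
For illustration, a minimal zone file with the includes would look something like this (the SOA/NS data and the key file names are examples only):

$TTL 3600
@       IN      SOA     ns1.example.com. hostmaster.example.com. (
                        2015012501 7200 3600 1209600 3600 )
        IN      NS      ns1.example.com.
$INCLUDE ksk/Kexample.com.+123+12345.key
$INCLUDE zsk/Kexample.com.+123+34567.key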

all: example.com.signed

%.signed: %
        dnssec-signzone -A -3 $(shell head -c 100 /dev/random | sha1sum | cut -b 1-16) -k $(shell echo ksk/K$<*.key) -N INCREMENT -o $< -t $< $(shell echo zsk/K$<*.key)
        rndc reload

Every time you change your zone you need to create a new signed zone file. Above is the Makefile I'm currently using to generate the signed file. It relies on storing the KSK files in a directory named ksk/ and the ZSK files in a directory named zsk/. Then BIND needs to be configured to use example.com.signed instead of example.com.
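
The BIND side of that is just pointing the zone at the signed file, along these lines (paths will vary):

zone "example.com" {
        type master;
        file "/etc/bind/example.com.signed";
};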

The Registrar

Every time you sign the zone a file with a name like dsset-example.com. will be created. It will have the same contents every time: the DS entries you send to the registrar to have your zone publicly known as being signed.
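
The DS records in that file look something like the following, one per digest type (the key tag and digests here are made up for illustration; NSEC3RSASHA1 is algorithm number 7):

example.com.    IN DS 12345 7 1 0123456789ABCDEF0123456789ABCDEF01234567
example.com.    IN DS 12345 7 2 0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF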

Many registrars don't support DNSSEC; if you use such a registrar (as I do) then you need to transfer your zone before you can productively use DNSSEC. Without the DS entries being submitted through a registrar and included in the TLD no-one will recognise your signatures on zone data.

ICANN has a list of registrars that support DNSSEC [3]. My next task is to move some of my domains to such registrars, unfortunately they cost more so I probably won’t transfer all my zones. Some of my zones don’t do anything that’s important enough to need DNSSEC.

Fixing Strange Directory Write Access

type=AVC msg=audit(1403622580.061:96): avc:  denied  { write } for  pid=1331 comm="mysqld_safe" name="/" dev="dm-0" ino=256 scontext=system_u:system_r:mysqld_safe_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=dir
type=SYSCALL msg=audit(1403622580.061:96): arch=c000003e syscall=269 success=yes exit=0 a0=ffffffffffffff9c a1=7f5e09bfe798 a2=2 a3=2 items=0 ppid=1109 pid=1331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mysqld_safe" exe="/bin/dash" subj=system_u:system_r:mysqld_safe_t:s0 key=(null)

For a long time (probably years) I've been seeing messages like the above in the log from auditd (/var/log/audit/audit.log) when starting mysqld. I hadn't fixed it because the amount of work exceeded the benefit; it's just a couple of lines logged at every system boot. But today I decided to fix it.

The first step was to find out what was going on, so I ran a test system in permissive mode and noticed that there were no attempts to create a file (that would have been easy to fix). Then I needed to discover which system call was triggering this. The syscall number is 269, and the file linux/x86_64/syscallent.h in the strace source shows that 269 is the system call faccessat. faccessat(2) and access(2) are annoying cases: they do all the permission checks for access but don't perform the operation, so when a program uses those system calls but for some reason doesn't perform the operation in question (in this case writing to the root directory) we just get a log entry and nothing happening to examine.
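
As an aside, if the auditd package is installed then there is an easier way than reading the strace source: the ausyscall utility maps syscall numbers to names for the current architecture, so something like the following should work:

# ausyscall 269
faccessat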

A quick look at the shell script didn't make the problem obvious. Note that the problem is probably obvious to people who are more skilled at shell scripting than me, but it's probably good for me to describe how to solve such problems step by step. So the next step was to use gdb. Here is the start of my gdb session:

# gdb /bin/sh
[skipped]
Reading symbols from /bin/dash...(no debugging symbols found)...done.
(gdb) b faccessat
Breakpoint 1 at 0x3960
(gdb) r -x /usr/bin/mysqld_safe
[lots skipped]
+ test -r /usr/my.cnf
Breakpoint 1, 0x00007ffff7b0c7e0 in faccessat ()
from /lib/x86_64-linux-gnu/libc.so.6

After running gdb on /bin/sh (which is a symlink to /bin/dash) I used the "b" command to set a breakpoint on the function faccessat (a glibc library call that invokes the system call sys_faccessat()). A breakpoint means that program execution will stop when the function is called. I ran the shell script with "-x" as the first parameter to instruct the shell to show me the shell commands that are run, so I could match shell commands to system calls. The above output shows the first call to faccessat(), which isn't interesting (it's testing for read access).

I then ran the “c” command in gdb to continue execution and did so a few times until I found something interesting.

+ test -w / -o root = root
Breakpoint 1, 0x00007ffff7b0c7e0 in faccessat ()
from /lib/x86_64-linux-gnu/libc.so.6

Above is the interesting part of the gdb output. It shows that the offending shell command is "test -w /".
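
The essence of the fix is to avoid the faccessat() call when it isn't needed. "test A -o B" evaluates both operands, but the shell's "||" operator short-circuits, so reordering the check along the following lines (a sketch, not necessarily the exact patch I submitted) stops the write check from running when the user is root:

if test "$user" = root || test -w /; then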

I filed Debian bug #752593 [1] with a patch to fix this problem.

I also filed a wishlist bug against strace asking for an easier way to discover the name of a syscall [2].

Replacement Credit Cards and Bank Failings

I just read an interesting article by Brian Krebs about the difficulty in replacing credit cards [1].

The main reason that credit cards need to be replaced is that they have a single set of numbers that is used for all transactions. If credit cards were designed properly for modern use (i.e. since 2000 or so) they would act as a smart-card, with that as the recommended way of paying in store. Currently I have a Mastercard and an Amex card; the Mastercard (issued about a year ago) has no smart-card feature, and as Amex is rejected by most stores I've never had a chance to use the smart-card part of a credit card. If all American credit cards had a smart-card feature which was recommended by store staff then the problems that Brian documents would never have happened: the attacks on Target and other companies would have got very few card numbers and the companies that make cards wouldn't have a backlog of orders.

If a bank were to buy USB smart-card readers for all their customers then the readers would be very cheap (the hardware is simple, so the unit price would be low when purchasing a few million). As banks are greedy they could make customers pay for the readers and even make a profit on them. Then for online banking at home the user could use a code that's generated for the transaction in question and thus avoid most forms of online banking fraud. The only remaining form of fraud would be to make a $10 payment to a legitimate company become a $1000 payment to a fraudster, but that's a lot more work and a lot less money than other forms of credit card fraud.

A significant portion of all credit card transactions performed over the phone are made from the customer’s home. Of the ones that aren’t made from home a significant portion would be done from a hotel, office, or other place where a smart-card reader might be conveniently used to generate a one-time code for the transaction.

The main remaining problem seems to be the use of raised numbers. Many years ago it was common for credit card purchases to involve some form of "carbon paper", with the raised numbers making an impression on the credit card transfer form. I don't recall ever using a credit card in that way; I've only had credit cards for about 18 years, and my memories of raised numbers being used to make an impression on paper only involve watching my parents pay when I was young. It seems likely that someone who likes paying by credit card and does so at small companies might have some recent experience of "carbon paper" payment, but anyone who prefers EFTPOS and cash probably wouldn't.

If the credit card number (used for phone and Internet transactions in situations where a smart-card reader isn't available) wasn't raised then it could be changed by posting a sticker with a new number that the customer could apply to their card. The customer wouldn't even need to wait for the post before their card could be used again, as the smart-card part would never be invalid. The magnetic stripe on the card could be changed at any bank, and there's no reason why an ATM couldn't identify a card by its smart-card and then write a new magnetic stripe automatically.

These problems aren't difficult to solve. The amounts of effort and money involved in solving them are tiny compared to the costs of cleaning up the mess from a major breach such as the recent Target one. The main thing that needs to be done to implement my ideas is widespread support of smart-card readers, and that seems to have been done already. It seems to me that the main problem is the incompetence of financial institutions. I think the fact that there's no serious competitor to Paypal is one of the many obvious proofs of the incompetence of financial companies.

The effective operation of banks is essential to the economy, and the savings of individuals are guaranteed by the government (so when a bank fails a lot of tax money will be used). It seems to me that we need to have national banks run by governments with the aim of financial security. Even if banks were good at their business (and they obviously aren't) I don't think that they can be trusted with it; an organisation that's "too big to fail" is too big to lack accountability to the citizens.

Fingerprints and Authentication

Dustin Kirkland wrote an interesting post about fingerprint authentication [1]. He suggests using fingerprints for identifying users (NOT for authentication) and gives an example of a married couple sharing a tablet and using fingerprints to determine whose apps are loaded.

In response Tollef Fog Heen suggests using fingerprints for lightweight authentication, such as resuming a session after a toilet break [2].

I think that one of the best comments on the issue of authentication for different tasks is in XKCD comic 1200 [3]. It seems obvious that the division between administrator (who installs new device drivers etc) and user (who does everything from playing games to online banking with the same privileges) isn’t working, and never could work well – particularly when the user in question installs their own software.

I think that one thing which is worth considering is the uses of a signature. A signature can be easily forged in many ways, and signatures often aren't checked well. It seems that there are two broad cases of using a signature: one is entering into a legally binding serious contract such as a mortgage (where the intent to sign is the relevant issue), and the other is cases where the issue doesn't matter so much (e.g. signing off on a credit card purchase, where the parties at risk can afford to lose money on occasion in exchange for efficient transactions). Signing is relatively easy, but that's because it either doesn't matter much or because it's just a legal issue which isn't connected to authentication. The possibility of serious damage (sending life savings or incriminating pictures to criminals in another jurisdiction) being done instantly never applied to signatures. It seems to me that in many ways signatures are comparable to fingerprints, and neither of them is particularly good for authentication to a computer.

In regard to Tollef's ideas about "lightweight" authentication, I think that the first thing required is direct user control over the authentication needed to unlock a system. I have read about some Microsoft research into a computer monitoring the office environment to better facilitate the user's requests; an obvious extension to such research would be to have greater unlock requirements if there are more unknown people in the area or if the device is in a known unsafe location. But apart from that sort of future development, it seems sensible to let the user request a greater or lesser authentication check either at the time they lock their session or by policy. Generally users have a reasonable idea about the risk of another user trying to log in with their terminal, so the user should be able to decide that a toilet break when at home only requires a fingerprint (enough to keep out other family members) while a toilet break at the office requires greater authentication. Mobile devices could use GPS location to determine unlock requirements; GPS can be forged, but if your attacker is willing and able to do that then you have a greater risk than most users.

Some users turn off authentication on their phone because it’s too inconvenient. If they had the option of using a fingerprint most of the time and a password for the times when a fingerprint can’t be read then it would give an overall increase in security.

Finally it should be possible to unlock only certain applications. Recent versions of Android support widgets on the lock screen so you can perform basic tasks such as checking the weather forecast without unlocking your phone. But it should be possible to have different authentication requirements for various applications. Using a fingerprint scan to allow playing games or reading email in the mailing list folder would be more than adequate security. But reading the important email and using SMS probably needs greater authentication. This takes us back to the XKCD cartoon.

Dr Suelette Dreyfus LCA Keynote

Dr Suelette Dreyfus gave an interesting LCA keynote speech on Monday (it's online now for people who aren't attending LCA [1]). One of the interesting points she made was regarding the greater support for privacy protection in Germany; this is apparently due to so many German citizens having read their own Stasi files.

The section of her talk about the technology that is being used against us today was very concerning. I wonder whether we should plan to move away from using any hardware or closed source software from the US, China, and probably most countries other than Germany.

We really need to consider these issues at election time. I have previously blogged some rough ideas about having organisations such as Linux Australia poll parties to determine how well they represent the interests of citizens who use Linux [2]. I think that such things are even more important now. Steven Levy wrote an interesting summary of the situation for Wired [3].

At the end of her talk Suelette suggested that Aspies might be more likely to be whistle-blowers due to being unable to recognise the social signals about such things (i.e. managers say that they won't punish people for speaking out, but most people recognise that to be a lie). It's a plausible theory, but I'm worried that managers might decide to avoid hiring Aspies because of this. I wonder how many managers plan to have illegal activity as an option. But I guess that having criminals refuse to hire me wouldn't be such a bad thing.

Security is Impossible

The Scope of the Problem

Security is inherently complex because of the large number of ways of circumventing it. For example Internet facing servers have been successfully attacked based on vulnerabilities in the OS, the server application, public key generation, DNS, SSL key certificates (and many other programs and algorithms in use), as well as the infrastructure and employees of all companies in the chain. When all those layers work reasonably well (not perfectly but well enough to not obviously be the weakest link) there are attacks on the end user systems that access the servers (such as the trojan horse programs used to attack PCs used for online banking).

My Area of Interest

The area of security that interests me is Linux software development. There are many related areas, such as documentation and default configurations that make it easier for people to secure their systems (instead of insecure systems being the default option), which are all important.

There are also many related fields such as ensuring that all people with relevant access are trustworthy. There are many interesting problems to solve in such areas most of which aren’t a good match for my skills or just require more time than I have available.

I sometimes write blog posts commenting on random security problems in other areas. Sometimes I hope to inspire people to new research, sometimes I hope to just inform users who can consider the issues when implementing solutions to security problems.

Bugs

On the software development side there are ongoing problems of bugs in code that weaken security. The fact that the main area of concern for people who are interested in securing systems is fixing bugs is an indication that the problem of software quality needs a lot of work at the moment.

The other area that gets a reasonable amount of obvious work is in access control. Again it’s an area that needs a lot of work, but the fact that we’re not done with that is an indication of how far there is to go in generally improving computer security.

Authenticating Software Releases

There have been cases where source code repositories have been compromised to introduce trojan horse code. The ones I've read about were discovered reasonably quickly with little harm done, but there could be some which weren't discovered. Of course it's likely that such attacks will be discovered because someone will have the original and the copies can be compared.

Repositories of binaries are a bigger problem; it's not always possible to recompile a program and get a binary which checks out as being identical (larger programs often include the build time in the binary). Even for build processes which don't include such data it can be very difficult to determine the integrity of a build process. For example, programs compiled with different versions of libraries, header files, or compilers will usually differ slightly.
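
A trivial demonstration of the timestamp problem: compile the same source twice and the binaries differ, because __TIME__ is expanded at compile time (a contrived example, but real programs embed such data in less obvious ways):

printf '#include <stdio.h>\nint main(void){puts(__DATE__ " " __TIME__);return 0;}\n' > ts.c
gcc -o ts1 ts.c
sleep 1
gcc -o ts2 ts.c
cmp ts1 ts2 || echo "binaries differ"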

As most developers frequently change the versions of such software they will often be unable to verify their own binaries and any automated verification of such binaries will be impossible for anyone else. So if a developer’s workstation was compromised without their knowledge it might be impossible for them to later check whether they released trojan binaries – without just running the binaries in question and looking for undesired behavior.

The problem of verifying past binaries is solvable for large software companies, Linux distributions, and all other organisations that have the resources to keep old versions of all binaries and libraries used to build software. For proprietary software companies the verification process would have to start with faith in the vendor of their OS and compiler doing the right thing. For Linux distributions and other organisations based on free software it would start by having the source to everything which can then be verified in theory – although in practice verifying all source for compilers, the OS, and libraries would be a huge undertaking.

Espionage

There is a well documented history of military espionage: people who are sworn to secrecy have been subverted by money, by blackmail, and by holding political beliefs which don't agree with their government's. The history of corporate espionage is less well documented, but as corporations perform less stringent background checks than military organisations I think it's safe to assume that corporate espionage is much more common.

Presumably any government organisation that can have any success at subverting employees of a foreign government can be much more successful in subverting programmers (either in companies such as Microsoft or in the FOSS community). One factor that makes it easier to launch such attacks is the global nature of software development. Government jobs that involve access to secret data have requirements about where the applicant was born and has lived, corporate jobs and volunteer positions in free software development don’t have such requirements.

The effort involved in subverting an existing employee of a software company or contributor to free software or the effort involved in getting an agent accepted in such a project would be quite small when compared to a nuclear weapons program. Therefore I think we should assume that every country which is capable of developing nuclear weapons (even North Korea) can do such things if they wish.

Would the government of such a country want to subvert a major software project that is used by hundreds of millions of people? I can imagine ways that such things could benefit a government and while there would be costs for such actions (both in local politics and international relations) it seems most likely that some governments would consider it to be worth the risk – and North Korea doesn’t seem to have much to lose.

Conclusion

We would like every computer to be like a castle, with a strong wall separating it from the bad things and no way of breaching the wall that isn't obvious. But with increasingly complex systems depending on more people and other systems, the situation is becoming more like biology than engineering. We can think of important government systems as being comparable to people with compromised immune systems who are isolated from any risk of catching a disease: the consequences of an infection are worse, so greater isolation measures are required.

For regular desktop PCs getting infected by a trojan is often regarded as being similar to catching a cold in winter. People just accept that their PC will be infected on occasion and don't make any serious effort to prevent it. After an infection is discovered the user (or their management for a corporate PC) tends not to be particularly worried about data loss, in spite of some high profile data leaks from companies that do security work and the ongoing attacks against online banking and webcam spying on home PCs. I don't know what it will take for users to start taking security risks seriously.

I think that a secure boot is a good step in the right direction, but it’s a long way from being able to magically solve all security problems. I’ve previously described some of the ways that secure boot won’t save you [1].

The problem of subverted developers doesn't seem to be an immediate concern (although we should consider the possibility that it might be happening already without anyone noticing). The ongoing trend is that the value of computers in society is steadily increasing, which increases the rewards for criminals and spy agencies who can compromise them. Therefore it seems that we will definitely face the problem of subverted developers if we adequately address the current technical problems related to flaws in software and inadequate access control. We just need to fix some of the problems which are exploited more easily, to force the attackers to use the more difficult and expensive attacks. Note that making attacks more difficult is a really good thing: it decreases the number of organisations that are capable of attack, even though it won't stop determined attackers.

For end user systems the major problem seems to be related to running random programs from the Internet without a security model that adequately protects the system. Both Android and iOS make good efforts at protecting a system in the face of random hostile applications, but they have both been shown to fail in practice (it might be a good idea to have a phone for games that is separate from the phone used for phone calls etc). More research into OS security is needed to address this. But in the mean time users need to refrain from playing games and viewing porn on systems that are used for work, Internet banking, and other important things. While PCs are small and cheap enough that having separate PCs for important and unimportant tasks is practical it seems that most users don’t regard the problems as being serious enough to be worth the effort.

SE Linux Things To Do

At the end of my talk on Monday about the status of SE Linux [1] I described some of the things that I want to do with SE Linux in Debian (and general SE Linux stuff). Here is a brief summary of some of them:

One thing I've wanted to do for years is to get X Access Controls working in Debian. This means that two X applications could have windows on the same desktop but be unable to communicate with each other by any of the X methods (this includes screen capture and the clipboard). It seems that the Fedora people are moving to sandboxing processes with Xephyr for X access (see Dan Walsh's blog post about sandbox -X [2]). But XACE will take a lot of work, and time is always an issue.

An ongoing problem with SE Linux (and most security systems) is the difficulty of running applications with minimum privilege. One example of this is utility programs that can be run by multiple programs: if a utility is usually run by a privileged process then we probably won't notice that it requires excess privileges until it's run in a different context. This is a particular problem when trying to restrict programs that may be run as part of a user session. A common example is programs that open files read-write when they only need to read them; if such a program aborts when it can't open the file in question then we will have a problem when it's run from a context that doesn't grant it write access. To deal with such latent problems I am considering ways of analysing the operation of systems to try to determine which programs request more access than they really need.

During my talk I discussed the possibility of using a shared object to log file open/read/write to find such latent problems. A member of the audience suggested static code analysis which seems useful for some languages but doesn’t seem likely to cover all necessary languages. Of course the benefit of static code analysis is that it will catch operations that the program doesn’t perform in a test environment – error handling is one particularly important corner case in this regard.
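
As a rough approximation of the shared object idea, strace can already show which files a program opens read-write when read access might suffice (some-utility is a placeholder for the program being analysed):

strace -f -e trace=open,openat -o /tmp/opens.log some-utility
grep O_RDWR /tmp/opens.log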

My SE Linux Status Report – LCA 2013

This morning I gave a status report on SE Linux. The talk initially didn't go too well; I wasn't in the right mental state for it and I moved through the material too fast. Fortunately Casey Schaufler asked some really good questions which helped me to get back on track. The end result seemed reasonably good. Here's a summary of the things I discussed:

Transaction hooks for RPM to support SE Linux operations. This supports signing packages to indicate their security status and preventing packages from overwriting other packages or executing scripts in the wrong context. There is also work to incorporate some of those features into dpkg for Debian.

Some changes to libraries to allow faster booting. Systems with sysvinit and a HDD won’t be affected but with systemd and SSD it makes a real difference. Mostly Red Hat’s work.

Filename transition rules, which allow the initial context to be assigned based on file name, were created in 2011 but are now starting to get used.

When systemd is used for starting/stopping daemons some hacks such as run_init can be avoided. Fedora is making the best progress in this regard due to only supporting systemd, while the need to support other init systems will limit what we can do for Debian. This improves security by stopping terminal buffer insertion attacks and also improves reliability by giving the daemon the same inherited settings each time it's executed.

Labelled NFS has been accepted as part of the NFSv4.2 specification. This is a big deal as labelled NFS work has been going on for many years without reaching such a milestone.

There is ZFS and BTRFS support, but we still need to consider management issues for such snapshot-based filesystems. Filesystem snapshots have the potential to interact badly with relabelling if we don't develop code and sysadmin practices to deal with them properly.

The most significant upstream focus of SE Linux development over the last year is SE Android. I hope that will result in more work on the X Access Controls for use on the desktop.

During question time I also gave a 3 minute “lightning talk” description of SE Linux.

Finding an ATM Skimmer

A member of SAGE-AU [1] found two ATM skimmers [2] and gave me permission to publish his description and analysis of the situation. I’ve lightly edited this from a mailing list post to a blog format with permission from the author. This Courier-Mail article refers to the skimmers in question [3].

People were wondering what gave the skimmers away, so here goes. NB: this is only about the two I discovered.

  1. The ATMs in question were the free-standing type (but even this doesn't matter much in the scheme of things, because skimmers can also be placed on machines in a bank of ATMs).
  2. I'd actually conducted a transaction and was waiting for my card to come out of the machine: these things looked that good. The colours matched, especially in the three-quarter or less light that you typically have on the fascias of such machines. The backing plate grey matched the ATM fascia, as did the green "bubble" where the card goes.
  3. WHAT REALLY CAUSED SUSPICION: my card was having difficulty coming out of the ATM at the end of the transaction, i.e. the card came out extra slow and then only the last couple of mm. I had to physically grab my card with my fingertips to get it out, and there was a barely perceptible movement of the skimmer as my fingers used the green "bubble" as a purchase point. THAT was what made me suspect. I then had a really close look and found that I could move the "bubble" with its backing plate. I pulled it off the machine, then looked at the ATM next to it and found it to look exactly the same. These things are held on by double-sided tape.
  4. I grabbed the cleaning lady wandering past, showed her the device, and asked her to get security. Security and the centre operations manager subsequently showed up; while waiting for them I had to stop people from using either machine (everyone was amazed at how good these things looked). The centre ops guy went and checked the other machines in the centre. I left my details and they called the cops. I then went straight to my credit union and reported what had happened, and they cancelled my card and ordered a new one on the spot for me.
  5. Coincidentally (or not), the centre ops and security lady told me that the machines had been serviced (refilled) not much earlier that day, i.e. I wondered if the bad guys did the "service" or were tracking the Armaguard servicing types.

Quick side notes:

  1. 3 more skimmers have been found since then.
  2. Subsequently, I found out these were the type that needed to be picked up for the bad guys to retrieve the data, i.e. these weren't the type that transmit to someone sitting nearby via Bluetooth/wireless, so in this instance I need not have cancelled my card and gotten a new one from my credit union.
    HOWEVER, it is best practice if you discover one and you've used that machine to immediately have your financial institution cancel your card and issue you a new one, though getting the new one can take up to a week.
  3. As I understand it, these 2 devices (others could be different) have 2 USB ports: one for the reader and the other for a pinhole camera (a commercially available type removed from its original housing). The magnetic stripe data is held on the audio track associated with the video, and there was an 8GB storage card to hold it all, i.e. it makes things easier for the bad guys to match PINs to card details.
  4. If you do find a skimmer DO NOT touch the insides (the non-public-facing parts); this is where the cops can really try to lift DNA and prints from. Gathering prints externally is far more fraught as everyone and their dog has probably touched the exterior of the skimmer.
  5. In the lead-up to Xmas these things or similar are highly likely to become more prevalent as we all go about parting with dosh while gift shopping, SO BE AWARE AND CAREFUL.