Logging in as Root

Martin Meredith wrote a blog post about logging in as root and the people who so strongly advocate against it [1]. The question is whether you should ssh directly to the root account on a remote server or whether you should ssh to a non-root account and use sudo or su to gain administrative privileges.

Does sudo/su make your system more secure?

Some years ago the administrator of a SE Linux Play Machine used the same home directory for play user logins, for administrative logins, and for his own logins – he used newrole to gain administrative access (like su or sudo but for SE Linux).

His machine was 0wned by one of his friends, who created a shell function named newrole in one of his login scripts that used netcat to send the administrative password out over the net. He didn’t realise that anything was wrong until his friend changed the password and locked him out! This is one example of a system being 0wned in spite of double authentication – of course if he had logged in directly with administrative privs while using the same home directory that the attacker could write to then he would still have lost, but the attacker would have had to do a little more work.

When you login you have lots of shell scripts run on your behalf which have the ability to totally control your environment, if someone has taken over those scripts then they can control everything you see, when you think you run sudo or something they can get the password. When you ssh in to a server your security relies on the security of the client end-point, the encryption of the ssh protocol (including keeping all keys secure to prevent MITM attacks), and the integrity of all the programs that are executed before you have control of the remote system.
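
To make that concrete, here is a hypothetical sketch of such an attack (the log file path is my invention; a real attacker would exfiltrate over the network, as in the newrole example above):

```shell
# Hypothetical sketch: a shell function named "sudo" planted in a
# compromised ~/.bashrc shadows /usr/bin/sudo, so it runs instead of
# the real binary whenever the victim types "sudo".
sudo() {
    # An attacker would capture the password and command here and send
    # them out over the network (e.g. with netcat); this sketch just
    # appends to a local file to show the interception point.
    echo "intercepted: sudo $*" >> /tmp/intercepted.log
    command sudo "$@"    # then run the real sudo so nothing looks amiss
}
```

The only defence the victim has is that running "type sudo" would reveal the function – but an attacker who controls the login scripts can shadow type as well.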

One benefit of using sshd to spawn a session without full privileges is the case where you fear an exploit against sshd and are running SE Linux or some other security system that goes way beyond Unix permissions. It is possible to configure SE Linux in the “strict” configuration to deny administrative rights to any shell that is launched directly by sshd. Therefore someone who cracks sshd could only wait until an administrator logs in and runs newrole, and wouldn’t be able to immediately take over the system. If a sysadmin suspected that an sshd compromise was possible then they could login through some other method (maybe visit the server and login at the console) to upgrade sshd. This is however a very unusual scenario and I suspect that most people who advocate using sudo exclusively don’t use a SE Linux strict configuration.

Does su/sudo improve auditing?

If you have multiple people with root access to one system it can be difficult to determine who did what. If you force everyone to use su or sudo then you will have a record of which Unix account was used to start the root session. Of course if multiple people start root shells via su and leave them running then it can be difficult to determine which of the people who had such shells running made the mistake – but at least that reduces the list of suspects.

If you put “PermitUserEnvironment yes” in /etc/ssh/sshd_config then you have the option of setting environment variables by ssh authorized_keys entries, so you could have an entry such as the following:

environment="ORIG_USER=john" ssh-rsa AAAAB3Nz[…]/w==

Then you could have the .bashrc file (or a similar file for your favorite shell) have code such as the following to log the relevant data to syslogd:
if [ "$SSH_TTY" = "" ]; then
  logger -p auth.notice "user $ORIG_USER ran command \"$BASH_EXECUTION_STRING\" as root"
else
  logger -p auth.notice "user $ORIG_USER logged in as root on tty $(tty)"
fi

I think that forcing the use of su or sudo might improve the ability to track other sysadmins if the system is not well configured. But it seems obvious that the same level of tracking can be implemented in other ways with a small amount of effort. It took me about 30 minutes to devise the above shell code and configuration options; it should take people who read this blog post about 5 minutes to implement it (or maybe 10 minutes if they use a different shell or have some combination of Bash configuration that results in non-obvious use of initialisation scripts – EG if you have a .bash_profile file then .bashrc may not be executed).

Once you have the above infrastructure for logging root login sessions it wouldn’t be difficult to run a little script that asks the sysadmin “what is the purpose for your root login” and logs what they type. If several sysadmins are logged in at the same time and one of them describes the purpose of their login as “to reconfigure LDAP” then you know who to talk to if your LDAP server stops working!
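
A minimal sketch of such a script (entirely my own – the prompt text, the root-login log tag, and the reuse of the ORIG_USER variable from the earlier example are all arbitrary choices):

```shell
# Prompt for the purpose of a root login session and record it.  The
# message is printed as well as sent to syslog so the behaviour is easy
# to see; ORIG_USER comes from the authorized_keys environment option.
log_login_purpose() {
    # the prompt is only displayed when stdin is a terminal
    read -r -p "What is the purpose of this root login? " purpose
    msg="root login by ${ORIG_USER:-unknown}: $purpose"
    echo "$msg"
    # ignore failure if no syslog daemon is available
    logger -t root-login "$msg" 2>/dev/null || true
}
```

Calling log_login_purpose from root’s .bashrc would then record something like “root login by john: to reconfigure LDAP” in the system log.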

Should you run commands with minimum privilege?

Running each command with the minimum privilege is generally regarded as a good idea. But if the only reason you login to a server is to do root tasks (restarting daemons, writing to files that are owned by root, etc) then there really isn’t a lot of potential for achieving anything by doing so. If you need to use a client for a particular service (EG a web browser to test the functionality of a web server or proxy server) then you can login to a different account for that purpose – the typical sysadmin desktop has a dozen xterms open at once, so using one for root access to do the work and another for non-root access to do the testing is probably a good option.

Can root be used for local access?

Linux Journal has an article about the distribution that used to be known as Lindows (later Linspire) which used root as the default login for desktop use [2]. It suggests using a non-root account because “If someone can trick you into running a program or if a virus somehow runs while you are logged in, that program then has the ability to do anything at all” – of course someone could trick you into running a program or virus that attempts to run sudo (to see if you enabled it without password checks) and if that doesn’t work waits until you run sudo and sniffs the password (using pty interception or X event sniffing). The article does correctly note that you can easily accidentally damage your system as root. Given that the skills of typical Linux desktop users are significantly less than those of typical system administrators it seems reasonable to assume that certain risks of mistake which are significant for desktop users aren’t a big deal with skilled sysadmins.

I think that it was a bad decision by the Lindows people to use root for everything due to the risk of errors. If you make a mistake on a desktop system as non-root then if your home directory was backed up recently and you use IMAP or caching IMAP for email access then you probably won’t lose much of note. But if you make a serious mistake as root then the minimum damage is being forced to do a complete reinstall, which is time consuming and annoying even if you have the installation media handy and your Internet connection has enough quota for the month to complete the process.

Finally there are some services that seek out people who use the root account for desktop use. Debian has some support channels on IRC [3] and I decided to use the root account from my SE Linux Play Machine [4] to see how they go. #debian has banned strings matching root. #linpeople didn’t like me because “Deopped you on channel #linpeople because it is registered with channel services“. #linuxhelp and #help let me in, but nothing seemed to be happening in those channels. Last time I tried this experiment I had a minor debate with someone who repeated a mantra about not using root and didn’t show any interest in reading my explanation of why root:user_r:user_t is safe for IRC.

I can’t imagine what good the #debian people expect to gain from denying people the ability to access that channel with an IRC client that reports itself to be running as root. Doing so precludes the possibility of educating them if you think that they are doing something wrong (such as running a distribution like Lindows/Linspire).


I routinely ssh directly to servers as root. I’ve been doing so for as long as ssh has been around and I used to telnet to systems as root before that. Logging in to a server as root without encryption is in most cases a really bad idea, but before ssh was invented it was the only option that was available.

For the vast majority of server deployments I think that there is no good reason to avoid sshing directly as root.

19 comments to Logging in as Root

  • Anonymous Coward


    You are one of the few with a brain worth listening to in the programming community. This post is proof.



  • Tzafrir

    If you have the brains to figure out how to safely use a root user, you should also have the brains to figure out how to avoid having that fact shown in your IRC client :-) .

    BTW: “Deopped you on channel #linpeople because it is registered with channel services” has nothing to do with being non-root. It means that the channel is registered and “guarded”. There are some people who automatically get op when connecting to it, and even though nobody’s there, you won’t get op automatically. This is practically the same as having ChanServ permanently in that channel.

  • etbe

    Anon: Thanks.

    Tzafrir: So people without the “brains to figure out” both those things don’t get an opportunity to learn from #debian…

    As for #linpeople, I’m not an IRC expert.

  • zomglol

    I had to look at the date to see if today was April fools day.

    Sadly, it isn’t.

    Logging in remotely as root? Using the root account at all?


    Good luck getting a new job buddy, how do you audit the root user logging in from the console?

    I bet you don’t ship logs either.

    You shouldn’t be in a server room, and you really shouldn’t be advising others.

    You can tell that you haven’t ever been audited, and your company is so small that you are probably the security officer too.

  • Frank

    Logging in as root directly is a bad idea:
    1. There is no audit (a user script has nothing to do with auditing)
    2. Each machine should have a distinct password, you probably don’t want to remember them all ;) [rsa key could be an option here, but the rsa keys should be properly managed]

    In a large organization, privileges should be delegated to various teams, depending on their actual needs (DBA and so on), sudo definitely is the way to go in this case.

    I agree there is nothing wrong for a sysadmin to open a shell as root to perform actual admin tasks only.

  • etbe

    zomglol: I’m guessing that you have had little experience at running real servers. One thing I’ve noticed from experience in doing sysadmin work at many companies (large and small) is that the larger companies tend to have the least effective auditing (they did have some good paperwork that purported to involve auditing). I once worked for one of the world’s largest companies (in terms of market capitalisation) where NONE of the servers that they ran had the latest security patches applied (not even in network facing daemons) and some fairly serious problems were ignored by their security team (such as a web server routinely SEGVing in response to certain combinations of user data).

    Finally anyone who uses the “good luck getting a new job” line obviously knows nothing about the hiring process.

    I’m happy to have people disagree, but please keep in mind the fact that anonymous people need to demonstrate some knowledge before they are taken seriously.

    Frank: What exactly do you mean by “auditing” in this context? Apart from some unusual corner cases that more resemble my SE Linux Play Machine than most servers the root user can wipe any audit logs intentionally through malice or accident so a log that’s entirely optional (coming from the root .login or similar) will in most cases do just as much good in practice as anything else that you might run.

    I agree that it would be ideal for every system to have a distinct password and for the sysadmins to never login from server A to server B so that someone who cracks server A can’t crack server B (at least not easily). But in practice that doesn’t happen. Maybe I should write a blog post about “things that sysadmins should do in theory but almost never do in practice”. ;)

    My experience is that it’s really difficult to convince an average sysadmin that they shouldn’t run random binaries that they download from the Internet on a system that they use to gain administrative access to servers. I once worked at a site where the sysadmins would routinely email Windows executables containing joke animations to a significant number of people in the company and couldn’t understand why this might be a bad idea.

    Now if you were running an MLS system for military use then everything would be entirely different. But my best advice for people considering a career as a military sysadmin is – don’t do that, by all accounts it really doesn’t sound like fun. ;)

  • zomglol

    “I’m guessing that you have had little experience at running real servers.”

    Funny, I have thousands of servers managed with puppet. All faithful domain members, standardized and hardened to limit sudo, and remove su and console as root.

    I can deploy 1000 more overnight with the push of a button, and then put them into service with a workflow.

    Little experience? No, I just know how to properly deploy enterprise systems. Unfortunately, it seems that you do not.

    Larger companies sometimes don’t audit effectively, but that doesn’t make it OK, actually it’s your JOB to ensure that they do. If you are admitting that you have lots of unpatched servers, then I question your capability as a manager of those systems.

    If they are missing patches, and you are responsible for the systems it’s all you buddy.

    Sounds like you have a small amount of experience, but not nearly enough in a real enterprise.

  • etbe

    zomglol: Of course you claim to run thousands of servers, you are writing anonymously and no-one can verify anything you say and you believe that you can claim anything you like. But we can just look at what you write and compare it to what happens at large corporations.

    Anyone who claims that a sysadmin at a large corporation has sole responsibility for deploying patches obviously hasn’t had any experience doing such work. At a small company you can just do things, make it policy that you do changes after hours and work late any time you need to apply patches that involve downtime – it’s easy. At a large company you have to deal with change-control procedures, you need to get managers from the client side to sign-off on the down-time which means convincing them of the need to install the patches. Then you need to have rollback plans in case the patch is deemed to have failed and people with no technical knowledge often get to assess whether the patch succeeded. One time I arranged to do a routine RHEL kernel upgrade on a web server. I did the upgrade at 10PM, the client representative said at 11PM that the web server “felt slower” so at midnight we were testing the old version of the kernel again. That’s the sort of foolishness that happens at big corporations.

    Determining plans for auditing at a large company is something that should start from the CSO, not some sysadmin who gets big ideas.

    At a small company the sysadmin sometimes has a job title like “IT Manager”. At a large company anyone who has their hands on a keyboard is no-where near management and will do as they are told.

  • zomglol

    Absolutely right, I am posting quasi anonymously, and you cannot prove anything.

    That said, I use puppet for configuration management, Likewise for domain authentication and GPO, kickstart for deployment, and cobbler and koan for deployment automation. There’s a lot more to my configuration than just that but what I have given you is a simple but effective combination, google it.

    I replicate repositories, and package all software (even COTS) for automation.

    You may not be able to verify what I state here, but you can get some idea that I’m not your average idiot. If you can’t, that’s your problem.

    I didn’t state that you had sole responsibility, however under RACI for most corporations patch management falls on your team. It is your responsibility to coordinate and deploy patches, and it is also your responsibility if you don’t, unless you have a functional exceptions process.

    “One time I arranged to do a routine RHEL kernel upgrade on a web server. I did the upgrade at 10PM, the client representative said at 11PM that the web server “felt slower” so at midnight we were testing the old version of the kernel again. That’s the sort of foolishness that happens at big corporations.”

    Whining about patches causing the type of issues that you do indicates that you don’t know the first thing about building enterprise solutions. Had you built your applications correctly you would have simply patched one of many servers and bounced it with the appropriate approval. The load balancer would have shifted any traffic destined to that host to your remaining hosts, and it wouldn’t have been a blip on your customers radar.

    Had your patch installation failed, you could simply have instituted your backout procedure, without impact to service. See how I did that there? I provide application availability, I don’t build “servers”.

    Had you been working in a true enterprise, your patch would have walked through two tiers of testing prior to production deployment, so maybe you should stop writing bad articles and focus on building a three tiered architecture, and high availability so you don’t have any more outages?

    It really sounds like you are chasing the wrong rabbit, you are still thinking about “servers” and not business applications and continuity.

    As for auditing and security, I don’t know about you, but I talk to my C levels quite often. You can work for a large company, be hands-on, and still maintain a relationship with upper management.

    If you can’t that is a you problem.

    Sounds to me like you know a few large company words, but if you do actually work for an entity with more than 10,000 employees; you probably work in a silo.

    Back to the topic, you shouldn’t run as root. Just ask your CSO.

  • etbe

    zomglol: I’ve worked for enough large companies that are quite unlike what you describe that I can’t believe your claims about how generic large companies supposedly operate. Maybe you have found one company that does things well, that’s nice for you but as you don’t seem inclined to share the name of the company this doesn’t help anyone else.

    Load balancers etc cost more money, the fact that they can save money by reducing down-time is something that not everyone is capable of working out. In fact in some places I’ve worked they could have used the same hardware (which was grossly over-specced) to run virtual machines with load balancing and got a better result than from dedicated servers. Of course doing so would have required hiring more skilled staff and that’s always going to be difficult for a big corporation.

    As an aside, I’m rather skeptical about the use of clusters. In every single real deployment I’ve seen the different nodes of a cluster have not had the same patches installed. Having a cluster fail-over and then immediately fall-over because of different and slightly incompatible versions of software seems to be the most common case. My observation is that clusters are just too hard for the average sysadmin.

    Two tiers of testing? That’s another good thing for the “things that sysadmins should do in theory but almost never do in practice” article.

    You have a great “blame the victim” mentality. Senior management can talk to anyone at any time. People who do technical work have every level of management wanting to be kept in the loop and to control all communication. Talk to a C*O and expect that your manager will want an immediate meeting with you. When senior managers and technical people don’t talk it’s a senior management problem, they are the ones who can fix things with a word.

    I guess that technical people could make anonymous tip-offs to company directors about things that go wrong. I’ve never tried that and always wondered if it would do any good.

    As for asking the CSO about root logins vs sudo, the last really large corporation I worked for had a policy of not allowing tcpdump or ethereal to be installed because they could allow sysadmins to sniff secret corporate data (of course a tcpdump of SSL encrypted data would be the hard way of doing that – but let’s not worry about logic). Fortunately there was wireshark which was sanctioned for installation (in email) by all local managers. I know that some people in the security group knew that ethereal had been renamed to wireshark and I believe that they intentionally blocked attempts to update their policy to facilitate the work of the sysadmin team.

    Of course tcpdump over a ssh session has less potential for problems than running an X based tool over the network. It seems that not everyone can figure out how to tunnel X over ssh… :(

  • zomglol

    I have worked for many small, medium, and large corporations and the majority have operated the way that I describe. Load balancers cost money, yes but that cost should be built into the cost of placing the application into service. Spread across multiple app deployments it is an insignificant expense.

    Don’t blame the clustering technology for poor deployment and O&M, that’s a people problem, not a technology problem.

    I struggled at first to believe that you think clusters are difficult for the average sysadmin, but then I am posting this in an article talking about it being OK to log in as root. Clusters are not difficult, it’s another people problem if they are difficult to set up and manage.

    Many companies implement a testing process, it’s a requirement of any decent change management system.

    I’m not blaming any victims, you aren’t a victim, you are a systems administrator. If your company sucks, find another one.

    I’d love to know the companies that you have been working for, my management has no issue with engineers talking to C levels as they are confident enough in us that they don’t have to babysit us, and our interaction lightens the burden on them.

    You said “last really large”, are you implying that you no longer work for a large corporate? How long has it been now?

    You do know that ethereal and wireshark are the same tool right? In addition they both use libpcap which is also used by tcpdump. Sounds like your management doesn’t trust you, that’s a management problem but not a problem with all corporations. ;)

  • etbe

    zomglol: Your argument here is against the way the IT industry works. If you think that the typical sysadmins can manage a cluster properly or that the typical senior managers will believe that a cluster offers useful benefits then you really lack sufficient IT experience.

    Of course I know that wireshark and ethereal are the same thing. That anecdote wouldn’t properly illustrate the stupidity of management if they were different products.

    My patience with anonymous trolls is running low. If you want to actually make a point about why direct root logins shouldn’t be used, what you consider to be the ideal way of doing things, and what benefits it offers then I’ll give you one last chance.

  • zomglol

    Seriously? I lack IT experience? That’s a laugh.

    Lets say that I as your co-worker “borrow” your badge to enter the building, sit at your desk resetting your password with bartpe, log in as root on 50 servers overnight and rm -rf /, and leave a DODWipe disk in your PC and your badge on your desk as I exit.

    If you are still OK with allowing remote SSH then there aren’t words to explain why you shouldn’t be allowed access to the account.

    That’s a TRIVIAL physical attack method using the root account, and YOUR access. All access points back to you, and I didn’t even need your password.

    One more chance? I didn’t need one more chance, I think you are the one that needs one more chance. ;)

  • etbe

    zomglol: How does requiring su/sudo access stop a colleague from using your computer to login to servers?

    If they can login to your account as non-root then they can setup a shell function to replace sudo as I described in my post and then get your password for later use. This is nothing new and nothing that hasn’t been done before. For your attack scenario it would be quite easy to write a .login script that replaces a sudo command with a shell function that runs rm -rf. If the .login script was installed at a time when the user of that account was expected to run sudo before the next backup run there would be nothing useful in the records.

    I think you don’t understand how the process of logging in to a Unix system works.

    I’ve deleted two of your other comments, as I said before my patience with anonymous trolls is running out.

  • Glen Turner

    You need to be able to log in as root for one common sysadmin issue — a completely full disk with a process still writing. That 5% reserve on the disk allows root to log in without failures.

    Sure, partitioning usually avoids that problem. But part of the systems administration of large ISP networks is dealing with the worst case without expensive remote hands or truck rolls. Once you have enough machines deployed, the worst case happens every week.

    Having said that, we do restrict root login to the serial console and use independent long random passwords.

    For “normal” root access we use sudo. Our network monitoring flags machines where sudo has run in the past 20 minutes (for the geeks, we run a daemon which grabs the TASKSTATS process accounting feed, looks for sudo, and uses Net-SNMP to send a SNMP trap containing the command line). This is really useful for the Network Operations Center, since if the machine subsequently goes offline, first suspicions fall upon the sysadmin team rather than reachability or hardware. A correctly-directed fault ticket saves a lot of money.
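
    A TASKSTATS-based daemon is more involved, but a rough approximation of the idea (entirely my own sketch – it assumes GNU date and an auth log with ISO-8601 timestamps, and prints matches instead of sending an SNMP trap):

```shell
# Rough approximation of the monitoring idea above: print auth-log
# lines written by sudo in the last 20 minutes.  A real deployment
# would turn any output into an SNMP trap (e.g. with Net-SNMP's
# snmptrap) for the Network Operations Center.
recent_sudo() {
    log="${1:-/var/log/auth.log}"
    cutoff=$(date -d '20 minutes ago' '+%FT%T')
    # ISO-8601 timestamps compare correctly as strings
    awk -v cutoff="$cutoff" '$0 ~ / sudo/ && $1 >= cutoff' "$log"
}
```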

    Root access (via sudo or otherwise) in a large network should be an extraordinary thing. Software should be maintained by yum. Configuration should be maintained by puppet or cfengine. Having to manually touch a production machine indicates an issue, having to sudo on an individual machine indicates a serious issue.

    Getting good configuration control was one of the best things we ever did, and it enables a lot of other things. The ability of Linux to have excellent configuration control is one of its major advantages over Windows. Our Windows team is three times the size of the Linux team, and runs one hundredth as many machines.

    For example, we can give differing teams differing access to the Subversion repository used to feed puppet. This then allows non-sysadmin teams to manage their own configuration files (even from Windows, with the right SVN attribute) without endangering other uses of the server or the integrity of the server itself. That in turn means better utilisation without the hassle of virtualisation, since we can have “one machine per POP” rather than “one machine per application per POP”. It also means that better and faster operational decisions are made, since changes don’t need to be coordinated with sysadmin availability, but can happen when they should.

    In summary, root access to a machine is a poor habit. But not because you should use sudo instead. But because touching an individual machine at all — rather than as a member of a configuration control class — is a poor habit.

  • etbe

    Glen: Good points. Discouraging people from working around the management systems is generally a good idea.

    However it seems that the vast majority of systems that don’t allow direct root access are Ubuntu desktop systems which are not managed by Puppet or similar programs, and after that the next significant group includes stand-alone servers such as the one Martin Meredith wrote about.

  • etbe

    I’ve investigated some of the ways of logging shell commands entered by root and documented them at the above URL.

  • etbe

    I’ve investigated ways of using sudo for Defense in Depth and documented them at the above URL.