More About Living in Hotels

In the past I have spent about 18 months living in hotels with a couple of months of breaks in between. I have previously written about it in terms of living in London hotels [1], but I have been asked for more generic advice.

Firstly, the number of possessions you can keep when living in hotels is seriously limited. For ease of travel you want to restrict yourself to one suitcase for checked luggage and one for carry-on. Hotels often have short-term storage space for guests’ possessions, so keeping a second suitcase of items that are not worth stealing (clothes and books) may be an option. But consumer electronics devices other than a single laptop computer are not an option.

I read an interesting blog post on ZenHabits.net titled Minimalist Fun: The 100 Things Challenge [2] which advocates counting and limiting the number of possessions you own. When living in hotels, if I counted my books as one collection and my clothes as another (having never been interested in trendy clothes, I regard them as utilitarian items for work or leisure, not objects that I seek to own), and treated my mobile phone and my strictly work-related computer gear as tools, then my only possessions were a digital camera and some bottles of liquor! The inability to accumulate possessions may be considered an advantage or a disadvantage depending on what your aims are.

If you are moving to another country for work there are three ways of doing it. The easiest is to be a permanent employee of a company that assigns you to work there – in which case they will probably pay to transport your stuff when you buy or rent a house. If you are looking for new employment (either contract or permanent) in another country then you can either find the work before moving or after arriving there. Finding work before arriving in the country is difficult and generally only works for short-term contracts. So it’s most likely that you will be looking for work either immediately after arriving or after a short contract. In either case better mobility increases your employment options – why restrict yourself to one city or region when you can choose from all the jobs in an entire country or (in the case of the EU) half a continent! The career benefits of being able to accept any job anywhere in the world at short notice are significant!

There are situations where an employer will pay hotel bills. One example was when I was working for a London based company and they assigned us to work at the other side of London. My colleagues complained and the company paid for hotel bills for everyone Sunday night to Thursday night inclusive as well as an extra hour of pay per day as compensation for the inconvenience. For me of course one hotel was as good as another so it just meant that my employer was covering 5/7 of my living expenses. Then I had a meeting with the hotel manager and pointed out that having me check out every Friday would be bad for them as the hotel was mostly empty on the weekend and suggested that they make me a deal for the other two days – I ended up paying something like one night of hotel fees per week! If I had rented an apartment I would have still been paying the full rent (which while less than 30 days hotel fees per month would have been considerably more than 4 or 5 days of hotel fees per month).

If you live in a hotel then there is always some sort of deal that can be arranged. Apart from certain busy times (such as the Christmas and New Year period) hotels always want long-term guests and will be willing to reduce the price, give free dinner or drinks from the bar, etc.

The cost of living in a hotel at times such as Christmas may be as much as five times the regular rate. That is a further incentive to visit friends or relatives at Christmas. If you can’t visit your family (which may be difficult if they live on the other side of the world) then finding a friend who has a spare room might be an option.

Restorecon Equivalent for Unix Permissions

SE Linux has a utility named restorecon to set (or reset) the security context of files. This is useful in many situations: corrupted filesystems, users removing files or changing contexts in inappropriate ways, and files re-created from tar archives or backup programs that don’t restore SE Linux contexts. It can also be used to report the files whose contexts differ from what restorecon would set, as a way of verifying the contexts of files.
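The reporting mode mentioned above is restorecon’s -n option; combined with -v it lists what would change without touching anything (the path here is just an example):

```shell
# List files under /etc whose context differs from what the policy
# specifies, without changing anything: -R recurses, -n prevents any
# relabelling, -v prints each file that would be relabelled.
restorecon -R -n -v /etc
```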

Restorecon determines the context from two sources of data: one is the policy that came with the system (including any policy modules loaded from other sources) and the other is the local file contexts that were created by semanage.

It’s a pity that there doesn’t seem to be an equivalent program for Unix permissions. rpm has a -V option to verify the files from a package, but dpkg doesn’t seem to have an option to perform a similar operation (/var/lib/dpkg/info/* doesn’t seem to contain the necessary data). But even on an RPM based system this isn’t possible, because there is no way to add local files to the list.

I would like to be able to specify that an RPM system should have root:root as the owner and permission mode 0755 for all files matching /usr/local/bin/* and use a single command to check the RPM database as well as this extra data for the permissions of all files.
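As a rough sketch of what that missing tool could look like, a shell function can compare owner and mode against a local policy entry. This is not an existing rpm or dpkg feature; the function name is made up, the root:root/0755 values are just the example above, and GNU stat is assumed:

```shell
# Report files whose owner or permission mode differ from a local policy
# entry. A sketch of the desired feature, not an existing tool.
check_perms() {
    dir=$1 want_owner=$2 want_mode=$3
    find "$dir" -type f 2>/dev/null | while read -r f; do
        # %U:%G is owner:group by name, %a is the octal mode (GNU stat).
        actual=$(stat -c '%U:%G %a' "$f")
        [ "$actual" = "$want_owner $want_mode" ] || echo "MISMATCH $f ($actual)"
    done
}

# Example: check that everything in /usr/local/bin is root:root mode 0755.
check_perms /usr/local/bin root:root 755
```

A real tool would also need to merge this local data with the package database so that one command checks everything.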

Does anyone know of any work in this area?

I’m going to file Debian and Fedora bug reports about this, but I would appreciate any comments first.

Update:

Here is an example of how this feature works in rpm:
# rpm -Vv nash
........    /sbin/nash
........  d /usr/share/man/man8/nash.8.gz
# chmod 700 /sbin/nash
# rpm -Vv nash
.M......    /sbin/nash
........  d /usr/share/man/man8/nash.8.gz

The “M” character indicates that the permission mode of the file does not match the RPM database. There is no way to automatically correct it (AFAIK), but at least we know that something changed. With Debian, AFAIK it’s only possible to verify file checksums, not the permissions.

Xen for Training

I’m setting up a training environment based on Xen. The configuration will probably be of use to some people so I’m including it below the fold. Please let me know if you have any ideas for improvements.

The documentation for the user interface is as follows:

  • sudo -u root xen-manage create centos|debian [permissive]
    Create an image, the parameter debian or centos specifies which
    distribution you want to use and the optional parameter permissive
    specifies that you want to use Permissive mode (no SE Linux access controls
    enforced).
    Note that creating an image will leave you at its console. Press ^]
    to escape from the console.
  • sudo -u root xen-manage list
    Display the Xen information for your DomU. Note that it doesn’t tell you
    whether you are using Debian or CentOS, you have to access the console to do that.
  • sudo -u root xen-manage console
    Access the console.
  • sudo -u root xen-manage destroy
    Destroy your Xen image – if it’s crashed and you want to restart it.
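The wrapper script itself isn’t shown above, so here is a hypothetical sketch of how such a dispatcher might be structured. The xm invocations and per-user config naming are my guesses, not the actual xen-manage code, and the commands are echoed rather than executed so the control flow can be seen:

```shell
#!/bin/sh
# Hypothetical sketch of a xen-manage style dispatcher: each user gets a
# DomU named after their account. The xm commands are echoed, not run.
user=${SUDO_USER:-$(id -un)}
xen_manage() {
    case "$1" in
        create)  echo "xm create -c /etc/xen/$user-$2.cfg" ;;
        list)    echo "xm list $user" ;;
        console) echo "xm console $user" ;;
        destroy) echo "xm destroy $user" ;;
        *)       echo "usage: xen-manage create centos|debian [permissive]|list|console|destroy" >&2
                 return 1 ;;
    esac
}

xen_manage list
```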


Squid and SE Linux

Is Squid not returning some data you need on a SE Linux system?

The default configuration of the SE Linux policy for Squid only allows it to connect to a small number of ports which are used for web servers. For example ports http (80) and https (443) are labelled as http_port_t, which permits servers such as Apache to bind to them and Squid to connect to them. But sometimes services run on non-standard ports, and periodically new services are devised which use the HTTP protocol, so you end up with Squid and Apache needing to use new ports.

semanage port -a -t http_port_t -p tcp 11371

One example of such a port is hkp (11371) – the latest protocol for sending and receiving GPG/OpenPGP keys. Running the above command relabels TCP port 11371 as http_port_t and thus allows everything to work.

setsebool -P squid_connect_any 1
An alternate option would be to run the above command to allow Squid to connect to any port.

I will suggest that the upstream policy be changed to make the default labelling of TCP port 11371 be http_port_t, but the same operations can be used for other ports.
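To check which ports currently carry the label (before or after running the semanage command above), the port listing can be filtered; this requires root on an SE Linux system:

```shell
# Show the ports labelled http_port_t; after the semanage command above,
# tcp port 11371 should appear in this list.
semanage port -l | grep '^http_port_t'
```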

Some people may claim that this makes things difficult for sys-admins. But the fact is that a well known port is a significant resource that you don’t want to permit any random user to access. Not only do the SE Linux port access controls prevent malice, but they also prevent system programs from accidentally using the wrong ports. A common example of accidental mis-use is the port 631 used for the IPP (Internet Printing Protocol – CUPS). When system programs need to use TCP source ports below 1024 they start at 1023 and work their way down, having such programs get down to 631 is not uncommon (there are some error conditions which result in ports being reserved for some minutes after use). In terms of malicious operations, it seems that the ports used by database servers such as MySQL and PostgreSQL would ideally be inaccessible to a Squid proxy, and services such as network backup should be inaccessible to everything other than the backup software.

Increasing Efficiency through Less Work

I have just read an interesting article titled Why Crunch Mode Doesn’t Work [1] which documents the research on efficiency vs amount of time spent working (and by inference amount of time spent on leisure activities and sleep). It shows that a 40 hour working week was chosen by people who run factories (such as Henry Ford) not due to being nice for the workers but due to the costs of inefficient work practices and errors that damage products and equipment.

Now these results can only be an indication of what works best by today’s standards. The military research is good, but only military organisations get to control workers to that degree (few organisations try to control how much sleep their workers get, or are even legally permitted to do so). Companies can only give their employees appropriate amounts of spare time to get enough sleep and hope for the best.

Much of the research dates from 80+ years ago. I suspect that modern living conditions where every house has electric lights and entertainment devices such as a TV to encourage staying awake longer during the night will change things, as would ubiquitous personal transport by car. It could be that for modern factory workers the optimum amount of work is not 40 hours a week, it could be as little as 30 or as much as 50 (at a guess).

Also the type of work being done certainly changes things. The article notes that mental tasks are affected more than physical tasks by lack of sleep (in terms of the consequences of being over-tired), but no mention is made about whether the optimum working hours change. If the optimum amount of work in a factory is 40 hours per week might the optimum for a highly intellectual task such as computer programming be less, perhaps 35 or 30?

The next factor is the issue of team-work. In an assembly-line it’s impossible to have one person finish work early while the rest keep working, so the limit will be based on the worker who can handle the least hours. Determining which individuals will work more slowly when they work longer hours is possible (but it would be illegal to refuse to hire such people in many jurisdictions) and determining which individuals might be more likely to cause industrial accidents may be impossible. So it seems to me that the potential for each employee to work their optimal hours is much greater in the computer industry than in most sectors. I have heard a single anecdote of an employee who determined that their best efficiency came from 5 hours work a day and arranged with their manager to work 25 hours a week, apart from that I have not heard any reports of anyone trying to tailor the working hours to the worker.

Some obvious differences in capacity for working long hours without losing productivity seem related to age and general health, obligations outside work (EG looking after children or sick relatives), and enjoyment of work (the greater the amount of work time that can be regarded as “fun” the less requirement there would be for recreation time outside work). It seems likely to me that parts of the computer industry that are closely related to free software development could have longer hours worked due to the overlap between recreation and paid work.

If the amount of time spent working was to vary according to the capacity of each worker then the company structures for management and pay would need to change. Probably the first step towards this would be to try to pay employees according to the amount of work that they do, one problem with this is the fact that managers are traditionally considered to be superior to workers and therefore inherently worthy of more pay. As long as the pay of engineers is restricted to less than the pay of middle-managers the range between the lowest and highest salaries among programmers is going to be a factor of at most five or six, while the productivity difference between the least and most skilled programmers will be a factor of 20 for some boring work and more than 10,000 for more challenging work (assuming that the junior programmer can even understand the task). I don’t expect that a skillful programmer will get a salary of $10,000,000 any time soon (even though it would be a bargain compared to the number of junior programmers needed to do the same work), but a salary in excess of $250,000 would be reasonable.

If pay was based on the quality and quantity of work done (which as the article mentions is difficult to assess) then workers would have an incentive to do what is necessary to improve their work – and with some guidance from HR could adjust their working hours accordingly.

Another factor that needs to be considered is that ideally the number of working hours would vary according to the life situation of the worker. Having a child probably decreases the work capacity for the next 8 years or so.

These are just some ideas, please read the article for the background research. I’m going to bed now. ;)

Load Average

Other Unix systems apparently calculate the load average differently to Linux. According to the Wikipedia page about Load (computing) [1], most Unix systems calculate it based on the average number of processes that are using a CPU or available for scheduling on a CPU, while Linux also includes the count of processes that are blocked on disk IO (uninterruptible sleep).

There are three load average numbers, the first is for the past minute, the second is for the past 5 minutes, and the third is for the past 15 minutes. In most cases you will only be interested in the first number.
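On Linux these three numbers come straight from /proc/loadavg (the remaining fields of that file are the runnable/total scheduling entities and the last PID used):

```shell
# Print the 1, 5 and 15 minute load averages from /proc/loadavg. This is
# Linux specific; the same numbers appear in uptime and top output.
awk '{printf "1min=%s 5min=%s 15min=%s\n", $1, $2, $3}' /proc/loadavg
```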

What is a good load average depends on the hardware. For a system with a single CPU core a load average of 1 or greater from CPU use will indicate that some processes may perform badly due to lack of CPU time – although a long-running background process with a high “nice” value can increase the load average without interfering with system performance in most cases. As a general rule if you want snappy performance then the load average component from CPU use should be less than the number of CPU cores (not hyper-threads). For example a system with two dual-core CPUs can be expected to perform really well with a load average of 3.5 from CPU use but might perform badly with a load average of 5.

The component of the load average that is due to disk IO is much more difficult to interpret in a sensible manner. A common situation is to have the load average increased by an NFS server with a network problem. A user accesses a file on the NFS server and gets no response (thus contributing 1 to the load average), they then open another session and use “ls” to inspect the state of the file – ls is blocked and takes the load average to 2. A single user may launch 5 or more processes before they realise that they are not going to succeed. If there are 20 active users on a multi-user system then a load average of 100 from a single NFS server that has a network problem is not uncommon. While this is happening the system can perform very well for all tasks that don’t involve the NFS server, as the processes that are blocked on disk IO can be paged out and don’t use any RAM or CPU time.

For regular disk IO you can have load average incremented by 1 for each non-RAID disk without any significant performance problems. For example if you have two users who each have a separate disk for their home directory (not uncommon with certain systems where performance is required and cooperation between users is low) then each could have a single process performing disk IO at maximum speed with no performance problems for the entire system. A system which has four CPU cores and two hard drives used for separate tasks could have a load average slightly below 6 and the performance for all operations would be quite good if there were four processes performing CPU intensive tasks and two processes doing disk intensive tasks on different disks. The same system with six CPU intensive programs would under-perform (each process would on average get 2/3 of a CPU), and if it had six disk intensive tasks that all use the same disk then performance would be terrible (especially if one of the six was an interactive task).

The fact that a single load average number can either mean that the system is busy but performing well, under a bit of load, or totally overloaded means that the load average number is of limited utility in diagnosing performance problems. It is useful as a quick measure, if your server usually has a load average of 0.5 and it suddenly gets a load average of 10 then you know that something is wrong. Then the typical procedure for diagnosing it starts with either running “ps aux|grep D” (to get a list of D state processes – processes that are blocked on disk IO) or running top to see the percentages of CPU time idle and in IO-wait states.
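Note that “ps aux|grep D” will also match a capital D anywhere in the line (user names, command names); matching only the state field is slightly more reliable:

```shell
# List processes in uninterruptible sleep (state D) by matching the
# state field only, rather than grepping for D anywhere in the line.
ps -eo state=,pid=,comm= | awk '$1 == "D"'
```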

Cpu(s): 15.0%us, 35.1%sy,  0.0%ni, 49.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 7331 rjc      25  0  2868  640  312 R  100  0.0  0:21.57 gzip

Above is a section of the output of top showing a system running gzip -9 < /dev/urandom > /dev/null. Gzip is using one CPU core (100% CPU means 100% of one core – a multi-threaded program can use more than one core and therefore more than 100% CPU) and the overall system statistics indicate 49.9% idle (the other core is almost entirely idle).

Cpu(s):  1.3%us,  3.2%sy,  0.0%ni, 50.7%id, 44.4%wa,  0.0%hi,  0.3%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 7425 rjc      17  0  4036  872  588 R    4  0.1  0:00.20 find

Above is a section of the output of top showing the same system running find /. The system is registering 44% IO wait and 50.7% idle. The IO wait is the percentage of time that a CPU core spends waiting on IO, so 44% of the total system CPU time (or 88% of one CPU core) is idle while the system waits for disk IO to complete. A common mistake is to think that if the IO was faster then more CPU time would be used. In this case, with the find program using 4% of one CPU core, if all the IO was instantaneous (EG in cache) then the command would complete 25 times faster with 100% CPU use. But if the disk IO performance was merely doubled (a realistic possibility given that the system has a pair of cheap SATA disks in a RAID-1) then find would probably use 8% of one CPU core.

Really the only use for load average is for getting an instant feel for whether there are any performance problems related to CPU use or disk IO. If you know what the normal number is then a significant change will stand out.

Dr. Neil Gunther has written some interesting documentation on the topic [2], which goes into more technical detail including kernel algorithms used for calculating the load average. My aim in this post is to educate Unix operators as to the basics of the load average.

His book The Practical Performance Analyst gives some useful insights into the field. One thing I learned from his book is the basics of queueing theory. One important aspect of this is that as the rate at which work arrives approaches the rate at which work can be done the queue length starts to increase dramatically, and if work keeps arriving at the same rate once the system can’t perform the work fast enough then the queue will grow without end. This means that as the load average approaches the theoretical maximum, the probability of the system dramatically increasing its load average increases. A machine that’s bottlenecked on disk IO for a task with a huge number of independent clients (such as a large web server) may have its load average jump from 3 to 100 in a matter of one minute. Of course this won’t mean that you actually need to be able to serve 30 times the normal load, merely slightly more than the normal load to keep the queues short. I recommend reading the book, he explains it much better than I do.
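As a crude illustration of that queueing behaviour: in the simple M/M/1 model (a textbook simplification, not Gunther’s full treatment) the mean queue length is rho/(1-rho) where rho is utilisation, so the queue stays short until utilisation gets close to 1 and then blows up:

```shell
# Mean M/M/1 queue length for increasing utilisation rho. The queue is
# short at 50% utilisation but grows without bound as rho approaches 1.
for rho in 0.50 0.90 0.99; do
    awk -v r="$rho" 'BEGIN { printf "utilisation %s -> mean queue length %.1f\n", r, r/(1-r) }'
done
```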

Update: Jon Oxer really liked this post.

Xen and Security

I have previously posted about the difference between using a chroot and using SE Linux [1].

Theo de Raadt claims that virtualisation does not provide security benefits [2] based on the idea that the Xen hypervisor may have security related bugs.

From my understanding of Xen a successful exploit of a Xen system with a Dom0 that is strictly used for running the DomU’s would usually start by gaining local root on one of the DomU instances. From there it is possible to launch an attack on the Xen Dom0. One example of this is the recent Xen exploit (CVE-2007-4993) [3] where hostile data in a grub.conf in a DomU could be used to execute privileged commands in the Dom0. Another possibility would be to gain root access to a DomU and then exploit a bug in the Xen API to take over the hypervisor (I am not aware of an example of this being implemented). A final possibility is available when using QEMU code to provide virtual hardware where an attacker could exploit QEMU bugs, an example of this is CVE-2007-0998 where a local user in a guest VM could read arbitrary files in the host [4] – it’s not clear from the advisory what level of access is required to exploit it (DomU-user, DomU-root, or remote VNC access). VNC is different from other virtual hardware in that the sys-admin of the virtual machine (who might be untrusted) needs to access it. Virtual block devices etc are only accessed by the DomU and Xen manages the back-end.

The best reference in regard to these issues seems to be Tavis Ormandy’s paper about hostile virtualised environments [5]. Tavis found some vulnerabilities in the QEMU hardware emulation, and as QEMU code is used for a fully virtualised Xen installation it seems likely that Xen has some vulnerabilities in this regard. I think that it is generally recommended that for best security you don’t run fully virtualised systems.

The remote-console type management tools are another potential avenue of attack for virtualised servers in the case where multiple users run virtual machines on the same host (hardware). I don’t think that this is an inherent weakness of virtualisation systems. When security is most important you have one sys-admin running all virtual machines – which incidentally seems to be the case for most implementations of Xen at the moment (although for management not security reasons). In ISP hosting type environments I doubt that a remote console system based on managing Xen DomU’s is going to be inherently any less secure than a typical remote console system for managing multiple discrete computers or blades.

I have just scanned the Xen hypervisor source, the file include/asm-x86/hypercall.h has 18 entries for AMD64 and 17 for i386 while include/xen/hypercall.h has 18 entries. So it seems that there are 35 or 36 entry points to call the hypervisor, compared to 296 system calls on the i386 version of Linux (which includes the sys_socketcall system call which expands to many system calls). This seems to be one clear indication that the Linux kernel is inherently more complex (and therefore likely to have a higher incidence of security flaws) than the Xen hypervisor.

Theo’s main claim seems to be that Xen is written by people who aren’t OpenBSD developers and who therefore aren’t able to write secure code. While I don’t agree with his strong position I have to note the fact that OpenBSD seems to have a better security history than any other multi-user kernel for which data is available. But consider a system running Xen with Linux in Dom0 and multiple para-virtualised OpenBSD DomU’s. If the Linux Dom0 has OpenSSH as the only service being run then the risk of compromise would be from OpenSSH, IP based bugs in the Linux kernel (either through the IP address used for SSH connections or for routing/bridging to the OpenBSD instances), and from someone who has cracked root on one of the OpenBSD instances and is attacking the hypervisor directly.

Given that OpenSSH comes from the OpenBSD project, it seems that the above scenario would only add the additional risk of an IP based Linux kernel attack. Meanwhile a root compromise of an OpenBSD instance (consider that a typical OpenBSD system will run a lot of software that doesn’t come from the OpenBSD project – much of which won’t have a great security history) would only lose that instance, unless the attacker can also exploit the hypervisor (which would be a much more difficult task than merely cracking some random daemon running as root that the sys-admin is forced to install). Is the benefit of having only one instance of OpenBSD cracked due to a bad daemon enough to outweigh the risk of a Linux IP stack?

I’m sure that the OpenBSD people would consider that a better option would be OpenBSD in the Dom0 and in the DomU. In which case the risk of damage from a root compromise due to one badly written daemon that didn’t come from OpenBSD is limited to a single DomU unless the attacker also compromises the hypervisor. When working as a sys-admin I have been forced by management to install some daemons as root which were great risks to the security of the system, if I had the ability to install them in separate DomU’s I would have been able to significantly improve the security of the system.

Another flaw in Theo’s position is that he seems to consider running a virtual machine as the replacement of multiple machines – which would be an obvious decrease in security. However in many cases no more or less hardware is purchased, it is just used differently. If instead of a single server running several complex applications you have a Xen server running multiple DomU’s which each have a single application then things become much simpler and more secure. Upgrades can be performed on one DomU at a time, which decreases the scope of failure (and often means that you only need one business unit to sign off on the upgrade), and upgrades can be performed on an LVM snapshot (and rolled back with ease if they don’t succeed). A major problem with computer security is when managers fear problems caused by upgrades and prohibit their staff from applying security fixes. This, combined with the fact that on a multiple DomU installation one application can be compromised without immediate loss of the others (which run in different DomU’s and require further effort by the attacker for a Xen compromise), provides a significant security benefit.

It would be nice for security if every application could run on separate hardware, but even with blades this is not economically viable – not even for the biggest companies.

I have converted several installations from a single overloaded and badly managed server to a Xen installation with multiple DomU’s. In all cases the DomU’s were easier to upgrade (and were upgraded more often) and the different applications and users were more isolated.

Finally there is the possibility of using virtualisation to monitor the integrity of the system, Bill Broadley’s presentation from the 2007 IT Security Symposium [6] provides some interesting ideas about what can be done. It seems that having a single OpenBSD DomU running under a hypervisor (maybe Xen audited by the OpenBSD people) with an OpenBSD Dom0 would offer some significant benefits over a single OpenBSD instance.

Introverts

I am amazed that I had never read the article Caring for Your Introvert [1] before. One of the interesting points concerned acting like an extrovert (I can do it for the duration of a typical job interview). Another was the issue of recovery time after having to deal with people. When living in hotels (which I did for about 18 months straight in 1999 and 2000) I found that some days I would reach my quota for dealing with people before I had dinner, going to bed hungry seemed like a better option than going to a restaurant.

One thing that occurred to me is the lack of apparent introversion among most delegates at computer conferences. It seems that the majority of people who are any good at coding are introverts and you might expect an environment with a majority of introverts to be somewhat quiet. An interview with the author of the article [2] published 3 years later explains this (among other things). Here is a quote:
But once an introvert gets on a subject that they know about or care about or that intrigues them intellectually, the opposite often takes hold. They get passionately engaged and turned on by the conversation. But it’s not socializing that’s going on there. It’s learning or teaching or analyzing, which involves, I’m convinced, a whole different part of the brain from the socializing part.

Which describes a lot of the activity at conferences. It’s standard practice for people to walk up and join a conversation that covers an area of technology that interests them and then just walk away when the topic changes.

I wonder if any of the social networking and dating sites have a section for Myers-Briggs [3] test results.

Via Tim Connors blog [4].

Cheap Laptops for Children

I was recently browsing an electronics store and noticed some laptops designed for children advertised at $50AU. These machines were vastly different from what most of us think of when the term laptop is used, they had tiny screens, flimsy keyboards, no IO devices, and a small set of proprietary programs. It was more of a toy that pretends to be a laptop than a real laptop (although I’m sure that it had more compute power than a desktop machine from 1998).

After seeing that I started wondering what we can do to provide cheap serious laptops for children running free software. The One Laptop Per Child (OLPC) [1] program aims at producing laptops for $100US to give to children in developing countries. It’s a great project, the hardware and software are innovative in every way and designed specifically for the needs of children. However they won’t have any serious production capacity for the near future, and even $100US is a little more expensive than desired.

Laptops have significant benefits for teaching children in that they can be used at any time and in any place – including long car journeys (inverters that can be used to power laptops from a car power socket are cheap).

A quick scan of a couple of auction sites suggests that laptops get cheap when they have less than 256M of RAM. A machine with 128M of RAM seems likely to cost just over $200 and a machine with less than 128M is likely to be really cheap if you can find someone selling it.

So I’m wondering: what can you do to set up a machine with 64M of RAM to run an educational environment for a child? KDE and GNOME are moderately user-friendly (nothing like the OLPC system, and even Windows 3.0 was easier in some ways) but too big to run on such a machine (particularly when GIMP is part of a computer education system). This should be a solvable problem: Windows 3.0 ran nicely in 4M of RAM, one of the lighter X window managers ran well in 8M of RAM for me in the Linux 0.99 days, and the OS/2 2.0 Workplace Shell (which in many ways beats current KDE and GNOME systems) ran nicely in 12M. I think that a GUI that vaguely resembles Windows 3.0 should run well on a machine with 64M of RAM – is there such a GUI?

I have briefly scanned the Debian-Edu [2] site but the only reference to hardware requirements is for running LTSP.

Dreamhost and the DMCA

Dreamhost have refused my request (under the DMCA) to be correctly identified as the author of content copied from my blog. I am publishing this so that anyone else who deals with them will know what to expect. Also if someone wishes to sue Dreamhost in regard to content that they host this may help demonstrate a pattern of behaviour.

The situation is quite obviously the result of a broken script used by a splogger that doesn’t correctly match author names with articles. The fact that the official Dreamhost policy is to disregard the requirement that the author(s) of copyright material be correctly identified is reprehensible. It also seems likely to open them to the risk of legal action. If you know how to contact a director of Dreamhost then please give them a link to this post and explain the risks to them.

For anyone who wants the detail the messages are below.