On a closed mailing list someone wrote:
2 X 120gb ide drives installed as slaves on each ide channels. … Presto. A 230’ish GB storage NAS for all my junk.
I’m not going to write a long technical response on a closed list so I’ll blog about it instead.
Firstly I wonder whether by “junk” the poster means stuff that is not important and which won’t be missed if it goes away.
If P is the probability of a drive surviving a given time period (as a number between 0 for certain death and 1 for an immortal drive) then the probability of avoiding serious data loss is P^2 for the configuration in question, as losing either drive loses data.
If P has a value of 0.5 over the period of 7 years (approximately what I’m seeing in production for IDE drives) then your probability of not losing data over that period is 0.25, i.e. there’s a 75% chance that at least one of the drives will die and data will be lost.
If the data in question really isn’t that important then this might be OK. About half the data on my file server consists of ISO images of Linux distributions and other things which aren’t of particularly great value as I can download them again at any time. Of course it would be a major PITA if a client had a problem with an old distribution and I had to wait for a 3G download to finish before fixing it; this factor alone makes it worth my effort to use RAID and backups even for such relatively unimportant data. 300G IDE and S-ATA disks aren’t that expensive nowadays. If buying a pair of bigger disks saves you one data loss incident and your time has any value greater than $10 per hour then you are probably going to win by buying disks for RAID-1.
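For reference, turning a pair of disks into a software RAID-1 array only takes a few commands. This is just a sketch with made-up device names (/dev/hdb1 and /dev/hdd1) and an ext3 filesystem, not a description of the poster’s machine:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdb1 /dev/hdd1
mkfs.ext3 /dev/md0
mkdir -p /mnt/junk
mount /dev/md0 /mnt/junk
cat /proc/mdstat   # watch the initial mirror resync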
As another approach, LVM apparently has built-in functionality equivalent to RAID-1. One thing I have idly considered is using ATA over Ethernet with LVM or GFS to build some old P3 machines into a storage solution.
P3 machines use 38W of power each (with one disk, maybe as much as 70W with 4 disks but I haven’t checked) and should have the potential to perform well if they each have 4 IDE disks installed. That way a large number of small disks could combine to give a decent capacity with data mirroring. Among other things having more spindles decreases seek times when under heavy load. If you do work that involves large numbers of seeks then this could deliver significant performance benefits. If I had more spare time I would do some research on this, it would probably make for a good paper at a Linux conference.
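A very rough sketch of what I have in mind, assuming the vblade AoE exporter on the P3 boxes, the aoe kernel module on the machine that assembles the storage, and a recent LVM2 with mirror support (all device names are made up for the example):
vblade 0 1 eth0 /dev/hdc   # on a P3 box: export a disk as AoE shelf 0, slot 1
                           # (use a different shelf number per box, or vbladed to background it)
modprobe aoe               # on the assembling machine: devices appear as /dev/etherd/eX.Y
pvcreate /dev/etherd/e0.1 /dev/etherd/e1.1 /dev/etherd/e2.1
vgcreate aoe_vg /dev/etherd/e0.1 /dev/etherd/e1.1 /dev/etherd/e2.1
lvcreate -m1 -L 100G -n data aoe_vg   # mirrored LV, the third PV holds the mirror log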
Yesterday Erich Schubert blogged about reducing his Debian SE Linux work due to lack of hardware. To solve such problems I’ve put a Debian/unstable machine on the net and given Erich the root password. I am also starting work on Debian SE Linux again. There should be some significant developments in Debian SE Linux in the near future.
If anyone else has a lack of hardware getting in the way of free software development, the first thing to do is to mention it on the IRC channel for the project in question. While Erich has demonstrated that blogging works, IRC is faster.
In regard to my post yesterday about Planet Debian I received the following response:
James Purser said I’m betting that your feed is an atom feed. We had the same problem on PLOA with Jeff and Pias feeds when they switched to atom. Planet needs to be upgraded.
Well I am using an atom feed, so this probably explains it. Sorry for the inconvenience to the Planet Debian readers, I guess that things will stay the way they are until it is upgraded.
Also when viewing my blog entry in Planet Debian I realised that much of a spam message had been pasted into the URL field for the Planet Debian link. Oh the irony that I only found this embarrassing error because of a bug in the Planet software.
This brings me to another issue: Security Enhanced X. With SE-X (before you ask, I didn’t invent the acronym) you can use SE Linux to control communication between windows on an X desktop. With a modification to the clipboard manager (klipper in the case of KDE), every piece of data that’s copied from an application will have a security context assigned to it, and this context will be checked against the context of the application that is the target of a paste operation. Klipper will also have to support relabeling clipboard data. Therefore if I want to cut text from my email client (KMail) and paste it into Firefox then I would have to relabel it with the appropriate MCS categories. This would permit me to paste text from an email into a web form with a few extra mouse clicks, but would prevent me from accidentally pasting the wrong text. Keeping in mind that there are many more embarrassing things that could be accidentally pasted into a blog entry than the contents of a spam, a few extra clicks doesn’t seem too high a price to pay.
PS Before anyone jumps to conclusions. When I receive GPG encrypted email or other material that should be kept confidential I try and avoid cutting it, and if I have to do so I clear the clipboard buffer afterwards. Keeping spam a secret is not really a priority to me so I didn’t take adequate precautions in this case.
I am aware of the problems in displaying my blog in Planet Debian. I have filed a bug report with blogger and informed mako. There’s nothing else I can do at the moment, if you use Planet Linux Australia then things work OK. I’m not sure whether Planet Debian or Blogger is at fault.
Sorry for the inconvenience, if you only use Planet Debian then you will have to read my blog directly.
This story on 365tomorrows.com on the topic of rootkits is interesting (note the OSs involved). Also it made me wonder about the other possibilities for a root-kitted robot, the mind boggles at how it might determine whether you need an enlargement to some body part…
365tomorrows is a good site, they post a short sci-fi story every day and it’s all free (paid for by merchandise and AdSense). When you read the stories make sure you check out the AdSense links; it’s sometimes rather amusing when Google comes up with an unusual interpretation of a sci-fi story and supplies adverts to match. I don’t think that AdSense was designed to work well with fiction.
There are many people claiming that nuclear power will solve all the ills of the world. However this does not seem to be possible. Firstly you have to consider the hidden costs of nuclear power such as deaths from the mining industry (ingesting uranium ore is a really bad thing) and the difficulty in disposing of radioactive waste. But rather than concentrating on the bad aspects of nuclear power (which are well documented) I will concentrate on some of the viable alternatives.
Wind power is a really good option, particularly for countries such as Australia that have a low population density and a large land area. The Chinese government is investing heavily in wind power; I think it’s safe to assume that it’s not because they are great environmentalists but because they simply need more energy than they can get from other sources, and because they have strategic reasons for not wanting to rely on Australian coal and uranium or Arabian oil. Most energy sources have some drawbacks, but wind power has no side effects and isn’t going to kill birds either (birds have evolved the ability to detect and avoid predatory birds, so they can easily avoid large fixed objects such as the fans at wind farms).
Two other good options are wave and tidal power. These are better than river based hydro-electricity because there is no need to create dams that remove forests. Wave and tidal power are also very predictable, which is an advantage over wind power. One solution to the unpredictability of wind power is to couple it with a river based hydro-electric system which can provide electricity when there is less wind. A hydro-electric system that only compensates for days that are less windy would need a much smaller dam than one that is designed to provide the main power source.
The next issue is how to power vehicles (in the air, on land, and at sea). Advocates of nuclear power often talk about hydrogen powered cars. However while hydrogen has a good ratio of energy to weight it is not very dense, so its energy per unit of volume is much less than that of petrol. Combining Prius technology with hydrogen in an internal combustion engine still won’t give the same distance per tank of fuel as petrol does. Hydrogen with fuel cells in an all electric vehicle might allow you to drive the same distance as a non-hybrid car on petrol, but probably won’t compare to the range of a hybrid Diesel vehicle.
Bio-Diesel is a good option for fuelling cars. Diesel engines give greater efficiency than Otto cycle (the most common car engine) or Atkinson cycle (as used in the Prius) engines. Not only is bio-Diesel renewable but it also produces exhaust that is less toxic than that produced from fossil fuels. See the VeggieVan site for more details on bio-Diesel. Toxic fumes from fossil fuels have been linked to health problems in airline hostesses; AFAIK there has been no research on the impact of car exhaust on pedestrians.
One thing to note about bio-Diesel is that you can do it right now. According to a British TV documentary all you have to do is filter oil that was used for frying food (they used oil from a Mexican restaurant) and mix it with a small amount of ethanol and it’s ready to use in your car. As restaurants currently have to pay to dispose of old frying oil this should be good for everyone!
Bio-Diesel could work for powering planes, there is already research in progress on this issue, but there are problems related to the viscosity of bio-Diesel at low temperatures. Maybe a blend of bio-Diesel and bio-Ethanol would work. Ethanol freezes at -114.3C and should lower the freeze temperature of bio-Diesel.
Bio-Diesel would of course work really well for ships, but supplying the amount of fuel that current ships need would be difficult. Some analysis shows that the deck area of a ship can collect enough sunlight to supply ~10% of the power needs of the ship. The Orcelle is a design for a totally clean ship that runs on solar, wind, and wave power. However with the proposed design the solar panels will not be angled effectively for collecting sunlight as they will be on sails. I think that there is a lot of potential in a design based around sails, wave and solar power for generating electricity, and also a Diesel engine running on bio-Diesel fuel for supplying extra power when required (e.g. when sailing at night in calm weather). Building a ship that uses only wind, solar, and wave power would probably be significantly more expensive than the current Diesel design. Building a ship that uses 10% Diesel and 90% wind, solar, and wave power might be a lot cheaper.
There are lots of ways of producing the energy we need to maintain our current standard of living. If our government was to spend as much money researching them as it does protecting petroleum reserves then the problem would be solved.
One advantage of not being a permanent employee is that I am free to do paid work for other people. This not only gives a greater income but also a wider scope of work.
I’ve just completed my first significant project since leaving Red Hat. The Inumbers project provides an email address for every mobile phone. If you know someone’s mobile phone number but don’t have an email address then you can send email to NNN@inumbers.com where NNN is the international format mobile phone number. The recipient will receive an SMS advising them how to sign up and collect the email.
It was fun work, I had to learn how to implement SRS (which I had been meaning to do for a few years), write scripts to interface with a bulk SMS service, and do a few other things that were new to me.
I’ve been working on a mail forwarding system which required me to implement SRS to allow people who use SPF to be customers of the service (as I use SPF on my domain it’s fairly important to me). Reading the web pages before actually trying to implement it, things seemed quite easy: all over the web you will see instructions to just set up an /etc/aliases file that pipes mail through the srs utility.
The problem is that none of the srs utility programs actually support piped mail. It seems that the early design idea was to support piped mail but no-one actually implemented it that way. So you can call the srs utility to discover what the munged (cryptographically secure hash signed) originator of the email should be but you have to do the actual email via something else.
This wasn’t so much of a problem for me as I use my own custom maildrop agent to forward the mail instead of using /etc/aliases (Postfix doesn’t support what I want to do with /etc/aliases – dynamically changing the email routing as you receive it isn’t something that Postfix handles internally).
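For anyone who hasn’t seen it, the rewriting itself is easy to illustrate. The following is only a sketch of the SRS0 address format as a toy shell function; a real implementation computes the hash as a keyed hash of the address and encodes a timestamp, which this does not do:
# toy illustration of SRS0 rewriting, not a real implementation
srs0_rewrite() {
    local localpart=$1 domain=$2 forwarder=$3
    # HHH would be a short cryptographic hash and TT an encoded timestamp
    echo "SRS0=HHH=TT=${domain}=${localpart}@${forwarder}"
}
srs0_rewrite russell coker.com.au inumbers.com
# gives: SRS0=HHH=TT=coker.com.au=russell@inumbers.com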
However I still have one problem. Sometimes I get two or three copies of the Received-SPF header from Postfix when it checks a message.
In my main.cf file I have a smtpd_recipient_restrictions configuration directive that contains check_policy_service unix:private/spfpolicy and the Postfix master.cf file has the following:
spfpolicy unix - n n - - spawn user=USER argv=/PATH/spf-policy.pl
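For anyone who wants to suggest a fix, the relevant parts of the configuration look roughly like the following (USER and the path are placeholders as above, and the other restrictions are just typical examples). postconf -n is a quick way to see the effective non-default settings and to check that check_policy_service isn’t listed in more than one restriction stage:
# main.cf (sketch)
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service unix:private/spfpolicy
# master.cf (sketch)
spfpolicy unix - n n - - spawn user=USER argv=/PATH/spf-policy.pl
# count how many restriction lists reference the policy service
postconf -n | grep -c check_policy_service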
Does anyone have any ideas why I would get multiple SPF checks and therefore multiple email header lines such as:
Received-SPF: none (smtp.sws.net.au: domain of SRS0=MUyCQ6=CO=coker.com.au=russell@inumbers.com does not designate permitted sender hosts)
Received-SPF: none (smtp.sws.net.au: domain of SRS0=MUyCQ6=CO=coker.com.au=russell@inumbers.com does not designate permitted sender hosts)
[some other headers]
Received-SPF: pass (inumbers: domain of russell@coker.com.au designates 61.95.69.6 as permitted sender)
Received-SPF: pass (inumbers: domain of russell@coker.com.au designates 61.95.69.6 as permitted sender)
Received-SPF: pass (inumbers: domain of russell@coker.com.au designates 61.95.69.6 as permitted sender)
The email went through one mail router and then hit the destination machine, but somehow got 5 SPF checks along the way. Also the pair of identical checks had no other headers between them, and the set of three identical checks also had no other headers between them, so multiple checks were performed without any forwarding. It seems that a single port 25 connection is giving two or three checks. Both machines run Postfix with SPF checking that is essentially identical (apart from being slightly different versions, Debian/unstable and RHEL4).
Any advice on how to fix this would be appreciated.
I’m currently working for a company that in the past has not embraced new technology. One of my colleagues recently installed a wiki which did a lot of good in terms of organizing the internal documentation.
The next step is to install some blogging software. What I want is to have every sys-admin run a blog of what they are doing and have an aggregation of all the team’s blogs for when anyone wants to see a complete list of what’s been done recently. The security does not have to be particularly high as it’s an internal service (probably everyone will use the same account). The ability to store draft posts would be really handy, but apart from that none of the advanced features are really needed.
Also it would be handy to be able to tag posts. For example if userA did some work on the mail server they would tag it with SMTP and then at some future time it would be possible to view all posts with the SMTP tag.
I’ve done a search on Google for this topic and there are many pages comparing blog software, but all the comparisons seem based on Internet use; they talk about which versions of RSS are supported etc. I don’t need much of that. An ancient version of RSS will do as long as there is a single syndication program that can support it. Performance doesn’t have to be great either: I’m looking at less than a dozen people posting and reading, on a fairly big Opteron server with a decent RAID array.
For the minimal requirements I could probably write blog and syndication programs as CGI-BIN scripts in a couple of days. They wouldn’t support RSS or XML but that’s no big deal. But I expect that if I use some existing software that someone recommends in a blog comment it will be faster to install and have some possibility of future upgrades.
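To give an idea of how little is needed, here is a minimal sketch of a read-only CGI script for such a blog, assuming posts are plain text files named YYYY-MM-DD-title.txt in a directory (the path is made up); posting could be an equally trivial script, or just editing files over ssh:
#!/bin/sh
# minimal internal blog viewer: lists posts newest-first as HTML
POSTDIR=/var/lib/teamblog
echo "Content-Type: text/html"
echo ""
echo "<html><head><title>Sys-admin team blog</title></head><body>"
for f in $(ls -r "$POSTDIR"/*.txt 2>/dev/null); do
  echo "<h2>$(basename "$f" .txt)</h2>"
  echo "<pre>"
  # escape HTML special characters so post text can't break the page
  sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g' "$f"
  echo "</pre>"
done
echo "</body></html>"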
To get the maximum value out of my writing, when I am asked a question of general interest in private mail I will (without in any way identifying the person or giving any specifics of their work) blog my reply. I hope that not only will this benefit the general readers, but the person who originally asked the question may also benefit from reading the blog comments.
The question is “I wonder whether I can define a domain which is a union of two existing domain, that is, define a new domain X, which has all the privilege domain Y and Z has got”.
There is no way to say in one line of policy “let foo_t do everything that bar_t and baz_t can do” (for reasons I will explain later). However you can easily define a domain to have the privileges that two other domains have.
If you have bar.te and baz.te then a start is:
grep -h '^allow' bar.te baz.te | sed -e 's/bar_t/foo_t/' -e 's/baz_t/foo_t/' >> foo.te
Then you need to just define foo_t in the file foo.te and define an entry-point type and a suitable domain_auto_trans() rule to enter the domain.
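As a rough sketch of what that looks like under the old example policy (this is not a complete domain definition, and the attribute names and the use of initrc_t as the calling domain are just illustrative assumptions):
type foo_t, domain;
type foo_exec_t, file_type, sysadmfile, exec_type;
# enter foo_t when initrc_t executes a program labelled foo_exec_t
domain_auto_trans(initrc_t, foo_exec_t, foo_t)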
There are other macros that allow operations that don’t fit easily into a grep command, but they aren’t difficult to manage.
The only tricky area is if you have the following:
domain_auto_trans(bar_t, shell_exec_t, whatever1_t)
domain_auto_trans(baz_t, shell_exec_t, whatever2_t)
As every domain_auto_trans() rule needs a single default target type, those two lines would conflict after merging, so you will need to decide which of the two transitions you want for the new domain. This is the reason why you can’t just merge two domains automatically. The same applies to file_type_auto_trans() rules, and to booleans in some situations.