Paul Russell writes about his 3-yearly laptop replacement at IBM [1]. Replacing laptops periodically probably makes sense for a large company, but if you are buying for personal use then it makes sense to try to get a longer life out of an expensive machine. I think that aiming for 6 years is quite reasonable with today’s hardware – you should be able to buy a new machine now and have it last 6 years, or buy a 3 year old second-hand machine and hope to have it last another 3 years (most second-hand laptops on sale anywhere other than Ebay were trophies for managers and never had any serious use).
If you are going to buy a second-hand laptop the first thing to consider is PAE support. If you get a laptop without PAE support (I think that means all Pentium-M CPUs) then you will not have proper Xen support (it seems that all distributions have abandoned non-PAE Xen support for the moment). This may not be a big deal if you don’t want Xen, but if you are a programmer then you probably do want Xen (even if you don’t realise it yet). The next issue is support for the AMD64 instruction set. 32-bit laptops are going cheap at the moment, but if you buy one you will be significantly limited in what software you can run at some future time (my 32-bit laptop is doing well at the moment apart from the lack of PAE support).
If you are buying a new laptop then the first thing to consider when planning a long life is the warranty. In my experience most computer gear does not need a long warranty; if it survives 3 months then it’ll probably last until it’s well obsolete. Laptops, however, periodically wear out if used seriously: I average one warranty replacement of a Thinkpad keyboard every two years and I have had a few motherboard replacements (the lighter Thinkpads flex and eventually break inside if you use them on enough trains, trams, buses, planes, etc). On one of the Lenovo T series Thinkpads that I saw advertised (one that I would consider if I wanted a new laptop now) there was an offer to spend an extra $350AU to extend the warranty from 1 year to 5 years (according to my understanding of the confusing text on the web site) on a laptop that cost $3050. An increase in the purchase price of about 12% for the extra warranty is a bargain (I know that for my use they would lose money on the deal). Repair of a laptop is generally very expensive; any serious damage to a laptop that is more than 18 months old will generally mean that the replacement cost is less than the repair cost.
The next thing to consider is the screen resolution. After purchasing a laptop you can upgrade the RAM and the hard drive, and the CPU power of all modern machines is great enough that for most typical use it’s difficult to imagine any need to upgrade. But screen resolution is something that can never be good enough and can never be improved after purchase. Lenovo is offering T series Thinkpads with 1920×1200 resolution for $4000AU and 1680×1050 resolution for $3050AU. That’s 31% more pixels for 31% more money and seems like a good deal to me. I believe that a larger display can significantly increase productivity [2] so it seems that the extra expense would be a good investment if you plan to earn money from work you do on your laptop. As a point of reference, a desktop monitor from Dell (which seems to be the cheapest supplier for such gear) with a resolution of 1920×1200 will cost at least $1000AU.
The hard drive capacity should not be an issue; it seems that 100G is about the minimum size now. The 60G drive in my current Thinkpad is adequate for my development work (including several Xen instances and ISO files for a couple of distributions), so unless you plan to collect MPEG4 files of TV series and store them on your laptop I can’t imagine 100G being much of a limit. External storage is also getting quite cheap: 2G USB flash devices are now in the bargain bin of my local electronics store and USB attached hard drives with capacities of 40G or more are getting cheap. Also with a Thinkpad replacing a hard drive is really easy and does not risk damage to the drive or the rest of the laptop (I don’t know how well other brands rate in this regard).
For RAM, buy a model with a large memory module in socket 0 (or attached to the motherboard) – adding more RAM later is easy to do. Just try to avoid purchasing a memory capacity that involves having all sockets filled with modules smaller than the maximum size – it’s annoying to have to sell modules on Ebay after you buy a memory upgrade.
Finally, one mistake I made in the past was not getting all the options for the motherboard. Make sure that every option for Ethernet ports and 802.11 type protocols is selected. It might sound like a good idea to save $100 or so by not getting one of those options, but if you end up repeatedly plugging in a CardBus or USB device for many years you will regret it. Also external devices tend to break or get lost.
Rusty documents his laptop replacement as a time for spring-cleaning. I use LVM for the root filesystem on my Thinkpad so that I can easily install a new distribution (or a new version of a distribution) at any time. I’ve been through that spring-cleaning a couple of times on my current Thinkpad without needing new hardware.
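As a rough sketch of how that works (the volume group and logical volume names here are hypothetical and the size is just an example), a new root filesystem for the next distribution can be created alongside the old one:

# lvcreate -L 8G -n root_new vg0
# mke2fs -j /dev/vg0/root_new
# mount /dev/vg0/root_new /mnt

Then the new distribution can be installed into /mnt (e.g. with debootstrap) and the boot loader updated to offer both root filesystems, so the old install remains available as a fallback.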
From a quick view of the Lenovo site it seems that an ideal new Thinkpad that would last me 6 years would cost about $4500 while one that would last me 2 years would cost $1600 (and have a significantly lower screen resolution). A Thinkpad that would last 6 years and not be so great (but still better than the cheap option) would cost about $3500.
Update: One significant issue is the life expectancy of laptop batteries. If you use a laptop for mobile use (as opposed to just moving between desks occasionally) then you are probably familiar with the problem of laptop batteries that discharge after 10 minutes. Last time I checked the warranty on Thinkpad batteries was 1 year or 300 charges (whichever comes first). My experience is that after 300 full cycles a Thinkpad battery will only last for a small fraction of the original charge time. When buying a laptop I suggest getting a spare battery at the time of purchase. The spare battery may last longer than the battery that is shipped with the laptop and two batteries means that you have twice the number of charge cycles before they are both useless. Batteries apparently don’t last long if completely discharged, so charge them up before storing them and periodically charge them if they have been left unused for any length of time (maybe every second or third month). With a Thinkpad it seems quite safe to change the battery while the machine is plugged in to mains power and running (I expect that Lenovo doesn’t recommend this though). You should probably plan to have a battery die every three years of use (or sooner if you do a lot of travelling). So one spare battery may last you 6 years of use but you will need two spare batteries if you travel a lot.
Don Marti has written his own equivalent to Technorati based on links from blogs that he reads, and my blog comes in at #40 in the list (last place) [1].
Don does note the fact that such lists mean little and links to a post by Doc Searls [2] which makes the same point more strongly. But it’s still interesting to note.
Also if Don chose to release the Perl script in question (or host it as a cgi-bin script) to allow other people to make their own top 40 list then I’m sure that many people would appreciate it (I’ll write such a script myself and release it if he doesn’t). He notes when publishing his list that a blog may be included even if he never reads it. I believe that if a blog is highly ranked according to links from blogs that you read then it’s quite likely that you will be interested in reading it. It may be a blog that is opposed to your beliefs (for example I’m sure I’m not the only person in the Linux community to link to an official Microsoft blog), but if it gets enough links then it would be worth reading at least once (if only to discover why you don’t like it). While I can’t tell Don to read every blog on his top 40 list, I’ll certainly read every blog on my top 40 list at least once when I create the list.
Bruce Schneier has just written about the Storm Worm [1] which has apparently been quietly 0wning some Windows machines for most of this year (see the Wikipedia page for more information [2]).
I have just been asked whether SE Linux would stop such a worm in the Linux environment. SE Linux does prevent many possible methods of getting local root. If a user who does not have the root password (or is not going to enter it from a user session) has their account taken over by a hostile party then the attacker is not going to get local root (unless there is a kernel vulnerability). Without local root access the activities of the attacker can be seen from another account – processes are visible to all user sessions when using the SE Linux targeted policy, and files and processes can be seen by the sys-admin.
If, while a user account is 0wned, the user runs "su -" (or an equivalent command) then in theory at least the attacker can sniff this and gain local root access (whether enough users do this to make attackers feel that it’s worth their effort to write the code in question is something I couldn’t even guess about). If the user is clueless then the attacker could immediately display a dialog with some message that sounds urgent and demand the root password – some users would give it. If the user is even moderately smart the attacker could fake the GUI dialogs for installing updated packages (which have been in Red Hat distributions for ages and have appeared in Debian more recently) and tell the user that they need to enter the root password to install an important security update (oh the irony).
In conclusion, if a user is ill-educated enough to want to run a program that was sent to them in email by a random person then I expect that the program would have a good chance of coercing them into giving it local root access (if the user in question has the capability of doing so).
Even if a Linux trojan did not have local root access it could still do a lot of damage. Any server operations that don’t require ports <1024 (which means most things other than running a web, DNS, or mail server) can still be performed and client access will always work (including sending email). The trojan would have access to all of the user’s data (which for a corporate desktop machine usually means a huge network share of secret documents).
If a trojan only attempts to perform actions that SE Linux permits (running programs from the user’s home directory, accessing servers for DNS, HTTP, IRC, SMTP, and other protocols – a reasonable set of options for a trojan) then the default configuration of SE Linux (targeted policy) won’t stop it or even log anything. This is not a problem with SE Linux, just a direct result of the fact that in every situation a trojan can perform all operations that the user can perform – and if the trojan only wants to receive commands via web and IRC servers and send spam via the user’s regular mail server then it will only need a small sub-set of the operations permitted to the user!
If however the trojan tries more aggressive methods then SE Linux will log some AVC messages about access being denied. If the sys-admin has good procedures for analysing log files they will notice such things, understand what they mean, and be able to contain the damage. Also there have been at least two cases where SE Linux prevented local root exploits.
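As a minimal example of the sort of log checking I mean (the first two commands assume a system where AVC messages go to the kernel log, the third assumes auditd is installed – adjust for the distribution in question):

# dmesg | grep "avc: .*denied"
# grep "avc: .*denied" /var/log/kern.log
# ausearch -m avc -ts today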
Finally, in answer to the original question: SE Linux will stop some of the more aggressive methods that trojans might use. But there are still plenty of things that a trojan could do to cause harm which won’t be stopped or audited by SE Linux policy. When Linux gets more market share among users with a small amount of skill and no competent person to do sys-admin work for them we will see some Linux trojans and more Linux worms. It will be interesting to see what methods the trojan authors decide to use.
When running SE Linux you will notice that most applications are not permitted to run with an executable stack. One example of this is libsmpeg0 which is used by the game Freeciv [1]. When you attempt to run the Freeciv client program on a Debian/Etch system with a default SE Linux configuration (as described in my post on how to install SE Linux on Debian in 5 minutes [2]) then you will find that it doesn’t work.
When this happens the following will be logged to the kernel log and is available through dmesg and usually in /var/log/kern.log (Debian/Etch doesn’t include auditd; the same problem on a Fedora, RHEL, or CentOS system in a typical configuration would be logged to /var/log/audit/audit.log):
audit(1191741164.671:974): avc: denied { execstack } for pid=30823 comm="civclient" scontext=rjc:system_r:unconfined_t:s0 tcontext=rjc:system_r:unconfined_t:s0 tclass=process
The relevant parts are the denied execstack operation and the comm="civclient" field identifying the process. The problem with this message in the message log is that you don’t know which shared object caused the problem. As civclient is normally run from the GUI you are given no other information.
So the thing to do is to run it at the command-line (the avc message tells you that civclient is the name of the failing command) and you get the following result:
$ civclient
civclient: error while loading shared libraries: libsmpeg-0.4.so.0: cannot enable executable stack as shared object requires: Permission denied
This makes it clear which shared object is at fault. The next thing to do is to test the object by using execstack to set it to not need an executable stack. The command execstack -q /usr/lib/libsmpeg-0.4.so.0.1.4 will give an “X” as the first character of the output to indicate that the shared object requests an executable stack. The command execstack -c /usr/lib/libsmpeg-0.4.so.0.1.4 will change the shared object to not request an executable stack. After making such a change to a shared object the next thing to do is to test the application and see if it works correctly. In every case that I’ve seen the shared object has not needed such access and the application has worked correctly.
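To make that concrete, a session might look something like the following (run as root since it modifies a file under /usr/lib; an "X" flag means an executable stack is requested, "-" means it is not):

# execstack -q /usr/lib/libsmpeg-0.4.so.0.1.4
X /usr/lib/libsmpeg-0.4.so.0.1.4
# execstack -c /usr/lib/libsmpeg-0.4.so.0.1.4
# execstack -q /usr/lib/libsmpeg-0.4.so.0.1.4
- /usr/lib/libsmpeg-0.4.so.0.1.4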
As an aside, there is a bug in execstack in that it will break sym-links. Make sure that the file name you give it is the actual shared object, not the sym-link to it that was created by ldconfig. See Debian bug 445594 [3] and CentOS bug 2377 [4].
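One way to avoid that problem (assuming GNU readlink is available) is to resolve the sym-link to the real file before passing it to execstack:

# execstack -c "$(readlink -f /usr/lib/libsmpeg-0.4.so.0)"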
The correct thing to do is to fix the bug in the source (not just modify the resulting binary). On page 8 of Ulrich Drepper’s document about non-SE Linux security [5] there is a description of both possible solutions to this problem. One is to add a line containing “.section .note.GNU-stack,"",@progbits” to the start of the assembler file in question (which is what I suggested in Debian bug report 445595 [6]). The other is to add “-Wa,--noexecstack” to the command-line for the GNU assembler – of course this doesn’t work if you use a different assembler.
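As a sketch of the source-level fix (the file name here is hypothetical), the first approach adds the note section to the assembler source:

.section .note.GNU-stack,"",@progbits

and the second passes the flag to the assembler via gcc:

gcc -Wa,--noexecstack -c mmx_decode.S -o mmx_decode.o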
In the near future I will establish an apt repository for Debian/Etch i386 packages related to SE Linux. One of the packages will be a libsmpeg0 package compiled to not need an executable stack. But it would be good if bug fixes such as this one could be included in future updates to Etch.
I read the logs from my servers. The amount of time I spend reading log summaries is determined by how important the server is. On the machines that are most important to me I carefully read log summaries and periodically scan the logs for anything that looks unusual.
The amount of time taken is obviously determined by the amount of data in the logs, so it is a benefit to me (in terms of spending less time) to have smaller logs. It’s also a benefit for me (and the other people who depend on those servers) that I spend my time on things that might be important instead of mindless failed attacks.
One thing that I do to reduce the size of my logs is to run sshd on a non-standard port. This requires a Port directive in /etc/ssh/sshd_config, and on the client machines I edit /etc/ssh/ssh_config to include a section such as the following to avoid needing the “-p” option for ssh (or the “-P” option for scp):
Host some_server
    ForwardX11 no
    Protocol 2
    HostKeyAlgorithms ssh-rsa
    Port 1234
Incidentally I disable X11 forwarding explicitly because it’s a dangerous option which usually isn’t needed. I also specify the ssh-rsa algorithm, not because it’s any better than ssh-dss, but because having a second algorithm that isn’t normally used gives an attacker the option of forcing the client to use it in a MITM [1] attack – the client would then present an unknown host key prompt instead of a warning about a changed key.
Note that these settings can go in /etc/ssh/ssh_config to apply to all users or in ~/.ssh/config to apply to only one user (IE if you aren’t root on the machine in question).
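For completeness, the corresponding server-side change is just the Port directive in /etc/ssh/sshd_config (1234 being the example port used above), followed by restarting sshd (on Debian: “/etc/init.d/ssh restart”):

Port 1234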
The practice of avoiding attacks by using non-standard ports is not Security by Obscurity – the security of my systems does not rely on attackers not knowing the port. Attackers can easily scan all ports and discover which one I use. Any attacker who does so is a more serious threat than all the attackers who scan port 22 and bother someone else when they discover that nothing is listening; such an attacker deserves to have some of my time used to read the log files related to their attempt.
There is ongoing debate about the issue of security cameras: how many there should be, where they should be located, and who should be able to access the data.
I spent about a year living in London, which probably has more security cameras and a greater ratio of cameras to people than any other city. I was never bothered by this. I believe that if implemented correctly security cameras increase public safety and will not cause any serious problems.
A while ago I witnessed a violent assault (which could potentially have ended up as a manslaughter case – it was merely luck that ~200 people got off a train at the right time to scare the attackers off). AFAIK I was the only person who identified themselves to the police and was prepared to stand as a witness, and without security camera footage the case would not have gone anywhere (I only saw the attackers from behind as they ran off). Security camera footage allowed the police to identify the attackers. My testimony was not required and I was never informed as to how the case proceeded, but I know for a fact that the police investigation depended on security camera footage and that they made progress in the case based on it.
There are current plans to increase the scope of security cameras in many cities under the guise of the “war on terror”. The problem is that once a terrorist is involved in an attack it’s too late for security cameras. Security cameras are really only good for catching criminals after an attack; in most cases they will be entirely ineffective against suicide bombers as the issue of catching them afterwards is moot. There have been cases where security cameras have enabled the authorities to identify people with terrorist ideas who were investigating military bases (but I wouldn’t call such lamers “terrorists” as all the available evidence suggests that they would be incapable of succeeding in an attack). However no-one is disputing the fact that military installations need good security.
Given that security cameras do provide significant benefits to public safety I don’t think it’s reasonable to oppose them as long as they are implemented in a sensible and responsible manner. Most of the current plans to install security cameras don’t seem to be sensible and have few controls on who can access the data. This makes them good targets for oppressive government actions, organised crime, and even terrorists. The countries that have serious terrorist problems always have problems of terrorists infiltrating government departments and bribing government officials. A centralised system that allows the police to watch anyone at any time would probably do more good for al Qaeda and the Mafia than it would for regular police action.
For the fastest possible response a security camera system needs to have humans able to monitor its output in real-time. Having a control-room where police officers can randomly switch between public cameras to see if a crime appears to be in progress is a good thing (and works well in the UK). Of course the actions of such police need to be monitored to make sure that they are actually doing their job (not checking out hotties on the camera – an ongoing problem with security cameras).
Finally there’s the issue of what level of surveillance can be expected in a public place. I think that most people agree that when you enter a government building it’s reasonable to expect that you will be on camera. Many private buildings have security cameras, with a condition of entry being that you permit yourself to be watched, and no-one seems to be boycotting shopping centres because of this. Significant public spaces such as main roads and public transport also seem like reasonable locations for security cameras.
One location that is widely disputed is that of streets in residential areas. Most people who are happy to be photographed when entering and leaving public buildings such as train stations and shopping centres are not happy to be photographed when entering and leaving their own home.
I think that a reasonable solution to these problems requires the following:
- Restrictions on the duration and scope of surveillance in residential areas (EG require police to get court orders for such surveillance that must be periodically renewed).
- Restricting the duration for which records may be kept by the police. Keeping any records for longer than the period in question (which would be a few weeks at most) would require a court order.
- Prohibiting private organisations from handling surveillance data from government property (including public roads, train stations, etc). There are problems with having a private company aggregate surveillance data from multiple private properties but I don’t think we can address this at the moment.
There seems to be a recent trend towards home-schooling. The failures of the default school system in most countries are quite apparent, and the violence alone is enough of a reason to keep children away from high-schools, even without considering the education (or lack thereof).
I have previously written about University degrees and whether they are needed [1].
The university I attended (which I won’t name in this context) did an OK job of teaching students. The main thing that struck me was that you would learn as much as you wished at university. It was possible to get really good marks without learning much (I have seen that demonstrated many times) or to learn lots of interesting things while getting marks that are merely OK (which is what I did). So I have been considering whether it’s possible to learn as much as you would learn at university without attending one, and if so how to go about it.
Here are the ways I learned useful things at university:
- I spent a lot of time reading man pages and playing with the various Unix systems in the computer labs. It turned out that sys-admin work was one of my areas of interest (not really surprising given my history of running Fidonet BBS systems). It was unfortunate that my university (like almost all other universities) had no course on system-administration and therefore I was not able to get a sys-admin job until several years after graduating.
- I read lots of good text books (university libraries are well stocked).
- There were some good lectures that covered interesting material that I would not have otherwise learned (there were also some awful lectures that I could have missed – like the one which briefly covered computer security and mentioned NOTHING other than covert channels – probably the least useful thing that they could cover).
- I used to hang out with the staff who were both intelligent and friendly (of whom there were unfortunately only a few). If I noticed some students hanging out in the office of one of the staff in question I would join them. Then we would have group discussions about many topics (most of which were related to computers and some of which were related to the subjects that we were taking); this would continue until the staff member decided that he had some work to do and kicked us out. Hanging out with smart students was also good.
- I did part-time work teaching at university. Teaching a class forces you to learn more about the subject than is needed to merely complete an assignment. This isn’t something that most people can do.
I expect that children who don’t attend high-school will have more difficulty getting admitted to a university (the entrance process is designed around high-school results). Also if you are going to avoid the public education system then it seems useful to try to avoid it for all education instead of just the worst part. Even for people who weren’t home-schooled I think that there are still potential benefits in some sort of home-university system.
Now a home-university system would not be anything like an Open University. One example of an Open University is Open Universities Australia [2], another is the UK Open University [3]. These are both merely correspondence systems for a regular university degree. So it gives a university degree without the benefit of hanging out with smart people. While they do give some good opportunities for people who can only study part-time, in general I don’t think that they are a good thing (although I have to note that there are some really good documentaries on BBC that came from Open University).
Now I am wondering how people could gain the same benefits without attending university. Here are my ideas of how the main benefits that I believe are derived from university can be achieved without one (for a Computer Science degree anyway):
- Computers are cheap, every OS that you would ever want to use (Linux, BSD, HURD, OpenSolaris, Minix, etc) is free. It is quite easy to install a selection of OSs with full source code and manuals and learn as much about them as you desire.
- University libraries tend not to require student ID to enter the building. While you can’t borrow books unless you are a student or staff member it is quite easy to walk in and read a book. It may be possible to arrange an inter-library loan of a book that interests you via your local library. Also if a friend is a university student then they can borrow books from the university library and lend them to you.
- There are videos of many great lectures available on the net. A recent resource that has been added is Youtube lectures from the University of California Berkeley [4] (I haven’t viewed any of the lectures yet but I expect them to be of better than average quality). Some other sources for video lectures are Talks At Google [5] and TED – Ideas Worth Spreading [6].
- To provide the benefits of hanging out with smart people you would have to form your own group. Maybe a group of people from a LUG could meet regularly (EG twice a week or more) to discuss computers etc. Of course it would require that the members of such a group have a lot more drive and ambition than is typical of university students. Such a group could invite experts to give lectures for their members. I would be very interested in giving a talk about SE Linux (or anything else that I work on) to such a group of people who are in a convenient location.
- The benefits of teaching others can be obtained by giving presentations at LUG meetings and other forums. Also if a group was formed as suggested in my previous point then at every meeting one or more members could give a presentation on something interesting that they had recently learned.
The end result of such a process should be learning more than you would typically learn at university while having more flexible hours (whatever you can convince a group of like-minded people to agree to for the meetings) that will interfere less with full-time employment (if you want to work while studying). In Australia university degrees don’t seem to be highly regarded so convincing a potential employer that your home-university learning is better than a degree should not be that difficult.
If you do this and it works out then please write a blog post about it and link to this post.
Update:
StraighterLine offers as much tuition as you can handle over the Internet for $99 per month [7]. That sounds really good, but it does miss the benefits of meeting other people to discuss the work. Maybe if a group of friends signed up to StraighterLine [8] at the same time it would give the best result.
I am currently considering what to do regarding a Zope server that I have converted to Xen. To best manage the servers I want to split the Zope instances into different DomU’s based on organisational boundaries. One reason for doing this is so that each sys-admin will only be granted access to the Zope instance that they run, so that they can’t accidentally break anyone else’s configuration. Another reason is to give the same benefit in the situation where one sys-admin runs multiple instances: if a sys-admin is asked to do some work by user A and breaks something else running for user A then I think that user A will understand that when you request changes there is a small risk of things going wrong. If a sys-admin is doing work for user A and accidentally breaks something for user B then they won’t expect any great understanding because user B wanted nothing to be touched!
Some people who are involved with the server are hesitant about my ideas because the machine has limited RAM (12G maximum for the server before memory upgrades become unreasonably expensive) and they believe that Zope needs a lot of RAM and will run inefficiently without it.
Currently it seems that every Zope instance has 100M of memory allocated by a parent process running as root (of which 5.5M is resident) and ~500M allocated by a child process running as user “zope” (of which ~250M is resident). So it seems that each DomU would need a minimum of 255M of RAM plus the memory required for Apache and other system services, with the ideal being about 600M. This means that I could (in theory at least) have something like 18 DomU’s for running Zope instances (18 × 600M is roughly 11G, leaving about 1G of the 12G for everything else) with Squid running as a front-end cache for all of them in Dom0.
What I am wondering about is how much memory Zope really needs, could I get better performance out of Zope if I allowed it to use more RAM?
The next issue is Squid. I need to have multiple IP addresses used for the services due to administrative issues (each group wants to have their own IP). Having Squid listen on multiple addresses should not be a big deal (but I’ve never set up Squid as a front-end proxy so there may be hidden problems). I also need to have some https operations on the same IP addresses. I am considering giving none of the Xen DomU’s public IP addresses and just using Netfilter to DNAT the connections to the right machines (a quick test indicates that if the DomU in question has no publicly visible IP address and routes its packets via the Dom0 then a simple DNAT rule in the PREROUTING chain does the job).
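As a rough sketch of that DNAT approach (the addresses are hypothetical – 192.0.2.10 standing in for one of the public IPs on the Dom0 and 10.1.1.10 for the private address of the DomU serving that group; port 80 could equally be left to Squid in the Dom0):

# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A PREROUTING -d 192.0.2.10 -p tcp --dport 443 -j DNAT --to-destination 10.1.1.10:443
# iptables -t nat -A PREROUTING -d 192.0.2.10 -p tcp --dport 80 -j DNAT --to-destination 10.1.1.10:80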
Is there anything else I should be considering when dividing a server for running Zope under Xen?
Is it worth considering a single Apache instance that talks to multiple Zope instances in different DomU’s?
Recently I was talking to an employee at Safeway (an Australian supermarket chain) about Linux etc. He seemed interested in attending a meeting of my local LUG (which incidentally happens on the campus of the university where he studies). I have had a few conversations like that and it seems that it would be good to have some LUG business-cards.
It shouldn’t be difficult to make something that is similar in concept to the Debian business cards [1] for use by a Linux Users Group (LUG). That way when you tell someone about Linux you can hand them a card that has your name and email address along with the web site for your local LUG.
In other news I will be attending a meeting of the Linux Users of Victoria (LUV) [2] this evening and will have some Fair Trade Chocolate [3] to give away to people who arrive early. The chocolate in question is now sold by Safeway for a mere $4 per 100g (not much more expensive than the regular chocolate).
I’ve just started getting a lot of traffic referred by live.com. It seems that my post Porn for Children [1] is the second link returned from a live.com search for “porn” and my post Porn vs Rape [2] is the third link. These results occur in two of the three settings for “safe search” (the most safe one doesn’t return any matches for a query about “porn”). A query for “porn” and “research” (which would reasonably be expected to match a blog post concerning scientific research) made my page the 8th listing (behind http://www.news.com.au/404, http://www.news.com.au/couriermail/404, and http://www.theaustralian.news.com.au/404). It seems strange that a query which should match my page gives it a lower ranking than three 404 error pages while a query which shouldn’t match my page (no-one who searches for “porn” on its own wants to read about scientific research) gives it a high ranking.
One very interesting thing about the live.com search is that it doesn’t filter out some of the least effective ways of gaming search engines. For example the URL http://gra.sdsu.edu/research.php has a huge number of links to porn pages and pages that apparently sell illegal pharmaceuticals that are not visible (view page source to see them). The links that I tested were all broken so it seems that the other sites (including http://www.hcs.harvard.edu/~hraaa, http://base.rutgers.edu/pgforms, http://www.wccs.edu/employees, http://base.rutgers.edu/mirna, http://www.calstatela.edu/faculty/jperezc/students/oalamra, and http://institute.beacon.edu/) were cleaned up long ago. There is probably some money to be made in running a service that downloads all content from a web site and/or a firewall device that sniffs all content that is sent out and makes sure that it seems to be what is intended (about half the URLs in question appear to relate to content that is illegal under US law).
As an aside, I did a few other live.com searches for various sites and the word “porn” and found one Australian university running a forum with somewhat broken HTTP authentication that has some interesting posts about porn etc. I’m not going to provide a link because the content didn’t appear to violate Australian law and you expect some off-topic content on a forum.
But to be fair, live.com have significantly improved their service since last time I tested it [3]. Now a search for “bonnie” or “bonnie++” will give me the top two spots which is far better than the previous situation. Although I have to admit that the Google result of not giving me a high ranking for “bonnie” is probably better.