There seems to be a recent trend towards home-schooling. The failures of the default school system in most countries are quite apparent, and the violence alone is enough of a reason to keep children away from high-schools, even without considering the education (or lack thereof).
I have previously written about University degrees and whether they are needed [1].
The university I attended (which I won’t name in this context) did an OK job of teaching students. The main thing that struck me was that you would learn as much as you wished at university. It was possible to get really good marks without learning much (I have seen that demonstrated many times) or learn lots of interesting things while getting marks that are OK (which is what I did). So I have been considering whether it’s possible to learn as much as you would learn at university without attending one, and if so how to go about it.
Here are the ways I learned useful things at university:
- I spent a lot of time reading man pages and playing with the various Unix systems in the computer labs. It turned out that sys-admin work was one of my areas of interest (not really surprising given my history of running Fidonet BBS systems). It was unfortunate that my university (like almost all other universities) had no course on system-administration and therefore I was not able to get a sys-admin job until several years after graduating.
- I read lots of good text books (university libraries are well stocked).
- There were some good lectures that covered interesting material that I would not have otherwise learned (there were also some awful lectures that I could have missed – like the one which briefly covered computer security and mentioned NOTHING other than covert channels – probably the least useful thing that they could cover).
- I used to hang out with the staff who were both intelligent and friendly (of whom there were unfortunately only a few). If I noticed some students hanging out in the office of one of the staff in question I would join them. Then we would have group discussions about many topics (most of which were related to computers and some of which were related to the subjects that we were taking); this would continue until the staff member decided that he had some work to do and kicked us out. Hanging out with smart students was also good.
- I did part-time work teaching at university. Teaching a class forces you to learn more about the subject than is needed to merely complete an assignment. This isn’t something that most people get the chance to do.
I expect that children who don’t attend high-school will have more difficulty in getting admitted to a university (the entrance process is designed around high-school results). Also if you are going to avoid the public education system then it seems useful to try to avoid it for all education instead of just the worst part. Even for people who weren’t home-schooled I think that there are still potential benefits in some sort of home-university system.
Now a home-university system would not be anything like an Open University. One example of an Open University is Open Universities Australia [2], another is the UK Open University [3]. These are both merely correspondence systems for a regular university degree, so they give a university degree without the benefit of hanging out with smart people. While they do give some good opportunities for people who can only study part-time, in general I don’t think that they are a good thing (although I have to note that there are some really good documentaries on the BBC that came from the Open University).
Now I am wondering how people could gain the same benefits without attending university. Here are my ideas of how the main benefits that I believe are derived from university can be achieved without one (for a Computer Science degree anyway):
- Computers are cheap, every OS that you would ever want to use (Linux, BSD, HURD, OpenSolaris, Minix, etc) is free. It is quite easy to install a selection of OSs with full source code and manuals and learn as much about them as you desire.
- University libraries tend not to require student ID to enter the building. While you can’t borrow books unless you are a student or staff member it is quite easy to walk in and read a book. It may be possible to arrange an inter-library loan of a book that interests you via your local library. Also if a friend is a university student then they can borrow books from the university library and lend them to you.
- There are videos of many great lectures available on the net. A recent addition is the YouTube lectures from the University of California, Berkeley [4] (I haven’t viewed any of the lectures yet but I expect them to be of better than average quality). Some other sources for video lectures are Talks At Google [5] and TED – Ideas Worth Spreading [6].
- To provide the benefits of hanging out with smart people you would have to form your own group. Maybe a group of people from a LUG could meet regularly (EG twice a week or more) to discuss computers etc. Of course it would require that the members of such a group have a lot more drive and ambition than is typical of university students. Such a group could invite experts to give lectures for their members. I would be very interested in giving a talk about SE Linux (or anything else that I work on) to such a group of people who are in a convenient location.
- The benefits of teaching others can be obtained by giving presentations at LUG meetings and other forums. Also if a group was formed as suggested in my previous point then at every meeting one or more members could give a presentation on something interesting that they had recently learned.
The end result of such a process should be learning more than you would typically learn at university while having more flexible hours (whatever you can convince a group of like-minded people to agree to for the meetings) that will interfere less with full-time employment (if you want to work while studying). In Australia university degrees don’t seem to be highly regarded so convincing a potential employer that your home-university learning is better than a degree should not be that difficult.
If you do this and it works out then please write a blog post about it and link to this post.
Update:
StraighterLine offers as much tuition as you can handle over the Internet for $99 per month [7]. That sounds really good, but it does miss the benefits of meeting other people to discuss the work. Maybe if a group of friends signed up to StraighterLine [8] at the same time it would give the best result.
I am currently considering what to do regarding a Zope server that I have converted to Xen. To best manage the servers I want to split the Zope instances into different DomU’s based on organisational boundaries. One reason for doing this is so that each sys-admin will only be granted access to the Zope instance that they run and therefore can’t accidentally break anyone else’s configuration. Another reason is to get the same benefit where one sys-admin runs multiple instances. If a sys-admin is asked to do some work by user A and breaks something else running for user A, then I think that user A will understand that when you request changes there is a small risk of things going wrong. But if a sys-admin is doing work for user A and accidentally breaks something for user B then they can’t expect any great understanding, because user B wanted nothing to be touched!
Some people who are involved with the server are hesitant about my ideas because the machine has limited RAM (12G maximum for the server before memory upgrades become unreasonably expensive) and they believe that Zope needs a lot of RAM and will run inefficiently without it.
Currently it seems that every Zope instance has 100M of memory allocated by a parent process running as root (of which 5.5M is resident) and ~500M allocated by a child process running as user “zope” (of which ~250M is resident). So it seems that each DomU would need a minimum of 255M of RAM plus the memory required for Apache and other system services with the ideal being about 600M. This means that I could (in theory at least) have something like 18 DomU’s for running Zope instances with Squid running as a front-end cache for all of them in Dom0.
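As a rough sanity check of that estimate (leaving about 1G of the 12G for the Dom0, Squid, and other overhead, and using the ~600M per DomU figure from above, all of which are only approximations):

  # 12288M of RAM, ~1024M reserved for Dom0 and Squid, ~600M per Zope DomU
  echo $(( (12288 - 1024) / 600 ))
  # prints 18, which is where the figure of about 18 DomU's comes from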
What I am wondering about is how much memory Zope really needs, could I get better performance out of Zope if I allowed it to use more RAM?
The next issue is regarding Squid. I need to have multiple IP addresses used for the services due to administrative issues (each group wants to have their own IP). Having Squid listen on multiple addresses should not be a big deal (but I’ve never set up Squid as a front-end proxy so there may be hidden problems). I also need to have some https operations on the same IP addresses. I am considering giving none of the Xen DomU’s public IP addresses and just using Netfilter to DNAT the connections to the right machines. A quick test indicates that if the DomU in question has no publicly visible IP address and routes its packets via the Dom0 then a simple DNAT rule in the PREROUTING chain of the nat table does the job (a sketch of such a rule is below).
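For reference, here is a minimal sketch of that sort of DNAT rule. The addresses are made-up examples (192.0.2.10 standing in for a public address on the Dom0 and 10.1.1.10 for the private address of a Zope DomU), so adjust them to suit:

  # send web traffic for one public address to the DomU that serves it
  iptables -t nat -A PREROUTING -d 192.0.2.10 -p tcp --dport 80 \
           -j DNAT --to-destination 10.1.1.10:80
  # the Dom0 must forward packets and the DomU must use the Dom0 as its gateway
  echo 1 > /proc/sys/net/ipv4/ip_forward

The same pattern would apply to port 443 if the https service is terminated inside the DomU rather than in front of it.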
Is there anything else I should be considering when dividing a server for running Zope under Xen?
Is it worth considering a single Apache instance that talks to multiple Zope instances in different DomU’s?
Recently I was talking to an employee at Safeway (an Australian supermarket chain) about Linux etc. He seemed interested in attending a meeting of my local LUG (which incidentally happens on the campus of the university where he studies). I have had a few conversations like that and it seems that it would be good to have some LUG business-cards.
It shouldn’t be difficult to make something that is similar in concept to the Debian business cards [1] for use by a Linux Users Group (LUG). That way when you tell someone about Linux you can hand them a card that has your name and email address along with the web site for your local LUG.
In other news I will be attending a meeting of the Linux Users of Victoria (LUV) [2] this evening and will have some Fair Trade Chocolate [3] to give away to people who arrive early. The chocolate in question is now sold by Safeway for a mere $4 per 100g (not much more expensive than the regular chocolate).
I’ve just started getting a lot of traffic referred by live.com. It seems that my post Porn for Children [1] is the second link returned from a live.com search for “porn” and my post Porn vs Rape [2] is the third link. These results occur in two of the three settings for “safe search” (the most safe one doesn’t return any matches for a query about “porn”). A query for “porn” and “research” (which would reasonably be expected to match a blog post concerning scientific research) made my page the 8th listing (behind http://www.news.com.au/404, http://www.news.com.au/couriermail/404, and http://www.theaustralian.news.com.au/404). It seems strange that a query which should match my page gives it a lower ranking than three 404 error pages while a query which shouldn’t match my page (no-one who searches for “porn” on its own wants to read about scientific research) gives it a high ranking.
One very interesting thing about the live.com search is that it doesn’t filter out some of the least effective ways of gaming search engines. For example the URL http://gra.sdsu.edu/research.php has a huge number of links to porn pages and pages that apparently sell illegal pharmaceuticals, none of which are visible (view the page source to see them). The links that I tested were all broken so it seems that the other sites (including http://www.hcs.harvard.edu/~hraaa, http://base.rutgers.edu/pgforms, http://www.wccs.edu/employees, http://base.rutgers.edu/mirna, http://www.calstatela.edu/faculty/jperezc/students/oalamra, and http://institute.beacon.edu/) were cleaned up long ago. There is probably some money to be made in running a service that downloads all content from a web site, or a firewall device that sniffs all outgoing content, and checks that it is what was intended (about half the URLs in question appear to relate to content that is illegal under US law).
As an aside, I did a few other live.com searches for various sites and the word “porn” and found one Australian university running a forum with somewhat broken HTTP authentication that has some interesting posts about porn etc. I’m not going to provide a link because the content didn’t appear to violate Australian law and you expect some off-topic content on a forum.
But to be fair, live.com have significantly improved their service since last time I tested it [3]. Now a search for “bonnie” or “bonnie++” will give me the top two spots which is far better than the previous situation. Although I have to admit that the Google result of not giving me a high ranking for “bonnie” is probably better.
It seems that the majority of blog traffic (at least on blogs I read) is time-based. It consists of personal-diary posts, references to current events, predictions about future events, bug reports, and other things that either become obsolete or for which it’s important to know the date. For such posts it makes sense to have the date be part of the Permalink URL, and in the normal course of events such posts will tend not to be updated after release.
Another type of blog traffic is posts that have ongoing reference value which will (ideally) be actively maintained to keep them current. For such posts it makes sense to have no date stamp in the Permalink – for example if I update a post about installing SE Linux on Etch once Lenny is released (a significant update) I don’t want people ignoring it when it comes up in search engines (or worse having search engines score it down) because the URL indicates that it was written some time before the release of Etch.
WordPress supports Pages as separate entities from Posts, and the names of Pages are direct links under the root of the WordPress installation. However there is no RSS feed for Pages (AFAIK – I may have missed something) and WordPress themes often treat Pages differently (which may not be what you want for timeless posts). Also it is not unreasonable to want both Pages and timeless posts.
I’m thinking of creating a separate WordPress installation for posts that I intend to manage for long periods of time with updates (such as documenting some aspects of software I have written). The management options for a blog server program provide significant benefits over editing HTML files. The other option would be to use a different CMS (a blog server being a sub-category of a CMS) to store such things.
What I want is a clear way of presenting the data with minimal effort from me (an advantage of WordPress for this is that I have already invested a significant amount of effort in learning how it works) and the ability to edit from remote sites (the offline blog editing tools that are just coming out are a positive point for using a blog server – particularly as I could use the same editor for blog posts and documents).
Any suggestions as to how to do this?
Then of course there’s the issue of how to syndicate this. For my document blog (for want of a better term) I am thinking of updating the time-stamp on a post every time I make a significant change. If you subscribe to the document feed then that would be because you want to receive new copies of the documents as they are edited. The other option would be to not change the time-stamp and then include the feed along with my regular blog feed (making two feeds be served as one is not a technical challenge). If I were to update the time stamps then I would have to write posts announcing the release of new documents.
Does anyone know of someone who writes essays or howto documents in a similar manner to Rick Moen [1] or Paul Graham [2] who also does daily blog posts? I’d like to see some examples of how others have solved these problems (if there are any).
I just upgraded to WordPress 2.3. When using Konqueror (my favourite browser) the comment approval is slightly broken (when I tag a comment as spam it usually just turns red and doesn’t disappear from the main Comments tab) and I have to refresh that window more often than usual to make sure I got the result I desired. Also the Sidebar Widget editing is totally broken in Konqueror, I guess I’ll have to login with Firefox to get the new tags feature working.
Also I have got a few WordPress errors about the table “$table_prefix” . “post2cat” not existing. The table in question doesn’t exist in any of my blogs.
So far this is the worst WordPress upgrade experience I’ve had (I started on 2.0).
There is a wide-spread myth that swap space should be twice the size of RAM. This might have provided some benefit when 16M of RAM was a lot and disks had average access times of 20ms. Now disks can have average access times less than 10ms but RAM has increased to 1G for small machines and 8G or more for large machines. Multiplying the seek performance of disks by a factor of two to five while increasing the amount of data stored by a factor of close to 1000 is obviously not going to work well for performance.
A Linux machine with 16M of RAM and 32M of swap MIGHT work acceptably for some applications (although when I was running Linux machines with 16M of RAM I found that if swap use exceeded about 16M then the machine became so slow that a reboot was often needed). But a Linux machine with 8G of RAM and 16G of swap is almost certain to be unusable long before the swap space is exhausted. Therefore giving the machine less swap space and having processes be killed (or malloc() calls fail – depending on the configuration and some other factors) is probably going to be a better situation.
There are factors that can alleviate the problems such as RAID controllers that implement write-back caching in hardware, but this only has a small impact on the performance requirements of paging. The 512M of cache RAM that you might find on a RAID controller won’t make that much impact on the IO requirements of 8G or 16G of swap.
I often make the swap space on a Linux machine equal to the size of RAM when RAM is less than 1G, and half the size of RAM for RAM sizes from 2G to 4G. For machines with more than 4G of RAM I will probably stick to a maximum of 2G of swap. I am not convinced that any mass storage system that I have used can handle the load from more than 2G of swap space in active use.
The reason for the myth about swap space size is that some old versions of Unix used to allocate a page of disk space for every page of virtual memory. On such systems having swap space less than or equal to the size of RAM was impossible and having swap space less than twice the size of RAM was probably a waste of effort (see this reference [1]). However Linux has never worked this way; in Linux the virtual memory size is the size of RAM plus the size of the swap space. So while the “double the size of RAM” rule of thumb gave virtual memory twice the size of physical RAM on some older versions of Unix, it gives three times the size of RAM on Linux! Also swap spaces smaller than RAM have always worked well on Linux (I once ran a Linux machine with 8M of RAM and used a floppy disk as a swap device).
As far as I recall some time ago (I can’t remember how long) the Linux kernel would by default permit overcommitting of memory. For example if a program tried to malloc() 1G of memory on a machine that had 64M of RAM and 128M of swap then the system call would succeed. However if the program actually tried to use that memory then it would end up getting killed.
The current policy is that /proc/sys/vm/overcommit_memory determines what happens when memory is overcommitted. The default value of 0 means that the kernel will estimate how much RAM and swap is available and reject memory allocation requests that exceed that amount. A value of 1 means that all memory allocation requests will succeed (you could have dozens of processes each malloc 2G of RAM on a machine with 128M of RAM and 128M of swap). A value of 2 means that a stricter policy will be followed (the documentation describes the limit as the swap space plus a configurable percentage of RAM, but incidentally my test results don’t match the documentation for value 2).
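For example, checking and changing the setting (it can also be set via sysctl as vm.overcommit_memory):

  # 0 = heuristic overcommit (the default), 1 = always allow, 2 = strict accounting
  cat /proc/sys/vm/overcommit_memory
  echo 1 > /proc/sys/vm/overcommit_memory
  # to make it permanent add "vm.overcommit_memory = 1" to /etc/sysctl.conf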
Now if you run a machine with /proc/sys/vm/overcommit_memory set to 0 then you have an incentive to use a moderately large amount of swap, safe in the knowledge that many applications will allocate memory that they don’t use, so the fact that the machine would deliver unacceptably low performance if all the swap was used might not be a problem. In this case the ideal size for swap might be the amount that is usable (based on the storage speed) plus a percentage of the RAM size to cater for programs that allocate memory and never use it. By “moderately large” I mean something significantly less than twice the size of RAM for all machines less than 7 years old.
If you run a machine with /proc/sys/vm/overcommit_memory set to 1 then the requirements for swap space should decrease, but the potential for the kernel to run out of memory and kill some processes is increased (not that it’s impossible to have this happen when /proc/sys/vm/overcommit_memory is set to 0).
The debian-administration.org site has an article about a package to create a swap file at boot [2] with the aim of making it always be twice the size of RAM. I believe that this is a bad idea; the amount of swap which can be used with decent performance is a small fraction of the storage size on modern systems and often less than the size of RAM. Increasing the amount of RAM does not increase the speed of the disks that back the swap space, so scaling the swap space up with the RAM size is not going to do any good.
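For anyone who does want to add a moderate amount of swap without repartitioning, a swap file is easy to set up by hand. Here is a minimal sketch (the path and the 2G size are arbitrary examples, not a recommendation to tie the size to RAM):

  dd if=/dev/zero of=/swapfile bs=1M count=2048
  chmod 600 /swapfile
  mkswap /swapfile
  swapon /swapfile
  # add "/swapfile none swap sw 0 0" to /etc/fstab to enable it at boot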
Davyd Madeley writes about vegetarianism for the environment [1] which is listed in Wikipedia as Environmental Vegetarianism [2]. He links to an article on the Huffington Post [3]. The Huffington Post article in turn links to an article on GoVeg.com about global warming [4].
Mass-produced meat is not only bad for the environment but there are also health issues related to meat consumption (due to bad practices in mass farming, combining the meat of thousands of animals into mince thus increasing the spread of bad meat, and the fact that most people in first-world countries consume significantly more meat than anyone did at any time in history).
One thing that doesn’t get mentioned in these posts is the fact that farming is not required to produce meat. In fact the meat that is most healthy (due to lack of carcinogenic chemicals and free-range feeding) and has the strongest flavour (which may be a good or bad thing depending on whether you actually like the flavour of meat) is from wild animals. If you don’t like the taste of meat (which seems to be the case when people don’t like game meat) then why eat it at all?
In Australia large numbers of kangaroos are killed because they eat grass more efficiently than cattle (they have evolved over tens of thousands of years to survive in Australian conditions unlike cattle). There are also a number of foreign animals that have run wild in Australia and are considered vermin, this includes rabbit, pig, buffalo, deer and camel (all of which are tasty).
Even among native animals there are often times when a cull is needed. If some good seasons allow the population to increase then when there is a bad season the population has to reduce and it’s often better for them to be culled (thus providing plenty of food for the surviving animals) than for all of them to starve.
There is a game meat wholesaler I’ve visited a few times that sells buffalo, rabbit, pig, camel, crocodile, possum, emu, kangaroo, and some other animals. All of the meat is from wild animals (apart from rabbit and pig none of those animals can be domesticated). I’m sure that every region has such a wholesaler that will sell to interested individuals if you know where to look (it seems impossible to buy any game meat other than kangaroo retail in Australia).
Finally one thing that offends me is people who eat meat but are not prepared to kill the animal. If you aren’t prepared to kill it then you shouldn’t pay someone else to do so on your behalf! Claiming that “the animal was going to be killed anyway” is a pitiful excuse that is only suitable for children. It’s acceptable for children to eat meat without thinking about where it came from. But adults should be able to deal with the fact that eating meat means killing animals – or become vegetarian if they can’t cope with it.
The book 3001 The Final Odyssey pioneered the term “corpse food” for eating meat. I believe that the term is accurate and should be used. If you can’t stomach eating corpses then there are many good vegetarian options available.
There are many vegetarians in the Linux community. As these issues are getting discussed a lot recently maybe it would be good to have the vegetarians choose some good vegetarian restaurants to have Linux meetings on occasion. Davyd got a bit of negative feedback on his post, maybe if he invited a bunch of his local Linux people to have dinner at a vegetarian restaurant and they enjoyed the food then the reaction to such ideas would be more positive.
I often get reports such as “the server was dead so I rebooted it“. This really doesn’t help me fix the problem, so if the person who uses the server wants reliability (and doesn’t want to be rebooting it and losing data all the time) then more information needs to be provided. Here is a quick list of tests to perform before a reboot if you would like your server not to crash in future:
- Does pressing the CAPS-LOCK key on the keyboard make the CAPS LED light up? If so then the OS isn’t entirely dead.
- What is on the screen of the server (you may have to press a key to get the screen to un-blank)? If it’s a strange set of numbers then please photograph them if possible; I might understand what they mean. If you don’t have a camera with high enough resolution to capture them then please make a note of some of the messages. Don’t write down numbers – they are not useful enough to be worth the effort. Write down words, including special words such as OOM and pairs of words separated by a “_” character.
If the “server” is a Xen virtual machine then save the contents of the console (as described in my previous post [1]).
- Can you ping the machine (usually by ping servername)? If so then networking is basically operational.
- Are the hard drive access lights indicating heavy use? If so then it might be thrashing due to excessive memory use (maybe a DoS attack).
- Can you login at the console? If so please capture the output of free, ps auxf, and netstat -tn.
- If the machine offers TCP services (almost all servers do) then use the telnet command to connect to the service port and make a note of what happens. For example to test a mail server type “telnet server 25”, and if all goes well you should see something like “220 some message from the mail server”; note how long it takes for such a message to be displayed. Some protocols don’t send a message on connect, for example with HTTP (the protocol used by web servers) you have to enter a few characters and press ENTER to get a response (usually some sort of HTTP error message). A summary of the commands to capture appears after this list.
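To make the above concrete, here is the sort of data I would like to see captured before the reboot (the host name and the port are just examples, substitute whatever your server runs):

  # from the console, if you can still log in:
  free
  ps auxf
  netstat -tn
  # from another machine:
  ping server
  telnet server 25
  # for a mail server expect something like "220 ..." within a few seconds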
Finally please don’t tell me that the server is too important and that the users couldn’t wait for you to perform any tests before rebooting it. If the server is important then it is important that it doesn’t crash repeatedly. A crash may even be caused by something that could cause data loss (EG hardware that is failing) or something that could incur extra expense if not fixed quickly (EG failing hardware that will be out of warranty soon). You have to tell users that the choice is to wait for an extra few minutes or risk having another crash tomorrow with further data loss.
If the server is important enough for it to be worth my time to try and fix it then it’s important enough to have these tests performed before the reboot.
I’m just completing Jeff and Pia Waugh’s Australian Open Source Industry & Community Census [1]. There are some things that can be improved with that survey in particular and surveys in general.
It seems to be assumed that everyone is trying to work full-time. I admit that there are probably few people who have decided that they don’t need as much money as they would earn in full-time work and have reduced their work hours to match their financial needs (as I have done). Surveys that just ask for a figure of how much is earned add to the pressure to earn as much as possible, which isn’t what’s best for most people.
I have the impression that the questions about “how long have you been doing X” assume that doing so is contiguous. If that’s the case then asking “when did you first do X” and giving a drop-down list-box of the last 20 years to select from would probably be better (more precise and less confusing).
Debian wasn’t listed as a Linux distribution! It only has Ubuntu, RHEL, Fedora, and SUSE.
The question mixing kernels and distributions was a little strange. It gives BSD as a kernel option but no BSD option in user-space, so apparently it’s assumed that there is only one user-space/distribution for BSD (there have been some attempts to release Debian with the various BSD kernels). Also it wasn’t clear what you have to do regarding the kernel to select it (is the couple of hundred lines of patches I submitted to the Linux kernel adequate to let me list it?).
There were a bunch of questions that would get very predictable answers. Do you want to have access to official government web-sites and documents with free software? I guess that they want to get a large number of people requesting such things. Incidentally much of that is in the Greens IT policy…
Immediately below the buttons on the screen to go to the Next and Previous pages there is a link to clear the form and exit. When doing something difficult like sys-admin work it’s expected that a command to wipe out your work will be immediately next to something innocuous, but for something that generally doesn’t require much concentration (such as filling out a survey) it would be good to have the dangerous options a little further away.
At the end of the survey there are questions about whether you want to be contacted about events held by various companies. I think that it would be better to have an RSS feed of such events that we can poll when we have spare time. I’m sure that the PR people running the events are happy when they see a good number of people signed up to their mailing list. But if you actually want to get people to sign up without prior contact the best thing to do is have it on a web site with an RSS feed (either a blog or a CMS) so that it can be polled, syndicated, and googled.