Freevite

Freevite is a web- and email-based invitation and RSVP system, licensed under the GPL.

Details will appear here shortly.

Python

Here are the first three programs I wrote in Python:

Fight the Virus

I’ve been getting hammered with the latest W32/Swen@MM virus like there’s no tomorrow. Usually these things aren’t such a problem for me, but I’ve got nearly 1000 emails in the last day or so, and much of my email checking goes over a 56K connection, and spam filtering only happens once I’ve downloaded the messages. Not to mention that, for whatever reason, SpamAssassin doesn’t recognize the Swen virus emails as spam.

If you’re in the same boat, there’s a good solution. Get mpartinfo2hdr, a tool written just for this purpose: it adds a header line with the md5sum of each attachment. Then add the following lines to your ~/.procmailrc:

 :0fw
 | mpartinfo2hdr.py

 :0:
 * X-Msg-Part-Info:.*b09e26c292759d654633d3c8ed00d18d
 virus

You’ll need to fill in the proper path to the mpartinfo2hdr.py script. Of course, this only works if you are a GNU/Linux user, have Python, and use procmail to filter your email. Don’t try this at home otherwise.

You’ll also need the python email module (Debian package python2.1-email), if you don’t have that already installed on your system.

It’s a great relief to be able to check email in a reasonable amount of time now, though.

Update: a similar option is to just put the following in .procmailrc:

 :0 B
 * ^TVqQAAMAAAAEAAAA//8AALgAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
 * ^AAAA2AAAAA4fug4AtAnNIbgBTM0hVGhpcyBwcm9ncmFtIGNhbm5vdCBiZSBydW4gaW4gRE9TIG1v
 * !< 100000
 virus

This will only catch Swen, of course. The advantage of the former method is that you can quickly customize it to catch emails with an attachment matching any particular md5sum. The advantage of this method is that you don’t need to invoke a separate script for each email that comes in, which would certainly incur a performance hit on a large server.

Shorlfilter

Get the latest version of shorlfilter as a tarball (v0.5, released 9/7/03).


Shorlfilter is a text filter to shorten long URLs using an online redirection database.

Shorlfilter takes all HTTP links in input text longer than a specified length and converts them to short links through the online shorl database. It is particularly handy for email, and can be used as a vim or mutt macro.

See http://shorl.com for more information.

shorlfilter is listed on freshmeat.net.

You can download shorlfilter as a Debian package, or add the following line to your /etc/apt/sources.list to use apt-get to install shorlfilter:

 deb http://bostoncoop.net/adam/debian unstable main
 deb-src http://bostoncoop.net/adam/debian unstable main

You can take a look at shorlfilter, the main script, with nice highlighting; also see the changelog.

CoopOrder

CoopOrder is a free software, web-based buying club cooperative ordering system written by Adam Kessel in Perl 5.8. It is intended to work with United Northeast’s system (formerly Northeast Cooperatives), a bulk whole foods supplier and part of United Natural Foods, Inc., but could conceivably be adapted to other systems as well.

Why don’t you try it and see what you think? I always need comments and bug reports. You should log on as ‘guest’. You have free rein of the system as guest—try everything out; it won’t do any harm at all. You can even submit your order (it won’t really go in).

See the changelog for the latest updates.

There is also a low volume email list to which you can subscribe to talk about CoopOrder or other wholesale food buying club issues.

Here’s a list of some of the more recent features; in the near future I’ll organize this and present it more systematically:

  • Multiple Buying Clubs. CoopOrder can now support many buying clubs, each with its own delivery schedule, pricelist, contact information, message board, splits, etc. Each club has a coordinator who can clear, lock, submit, or archive the order. The coordinator can also add new members to the buying club, and deputize them as coordinators as well. A buying club coordinator can only affect her own buying club—she can’t interfere with anything in any other buying club. Note that both the New Oxford and Brattleboro pricelists are available.
  • Password protection. Although every member by default has no password, you can click ‘password and settings’ and set a password for yourself (except the guest user, who always has no password). If you forget your password, the buying club coordinator can reset it for you.
  • Export to FoodLink Format. I need someone to try this out and see if it actually will import into FoodLink! “Import From FoodLink” is coming soon as well.
  • Better (I hope) user interface all around. Clearer menus. More logical groupings of buttons. And a spiffy logo.
  • Better handling of invoices. This will need some testing, but I believe the program is smarter now about handling out-of-stock items, items ordered but not on the invoice, price changes, random weights, etc.
  • More stable/robust design. This is mostly internal but it should make it easier to maintain and improve CoopOrder.
  • You can now search for things in plural or singular. E.g., “apples” will return anything with “apple” or “apples” in it.
  • Categories are now searchable as a pull down list, so you don’t need to know the name of the category you wish to search in.
  • “Incomplete Splits Only”. If you check this box, you will see splits that still need to be completed within your buying club. It will tell you how much you need to order to complete the split and give you the option of doing it right there. Note that splits are done as fractions in CoopOrder: 0.5 = half a case, 0.25 = one quarter, etc.
  • You can do “google” like searches on description. For example, “apples -juice -pie” gives you everything with apple or apples but not juice and not pie.
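The plural/singular matching and the “-term” exclusion described above can be sketched as follows. This is an illustration in Python, not the actual CoopOrder code (which is Perl), and a real implementation would run against the pricelist database rather than a single string:

```python
# Sketch of the search behavior described above: "apple" and "apples"
# are treated as the same term, and a leading "-" excludes a term.
# Illustrative only; not the actual CoopOrder implementation.
import re


def expand(term: str) -> re.Pattern:
    # Match the term in both singular and plural form.
    base = term[:-1] if term.endswith('s') else term
    return re.compile(r'\b' + re.escape(base) + r's?\b', re.IGNORECASE)


def matches(description: str, query: str) -> bool:
    for token in query.split():
        if token.startswith('-'):
            if expand(token[1:]).search(description):
                return False  # an excluded term is present
        else:
            if not expand(token).search(description):
                return False  # a required term is missing
    return True
```

So “apples -juice -pie” matches a description containing “apple” or “apples” but rejects anything that also mentions juice or pie.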

bin/blog.pm

 #!/usr/bin/perl -w

 use strict;
 use POSIX qw(strftime);
 use Time::Local;
 use URI::Heuristic;
 use Text::Wrap;
 use vars qw/$CGIPath $blogPath $documentRoot $documentURL $sidebarFile $syntaxChecksFile $styleSheet $followup_root $heading $title $admin_email/;

 $CGIPath = '/home/adam/public_html/cgi-bin';
 $blogPath = '/adam/cgi-bin/weblog.pl';
 $documentRoot = '/home/adam/public_html/blog';
 $followup_root = $documentRoot . '/followups';
 $documentURL = '/~adam/blog/';
 $sidebarFile = "$documentRoot/sidebar";
 $syntaxChecksFile = "$documentRoot/syntax_checks";
 $title = "Adam Kessel's Weblog";
 $heading = "Adam Kessel’s Weblog";
 $styleSheet = "/~adam/style.css";
 $admin_email = "adam\@bostoncoop.net";
 $Text::Wrap::columns = 100; # for wrapping HTML

 sub PrintFollowUps {
     my $entry_name = shift;
     my $followup_text = "";
     my $date_string;
     $entry_name =~ s<^$documentRoot/><>g;
     if (-e "$followup_root/$entry_name") {
         open IN, "$followup_root/$entry_name" || return "";
         $followup_text = ">\n";
         while (<IN>) {
             my ($epoch, $url, $comment) = m/^(.*?)\t(.*?)\t(.*)$/;
             $date_string = &EpochToShortDate($epoch);
             $followup_text .= "\n";
         }
         $followup_text .= "
 Linked Responses
 class='responses'>$comment$date_string
 ";
         close IN;
     }
     return $followup_text;
 }

 sub AddFollowUp {
     my $file_name = shift;
     my $url = shift;
     my $comment = shift;
     my $epoch = timelocal(localtime);
     ($url = URI::Heuristic::uf_urlstr($url) and $comment and $file_name) || return 0;
     open OUT, ">>$followup_root/$file_name" || return 0;
     print OUT $epoch . "\t" . $url . "\t" . $comment . "\n";
     close OUT;
     1;
 }

 sub GetMetaData {
     open IN, shift || return;
     $_ = join('', <IN>);
     close IN;
     my %metadata = ();
     my @matches = m{<%(.*)\s*[:=]\s*(.*?)\s*>}gi;
     while (@matches) {
         my $key = lc shift @matches;
         if ($key eq "title") {
             $metadata{$key} = [ shift @matches ];
         } else {
             my @values = split(/\s*,\s*/, shift @matches);
             $metadata{$key} = [@values];
         }
     }
     return %metadata;
 }

 sub GetTopicStringFromMetaData {
     my $topicArray = shift;
     $topicArray or return "";
     my $topicString = "Topics: ";
     foreach (@{$topicArray}) {
         my $topic_filename = &MakeTopicFilename($_);
         $topicString .= "" . $_ . ", ";
     }
     $topicString =~ s/, $//g;
     $topicString .= "";
     return $topicString;
 }

 sub MakeTopicFilename {
     my $topic_filename = lc shift;
     $topic_filename =~ s/ /_/g;
     $topic_filename;
 }

 sub MetaDateToEpoch {
     $_ = shift;
     my ($year, $mon, $mday, $hour, $min) = m/^(\d{2,4})\.(\d{1,2})\.(\d{1,2})\.(\d{1,2})\.(\d{2})/;
     $year < 100 and $year += 100 or $year > 1900 and $year -= 1900; # timelocal wants dates since 1900
     $mon -= 1;
     timelocal(0, $min, $hour, $mday, $mon, $year);
 }

 sub EpochToBlogDate {
     $_ = shift;
     strftime("%A, %B %d, %Y at %I:%M %p", localtime($_));
 }

 sub EpochToShortDate {
     $_ = shift;
     strftime("%D %H:%M", localtime($_));
 }

 sub EpochToDateOnly {
     $_ = shift;
     strftime("%D", localtime($_));
 }

 # Returns the timestamp of the specified blog file, either from the last modified
 # or from embedded metadata (metadata always takes priority)
 sub GetBlogFileDate {
     my $current_file_name = shift;
     my $return_value = 0;
     my %meta_data;
     (-e $current_file_name) || return $return_value;
     $return_value = (stat($current_file_name))[9];
     %meta_data = &GetMetaData($current_file_name);
     if ($meta_data{"date"}) {
         $return_value = &MetaDateToEpoch($meta_data{"date"}[0]);
     }
     $return_value;
 }

 # Despite its name, FastGrep is probably not all that fast;
 # I think something needs to be done to precompile the pattern--although I wasn't able to figure it out.
 # It is passed a search string and the material to search;
 # it parses out the search string by spaces. In order to return true, all terms must appear in the material.
 # (i.e., 'google' type searching)
 sub FastGrep {
     my $search_string = shift;
     my @search_material = @_;
     my $found = 1;
     my $code;
     my @search_string = split(/\s/, $search_string);
     foreach my $current_search (@search_string) {
         $found = 0 unless grep /$current_search/i, @search_material;
     }
     $found;
 }

 sub ShowSearchResults {
     my $search_string = shift;
     my %meta_data;
     my @results;
     foreach my $blog_file (<$documentRoot/*>) {
         my $entry_name = $blog_file;
         open IN, $blog_file;
         push @results, $blog_file if &FastGrep($search_string, <IN>);
         close IN;
     }
     print "
 Search Results
 \n";
     print "
 Sorry, there were no results. You can try a new search if you want. Note that all terms must match; if you want to do an “or” search, try using a | between your search terms.
 " . &StringSearchBox unless @results;
     foreach my $current_file_name (@results) {
         my ($description, $topics) = &BlogItemSummary($current_file_name);
         print &UniversalFormat($description);
     }
 }

 sub StringSearchBox {
     <<EOF
 '$blogPath'
 method='post'>

 'feedbackform'> 'submit' value='Search:' /> 'text' name='search' size='20' maxlength='40' />

 EOF
 }

 sub BlogItemSummary {
     my $blog_file = shift;
     my ($item_description, %meta_data);
     my $blog_timestamp = &GetBlogFileDate($blog_file);
     my @topics;
     %meta_data = &GetMetaData($blog_file);
     return "" unless $meta_data{"title"};
     $blog_file =~ s<^$documentRoot/><>;
     $item_description = "
 $blog_file>" . $meta_data{"title"}[0] . '
 ' . EpochToBlogDate($blog_timestamp) . ' ';
     if (&GetTopicStringFromMetaData($meta_data{"keywords"})) {
         $item_description .= "
 " . &GetTopicStringFromMetaData($meta_data{"keywords"}) . "
 \n";
         foreach (@{$meta_data{"keywords"}}) {
             push @topics, $_;
         }
     }
     return ($item_description, @topics);
 }

 sub UniversalFormat {
     $_ = ">" . join('', @_) . "<";
     my $string = $_;
     while ($string =~ s{<%embed:(.*?)>}{ REPLACETEXTHERE}i) {
         my $embedded_blog_link = $1;
         my $embedded_document = &show($documentRoot . "/" . $embedded_blog_link, 1);
         $embedded_document =~ s{blogtitle}{blogsubtitle}g;
         $embedded_document =~ s{(blogsubtitle.*?>)(.*?)(<)}
             {$1s="blogsubtitle" href="$blogPath?rightframe=$embedded_blog_link">$2$3}g;
         $string =~ s/REPLACETEXTHERE/$embedded_document/;
     }
     $_ = $string;
     s{<%blog:(.*?)>} {
 $1
 $&}g;
     s[<%blogimage:(.*?)>] [${documentURL}
 image_$1" alt="$1" />$&]g;
     s[<%rimage:(.*?)>] [${documentURL}image_$1" alt="$1" class="insetright" />$&]g;
     s[<%limage:(.*?)>] [${documentURL}image_$1" alt="$1" class="insetleft" />$&]g;
     s[<%image:(.*?)>] [${documentURL}image_$1" alt="$1" class="insetcenter" />$&]g;
     s{\s*([^>]*?)>} {$documentURL$1.pdf">PDF version [info]}gi;
     s{\s*([^>]*?)>} {$blogPath?rightframe=$1">}g;
     s{\s*([^>]*?)>} {$documentURL$1">}g;
     s{(
 .*
 )} {WEBLOGPLACEHOLDER}is; # Remove a section, if there is one, to be put back afterwards
     my $preSection = $1;
     s{
 }{
 }gi;
     s{
 }{
 }gi;
     s{(]*[^/])>}{$1 />}gi;
     s{&([^;]*? )}{&$1}g; # Only replace & with & when the & isn't already an HTML escape sequence!
     while (s{>([^<]*?)``(.*?)''(.*?)<} {>$1$2$3<}gs) {};
     while (s{>([^<]*?)"
 ([^"]*?)"(.*?)<} {>$1$2$3<}gs) {};
     while (s{>([^<]*?)`([^']*)'(.*?)<} {>$1$2$3<}gs) {};
     while (s{>([^<]*?\s)'([^']*)'([\s,;\.].*?)<} {>$1‘$2’$3<}gs) {};
     while (s{>([^<]*?)'} {>$1}gs) {}
     s/WEBLOGPLACEHOLDER/$preSection/; # Put back any removed section.
     s/^>|<$//g;
     s{<%(.*?)>} {}g;
     $_;
 }

 1;

syntax highlighted by Code2HTML, v. 0.9.1

MultiZilla

Mozilla just hit critical mass. I had read a fair bit about the XUL architecture and Gecko rendering engine, but I didn’t really get it until recently. David Boswell’s article, Let One Hundred Browsers Bloom, on O’Reilly Network is a good introduction to the myriad possibilities that are now unleashed. 101 things that the Mozilla browser can do that IE cannot, on XulPlanet, is also a good overview of what it’s all about. Ars Technica ran a good review when Mozilla 1.0 was released, although it’s now slightly dated.

One of my favorite applications (or “plug-ins”?) that takes advantage of Mozilla’s extensibility is MultiZilla, which does all sorts of crazy things with tabbed browsing. The webpage describes all this fairly well; my favorite extension is one where you can take a whole set of tabs and “bookmark” them as a “groupmark”, and then later open that groupmark and get all of those tabs together again. I do this to read the news—I have seven or eight news sites I like to read first thing every day, so I’ve groupmarked them. Then I go through each one, opening up new tabs (by middle-button-clicking) on articles I’m interested in. As I finish going through everything in a particular tab, I close it with a mouse gesture and move on to the next one, essentially going left-to-right, new articles leapfrogging to the end.

I suspect most people don’t necessarily want or need to use their web browsers this way, but for some it opens up new ways of interacting with the web. I believe eventually this will lead to new ways of thinking about the whole technology. We’re now at a point, finally, where the web browser can be a platform upon which castles are built. If nothing else, that’s a good thing for competition.

A Cautionary Tale

Here’s a cautionary tale for all you folks.

So I was going along, happily minding my own business, debianizing/defenestrating a few new computers, when I needed to make root/rescue disks. I went to my laptop, got the dd images and typed:

dd if=root.bin of=/dev/hda bs=1024 conv=sync

Boom. That was fast. And the floppy didn’t even spin. Hm…

Now wait a minute—that’s /dev/hda, not /dev/fd0! That’s ME. But I wasn’t root???

 ~> ls -l /dev/hda
 brw-rw----    1 root     disk       3,   0 Mar 14  2002 /dev/hda
 ~> groups
 ...disk...

Uh oh. I suddenly have the feeling of someone you see in the movies where the torso has been severed but they don’t feel it yet. I just overwrote the first 1024K of my hard drive with root.bin.

But everything was still working fine, for the moment. I had overwritten my partition table and part of my hibernation partition, but none of my actual Linux partitions.

So I foundered about for a bit, desperately not wanting to have to back up my whole drive and start over. I called my friend Dylan, woke him up (you never know with mathematicians!), and he gave me some very good ideas.

It turns out my partition table was still in memory, in /proc/partitions.

cfdisk /dev/hda failed (fatal error), but it turns out I could still run fdisk /dev/hda.

So I manually recreated the partition table with fdisk from /proc/partitions (set the partition types), wrote it to disk, reinstalled grub, rebooted and crossed my fingers.

Back to normal!

Try recovering from such a disaster under Windows, and I’ll see you next year.

A few lessons to be learned:

  • Don’t put yourself in group disk! This might be obvious to some of you, but I had added myself in order to burn CDs (I should have created a different group, didn’t think about it at the time).
  • Save a copy of your partition table! I was lucky that, given what I did, the partition table was still in /proc, but googling suggests that a saved copy is sometimes an invaluable recovery tool. Put your partition table somewhere else. There is a tool, gpart, that tries to guess your partition table based on the data, but having the information on hand makes it so much easier.
  • Don’t dd over your hard drive.
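The “save a copy of your partition table” advice can be followed with sfdisk, which can dump and restore a table in plain text. A sketch, assuming a standard Linux setup (run as root, and adjust the device name for your system):

```shell
# Dump the partition table of /dev/hda to a text file; keep the file
# somewhere other than that disk (another machine, a floppy, a printout).
sfdisk -d /dev/hda > hda-partition-table.txt

# Later, if the table is destroyed, restore it from the dump:
sfdisk /dev/hda < hda-partition-table.txt
```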

I guess that’s it. I hope my averted disaster is helpful to some of you.

fetchemusic

Unfortunately, this script no longer works. EMusic has switched to an encrypted RMP format. I’m leaving this script here in case it is ever useful for another project.

Update 10/1/03: Someone has written a very nice perl script that works with the new encrypted EMP file format, called decrypt-emp. Get it now!

 #!/usr/bin/perl

 # fetchrmp.pl - a quick-n-dirty script for parsing EMusic RMP data and
 # fetching entire albums. Requires an EMusic.com subscription. :-)
 # AUTHOR: Doran Barton
 # Modifications: Adam Kessel
 # VERSION 0.9
 # Copyright (c) 2002 Doran Barton. All rights reserved.
 # Copyright (c) 2003 Adam Kessel. All rights reserved.
 # This program is free software; you can distribute it and/or modify it
 # under the same terms as Perl itself.

 my $VERSION = 0.91;

 use strict;
 use XML::EasyOBJ;
 use LWP::Simple;
 use Getopt::Long;
 use File::Path;
 use File::Copy;

 Getopt::Long::config("no_ignore_case");

 my ($opt_help, $opt_destdir, $opt_rmpfile, $opt_folders, $opt_play, $opt_art);
 my $error = &GetOptions(
     'help'      => \$opt_help,
     'destdir:s' => \$opt_destdir,
     'rmpfile:s' => \$opt_rmpfile,
     'folders'   => \$opt_folders,
     'art'       => \$opt_art,
     'play:s'    => \$opt_play,
 );

 if ($opt_help) {
     exit _usage();
 }
 if (!$opt_rmpfile) {
     print STDERR "ERROR: An RMP data file is required (with the --rmpfile parameter)\n\n";
     exit _usage();
 }
 if (!$opt_destdir) {
     $opt_destdir = ".";
 }
 if (defined $opt_play) {
     unless ($opt_play) {
         $opt_play = 'mpg321 -o oss';
     }
 }

 $opt_destdir =~ s</$><>; # Trim any trailing /'s from destination dir

 my $doc = new XML::EasyOBJ($opt_rmpfile);
 my $server = $doc->SERVER(0)->NETNAME(0)->getString;
 my @elements = $doc->TRACKLIST(0)->TRACK;
 my $track_num = 1;
 my ($track, $url, $track_number, $genre, $artist, $album, $current_track);
 my ($album_art, $lowergenre, $lowerartist, $loweralbum);

 foreach $track (@elements) {
     $url = sprintf("http://%s/%s/%s", $server, $track->TRACKID->getString, $track->FILENAME->getString);
     $current_track = $track->FILENAME->getString;
     $current_track =~ tr/A-Z /a-z_/;
     $current_track =~ s/[^a-z0-9_\-\.]//g;
     print STDERR "Getting ", $current_track, "... ";
     if ($opt_folders or $opt_art) {
         $genre = $track->GENRE->getString || "";
         $artist = $track->ARTIST->getString || "";
         $album = $track->ALBUM->getString || "";
         $album_art = $track->ALBUMART->getString || "";
         $loweralbum = $album;
         $loweralbum =~ tr/A-Z /a-z_/;
         $loweralbum =~ s/[^a-z0-9_\-]//g;
         $album =~ tr/ /_/;
         $album =~ s/[^A-Za-z0-9_\-]//g;
         $lowergenre = $genre;
         $lowergenre =~ tr/A-Z /a-z_/;
         $lowergenre =~ s/:.*//;
         $lowergenre =~ s/[^a-z0-9_\-]//g;
         $lowerartist = $artist;
         $lowerartist =~ tr/A-Z /a-z_/;
         $lowerartist =~ s/[^a-z0-9_\-]//g;
         $artist =~ tr/ /_/;
         $artist =~ s/[^A-Za-z0-9_\-]//g;
         $track_number = $track_num++;
         $track_number =~ s/^(\d)$/0$1/;
     }
     my $rv = getstore($url, $current_track);
     if ($rv == 200) {
         print STDERR "OK\n";
         if ($opt_folders) {
             mkpath("$opt_destdir/$lowergenre/$lowerartist/$loweralbum");
             move($current_track, "$opt_destdir/$lowergenre/$lowerartist/$loweralbum/$artist---$album---$track_number---$current_track");
         }
     } else {
         print STDERR "FAILED\n";
     }
 }

 if ($opt_art) {
     if ($album_art) {
         my $rv = getstore($album_art, "$loweralbum.jpg");
         if ($rv == 200) {
             print STDERR "Art download OK\n";
             if ($opt_folders) {
                 move("$loweralbum.jpg", "$opt_destdir/$lowergenre/$lowerartist/$loweralbum");
             }
         } else {
             print STDERR "Art download failed\n";
         }
     }
 }

 if ($opt_play) {
     if ($opt_folders) {
         chdir "$opt_destdir/$lowergenre/$lowerartist/$loweralbum";
     }
     `$opt_play *mp3`;
 }

 sub _usage {
     print STDERR
         "This is $0 version $VERSION\n",
         "Usage: $0 --help \n",
         " or: $0 [--destdir DIR] --rmpfile FILE [--folders] [--play [mpeg player]] [--art]\n\n",
         "--folders puts track in folder hierarchy based on genre, album, and artist under destdir (or current directory if not specified)\n",
         "--play plays music when done downloading with specified command line (or mpg321 -o oss if not specified)\n",
         "--art downloads the album art if available and places it with the music\n\n",
         "Copyright (c) 2002 Doran Barton. All rights reserved. Modifications copyright (c) 2003 Adam Kessel.\n",
         "This program is free software; you can distribute it and/or modify it\n",
         "under the same terms as Perl itself.\n";
     return 1;
 }