Monday 21 November 2005

Duplicating data one more time

Last week I was reading up a bit on SPF and looking briefly at the other proposals for verifying that email is legitimate rather than spam. So far as I can see, the basic premise of these kinds of solutions is that the recipient (e.g. you) can find out whether an email message is from a legitimate source.

The basis of SPF is that the sender's domain (e.g. nibble.bz in my case) would publish a TXT record that lists all of its valid sending mail servers. I really find this astounding. DNS has been messed with for a lot of things, and now this? The RFCs state that TXT records are not to be used for structured data. I don't know how those bird-brains could promote such trash as being a good idea. "But it's easy!" Well, find me a paper or a book on computer security and I'll show you at least two places that say security comes at the expense of ease of use. Sorry folks, I don't buy SPF.
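For reference, an SPF policy is just a specially formatted TXT record published at the domain. A hypothetical record for nibble.bz (the address is invented for illustration) might look like:

```
nibble.bz.  IN  TXT  "v=spf1 mx ip4:192.0.2.10 -all"
```

Here "mx" authorizes the domain's MX hosts to send, "ip4:" whitelists one extra address, and "-all" tells receivers to reject mail from anywhere else.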

Holistic discussion aside, one of the underlying shortcomings of many existing mail setups, Nibble Net's included, is that the sender is never authenticated in the first place. I use my email from all over the Lower Mainland, and when I send mail I just use whatever the nearest relay is: at work, the SFU mail servers; at home, maybe the Shaw mail servers. Why? Because relaying mail is typically permitted on the basis of IP address alone, not any sort of login.

Now we get into authenticated SMTP. Users within a domain can log in and send mail through one of the hosts within that domain instead of using, say, the SFU mail server. Ah! There's the rub. For a client (my workstation at SFU) to send an email, I use the same protocol to talk to the server as the server uses to talk to the recipient's mail server. To talk to the server here, it would make sense for me to supply a username and password so the server can identify me. Fine, great, swell. But for the server here at SFU to pass a message on to hotmail.com, for example, there's no password involved. No certificates, no nothing. See where I'm going with this? The Simple Mail Transfer Protocol is geared to exchange messages just as readily between a virus-infected PC and a mail server as between your mail program and a server. "You don't know where it's been" couldn't be more true.
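To see how little the client-side fix actually involves: with SMTP AUTH and the PLAIN mechanism (per RFC 2595, as things stood in 2005), the client sends a single base64 blob after the AUTH PLAIN command. A minimal sketch in Python (the function name and credentials are mine, not from any particular mail client):

```python
import base64


def auth_plain_token(username: str, password: str) -> str:
    """Build the initial response for SMTP 'AUTH PLAIN'.

    The cleartext form is: authzid NUL authcid NUL password,
    with an empty authorization identity here.
    """
    return base64.b64encode(f"\0{username}\0{password}".encode()).decode()


# The client would send a line like: "AUTH PLAIN " + auth_plain_token("alice", "s3cret")
```

Of course, this ships the password in the clear unless the session is wrapped in TLS, which is exactly why the challenge-response mechanisms exist.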

Well, SPF, Sender ID, and DomainKeys all try to address the server-to-server side of the problem. The client-to-server side is best solved by forcing users to provide authentication information, which is what I took a stab at on the weekend. All the information for our users is stored in an LDAP directory, so it's best if users authenticate against that. Not so fast, sport. Apparently, we're going to have to use the Simple Authentication and Security Layer (SASL). "That's pretty cool," I think to myself, "I'll be able to use PLAIN, CRAM-MD5, DIGEST-MD5, or GSSAPI (if I had Kerberos installed)."
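As a concrete example of one of those mechanisms: CRAM-MD5 (RFC 2195) lets the client prove it knows the password without ever sending it. The server issues a base64 challenge, and the client replies with an HMAC-MD5 keyed by the password. A sketch in Python (function name is mine):

```python
import base64
import hashlib
import hmac


def cram_md5_response(username: str, password: str, challenge_b64: str) -> str:
    """Compute the client's reply in a CRAM-MD5 exchange (RFC 2195).

    The reply is base64("username " + hex(HMAC-MD5(password, challenge))).
    """
    challenge = base64.b64decode(challenge_b64)
    digest = hmac.new(password.encode(), challenge, hashlib.md5).hexdigest()
    return base64.b64encode(f"{username} {digest}".encode()).decode()
```

The trade-off: the password never crosses the wire, but the server needs access to the cleartext password (or an equivalent) to verify the digest, which constrains how the LDAP lookup can work.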

So I poke around and poke around, and to the best of my searching there are a couple of things going on here. Firstly, SASL is a set of libraries so that clients and servers can perform the various SASL operations, and secondly, Postfix will do SASL authentication against a separate SASL password file. Supposedly, I would have to duplicate all the user data in another password file. Heck, even with system logins and Jive Messenger logins using LDAP, I still have a separate Samba password file and a plethora of htpasswd-type files, and up until I retired NIS there was the yppasswd database as well! Come on, folks, let's duplicate the user info ONE MORE TIME!

So far as I can tell, I will have to set up an additional daemon, saslauthd. The user's email program talks to Postfix, Postfix talks to the SASL daemon, the SASL daemon talks to the OpenLDAP server, and finally the user is authenticated and allowed to send mail. Excitement! Well, if I'm even in the right ballpark, then the setup is actually pretty simple: a couple of options in each of three servers to get strong authentication working. Nevertheless, a task for another day.
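If the chain really is Postfix → saslauthd → OpenLDAP, the configuration would be roughly along these lines. This is a hedged sketch from my reading, not a tested setup; the option names come from the Postfix and Cyrus SASL documentation of the day, and the server names, paths, and base DN are placeholders:

```
# /etc/postfix/main.cf -- enable SASL auth and only relay for authenticated users
smtpd_sasl_auth_enable = yes
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination

# smtpd.conf (Cyrus SASL config for Postfix) -- hand passwords to saslauthd
pwcheck_method: saslauthd
mech_list: plain login

# /etc/saslauthd.conf -- point saslauthd at the LDAP directory
# (saslauthd itself started with the "-a ldap" mechanism)
ldap_servers: ldap://ldap.nibble.bz/
ldap_search_base: ou=people,dc=nibble,dc=bz
```

One catch worth noting: saslauthd verifies cleartext passwords, so only PLAIN and LOGIN would work down this path; CRAM-MD5 and friends would be off the table unless the password store changes.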

Speaking of other tasks, I did finally start trying to clean up the DNS names and SSL certificates. The best convention I can figure for keeping the DNS organized is to give each machine a proper host name (siona.dl.nibble.bz, friday.dl.nibble.bz, etc.) and then give each *service* a hostname to match the service, like imaps.dl.nibble.bz. Then issue a certificate for each service so there's no confusion about what's going on (you're authenticating the service you're accessing), and if a service does get moved to a different machine, the change is transparent to the user.
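In zone-file terms, the convention would look something like this (addresses invented):

```
; hosts get A records
siona.dl.nibble.bz.    IN  A      192.0.2.10
friday.dl.nibble.bz.   IN  A      192.0.2.11

; services get CNAMEs pointing at whichever host currently runs them
imaps.dl.nibble.bz.    IN  CNAME  siona.dl.nibble.bz.
smtp.dl.nibble.bz.     IN  CNAME  friday.dl.nibble.bz.
```

Moving IMAP to friday then becomes a one-line CNAME change, and the certificate issued for imaps.dl.nibble.bz stays valid.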

Anyhow, enough excitement, it's time to go home.

Friday 4 November 2005

Tocaraul now with searching and stuff

After making some modifications, Tocaraul now has fewer bugs, a search function, and functions to delete songs from the request queue (or flush the whole thing). So far, we're doing well. I think that's probably about all the features I can add without making significant changes to how the code is structured.

I would like to break everything up into views (the request queue is one, the library is another, search is a sort of sub-view of the library) so that more and more features can be built into each part of the program without adding confusion to the interface. That smacks of effort, so Fabio and I will see what else we're going to work on. Probably the play history. That would be cool.
