Sunday 15 December 2013

MySQL Replication

MySQL replication is very flexible for running multiple servers. Most MySQL administrators should already have a copy of "High Performance MySQL"; if you don't, get this book, as it is top notch and will guide you through most configurations imaginable. Rather than repeat what is already written, here are a few things I've found that can help make things simpler.

There are a few things with replication you need to be attentive to.
  • Set a unique "server-id" for each server
  • Keep your binary logs (and relay logs) with your databases
  • Initialize your slave cleanly
  • Run your slave "read-only" 
  • Use "skip-slave-start" if the slave is used as a backup

Figure out a scheme to create a unique "server-id". If your database hosts all have a unique number in their name, you could use that. I've been using their IP addresses converted to decimal (IP Decimal Converter). Even if you aren't thinking about master-master or multi-slave configurations today, setting this now will save you the trouble of having to reinitialize everything later.
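
For example, a quick way to turn a host's IPv4 address into a decimal server-id (the address here is made up):

# hypothetical host address 10.0.3.21
echo 10.0.3.21 | awk -F. '{ print ($1 * 16777216) + ($2 * 65536) + ($3 * 256) + $4 }'
# prints 167772949, which fits comfortably in server-id's 32-bit range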

Unlike best practice on other platforms like SQL Server, with MySQL it is simpler to keep the binary logs with the databases. For example, if you initialize slaves with LVM snapshots, capturing the binary logs and the databases in one snapshot is best, and moving binary logs later is a challenge. So for "log-bin" and "relay-log", set these to a file name only and not a full path ("mysql-bin" and "<hostname>-relay-bin").
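
A minimal my.cnf sketch of those settings (the server-id and hostname here are placeholders):

[mysqld]
server-id = 167772949
log-bin   = mysql-bin
relay-log = dbhost01-relay-bin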

Create a script to (re)initialize a slave. If your environment is on the messy side and there are a lot of strange things that people are doing in the databases, your slave is at risk of getting out of sync with the master. I would suggest re-initializing the slave(s) as often as possible, say monthly. Even if your environment is pristine, you will inevitably make changes from time to time and will want a consistent tool for initializing slaves. Again, "High Performance MySQL" has several good ways of doing this. LVM snapshots and rsync are what are in my scripts. The Percona Toolkit (which absorbed the former Maatkit) also has some good tools for you.
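
A very rough sketch of the LVM-plus-rsync approach (the volume group, sizes, paths, and hostnames are all assumptions):

# 1. In a mysql session, hold a read lock and record the binlog position:
#      FLUSH TABLES WITH READ LOCK;  SHOW MASTER STATUS;
# 2. While that session stays open, snapshot the volume holding the databases
#    (and the binary logs, since they live together):
lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql
# 3. Back in the mysql session: UNLOCK TABLES;
# 4. Copy the snapshot to the slave (with its mysqld stopped), then clean up:
mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a --delete /mnt/mysql-snap/ slave1:/var/lib/mysql/
umount /mnt/mysql-snap && lvremove -f /dev/vg0/mysql-snap
# 5. On the slave: CHANGE MASTER TO the recorded file/position, then START SLAVE.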

Running a slave "read-only" ensures that only privileged users (those with SUPER) and the replication thread can make changes, which helps protect the slave's integrity. If you want a writable slave, i.e. one allowed to drift out of sync with the master, why use replication at all? Load data as needed and skip the whole replication thing. Even so, a very bad schema (like tables with no unique keys) is still susceptible to falling out of sync, as replicated transactions may not behave consistently. There may be other reasons for slaves to fall out of sync, but this has been the only problem in my environment, so I can't attribute out-of-sync issues to anything else.

For a server that is intended to be the backup of the primary, "skip-slave-start" is important for the times you actually use the backup server. It means that every time you restart MySQL you have to manually issue "START SLAVE", which prevents the backup server from pulling transactions from the primary when you don't want it to, such as after you have made a cut-over and are trying to restore the primary.
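
A minimal sketch of the slave side, building on the config above:

# in the slave's my.cnf
read-only
skip-slave-start

# after a deliberate restart of the backup, start replication by hand and check it
mysql -e 'START SLAVE'
mysql -e 'SHOW SLAVE STATUS\G'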

Wednesday 27 November 2013

Monitoring Network Traffic at Home

Since the news last week about LG "smart" TVs ignoring privacy settings and sending all your viewing and media information to LG (BBC News: LG investigates Smart TV 'unauthorised spying' claim), I started looking at increasing the monitoring of network activity at home to see what my Wii, Sony Blu-ray player, and other devices are up to.

Virtually any router allows you to enable SNMP, which is enough to collect aggregate interface traffic. I have been using Cacti to record traffic for years.

What I've found is that DD-WRT has something called "rflow" to send live packet information to a monitoring server, an equivalent of Cisco NetFlow. The Network Traffic Analysis With Netflow And Ntop guide is very good, and on Ubuntu the ntop server is readily available. This gives a live view of which systems are connecting to what, how much traffic is passing over different protocols, and who the top users are. Great if there's any question of who is hogging all the tubes.
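
On the collector side it's roughly this (the listening port is whatever you configure; 2055 is the conventional NetFlow port, and you point DD-WRT's rflow at this host in its web UI):

sudo apt-get install ntop
# then enable ntop's NetFlow plugin and have it listen on the port rflow sends to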

But that's not enough to tell if your "smart" DVD player is reporting to Sony that you enjoy "midget porn" in the privacy of your own home (I'm not judging; that was the example in the BBC article). For that, I will need to look at some bigger iron like Snort to really go whole hog.

Wednesday 17 July 2013

Simple web page for generating passwords

I find it annoying not to have APG handy when I want to create a password. I also don't like using random websites for this, because I can't trust that they aren't logging their output. So I put up a simple little form that uses PHP to invoke APG to create passwords. My form right now is very simple and doesn't support all of the APG options, but it will do:

https://alia.thenibble.org/passwords/

Here's the simple code I'm using:

https://docs.google.com/document/d/12CViQAEW6q2moZ4GtkJllsD0-8pazpVMfeYjlrBz-4s/pub
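
Under the hood it just shells out to apg; the command the form wraps looks something like this (the flags here are illustrative, not exactly what my form uses):

apg -n 5 -m 12 -x 16 -M SNCL

(-n is the number of passwords, -m / -x the minimum / maximum length, and -M SNCL requires special, numeric, capital, and lowercase characters.)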

Monday 15 July 2013

Use ownCloud for RSS feeds

Sorry, hosted services; I will be using my ownCloud with the "news" plugin for my feeds.

Whipping Up a Quick Site

Recently I helped set up a quick site, along the lines of a proof of concept for a community-type site. We decided to use Google Sites, and we were very well rewarded with the richness of what is readily available once we got a bit used to how everything is put together.

I feel like there is almost no technical knowledge required; it just takes a bit of getting used to. Not much, but some. When you are first trying to lay out your site, you have to poke around for a bit to figure out whether things need to be changed on the page, the page layout, the page template, or the site layout. I won't do a mock-up here, and the site we put together will get taken down too soon to be used as a reference, but as long as you have some idea of the layout you want, you can make it in Google Sites. Sketching it out on paper works just fine. There will be a couple of places where some HTML knowledge is useful - not necessary, but useful if you want something to align or look a very specific way.

Now for the cool stuff.

Site templates - there is a bounty of free site templates, everything from a generic Pokemon theme to a complete Strata community site with calendars, council meeting minutes, etc.

And if you can't readily find a site template that helps get you started, it is super easy to put a site together even from scratch. Managing the site layout and using page templates makes building your site very fast. You can get the look and feel you want for your pages together quickly so you can work on the content. A handy trick with page templates is that you can give them a default "parent" page so you can quickly create many pages in one section.

In your site layout, you can have a main navigation bar. You put the nav bar together manually, adding your main pages and sections. When you put sub-pages in the main nav bar, they become nice little pull-down menus. It sure beats the stock left-hand nav section, which lists all your pages in alphabetical order - get rid of that ugly thing.

And then there are the widgets. The best integration on Google Sites is obviously with Google's other products: Calendar and Docs. An event calendar or any other calendar can be dropped right into your site, and then the regular sharing rules apply.

One of the features of Google Docs (or is it Drive now?) that is particularly useful for your site is "forms". You can create contact forms, polls, and probably a lot more than I've seen in my quick tour.

The contact form is interesting because you can set the form responses to send notifications when it is filled out. This gives you a stock contact form so you don't have to put an email address on the site, and it only takes a minute or two to put up.

Any information you're gathering, like from a poll, comes with rich analytics out of the box. The form responses have full reports on response selection and trending, which you can publish or not as appropriate.

Lastly, I will mention that you can of course use your own domain for the site. Anonymous visitors to your site will see the custom domain. Users logged in to Sites will instead be redirected to sites.google.com/a/sitename/page/blah/blah/blah...

Super quick and easy. 

Google Sites is "free" so remember "If you're not paying for the product, you are the product."

... And even if you pay for the product, you may still be the product.

</rant>

Saturday 15 June 2013

Get your ownCloud

I've recently moved some of my "cloud" files to ownCloud. There are some files we all store in the cloud which we probably shouldn't: password files, financial information including taxes, and maybe some dirty laundry too. For these, you don't need "anywhere access"; you probably just need convenient access between your own computers, and the backup afforded by having the files stored on both your desktop and your laptop is probably enough.

OwnCloud is your own cloud: you install it on some station to be the file repository - a workstation, an Ubuntu server, whatever you have. The stock package in Ubuntu 12.04 is the much older 3.0.x. You can use this with the older sync clients, but it doesn't have file versioning and other features of the newer ownCloud versions, so you can add the ownCloud software repository from their site instead. I assume that setting up ownCloud on Windows is a bunch of double-clicking; that seemed like too much effort, so I didn't bother.
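
On Ubuntu that amounts to roughly this (the repository URL below is illustrative - take the current one from the ownCloud site):

echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/xUbuntu_12.04/ /' | sudo tee /etc/apt/sources.list.d/owncloud.list
sudo apt-get update && sudo apt-get install owncloud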

There isn't much configuration; once it's up, your first hit to the web page does the setup, which includes creating an administrator account, ... That's it. After logging in there are more options, like requiring SSL if you want.

There are also a lot of plugins / apps for enabling calendar / contact sync, integrating authentication with LDAP, using external storage, and more than you can imagine.

Since you likely have a hard drive at least 500GB in size, if not multiple TB, this is automatically a much larger repository than you'll get without a monthly bill from cloud services. It's pretty cheap too: a new drive should be under $0.10 / GB these days, whereas Dropbox is about that per month.

Like any of the cloud services, you can access your files via the web interface.

There is a "sync client" for PCs and that includes Windows, Mac, and the major GNU/Linux distributions (CentOS/RHEL, Fedora, openSUSE, Ubuntu, Debian). There are also mobile clients but I haven't tried them.

There's ownCloud for you. You can totally replace the big corporate-run cloud services if you want, or just complement them.

Thursday 11 April 2013

Cron scheduling for first Sunday of the month

Everyone who uses cron is familiar with the job schedule form:

min hr day-of-month month day-of-week <command>

A problem with cron job scheduling is if you want to schedule something, like backups or updates, for "the first Sunday of the month".  The job spec "0 0 1-7 * Sun" will run every Sunday and every day on the 1st to the 7th of the month.

The way to work around this is to schedule the job for the possible days to run and then, as part of the command, check the date before running the command.  I've just seen what is The Best format for this (with the % escaped, since cron treats a bare % specially):

0 9 1-7 * * [ "$(date '+\%a')" = "Sun" ] && <path>/script.sh

This solution comes from LinuxQuestions.org user kakapo in the post here:

http://www.linuxquestions.org/questions/linux-software-2/scheduling-a-cron-job-to-run-on-the-first-sunday-of-the-month-524720/#post4533619

Up until now I used a slightly different form of this, putting the day of the week in the cron spec and then testing "date +%d" for the day of the month.  But the above form is far clearer and easier to schedule jobs with.
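
For comparison, the old form looked something like this (the script path is a placeholder, as above):

0 9 * * 0 [ "$(date '+\%d')" -le 7 ] && <path>/script.sh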

So props to kakapo for sharing that one-liner; until cron changes how the day-of-the-month and day-of-the-week fields are combined, this will be the best way to schedule a job on the first Sunday of the month.

Friday 5 April 2013

Nagios and SensorProbe

Nagios rules all: it has great agents and plugins, and it can even check clusters.  One of the things we recently got at #dayjob is a generator for emergency power.  Well, how do you know if the generator is running, is healthy, etc.?  Sure, there are routine checks that someone has to do weekly, monthly, seasonally, etc., but how do you know what's going on in real time?

Like much machinery, generators don't always have a nice web-based monitoring system or even an SNMP interface - they have "dry contacts".  A dry contact, for those of you like me who think a voltmeter is something used by parking enforcement, is a simple electrical circuit which is either "open" and not passing current or "closed" and passing usually 5 volts.  Physically it is usually something like a screw terminal that you run a sensor lead to.

Okay, so no IP interface.  What do you use?  Well, we're using these AKCP SensorProbe devices, which come in a variety of shapes and sizes.  For our generator we used the SensorProbe2DC, to which you can connect two sensors supporting 5 dry contacts each.  It's a little IP device you need to feed a data run and power to, and then you screw the leads from the sensor into the dry contacts on the generator - your installer can help with the latter.

The SensorProbe is a monitoring device in its own right, so you can configure alerts and view status from there.  But the other thing is that it has an SNMP interface, so you can now monitor your generator status from your regular network monitoring system, by which I mean of course Nagios.  Good old check_snmp: tell it which OID is which and you're off to the races!
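
A rough sketch of the Nagios side (the community string, expected value, and OID are placeholders - the real OIDs for each dry contact come from AKCP's MIB):

define command {
    command_name  check_dry_contact
    command_line  $USER1$/check_snmp -H $HOSTADDRESS$ -C public -o $ARG1$ -r $ARG2$
}

define service {
    use                  generic-service
    host_name            sensorprobe-generator
    service_description  Generator running
    check_command        check_dry_contact!<OID from the AKCP MIB>!Normal
}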

So now we've got in Nagios a host (the SensorProbe) with service checks telling us whether the transfer switch is on mains or generator power, whether the generator is running, whether the generator is set to "auto start", and whether it is having any other problems.

AKCP makes various other SensorProbe devices.  The SensorProbe8 has 8 ports to which you can connect various sensors for temperature and humidity, airflow, water detection, etc., or single-port dry contacts.  If you look hard, you'll find that your AC units and other equipment also have dry contacts.  Avid readers who crack open their equipment manuals will also find that dry contacts can be used as output triggers, not just for receiving status.  Once you have a lot of dry contacts, you can check out the SensorProbe8 x20 and x60, which come with a whole lot more dry contacts to check everything in your datacenter.

Mmm, generator power, yum.

Tuesday 19 March 2013

Autofs and a couple tricks with NFS shares

Autofs is great, especially for NFS mounts between systems.  Autofs will mount file systems on demand and then un-mount them again when they are not needed.  This is an especially nice trick where servers are sharing NFS shares, compared to putting the shares in fstab, which mounts the file systems at boot and can put you in a deadlock where server A is waiting on a share from server B and server B is waiting on a share from server A.  A common example is mounting /home from a shared location: no other server can boot until the home file server is up, and deadlocks are bad, um 'kay?

"yum install autofs" as needed, use your package manager of choice.  Once you fire up the automount daemon, you can poke around in the configuration.

The first thing you'll want to try is enabling the /net (or /nfs) -hosts line in auto.master.  This will create a multi-mount under /net where you can access shares via /net/<hostname>/<share path>.
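
In /etc/auto.master it's a one-liner, usually shipped commented out:

/net    -hosts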

That's really it.  You can directly access files, for example in /net/homefs/data/home/username.  Automount will automatically mount that share when it is accessed and remove it after it's been left idle.

What's it doing?

First, it will look up shares for you.  When you do `ls /net/hostname`, it will return a list of shares available from the server "hostname".  It's really that easy.

Any access to a share gets it automatically mounted.  A shared folder, call it "work", will be mounted on the first user request for a file from that share, whether that's someone running 'ls' or writing a file to a path inside it.

For convenience, my first tip is to use symlinks to point to specific shares or locations.  Rather than using the path "/net/workfs/export/work", do something like `ln -s /net/workfs/export/work $HOME/work`.  Another example is shared home directories, super common in UNIX environments: link /home to /net/<server>/path/to/shared/home.  It will bring up the shared home directories when a user tries to log on and will unmount the home share after the users are logged out.

This brings me to my next tip: the automatic timeout.  Autofs will un-mount a share after an idle period - 10 minutes by default (depending on your platform).  This may cause you some grief, for example if you have a job that runs every 10 minutes.  You can adjust this timeout based on your needs (either up or down), but unless you're running something on a schedule every 10 minutes, you probably won't have much of a problem with the default.
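
The timeout can be set per map in auto.master; for example, to let mounts under /net sit idle for 30 minutes before being unmounted:

/net    -hosts --timeout=1800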

You can also tune your NFS mount options in your auto.master config file.  The defaults should work for most systems, but if you do have to do NFS tuning, you can do it with autofs, either for all NFS mounts or for specific mounts as needed.

The interesting thing with autofs is that you don't have to use it just for network shares; you can use it for regular devices as well.  The main point again is that anything in automount is mounted on demand and not at boot time.  So if you have a large data volume, you can add it to the auto.misc file instead of fstab.
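
For example, an auto.misc entry for a local volume, mounted on demand under /misc/data (assuming the /misc map is enabled in auto.master; the device name is made up):

data    -fstype=ext4    :/dev/vg0/data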

From time to time autofs can get a bit mucked up, usually when "you screw around too much".  If stopping autofs and unmounting any left-over shares doesn't work, remember "umount -f -l", a forced lazy unmount.  Very useful.  If the folders don't go away, like /net/server/path/to/share, try umount on them again and then remove them (rm -Rf) - just be careful.  It depends on whether rebooting is more risky than an rm -Rf.

Wednesday 27 February 2013

Useful trick for sending HTML email

Sometimes you want to send script output as an HTML email.  There are a couple of ways to get the HTML page to the recipient: you can attach the HTML page, or you can set the Content-Type to HTML.  In my case, we're looking at scraping a web page as a cron job and sending it to some recipient(s) - the mainstay is:

curl 'http://server/web/page.html'

Attaching an HTML file is safe, but recipients may not like "opening attachments".  For HTML email the security risk is the same, but there's a perception that "opening attachments is bad", which I wouldn't discourage as a general practice.  Rambling aside, "uuencode" will encode an attachment - any attachment - and can be used in general (word doc, zip file, etc.).

curl 'http://server/web/page.html' | uuencode attachmentname.html | mail -s "An HTML page attached" some-recipient@example.com

The other way is to set the content type to HTML and the HTML becomes the body of the email.  On some operating systems, namely Debian and Ubuntu, the mail / mailx command can add a header with the -a switch.  This is pretty simple.

curl 'http://server/web/page.html' | mail -a "Content-Type: text/html" -s "An HTML email" some-recipient@example.com

However, if you are on a Red Hat / Fedora / CentOS system, your mail command does not support the -a switch.  Here you can use mutt instead, and the mutt method will work in general:

curl 'http://server/web/page.html' | mutt -e "my_hdr Content-Type: text/html" -e "set charset=\"utf-8\"" -s "An HTML email" some-recipient@example.com

There you have it.  Personally I'm not a fan of HTML email (since it opens the door to a lot of malware attacks), but if you've got to generate it, using standard tools is going to be much simpler than writing your own perl script to wrap scraping a web page and generating an email.

Thanks to the "telinit q" blog for helping with this answer.  http://blog.telinitq.com/?p=137

Tuesday 19 February 2013

Blocking applications with AppLocker

I've just been in a situation where there was a particular user whom we wanted to give some access to but whose general access we needed to limit.  In Windows 7 and Windows Server 2008 R2 you can do this with "AppLocker" in a very clear way.  AppLocker sets rules that look much like firewall rules, allowing or denying the ability to run different programs, and this can be controlled either locally or through Group Policy Objects.

For example, you have a consultant helping you with your new ERP system (just saying).  They need to launch the ERP application but you really don't want them firing up a browser or the RDP client and checking things out on your network.

Getting started with AppLocker is pretty simple:

  • Launch the local group policy management tool
  • Enable audit-only mode initially for exe/dll control
  • Create the default rules to allow basic or general access (if applicable)
Then you want to create your specific allow / deny rules.  The AppLocker rules are a collection of rules stating whether they allow or deny, who they apply to, what type of matching they use (path or publisher), and the actual match.  So you might have a rule like:
  • Allow
  • Consultants
  • Path
  • Program Files\ERP\bin\*
If the consultant only matches this one rule, they will be allowed to launch binaries in the ERP's installation path and will be blocked from anything else.

The first thing to do is set your rules to audit-only, which creates event logs for all access that is controlled by AppLocker.  You can test out your rules very easily this way, as there will be two types of events to look for: "access granted" and "access granted BUT AppLocker rules will block this when set to enforcing".  Once you are satisfied that you are not going to cut off all access for your regular users and that you are locking down the consultant sufficiently, switch to enforcing and you're golden.

Or, for another example, maybe you just want to block an out-of-date version of Acrobat Reader from running on your network.  You can set a rule to deny the "Acrobat" publisher's "Acrobat Reader" program from running when the version is "9.0 or older".  Again, easy to test using "audit only" before setting enforcing.

Looks like a simplified AppArmor or SELinux, maybe?  Honestly, I never made much progress with SELinux.  I would figure out how to get something working, then wouldn't use it for a while, forget how to work with SELinux, and have to start all over again.
