Friday 7 December 2012

End of Free Google Apps

It's official - there's no more free Google Apps.  Google started by limiting free GA accounts from 50 users, then to 10, and now to 0.  "thenibble.org" was registered in the magical times of 50 free users per domain and has been grandfathered in, but there will be no new free GA accounts.  Domain aliases and other features can still be added to existing accounts; however, I suspect it's only a matter of time before Google stops porting "Enterprise" features into the free service.  Fingers crossed!

Official Google Enterprise Blog: "Changes to Google Apps for businesses", posted by Clay Bavor, Director of Product Management, Google Apps.

Sunday 4 November 2012

Simple Shared Storage

Many cluster applications require, or significantly benefit from, shared storage - so what is the easiest way to do it?  Whether you want High Availability in VMware or distributed computing on a Beowulf cluster, sharing storage across nodes is useful if not essential.

The first question you need to ask is whether you can do direct connect or not.  Directly connected storage is going to be simpler and more reliable, but it limits the number of connected nodes and the distance from the nodes to the storage.  SCSI or SAS storage is going to be fast and reliable.  Dell sells an array, the PowerVault MD3000/3200 series, which lets you connect up to 4 hosts on each of 2 controllers - so either 4 hosts with redundant connections or 8 hosts with non-redundant connections.  The limit of a SAS cable run is 10m (steadfast.net), so if you need longer runs, you start looking at Fibre Channel or Ethernet connections.

Fibre Channel is well established, and with link speeds at 4Gbps and 8Gbps these days, it's fast.  But it's expensive and suffers from the complexity issue that unless the stars align and you sacrifice a pig, nothing will work.

iSCSI is surprisingly popular.  Encapsulating SCSI devices and commands inside IP packets seems like a lot of overhead, and it really is.  ATA over Ethernet (AoE) makes more sense since you only go as far as layer 2 - what kind of mad person would run shared storage across separate subnets that need to be routed?  It all just seems like a big slap in the face for storage when you could use a regular file server protocol like NFS.

Ah, NFS, you've been around for a while.  Frankly, it doesn't get any easier or cheaper than NFS.  Any system can share NFS - every UNIX flavour and GNU/Linux distribution will come with an NFS server out-of-the-box.

It's a one-liner config and you're off to the races.  Put LVM underneath and you can expand storage as needed and take snapshots; heck, add DRBD if you want replication.
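For example, a minimal export - the path and subnet here are placeholders for whatever your environment uses:

# /etc/exports - share /srv/vmstore read-write with the cluster subnet
/srv/vmstore 192.168.10.0/24(rw,sync,no_root_squash)

exportfs -rav   # re-export everything and show what changed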

You can run NFS (or iSCSI) on any Ethernet connection from just sharing the main interface on the system to dedicated redundant 10Gbps ports.

NFS is well established, with plenty of tuning options to work around many of the limitations of TCP.
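By way of example, a client-side mount with a few of the common tuning knobs - the filer hostname and paths are made up:

# bigger read/write sizes over TCP; hard retries so I/O blocks rather than errors out
mount -t nfs -o rw,hard,intr,tcp,rsize=32768,wsize=32768 filer:/srv/vmstore /mnt/vmstore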

For backups of the data, take your snapshot (LVM or VMware, etc.) and a plain old cp / rsync will be sufficient to get the files you need off there.
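Something like this with LVM - the volume group, sizes, and backup host are all placeholders:

# snapshot the volume, mount it read-only, copy the files off, clean up
lvcreate -s -L 10G -n vmstore_snap /dev/VolGroup01/vmstore
mount -o ro /dev/VolGroup01/vmstore_snap /mnt/snap
rsync -a /mnt/snap/ backuphost:/backups/vmstore/
umount /mnt/snap
lvremove -f /dev/VolGroup01/vmstore_snap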


And there's one other benefit of NFS over the others - if you have a lot of concurrency, like many virtual machines or compute nodes accessing storage simultaneously, the block storage protocols (usually) give you just one lock per storage device, whereas NFS locks per file.

But NFS will certainly require at least the addition of a switch, potentially several, plus some complexity around redundant network links and so on - it can suffer from the same complexity issue where, unless the stars align, you're going to have a bad time.
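The redundant links at least aren't hard on the Linux side.  A RHEL-style bonding sketch, with the interface names and addressing assumed:

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.10.5
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none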

So in short, NFS is hella awesome and you should use it wherever you can.  That said, depending on the scale of what you're doing, directly connected storage is going to be simpler, more reliable, and possibly cheaper.

Tuesday 7 August 2012

fg && exit

Found another way to abuse command chaining...  I had a long-running task (e2fsck) going under screen and wanted to chain some other commands after it (mount -a && exportfs -rav), but couldn't restart the first command.
  1. Use ctrl+Z to put the job on hold
  2. fg && <more commands> to bring the job back into the foreground with the rest chained behind it
  3. Shazzam!
So naturally I put && exit on the end there again to roll out of the shell when the command completed.

[1]+  Stopped                 e2fsck -p -f -v /dev/mapper/VolGroup01-project
[root@palmberg ~]# fg && mount -a && exportfs -rav && exit
e2fsck -p -f -v /dev/mapper/VolGroup01-project

Sunday 22 July 2012

Nagios check_cluster

The other day, we got an escalation from Nagios in the middle of the night that email was down.  Looking at the system, I quickly found that while, yes, one SMTP relay was down, the other was up.  So how do you monitor services where it takes multiple failures before service is actually disrupted?

check_cluster

This is a service check which doesn't check a service directly, but instead checks the results of other service (or host) checks.  The documentation is pretty clear on how to set this up.

So where before I had been checking two SMTP services and escalating to SMS on each, I still have the two SMTP checks, but added a check_cluster service which looks at the results of both and only goes critical if all the SMTP services are down.  Then I escalate based on the check_cluster check instead of the check_smtp ones.
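Here's a sketch of the two pieces, following the documented pattern - the smtp1/smtp2 host names and the generic-service template are stand-ins for your own:

define command {
    command_name  check_service_cluster
    command_line  $USER1$/check_cluster --service -l $ARG1$ -w $ARG2$ -c $ARG3$ -d $ARG4$
}

# warn when 1 SMTP service is non-OK, go critical only when both are
define service {
    use                  generic-service
    host_name            monitoring
    service_description  SMTP Cluster
    check_command        check_service_cluster!"SMTP Cluster"!1!2!$SERVICESTATEID:smtp1:SMTP$,$SERVICESTATEID:smtp2:SMTP$
}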

Friday 8 June 2012

LeakedIn

It's making the rounds, but it's just so much fun to get a raw password file.  With the recent password compromise at LinkedIn, you can readily find a copy of the raw file posted online and check if your password is in there.  And there's a site if you want to just punch in guesses:

http://leakedin.org/

It's pretty fun ;)

Nothing else to say here, really.  I've posted before about storing passwords.  Use a password manager, generate a random password for each site, and make a large haystack from a short password by using arbitrary but simple patterns to extend its length - e.g. pad "k3y!" out to "k3y!....////....////" and you've gone from 4 characters to 20.

"lollerskates" is in that cracked file :P


Saturday 7 April 2012

Windows Server 8 Proudly Joins 5 Years Ago

I've been poking around a bit to try to get some idea of whether Windows 8 might be an enticing upgrade in the workplace.  I haven't been following too closely, so there may be features I'm missing, but here's what I've found so far.

On the desktop - no benefit.  The new Metro UI is the biggest change, and it's primarily a touch-screen friendly interface suitable for tablets or smartphones.  Not at all for a "working" desktop where you might want to do more advanced tasks such as using a word processor or a spreadsheet.  And since Microsoft's plan to compete in the smartphone market is to first surpass BlackBerry, mostly by waiting for RIM's demise as BlackBerry use hits 0, Metro is irrelevant before even getting out the door.

On the server - a lot of big benefits, well worth the upgrade.  Windows Server 8 looks to be a big step forward from the late 20th century and into the early part of the 21st.  This PC Mag article from the fall gives a pretty good breakdown.  I won't re-hash the author's well-written piece; I'll just go for the jugular here.

  1. "Intellisense Powershell" lets administrators auto-complete in PowerShell.  Big benefit, must-have, and has been readily available to bash and zsh users in Linux-based operating systems for a long long time.  Seriously - get real!  Microsoft has only just started down the road of a headless server OS path where automation can truly scale out operations and they have a lot of ground to cover.  This is our first example of the Microserfs pulling their heads out of <the ground> and look at what's going on outside their <world>.
  2. "Live Migration" lets Hyper-V guests be moved without disruption to new hosts.  I can't honestly say I'd touch Hyper-V without some sort of hazmat suit on.  Seriously, this is a "new" feature for Microsoft?  VMware vmotion has be been doing this for vmware customers for a while.  Yes, Hyper-V is free and VMware is paid, but with Hyper-V you're not getting your money's worth.  Maybe if VMware doesn't innovate at all for the next decade, Hyper-V will catch up enough to make it a viable option for anything other than a test lab or party tricks.
  3. "NIC Teaming" oddly I don't consider a "must-have", but this would be a feature possibly 20 years or more behind the times.  Hardware independent NIC teaming for bandwidth agreggation and fault tolerance has been the norm on any network operating system outside Microsoft Windows since, well, forever.  Where MS admins have historically depended on NIC vendors' drivers to provide this functionality to date, there at least is a path to do this in Winblows so though this is an important feature, I wouldn't buy Windows Server 8 specfically for it.
  4. "Claim Definitions" is a feature that allows sensitive files to be tagged as confidential, for example and access can be based on these "claim definitions".  I have no gripe here - sounds like "access control lists" based on tags.  I'd like to see how flexible this functionality is but even as-is can be an important tool under Windows 8.
  5. "Flexible Deployment" means that you can install Windows Server "core" (the stupid headed "headless" install we know from Windows Server 2008) and then, wait for this shocker....  Upgrade to full at a later time.  #facepalm   I mean, seriously?  You've got Ubuntu users who do in-place one-click upgrades across major versions, RedHat Enterprise Linux admins who will generally install headlessly just to get a box up and then add in all the features including the GUI in their default software package manager tool, but Windows Server users are only now going to be able to add the full Windows install into core without reformatting?  Maybe with Windows 25 in the year 2050, Microsoft will shock and amaze us by letting their users get software updates for their application all from a common update utility rather getting random prompts every other day to update all the plethora of third-party applications and utilities they have installed just to make their computer usable (actually, this will never happen...).
In summary - looking forward to Windows Server 8.  Maybe I'll get myself a pager again to relive life in the pre-iPhone era.



Wednesday 29 February 2012

Rolling out of nested shells

I just realized that if I'm really lazy, I can stick && exit after everything to dump me out of all my nested shells after a program completes.
ssh <whateverhost> && exit              # leave the local shell when the ssh session ends cleanly
sudo -i && exit                         # leave the user shell when the root shell exits cleanly
for item in list ; do someprocessing ; done && exit   # bail out once the loop finishes successfully

Monday 27 February 2012

PC apps are dead?

I've been looking around from time to time for an app which would let me scan books from our collection at home and build a digital library - most useful for keeping track of loaned books.  I never found much on a PC, but I did find this for Android:

https://market.android.com/details?id=com.eleybourn.bookcatalogue&feature=also_installed#?t=W251bGwsMSwxLDEwNCwiY29tLmVsZXlib3Vybi5ib29rY2F0YWxvZ3VlIl0.

In short - yes, apps for the desktop seem to be pretty much dead.  I can't think of the last time I found a usable new desktop application.  At most, it's browser plugins like Nagios Checker.  There are some system management applications with "rich" clients, like InterMapper, but generally it's all web UI.

As Martha says, "It's a Good Thing".

Sunday 29 January 2012

Charting Systems Using Cacti

There are a lot of great monitoring tools out there.  I've posted many times before about Nagios, and I could post still more on that great tool, but it's not the only one I use.  Another is Cacti, an excellent tool I've also mentioned before, used mostly for graphing system resources.

Out of the box, Cacti will give you a lot of the basics, especially when combined with SNMP: disk usage, network interface usage, CPU, and memory.  But what I really like about these great Open Source tools is that extensions are readily available from the F/OSS community.  With Cacti, you can extend it with new host templates and data queries (and more).  Here are some examples.

Disk IO - this is a new data query that tracks disk IO either in IOPS or MB/s.  It's one of the simplest examples of how you can extend Cacti.  It comes as an XML file defining an SNMP query, which you copy into the resource/snmp_queries folder of your installation, and as a data query template, which you import through the Cacti UI.  Once you've done this quick installation, you can add the disk IO checks to any SNMP-enabled host you are already tracking.
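Roughly, assuming a Cacti install under /var/www/cacti and the file name the extension happens to ship with:

# copy the SNMP query definition into Cacti's resource directory
cp disk_io.xml /var/www/cacti/resource/snmp_queries/

Then import the data query template through the web UI (Console -> Import Templates) and add the new data query to your hosts.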

Dell PowerEdge Environment - another simple example, the same shape as the Disk IO one (an SNMP query plus a data query template), but it adds three checks: system ambient temperature, fan speeds, and system voltages.  It's a great example of how Cacti as a generic tool can be tuned to your specific operating environment, whether you're a Dell shop, an HP shop, or otherwise.

APC UPS Daemon - another application-specific example.  This one comes as a host template, so it's a collection of checks you can use to capture all the data queries on a host running the APC UPS Daemon.  A great example of where F/OSS tools *far* exceed the stock or closed-source tools provided by vendors.  Rather than the cheesy, brief, inflexible views of your system provided by APC, which require overly large utilities to be installed, it's quicker, more lightweight, and much more flexible to use the F/OSS tools.

Cacti is another of these great tools that works well in conjunction with other tools to give system administrators great insight into the operation of their network.
