Tuesday, 30 October 2018

Working with DNS settings in systemd-resolved in Ubuntu

While troubleshooting some DNS name resolution issues I got more familiar with systemd-resolved in Ubuntu. Specifically, if you look at the traditional /etc/resolv.conf file it now says something like this:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
nameserver 127.0.0.53
Hah hah! Subtly clever for any old hands in networking: "53" is the standard DNS port, so 127.0.0.53 is a little clue that a local stub resolver is answering your queries.

Go ahead, run "systemd-resolve --status" - it doesn't require root, and it shows you a lot of info (the IPs of the name servers have been changed to protect the innocent...):
localadmin@ca-yvr-adm2:~$ systemd-resolve --status
Global
         DNS Servers: 10.1.1.11
                      10.2.2.12
          DNS Domain: sub.example.com
          DNSSEC NTA: 10.in-addr.arpa
                      16.172.in-addr.arpa
                      168.192.in-addr.arpa
                      <snip>
                      local

Link 1 (eno1)
      Current Scopes: none
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no

The config file is found at "/etc/systemd/resolved.conf". It's quite short and simple, and it should look familiar if you have used other systemd configuration files before. Out of the box all options are commented out. I wanted to add additional search domains on the Domains line, space-delimited, the same way you would traditionally write the search line in resolv.conf.
[Resolve]
#DNS=
#FallbackDNS=
Domains=sub.example.com example.com example.local
#LLMNR=no
#MulticastDNS=no
#DNSSEC=no
#Cache=yes
#DNSStubListener=yes

Restart the service the same as you would any other systemd unit, then re-check your resolved status:
sudo systemctl restart systemd-resolved.service 
systemd-resolve --status

Or check your /etc/resolv.conf file: if your only change was to the search domains, the new entries also appear there so the resolver works normally.
Global
         DNS Servers: 10.1.1.11
                      10.2.2.12
          DNS Domain: sub.example.com example.com example.local
          DNSSEC NTA: 10.in-addr.arpa
                      16.172.in-addr.arpa
                      168.192.in-addr.arpa
                      <snip>
                      local
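
For reference, the search line in /etc/resolv.conf should then look something like this (same placeholder domains as above):
search sub.example.com example.com example.local
nameserver 127.0.0.53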

Normally you would be done... but here are a couple of bonus tricks for issues that may arise.

You can also modify DNS settings per interface - handy for testing DNS changes and reverting them before committing the change to the config file. Check out systemd-resolve --help.
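
For example, on newer systemd releases (239 and later) you can set and revert per-link DNS settings right from the command line - the interface name, server IP, and domain below are placeholders, and the exact flags vary by release, so check systemd-resolve --help on your system first:
systemd-resolve --interface=eno1 --set-dns=10.9.9.9 --set-domain=test.example.com
systemd-resolve --interface=eno1 --revert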

Multicast DNS (mDNS) may conflict with a .local domain. The symptom I had was that I could resolve a short name like "pc" but could not resolve an FQDN like "pc.example.local". If you are using .local and seeing odd DNS resolution results, edit /etc/nsswitch.conf and move "dns" ahead of the mdns entries (keep it after "files" though, to avoid breaking your hosts file).
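
As a rough sketch, the stock Ubuntu hosts line and the re-ordered version look something like this (your file may list slightly different modules):
# before
hosts: files mdns4_minimal [NOTFOUND=return] dns
# after
hosts: files dns mdns4_minimal [NOTFOUND=return]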

I've tried to make this a quick, useful blurb on how to use systemd-resolved and get pointed in the right direction, because the available documentation wasn't simple for simple cases - there is certainly a lot more tuning you can do with the resolver tools.

Ciao

Wednesday, 3 October 2018

Archiving Files (E.g. Deleting Stuff) is My Super Power!

This week one of the things I was working on was archiving a lot of files on a file share in order to prune it down ahead of migrating that share to a new location.

In particular, this share, while only ~100GB, has > 3M files on it! Basic operations like checking folder sizes, applying ACLs, etc. are very slow across millions of files. Looking at the share with WinDirStat I found a large number of folders with 20-50K files in each. Furthermore, the majority of these files were old (years) and not actively used, so we decided to Archive (e.g. Delete) the contents of these folders.

I started with 7-Zip, which has an option on the Add to Archive screen to "Delete files after compression":


And that's pretty good! ... But not if you've got hundreds of folders to process.

Instead, with a bit of PowerShell, we invoke 7-Zip from a script and use the -sdel switch to remove files after they've been archived. This is the PowerShell we used to simply stuff each sub-folder into its own archive:

# Archive each sub-folder of the current directory into <name>.zip,
# letting 7-Zip delete the source files once they're compressed (-sdel)
Get-ChildItem . -Directory | ForEach-Object {
    $Archive = $_.Name + ".zip"
    $Folder  = $_.Name + "\"
    & "C:\Program Files\7-Zip\7z.exe" a $Archive $Folder -sdel
}
Pow!

Thursday, 16 August 2018

Let's Encrypt on Blogger

I can't say I fiddle too much with the settings in Blogger, it's kinda "set it and forget it" stuff, BUT somewhat recently the team made it so you can enable SSL certs for custom domains on Blogger, and it signs up a cert for you and everything.

https://support.google.com/blogger/answer/6284029?hl=en

In short, it's so stupid simple, just go do it, and do it now:

  1. Go to basic settings 
  2. Change HTTPS to Yes
  3. Are we done yet? Why yes, yes we are.

Optionally, once the cert has been generated (it's not instant), you can also turn on the redirect to HTTPS, which again - why not? It's just the next tick box.


This is how security should work; it works and it's easy. I guess it could be on by default and pushed out, but really, just click "security activate!"

Ciao

Friday, 14 July 2017

Automation with RT CLI

Ticket automation in Best Practical's RT is by far easiest with the RT CLI. I shan't re-hash the documentation but will give an example, because it wasn't obvious just how easy it is. Like in Scouts, Be Prepared - a bit of prep makes the RT CLI simple to work with.
  1. Setup your .rtrc and .bashrc as a one-off so you can invoke the RT CLI directly
  2. Build a search query in the regular RT Web UI
  3. Automate the function 
Find the "rt" binary:
[support-email@rt ~]$ locate */rt
/opt/rt4/bin/rt
Add it to your PATH in .bashrc:
export PATH=$PATH:/opt/rt4/bin
Set up your .rtrc file with your credentials rather than giving them on the command line:
[support-email@rt ~]$ cat .rtrc
server http://rt/
user me
passwd xxx
auth rt
Now you can already do some stuff like the examples from the RT CLI page in the wiki:
[support-email@rt ~]$ rt show user/ggee
id: user/832782
Password: ********
Name: GGee
EmailAddress: GGee@example.com
RealName: G Gee
Privileged: 1
Disabled: 0
CF-Employee Department: Applications Software
The last "prep" thing is to create your search criteria. This is far easier in the Web UI like you can build up your Search and then when you click Advanced you can copy that Query text directly and test it out from the CLI:
[support-email@rt ~]$ rt ls -i -q "'Corp Support'" "Status = 'stalled' AND Told < '-1 week'"
ticket/314370
ticket/315571
Now you're ready for some automation.

  • The use of "-i" gives the output in a suitable format for processing
  • The "-q" option specifies a queue and you need to use quotes (') around names with spaces in them, hence on the CLI you get "'Corp Support'"
The above query searches for stalled tickets which haven't been touched (Told) in over a week. We want to change such tickets to open so that staff pick them up again. For this we can set up a cron job which pipes the tickets found by the search into an rt edit command.
# un-stalls support tickets NOTE: requires valid creds in .rtrc
@daily /opt/rt4/bin/rt ls -i -q "'Corp Support'" "Status = 'stalled' AND Told < '-1 week'" | /opt/rt4/bin/rt edit - set status=open
You can automate all kinds of functionality whether routine activities like this example, or to build helper scripts for large operations like to populate some new custom field or otherwise.
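
As another sketch along the same lines, a helper script can walk a search result and edit each ticket - the queue name, age, and priority here are made-up values, and it assumes the .rtrc and PATH prep above:
#!/bin/bash
# bump the priority of open 'Corp Support' tickets untouched for over a month
/opt/rt4/bin/rt ls -i -q "'Corp Support'" "Status = 'open' AND Told < '-1 month'" |
while read -r ticket ; do
    /opt/rt4/bin/rt edit "$ticket" set priority=50
done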

Wednesday, 5 July 2017

Data Retention and Percona Archiver

Data retention can be a bit touchy, but when the alternative is to let tables grow by gigabytes per week or per day, sometimes you just have to pick an upper limit. In my experience, suggesting a number to stakeholders helps get things rolling.

Magically I've recently "discovered" Percona's pt-archiver - I've been rolling my own for far too long. This tool is well documented and I shan't repeat the documentation other than to give an example along with some tidbits.


The archiver can move records to a destination table (the --dest option) OR to a file (the --file option). Both are useful; I'll show the file one because that's about as final as it gets short of outright launching the nukes with --purge. Give it selection criteria (the --where option) and consider including table maintenance (--optimize) if you are moving a lot of data.

For clarity: pt-archiver does a DELETE for each record it archives. 


#!/bin/bash
# dump table from N months ago
DB="mydatabase"        # placeholder - the schema holding the history tables
DELAGE=6
DELYRMO=$(date --date "$DELAGE months ago" +%Y%m)
TABLE="calib_aimextractor_log_history_$DELYRMO"
BAKFILE="/mnt/backups/mysqlserver/archive-$DB-$TABLE.txt"

# do not (!) overwrite a file with something (-s) in it already
if [ ! -s "$BAKFILE" ] ; then
        pt-archiver --source h=localhost,D=$DB,t=$TABLE --file "$BAKFILE" --where "calib_aimextractor_id > 0" --optimize s --statistics
else
        echo "$BAKFILE has something in it, dump has been SKIPPED"
        exit 1
fi
This is a drastically simplified script from what I used to do.

  1. Set the data retention, which in this case is 6 months. The "date" command is useful for generating dates or parts thereof - year, month, day, week - whatever you need for both file names and search criteria. 
  2. The file target should be some local file system location or NFS mount. The file format is suitable for LOAD DATA INFILE
    • Gotcha! Loading data files is a risky thing to do and disabled by default in MySQL. Typically load the data to a non-production server, then manually extract the relevant records and insert them back into prod.
  3. Sanity check you're not stomping a file that's already there. I prefer to be safer than sorrier.
  4. Credentials should be in .my.cnf 
    • Seems obvious when you know to do it, but don't put user creds in scripts, dumbo! I did that too often :(
  5. Gotcha! If using a --dest table instead of a file target, specify the host (h) and database (D), because otherwise pt-archiver makes some assumptions which may be very wrong (see the sketch after this list)
  6. Optimize your source (s), especially if a large number of rows are being pulled. Consider also optimizing the destination (d) 
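
For item 5, here's a minimal sketch of the --dest form - the hosts, database names, and where clause are all placeholders, not from my actual setup:
pt-archiver --source h=localhost,D=mydatabase,t=calib_aimextractor_log_history \
            --dest h=archive-db.example.com,D=mydatabase_archive,t=calib_aimextractor_log_history \
            --where "created < NOW() - INTERVAL 6 MONTH" --optimize sd --statistics
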
There's lots more guidance in the documentation and from other users online. Some like to process records in larger batches with options like --limit and --bulk-delete, but the default (one row at a time) has been good to me as this runs relatively fast. Likewise there are options to check that your slaves don't fall far behind; again the default behaviour is fine for me, but there are lots of powerful knobs for tuning pt-archiver.

Take backups, test, test, test and you shouldn't need Good Luck :)

Saturday, 14 May 2016

Ubuntu automysqlbackup

There is a script called "automysqlbackup" which is a pretty straightforward shell script wrapping up routine MySQL backups. The Ubuntu package is mostly preconfigured so you would not necessarily even have to modify the stock configuration.
  • Gets the maintenance user from "/etc/mysql/debian.cnf" for credentials
  • Dynamically determines what databases are on the system
  • Has a default schedule and backup path (/var/lib/automysqlbackup)
You should consider changing a couple of the defaults found in "/etc/default/automysqlbackup".
  • BACKUPDIR to preferred backup path
  • MAILADDR to an appropriate recipient in case there are errors
It does not remove old backup files, so you may want a basic command or script which does. There's a "PREBACKUP" variable you can use to hook one in. I like this hook because it runs before your backup, so you can't accidentally nuke the fresh backups, and it keeps things simple:
  • PREBACKUP="find $BACKUPDIR -mtime +90 -delete"
Finally, remember to copy your backups offsite if appropriate - "rsync" to some remote system or similar. Use the "POSTBACKUP" hook for this; it's a good fit because it pushes out your backups right after they have been created.
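For example, something along these lines - the host and remote path are placeholders, and it assumes the backup user has key-based SSH access:
  • POSTBACKUP="rsync -a /var/lib/automysqlbackup/ backuphost.example.com:/srv/backups/mysql/"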

Ubuntu Man page:
http://manpages.ubuntu.com/manpages/wily/man8/automysqlbackup.8.html

Monday, 21 September 2015

CentOS 7

Having started an install for CentOS 7, this is my first time working with the Red Hat Enterprise Linux 7 based system and they've done a few things I'll have to learn.

Goodbye, SysV init! What an era it's been with init scripts. The newer "systemd" system and service manager replaces the init system along with Red Hat's chkconfig and similar tools. The "systemctl" command is kind of similar to "chkconfig", but takes the command name first and the new-style service name second:


# systemctl status nfs-server.service
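
Where you used to combine "chkconfig" and "service" to enable and start something, the equivalents look like this (sticking with nfs-server as the example):

# systemctl enable nfs-server.service
# systemctl start nfs-server.service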

Overview of systemd for RHEL 7
https://access.redhat.com/articles/754933

The other change is the new "firewalld", which provides more of a set of front-ends to iptables. The command-line tool, firewall-cmd, can make settings changes such as opening ports. As the Red Hat docs say, this mechanism can load firewall rule changes without dumping the whole rule set, so you keep open connections and stats.

# firewall-cmd --zone=public --add-port=80/tcp --permanent
# firewall-cmd --reload
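
And to confirm what's now active in that zone:

# firewall-cmd --zone=public --list-all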

Using Firewalls
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html

I'm sure there's more, but mostly cosmetic stuff like how the installer works and which packages are bundled or not (bind-utils not included in a base install? interesting). It can be hard to figure out what to do in a new system when there are big changes that aren't just drop-in replacements for the older tools.

Ciao,
Arch
