
Tuesday, November 10, 2015

Many AWS accounts and Zsh

That might not be a common problem, but I have to deal with many AWS accounts at the same time. For example, I might have to run an Ansible playbook for one account and a CLI command for another one in parallel. To make my life easier I wrote this Zsh function. (It will probably work in other advanced shells too.)

There are two ways to use it.
  • awsenv list - returns the list of all available accounts/environments.
  • awsenv <account> - sets AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION, based on the values provided in ~/.aws/credentials and ~/.aws/config.
The list is created with a simple AWK script (assuming that any line containing "[...]" is a profile header).
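
For reference, a matching pair of files could look like this (the profile name, keys and region are all made up):

# ~/.aws/credentials
[prod]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = wJalrEXAMPLEKEY

# ~/.aws/config
[profile prod]
region = eu-west-1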

The actual command to set the environment uses two AWK scripts. The first one looks for the requested account name and sets the variable "a" to 1. While "a" equals 1 it prints shell export commands for access_key_id and secret_access_key to standard output, which is redirected to $TMP_FILE. The second one does the same for the region from ~/.aws/config. Then the function sources, prints and deletes that file.

Please note that in its current form the script requires access_key_id to be defined before secret_access_key. Printing the values of all variables, especially secret_access_key, could be considered a security weakness, so you might want to modify/remove the "cat $TMP_FILE" line.

# vim: set filetype=sh
awsenv () {
    if [[ $1 == list ]]
    then
        print $1
        awk  '/\[.*\]/ {print $1}'  ~/.aws/credentials
    else
        TMP_FILE=/tmp/current_aws
        awk \
           'BEGIN{a=0};\
           /\['$1'\]/ {a=1};\
           /access_key_id/ {if (a==1){printf "export %s=%s\n", toupper($1), $3}};\
           /secret_access_key/ {if (a==1) {printf "export %s=%s\n", toupper($1), $3;a=0}}'\
            ~/.aws/credentials > $TMP_FILE
        awk \
            'BEGIN{a=0};\
            /\[profile '$1'\]/ {a=1};\
            /region/ {if (a==1){printf "export AWS_DEFAULT_%s=%s\n", toupper($1), $3; a=0}}'\
            ~/.aws/config >> $TMP_FILE
        source $TMP_FILE
        cat $TMP_FILE
        rm $TMP_FILE
    fi
}
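
A hypothetical session could look like this (profile names and key values are made up; note that the function echoes 'list' back first):

$ awsenv list
list
[prod]
[staging]
$ awsenv prod
export AWS_ACCESS_KEY_ID=AKIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrEXAMPLEKEY
export AWS_DEFAULT_REGION=eu-west-1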



Finally, the initial version of this script and some discussion of profiles in older boto versions can be found in this post: http://larryn.blogspot.co.uk/2015/03/how-to-deal-with-aws-profiles.html

Wednesday, September 02, 2015

Real "Get All Launch Configuration" In Boto

During my recent Ansible tests I created 'some' number of launch configurations, enough to reach the account limit, and I wanted/had to clean them up. Boto sounded like a good candidate for the job. These few lines should have addressed my problem.

import boto
import boto.ec2
import boto.ec2.autoscale

asg = boto.ec2.autoscale.connect_to_region('us-east-1')
results = asg.get_all_launch_configurations()

But it didn't. I couldn't even find my launch configurations in the results set. I quickly figured out that my results set was rather big and that by default the get_all_launch_configurations method pages its results (AFAIR the default page size is 20). Using an example from this post on SDB I created the following function which does what the above method name promises: it gets all launch configurations.


def get_all_launch_configuration(connection):
    """get_all_launch_configuration(connection) -
       returns a results set of all launch configurations,
       regardless of its size. The function requires an
       established boto.ec2.autoscale connection."""

    results = connection.get_all_launch_configurations()
    token = results.next_token
    while True:
        if token:
            r = connection.get_all_launch_configurations(next_token=token)
            token = r.next_token
            results.extend(r)
        else:
            break

    return results
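
Since the whole point was cleaning up, the result can be fed to boto's delete call. A sketch only - the region is an assumption, and deleting a launch configuration that is still attached to an autoscaling group will fail:

import boto.ec2.autoscale

asg = boto.ec2.autoscale.connect_to_region('us-east-1')
# remove every launch configuration in this account/region
for lc in get_all_launch_configuration(asg):
    asg.delete_launch_configuration(lc.name)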
       

Monday, June 08, 2015

A small example of advanced aptitude usage

A few years ago I 'preserved' a few examples of advanced aptitude usage I found on a Debian mailing list. Recently I had a chance to use them again. I was looking for the versions of packages with a distinguishing string in the name (let's say it was 'apache').

aptitude versions \$(aptitude search ~iapache| awk '{print \$2}')

  • So first I ran an aptitude search for the 'apache' term, but only among installed packages - aptitude search ~iapache
  • From the results I cut the package name (second column) - awk '{print \$2}'. Please note I escaped the dollar character, because I ran this command using Ansible (see this blog entry for more details; an unescaped version follows this list).
  • The outcome of those two commands allowed me to query the package versions - aptitude versions
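
Run directly in a shell (outside Ansible, so without the escaping), the same pipeline looks like this:

aptitude versions $(aptitude search ~iapache | awk '{print $2}')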




Wednesday, May 20, 2015

Crux as a second system in GRUB2

I tried to install CRUX as a secondary system on a DELL XPS 13 (Ubuntu Edition). I ran setup, configured fstab and rc.conf, compiled a kernel and then tried to add the new system to the GRUB2 menu in Ubuntu. The update-grub script recognised the new Linux entry and added it, but CRUX didn't start: the kernel panicked because it could not find the root partition. I double-checked my kernel and had all the important options mentioned in the CRUX installation documentation compiled in.
Then I looked at the menu entry created by os-prober for CRUX and noticed that it had the following line:

        linux /boot/vmlinuz root=/#ROOT_DEVICE# ro quiet

I analysed what the probe script does and found that it checks the boot loader configuration files of the new/additional Linux. Then I remembered that I had not bothered to configure LILO, because I had planned to use Ubuntu's GRUB2 instead. I updated /etc/lilo.conf (without running the lilo command itself).
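
For completeness, the relevant stanza of /etc/lilo.conf could look like the one below (the device name is made up); os-prober only parses the file, so lilo itself never has to run:

image=/boot/vmlinuz
    label=CRUX
    root=/dev/sda3
    read-only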
After that, update-grub properly created the CRUX entry in the boot menu and I could start my CRUX installation.

Saturday, May 02, 2015

zsh and ssh-agent

INTRODUCTION

One of the posts which gets some attention on this blog is My Way For Binding SSH Agent With Zshell. The method presented there is far from ideal and it stopped working for me some time ago. After that I wrote a new version. I think it is much better and should work with bash or other shells too. I tested it on Ubuntu and CRUX.

ZSSH


I have a .zssh file in my home directory. It is sourced by my .zshrc file. The .zssh file consists of 3 functions.
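
In practice that means two extra lines like these in ~/.zshrc:

source ~/.zssh
sshagent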

SSHAGENT

The first function is responsible for starting ssh-agent.

sshagent () {
    SSHAGENT=$(ps ax|grep "[s]sh-agent"| grep -cv Z)
    if (( $SSHAGENT == 0 ))
    then
        sshupdate
    else
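        # the appended ';' is intentional: it makes SSHPID match SSHPID_ENV,
        # which keeps the trailing ';' from the "Agent pid" line in ~/.ssh-env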
        SSHPID="$(ps -eo pid,command | awk '/ ssh-[a]gent/ {print $1}');"
        SSHPID_ENV=$(awk  '/Agent/ {print $NF}' ~/.ssh-env)
        if [[ $SSHPID == $SSHPID_ENV ]]
        then
            source ~/.ssh-env
        else
            killall ssh-agent
            sshupdate
        fi
    fi
}


It checks whether an ssh-agent is already running and isn't a zombie. (On one of my systems, after starting a desktop environment, I always had a zombie ssh-agent running.) If there is no ssh-agent running, the function calls sshupdate, another function described below. If the agent is present and alive, the function compares the ssh-agent PID with the information saved in the ~/.ssh-env file. (See the sshupdate paragraph for more information.) If the information is consistent, it sources .ssh-env. If not, it kills all ssh-agents and then calls sshupdate.

SSHUPDATE

This is a very simple function calling ssh-agent and saving its output to a file.

sshupdate () {
    ssh-agent > ~/.ssh-env
    source ~/.ssh-env
}


The output can then be sourced by other functions or processes. Oh, and if you don't remember/know, the output of ssh-agent looks like this:

SSH_AUTH_SOCK=/tmp/ssh-BnXafqRnOSHx/agent.1884; export SSH_AUTH_SOCK;
SSH_AGENT_PID=1885; export SSH_AGENT_PID;
echo Agent pid 1885;

SSHADD

Finally the function responsible for adding your ssh key.

sshadd () {
    if (( $(ssh-add -l | grep -c $USER) == 0 ))
    then
        ssh-add
    else
        ssh-add -l
    fi
}


It checks the number of added keys. If a key from your home directory (i.e. one having your username in its path) is not present, it adds it. Otherwise it lists all the added keys.

USAGE

sshagent is called from your .zshrc, so it should be present in every session. sshadd needs to be called by you when you need it the first time.

FURTHER UPDATES

What if you have more than one key and you would like to add all of them at the same time? Then you could try to use the 'file' program to find ssh keys in the .ssh, or another, directory and add all of them, as in the sketch below.
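
An untested sketch of that idea (it assumes 'file' describes both PEM and OpenSSH keys as some kind of "private key", which it does on my systems):

sshaddall () {
    local key
    for key in ~/.ssh/*; do
        # skip public keys, known_hosts, config and so on
        if file "$key" | grep -q 'private key'; then
            ssh-add "$key"
        fi
    done
}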


Saturday, April 18, 2015

The Cassandra, the Ansible, a pipe and a complicated command

I've been working on downsizing a Cassandra ring (not a fun task). To make things a bit easier I've been using Ansible for configuration management. In theory you don't need to change the value of 'initial_token' in the Cassandra configuration, and everything else stays the same during a ring resize. Therefore, Ansible is not really necessary, but I believe it's good to keep your configuration consistent.

Cassandra role

First I added a variable with the Cassandra ring size, called max_nodes, to its role.

  - {role: cassandra, max_nodes: 13}

This value is used to create an Ansible local fact. To learn more about local facts please read this Curtis Collicutt article, which I found the most helpful for my script. Please note that the following code is a Jinja template, not a ready-to-use script.

#!/bin/sh
NODE=$(hostname | grep -o -P "\d+")
TOKEN=$(echo '2^127/{{ max_nodes }}' | bc)
cat <<EOF
{
    "node_number": $NODE,
    "token": $TOKEN
}
EOF
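
For ansible_local.cassandra.* to be available, the rendered script has to land in /etc/ansible/facts.d as an executable file named cassandra.fact. A minimal task doing that could be (the template file name is my assumption):

- template: src=cassandra.fact.j2 dest=/etc/ansible/facts.d/cassandra.fact mode=0755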


The above script prints to standard output a JSON document with two variables.
  • node_number - created from the hostname. The number in a hostname corresponds to a ring position (i.e. cassandra-4 is the fourth node).
  • token - 'the main part' of the token value calculation. The rest is done in the Cassandra configuration template:
    initial_token: {{ '%d' % ( ansible_local.cassandra.token *  (ansible_local.cassandra.node_number - 1)) }}

That sounds complicated, and it is. As is often the case, there is a historical explanation. Initially the whole calculation was done in the template. It worked for a 16-node ring, but not for a 15-node one. It didn't work because dividing two integers (2**127/15) results in a float in Jinja, which leads to rounding errors!
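
The difference is easy to demonstrate from a shell (bc works on arbitrary-precision integers, floats do not):

# exact integer arithmetic - this is what the fact script uses
echo '2^127/15' | bc
# 11342745564031282115445820247725607048

# the quotient needs more precision than a float's 53-bit mantissa offers,
# so the same division done in floating point comes back subtly wrong
python -c 'print "%d" % (2**127 / 15.0)'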

Ansible commands

As a bonus, here are two examples of using Ansible to run somewhat complicated commands on many hosts.
  1. Read the token position from the configuration file and initiate a node move in a screen session. Please note the escape character in front of '$'.
    ansible \
     -i inventory/ec2.py \
     -m shell \
     -a "screen -d -m \
       nodetool -h localhost move \
       \$(awk '/initial_token/ {print \$2}' \
         /etc/cassandra/default.conf/cassandra.yaml)"\
    tag_pool_cassandra

  2. Much simpler: it checks that screen is running. AFAIK you have to use the shell module if you want to use a UNIX pipe.
    ansible \
     -i inventory/ec2.py \
     -m shell \
     -a "ps -ef| grep SCREEN"\
    tag_pool_cassandra

Friday, March 06, 2015

How to deal with AWS profiles

I don't know how common it is to be part of an organisation with many AWS (Amazon Web Services) accounts, but it makes things tricky. Amazon makes it relatively easy to use many 'named profiles' (accounts) with the AWS CLI. (If you haven't tried it, see this documentation.) Boto (the Python AWS interface) developers also added an easy way to use the same profiles in version 2.29. (To learn how to use the same profiles in Boto and other SDKs, check this article.) But release 2.29 is quite recent, so what if you are stuck with an older version (for example the one from the latest Ubuntu LTS)? I was in such a situation and wrote this small function to use profiles from the ~/.boto (not ~/.aws/) file.

def set_account(environment):
    """set_account(environment) -
    sets credentials for the given environment/account.
    """
    for i in boto.config.items(environment):
        boto.config.set('Credentials', i[0], i[1])

So if your profile is called 'prod':

import boto

set_account('prod') 
conn = boto.connect_ec2()

Another program having issues with many AWS accounts is Ansible. (However, the authors claim it's a feature, not a bug.) My first approach was to add the above function to the ec2.py inventory script and extend the script further by adding the following lines:
  • to the __init__ method of the Ec2Inventory class:
     
    set_account(self.args.environment)
     
  • and to the parse_cli function:
     
    parser.add_argument('-e', '--environment', type=str, required=True,
                               help="select an environment/profile to run.")
     
    
Such a prepared script is not ready to use with Ansible yet - a wrapper around it is needed as well. For example, the script below, called ec2-prod.sh, makes it possible to use Ansible in the 'prod' environment.

#!/bin/sh
cd $(dirname $0)
./ec2.py -e prod --refresh-cache

If you need to know why the wrapper is needed check the Ansible inventory code.
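
A hypothetical invocation then points Ansible's -i option at the (executable) wrapper:

ansible \
 -i ec2-prod.sh \
 -m ping \
all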

Such an approach is not ideal if you have many accounts to work with: you will need a wrapper for each one. What's worse, it doesn't work with the unified AWS config approach and requires keeping a unique version of the inventory script. Therefore, I tried to find a better solution. I could not find anything interesting and decided to write a small shell script that reads ~/.aws/credentials and exports the AWS keys for a selected profile. The script is a simple wrapper around a bit complicated awk command. You have to source it, not execute it, because the script should run in the current shell.

#!/bin/bash

TMP_FILE=/tmp/current_aws
awk  \
 'BEGIN{a=0};\
 /\['$1'\]/ {a=1};\
 /access_key_id/ {if (a==1){printf "export %s=%s\n", toupper($1), $3}};\
 /secret_access_key/ {if (a==1) {printf "export %s=%s\n", toupper($1), $3;a=0}}'\
 ~/.aws/credentials > $TMP_FILE

source $TMP_FILE
rm $TMP_FILE

The script ensures that:
  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
are exported. This works not only with Ansible, but can also simplify AWS CLI usage, because the --profile option can be dropped. It might also help with other tools.
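
Assuming the script is saved as awsprofile.sh (the name is made up), usage looks like this:

source ./awsprofile.sh prod
env | grep ^AWS_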


Thursday, January 08, 2015

Datadog and many dataseries stacked together

Recently I've started to use Datadog. It has nice features, but I have also found some annoying gaps. One of them is that there is no easy way to prepare a graph with a stack of different series, for example a nice representation of CPU time spent in different states.



Luckily, as you can see above, it can be done. You just need to change some things in the JSON, ending up with something similar to what I have below. The main point is to have all the data series in the argument of one "q".

{
  "viz": "timeseries",
  "requests": [
    {
      "q": "avg:system.cpu.system{host:host-01}, avg:system.cpu.user{host:host-01}, avg:system.cpu.iowait{host:host-01}, avg:system.cpu.stolen{host:host-01}, avg:system.cpu.idle{host:host-01}",
      "type": "area"
    }
  ],
  "events": []
}

Tuesday, January 06, 2015

Count processes per state per application

In previous posts (here and here) I discussed how to count threads in a given state for a given process. Recently I had another problem: I needed to count the number of processes per application per state. My previous commands wouldn't work, so I wrote an alternative version.

while [ 1 ];
do
    date;
    cat /proc/loadavg;
    ps -Leo state,args |
     awk ' $1 ~ /(D|R)/ {state[$0]++} \
      END{ for (j in state) {printf "%s - %d\n", j, state[j]}}' |
      sort -k 2;
    echo "---";
    sleep 5;
done 

Please note there is no PID in the output and the args are included as a whole, so processes with identical command lines and states are counted together.

One more thought: dropping "$1 ~ /(D|R)/" can be useful if the problem is the total number of processes. But then the whole command should be modified a bit, so that the results are sorted by the number of processes. A simplified version would look like this one:

while [ 1 ];
do
    ps -Leo state,args |
     awk ' $1 ~ /(D|R)/ {state[$0]++} \
      END{ for (j in state) {printf "%d - %s\n", state[j], j}}' |
      sort -n;
    echo "---";
    sleep 5;
done