Tuesday, April 30, 2019

Get Public IP address of Azure VM from shell on VM

Sometimes you need to get the IP address of a VM from inside it. On an Azure VM you can do this relatively simply with curl and jq, thanks to the Instance Metadata Service described in this post: https://azure.microsoft.com/en-us/blog/announcing-general-availability-of-azure-instance-metadata-service/

And if you need the public IP address of interface eth0, this is the command:

curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-04-02" | jq '.network.interface[0].ipv4.ipAddress[0].publicIpAddress'


Thursday, December 20, 2018

One liner to list interactive Unix users


If you need a simple command to list all interactive users, this awk one-liner might be helpful. It prints every username whose shell does not contain one of a few keywords. It may produce false positives, so be careful.

awk -F: '$NF !~ /(false|nologin|halt|shutdown|sync)/ {print $1}'  /etc/passwd
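For a quick sanity check, the same filter can be run against hand-made sample entries (the accounts below are invented):

```shell
# Feed a fake /etc/passwd to the one-liner; only accounts whose shell
# lacks the filtered keywords should be printed.
printf '%s\n' \
  'root:x:0:0:root:/root:/bin/bash' \
  'daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin' \
  'sync:x:4:65534:sync:/bin:/bin/sync' \
  'alice:x:1000:1000:Alice:/home/alice:/bin/zsh' |
awk -F: '$NF !~ /(false|nologin|halt|shutdown|sync)/ {print $1}'
# prints root and alice
```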

Friday, October 27, 2017

Terraform debugging (Azure Storage long names)

I had a strange problem with Terraform on Azure. Everything looked good, but every run failed with the same error. I tried changing a few things; it didn't help. Every time:

* module.storage.azurerm_storage_container.vhds: 1 error(s) occurred:

* module.storage.azurerm_storage_container.vhds: Resource \
'azurerm_storage_account.vhds' not found for variable \
'azurerm_storage_account.vhds.name'

I couldn't find any explanation for my problem, so I looked for a way to increase the amount of information coming from Terraform. I found that the TF_LOG environment variable controls the log level. I ran

TF_LOG=TRACE terraform plan | grep ERROR

And found the following line:

2017/10/27 20:31:37 [ERROR] root.storage: eval: *terraform.EvalValidateResource, \
err: Warnings: []. Errors: [name can only consist of lowercase letters and numbers, \
and must be between 3 and 24 characters long]


Yes, the storage account name was a bit too long. Still, the original error message is not the most obvious one.

Tuesday, March 07, 2017

OpenSSL and Azure VPN

I had to set up an Azure Point-to-Site VPN, but didn't want to do it from a Windows machine (I'm a Linux/MacOSX kind of guy). Luckily I found this Aris Plakias article, which describes in plain language, with a good example, how to prepare all the necessary certificates using OpenSSL.

I've created this small script so I can easily repeat the client certificate creation step.

#!/bin/sh
name=$1
openssl genrsa -out ${name}1Cert.key 2048
openssl req -new -out ${name}1Cert.req -key ${name}1Cert.key -subj /CN="MyAzureVPN"
openssl x509 -req -sha256 -in ${name}1Cert.req -out ${name}1Cert.cer -CAkey MyAzureVPN.key -CA MyAzureVPN.cer -days 180 -CAcreateserial -CAserial serial
openssl pkcs12 -export -out ${name}1Cert.pfx -inkey ${name}1Cert.key -in ${name}1Cert.cer -certfile MyAzureVPN.cer

Tuesday, February 21, 2017

Firefox update in Crux

Updating Firefox, without rebuilding all other ports in Crux, is not the easiest task. Quite often you need to update some of the packages Firefox depends on, but not all of them.

In my case, limited space on the root partition is an additional problem. At the same time I have another, much bigger partition attached.

To address both issues I prepared this small script. It updates the required dependencies (autoconf, sqlite, libpng, nspr, nss) and then updates Firefox, but with the working directory in a non-default location (PKGMK_WORK_DIR).


prt-get update autoconf
prt-get update sqlite3
prt-get update libpng
prt-get update nspr
prt-get update nss
PKGMK_WORK_DIR=/media/pictures prt-get update firefox

Sunday, January 01, 2017

Bringing my Crux Enlightenment repo back to life [Part I]

I hadn't been doing anything with my home Linux machines for some time and recently decided to change that a bit. There are quite a few projects I hope to push forward: for example, putting all my pictures from CD-ROMs/DVDs into Dropbox, enabling internal streaming from my RPi, and installing Linux on my Dell laptop. To achieve the last one I tried Kubuntu, but it's not so easy to properly configure KDE nowadays (KDE5). I tried Enlightenment from an Ubuntu PPA, but its connman integration seemed to be broken. So it seemed that I had to use Crux. Another option could be Arch, but I had already spent quite a lot of time configuring Enlightenment for Crux.

The package repository is located at wawrzek.name/crux/wawrzek/, and I keep my work in the GitHub wawrzek/crux-ports repository.

The first problem I encountered was with access to some .httpup-related files:

Connecting to http://wawrzek.name/crux/wawrzek/
Updating collection wawrzek
 Edit: wawrzek/.httpup-repo.current
Failed to download http://wawrzek.name/crux/wawrzek/.httpup-repo.current: The requested URL returned error: 403 Forbidden
 Edit: wawrzek/.httpup-urlinfo
Failed to download http://wawrzek.name/crux/wawrzek/.httpup-urlinfo: The requested URL returned error: 403 Forbidden
Finished successfully


I host my repo on a Linux virtual machine and could easily confirm that all files had the right access privileges, so I eliminated an OS-level problem. All other files in the same directory were properly accessible, so I started to suspect the Apache configuration, and that was the case. The Apache Debian/Ubuntu configuration has a block preventing web clients from accessing .htaccess and .htpasswd, but the definition is rather generous (^\.ht, i.e. everything that starts with the string ".ht"). I decided that the simplest resolution was to replace it with the more specific rule ^\.ht(access|passwd), which matches only ".htaccess" or ".htpasswd". The updated block of configuration is below:


#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#
<FilesMatch "^\.ht(access|passwd)">
        Require all denied
</FilesMatch>



Saturday, December 31, 2016

Control terminal name and comment block in VIM

Somewhere (I think it was Stack Overflow) I found a simple command to control the name of a terminal from the command line. It works very nicely with iTerm2 tabs on MacOSX, so I decided to add this function to my zsh environment:

termname() {
 echo -en "\e]1; $1 \a"
}
 

And while we are talking about Stack Overflow, one of the most useful Vim suggestions I've ever found is this instruction for commenting/uncommenting multiple lines in Vim.

Tuesday, November 10, 2015

Many AWS accounts and Zsh

That might not be a common problem, but I have to deal with many AWS accounts at the same time. For example, I might have to run an Ansible playbook for one account and a CLI command for another one in parallel. To make my life easier I wrote this ZSH function. (It will probably work in other advanced shells too.)

There are two ways to use it.
  • awsenv list - returns the list of all available accounts/environments.
  • awsenv <name> - sets AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION, based on the values provided in ~/.aws/credentials and ~/.aws/config.
The list is created with a simple AWK script (assuming that any line containing "[...]" defines a profile).

The actual command to set the environment uses two AWK scripts. The first one looks for the requested account name and sets the variable "a" to 1. While "a" equals 1, it prints shell export commands for access_key_id and secret_access_key to standard output, which is redirected to $TMP_FILE. The function then sources, prints and deletes that file.

Please note that in its current form the script requires access_key_id to be defined before secret_access_key. Printing the value of all variables, especially secret_access_key, could be considered a security weakness, so you might want to modify or remove the "cat $TMP_FILE" line.

# vim: set filetype=sh
awsenv () {
    if [[  $1 == list ]]
        then
        print $1
        awk  '/\[.*\]/ {print $1}'  ~/.aws/credentials
    else
        TMP_FILE=/tmp/current_aws
        awk \
           'BEGIN{a=0};\
           /\['$1'\]/ {a=1};\
           /access_key_id/ {if (a==1){printf "export %s=%s\n", toupper($1), $3}};\
           /secret_access_key/ {if (a==1) {printf "export %s=%s\n", toupper($1), $3;a=0}}'\
            ~/.aws/credentials > $TMP_FILE
        awk \
            'BEGIN{a=0};\
            /\[profile '$1'\]/ {a=1};\
            /region/ {if (a==1){printf "export AWS_DEFAULT_%s=%s\n", toupper($1), $3; a=0}}'\
            ~/.aws/config >> $TMP_FILE
        source $TMP_FILE
        cat $TMP_FILE
        rm $TMP_FILE
    fi
}
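To see what the first awk script produces, here it is run against a fake credentials file (the profile names and keys below are made up):

```shell
creds='[dev]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = abc123

[prod]
aws_access_key_id = AKIAOTHER
aws_secret_access_key = xyz789'

# Same logic as in awsenv, with the profile name fixed to "dev";
# "a" flips to 1 inside the requested section and back to 0 after the
# secret key line, so the [prod] section is ignored.
echo "$creds" | awk \
    'BEGIN{a=0};
    /\[dev\]/ {a=1};
    /access_key_id/ {if (a==1){printf "export %s=%s\n", toupper($1), $3}};
    /secret_access_key/ {if (a==1) {printf "export %s=%s\n", toupper($1), $3; a=0}}'
# prints the two export lines for the dev profile only
```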



Finally, the initial version of this script, and some discussion of profiles in older boto versions, is in this post: http://larryn.blogspot.co.uk/2015/03/how-to-deal-with-aws-profiles.html

Wednesday, September 02, 2015

Real "Get All Launch Configuration" In Boto

During my recent Ansible tests I created 'some' number of launch configurations, enough to reach the account limit, and I wanted/had to clean them up. Boto sounded like a good candidate for this. These few lines should have addressed my problem:

import boto
import boto.ec2
import boto.ec2.autoscale

asg = boto.ec2.autoscale.connect_to_region('us-east-1')
results = asg.get_all_launch_configurations()

But they didn't. I couldn't even find my launch configurations in the results set. I quickly figured out that my results set was rather big and that by default the get_all_launch_configurations method pages its results (AFAIR the default page size is 20). Using an example from this post on SDB, I created the following function doing what the above method name promises - get all launch configurations.


def get_all_launch_configuration(connection):
    """get_all_launch_configuration(connection) -
       returns a results set of all launch configurations,
       regardless of its size. The function requires an established
       boto.ec2.autoscale connection."""

    results = connection.get_all_launch_configurations()
    token = results.next_token
    while True:
        if token:
            r = connection.get_all_launch_configurations(next_token=token)
            token = r.next_token
            results.extend(r)
        else:
            break

    return results
       

Monday, June 08, 2015

A small example of advance aptitude usage

A few years ago I 'preserved' a few examples of advanced aptitude usage I found on a Debian mailing list. Recently I had a chance to use them again. I was looking for the versions of packages with a distinguishing string in the name (let's say it was 'apache').

aptitude versions \$(aptitude search ~iapache| awk '{print \$2}')

  • First I ran an aptitude search for the 'apache' term, but only among installed packages - aptitude search ~iapache
  • From the results I cut the package name (second column) - awk '{print \$2}'. Please note I escaped the dollar character, because I ran this command using Ansible (see this blog entry for more details).
  • The outcome of those two commands allows me to query package versions - aptitude versions


Wednesday, May 20, 2015

Crux as a second system in GRUB2

I tried to install CRUX as a secondary system on a DELL XPS 13 (Ubuntu Edition). I ran setup, configured fstab and rc.conf, compiled a kernel and then tried to add the new system to the GRUB2 menu in Ubuntu. The update-grub script recognised the new Linux entry and added it, but CRUX didn't start: the kernel panicked because it could not find the root partition. I double-checked my kernel and had all the important options mentioned in the CRUX installation guide compiled in.
Then I looked at the menu entry created by os-prober for CRUX and noticed that it had the following line:

        linux /boot/vmlinuz root=/#ROOT_DEVICE# ro quiet

I analysed what the probe script does and found that it checks the boot loader configuration files of the new/additional Linux. Then I remembered that I had not bothered to configure LILO, because I planned to use Ubuntu's GRUB2 instead. I updated /etc/lilo.conf (without running the lilo command itself).
After that update-grub properly created CRUX entry in boot menu and I could start my CRUX installation.
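For reference, a minimal /etc/lilo.conf of roughly this shape is enough for os-prober to find the root device. The device names and paths below are examples only; adjust them to your partition layout:

```
boot=/dev/sda
image=/boot/vmlinuz
    label=CRUX
    root=/dev/sda3
    read-only
```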

Saturday, May 02, 2015

zsh and ssh-agent

 INTRODUCTION

One of the posts which gets some attention on this blog is My Way For Binding SSH Agent With Zshell. The method presented there is far from ideal and it stopped working for me some time ago. After that I wrote a new version. I think it is much better and should work with bash or other shells. I tested it on Ubuntu and Crux.

ZSSH


I have the .zssh file in my home directory. It is sourced by my .zshrc file. The .zssh file consists of 3 functions.

SSHAGENT

The first function is responsible for starting ssh-agent.

sshagent () {
    SSHAGENT=$(ps ax|grep "[s]sh-agent"| grep -cv Z)
    if (( $SSHAGENT == 0 ))
    then
        sshupdate
    else
        SSHPID="$(ps -eo pid,command | awk '/ ssh-[a]gent/ {print $1}');"
        SSHPID_ENV=$(awk  '/Agent/ {print $NF}' ~/.ssh-env)
        if [[ $SSHPID == $SSHPID_ENV ]]
        then
            source ~/.ssh-env
        else
            killall ssh-agent
            sshupdate
        fi
    fi
}


It checks if an ssh-agent is already running and isn't a zombie. (On one of my systems, after starting a desktop environment, I always had a zombie ssh-agent running.) If there is no ssh-agent running, the function calls sshupdate, another function described below. If the agent is present and alive in the system, the function compares the ssh-agent pid with the information saved in the ~/.ssh-env file. (See the sshupdate paragraph for more information.) If the information is consistent, it sources .ssh-env. If not, it kills all ssh-agent processes and then calls sshupdate.

SSHUPDATE

This is a very simple function calling ssh-agent and saving its output to a file.

sshupdate () {
    ssh-agent > ~/.ssh-env
    source ~/.ssh-env
}


The output can then be sourced by other functions or processes. Oh, and if you don't remember/know, the output of ssh-agent looks like this:

SSH_AUTH_SOCK=/tmp/ssh-BnXafqRnOSHx/agent.1884; export SSH_AUTH_SOCK;
SSH_AGENT_PID=1885; export SSH_AGENT_PID;
echo Agent pid 1885;

SSHADD

Finally the function responsible for adding your ssh key.

sshadd () {
    if (( $(ssh-add -l | grep -c $USER) == 0 ))
    then
        ssh-add
    else
        ssh-add -l
    fi
}


It checks the number of added keys. If a key from your home directory (or one with your username in its path) is not present, it adds it. Otherwise it lists all added keys.

USAGE

sshagent is called from your .zshrc, so it should run in every session. sshadd needs to be called by you the first time you need it.

FURTHER UPDATES

What if you have more than one key and you would like to add all of them at the same time? Then you could try to use the 'file' program to find SSH keys in the .ssh (or another) directory and add all of them.
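A sketch of that idea, assuming your version of 'file' recognises your key format (the function name find_ssh_keys is mine, not standard):

```shell
# List files that 'file' identifies as private keys; details depend on
# the 'file' version and on the key formats in use.
find_ssh_keys () {
    dir=${1:-$HOME/.ssh}
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        file "$f" | grep -q 'private key' && printf '%s\n' "$f"
    done
}

# Then all of them can be added in one go:
# find_ssh_keys | xargs ssh-add
```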


Saturday, April 18, 2015

The Cassandra, the Ansible, a pipe and a complicated command

I've been working on downsizing a Cassandra ring (not a funny task). To make things a bit easier I've been using Ansible for configuration management. In theory only the value of 'initial_token' in the Cassandra configuration changes, and everything else stays the same during a ring resize. Therefore, Ansible is not really necessary, but I believe it's good to have consistency in your configuration.

Cassandra role

First, I added a variable with the Cassandra ring size, max_nodes, to its role.

  - {role: cassandra, max_nodes: 13}

This value is used to create an Ansible local fact. To learn more about local facts, please read this Curtis Collicutt article, which suited my script best. Please note that the following code is a Jinja template, not a ready script.

#!/bin/sh
NODE=$(hostname | grep -o -P "\d+")
TOKEN=$(echo '2^127/{{ max_nodes }}' | bc)
cat <<EOF
{
    "node_number": $NODE,
    "token": $TOKEN
}
EOF


The above script prints to standard output a JSON document with two variables.
  • node_number - created based on the hostname. The number in a hostname corresponds to a ring position (i.e. cassandra-4 is the fourth node).
  • token - 'the main part' of the token value calculation. The rest is done in the Cassandra configuration template:
    initial_token: {{ '%d' % ( ansible_local.cassandra.token *  (ansible_local.cassandra.node_number - 1)) }}

That sounds complicated, and it is. As often happens, there is a historical explanation. Initially, the whole calculation was done in the template. It worked for a 16-node ring, but not for 15. It didn't work because dividing two integer numbers (2**127/15) results in a float in Jinja, which leads to a rounding error!

Ansible commands

As a bonus, here are two examples of using Ansible to run a bit more complicated commands on many hosts.
  1. Read the token position from the configuration file and initialize a node move in a screen session. Please note the escape character in front of '$'.
    ansible \
     -i inventory/ec2.py \
     -m shell \
     -a "screen -d -m \
       nodetool -h localhost move \
       \$(awk '/initial_token/ {print \$2}' \
         /etc/cassandra/default.conf/cassandra.yaml)"\
    tag_pool_cassandra

  2. Much simpler: check that screen is running. AFAIK you have to use the shell module if you want to use a UNIX pipe.
    ansible \
     -i inventory/ec2.py \
     -m shell \
     -a "ps -ef| grep SCREEN"\
    tag_pool_cassandra

Friday, March 06, 2015

How to deal with AWS profiles

I don't know how common it is to be part of an organisation with many AWS (Amazon Web Services) accounts, but it makes things tricky. Amazon makes it relatively easy to use many 'named profiles' (accounts) with the AWS CLI. (If you haven't tried it, see this documentation.) The Boto (Python AWS interface) developers also added an easy way to use the same profiles in version 2.29. (To see how to use the same profiles in Boto and other SDKs, check this article.) But release 2.29 is not so old, and what if you are stuck with an older version (for example the one from the latest Ubuntu LTS)? I was in such a situation and wrote this small function to use with profiles from the ~/.boto (not ~/.aws/) file.

def set_account(environment):
    """set_account(environment) -
    sets credentials for the given environment/account.
    """
    for i in boto.config.items(environment):
        boto.config.set('Credentials', i[0], i[1])

So if your profile is called 'prod':

import boto

set_account('prod') 
conn = boto.connect_ec2()

Another program having issues with many AWS accounts is Ansible. (However, the authors claim it's a feature, not a bug.) My first approach was to add the above function to the ec2.py inventory script and further extend it by adding the following lines:
  • to the __init__ method of the Ec2Inventory class:
     
    set_account(self.args.environment)
     
  • and to the parse_cli function:
     
    parser.add_argument('-e', '--environment', type=str, required=True,
                               help="select an environment/profile to run.")
     
    
A script prepared this way is not ready to use with Ansible yet - a wrapper around it is needed as well. For example, this bash script, called ec2-prod.sh, uses Ansible in the 'prod' environment:

#!/bin/sh
cd $(dirname $0)
./ec2.py -e prod --refresh-cache

If you need to know why the wrapper is needed check the Ansible inventory code.

Such an approach is not ideal: if you have many accounts to work with, you will need a wrapper for each one. What's worse, it doesn't work with the unified AWS config approach and requires keeping a unique version of the inventory script. Therefore, I tried to find a better resolution. I could not find anything interesting, so I decided to write a small shell script that reads ~/.aws/credentials and exports the AWS keys for a selected profile. The script is a simple wrapper around a bit complicated awk command. To use it you have to source it, not execute it, because the script should run in the current shell.

#!/bin/bash

TMP_FILE=/tmp/current_aws
awk  \
 'BEGIN{a=0};\
 /\['$1'\]/ {a=1};\
 /access_key_id/ {if (a==1){printf "export %s=%s\n", toupper($1), $3}};\
 /secret_access_key/ {if (a==1) {printf "export %s=%s\n", toupper($1), $3;a=0}}'\
 ~/.aws/credentials > $TMP_FILE

source $TMP_FILE
rm $TMP_FILE

The script ensures that:
  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
are exported. This works not only with Ansible, but can also simplify AWS CLI usage, because the --profile option can be dropped. It might also help with other tools.

Thursday, January 08, 2015

Datadog and many dataseries stacked together

Recently, I've started to use Datadog. It has nice features, but I have also found some annoying gaps. One of them is that there is no easy way to prepare a graph with a stack of different series, for example a nice representation of CPU time spent in different states.



Luckily, as you can see above, it can be done. You just need to change some things in the JSON, so you end up with something similar to what I got below. The main point is to have all the data series in the argument of one "q".

{
  "viz": "timeseries",
  "requests": [
    {
      "q": "avg:system.cpu.system{host:host-01}, avg:system.cpu.user{host:host-01}, avg:system.cpu.iowait{host:host-01}, avg:system.cpu.stolen{host:host-01}, avg:system.cpu.idle{host:host-01}",
      "type": "area"
    }
  ],
  "events": []
}

Tuesday, January 06, 2015

Count processes per state per application

In previous posts (here and here) I discussed how to count threads in a given state for a given process. Recently, I had another problem - I needed to count the number of processes per application per state. My previous commands wouldn't work, so I wrote an alternative version.

while [ 1 ];
do
    date;
    cat /proc/loadavg;
    ps -Leo state,args |
     awk ' $1 ~ /(D|R)/ {state[$0]++} \
      END{ for (j in state) {printf "%s - %d\n", j, state[j]}}' |
      sort -k 2;
    echo "---";
    sleep 5;
done 

Note there is no PID in the output, and args are included as a whole, so identical command lines are grouped and counted together.

One more thought: dropping "$1 ~ /(D|R)/" can be useful in case of a problem with the total number of processes. But then the whole command should be modified a bit, so the results are sorted by the number of processes. A simplified version would look like this one:

while [ 1 ];
do
    ps -Leo state,args |
     awk '{state[$0]++} \
      END{ for (j in state) {printf "%d - %s\n", state[j], j}}' |
      sort -n;
    echo "---";
    sleep 5;
done

Tuesday, December 30, 2014

What is the (UNIX) load?

The "load" is widely used to describe the stress/work applied to a UNIX system. The simple rule is "the lower the better". In the old days of uniprocessor machines, a load of 1 was kind of a borderline. In the brave new world of multi-core/processor machines, a load of 1 means nothing. Many people suggest that a load equal to or lower than the number of processors/cores is good. That sounds sensible, but is not always accurate.
Why? To answer that we have to come back to the question asked in the subject.

What is the "load"?

The load is the exponentially damped/weighted moving average of the number of processes, including threads, using or waiting for CPU and, at least on Linux, in uninterruptible sleep state, over the last 1, 5 and 15 minutes (see Wikipedia). The last part means that all processes/threads waiting for a disk (or other I/O device) will increase the load without increasing CPU usage. It leads to situations where a load lower than the number of cores/processors is dangerous. Imagine a few processes trying to dump important information to disk, especially if all interrupts have affinity to one processor only (see this post) or the data is stored in many small files. On the other hand, a machine with a very high load might be very responsive: plenty of processes waiting to write information to disk, while not using a lot of memory and CPU at the same time. Just look at this picture:
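As a toy illustration of the averaging (a sketch, not the kernel's exact code), the 1-minute figure can be simulated with awk, assuming a 5-second sampling interval and a constant two runnable tasks:

```shell
awk 'BEGIN {
    e = exp(-5/60)               # decay factor: 5 s sample, 60 s window
    load = 0
    for (t = 1; t <= 24; t++) {  # two simulated minutes
        n = 2                    # constant number of runnable tasks
        load = load * e + n * (1 - e)
        printf "t=%3ds load=%.2f\n", t * 5, load
    }
}'
```

The average creeps towards 2 but never jumps there instantly, which is exactly why a sudden burst of runnable processes shows up in the load only gradually.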



If you want to know even more details of how the load is actually calculated, read this impressive white paper.


Links:
http://en.wikipedia.org/wiki/Load_%28computing%29
http://www.teamquest.com/pdfs/whitepaper/ldavg1.pdf
http://larryn.blogspot.co.uk/2013/05/cpu-affinity-interrupts-and-old-kernel.html

Saturday, December 06, 2014

Install CyanogenMod at Nook HD+

Recently I decided to try the new CyanogenMod (CM11) on my Nook HD+. Initial reading indicated that I had to reinstall using Recovery rather than the internal updater. I fiddled with logging into Recovery so much that I recovered the official B&N OS, which replaced CM.

I needed to start from the beginning. I did some research and found that post. It looked good, so I gave it a try. First I downloaded the ClockworkMod attached to the post, but later I downloaded the latest CM snapshot from there and added Google Apps for CM11 from there. I put everything on the SD card as described and kicked off the installation. It flew like an albatross. (To be honest I don't know why I wrote albatross - maybe because of this?)



Anyway, CM11 works well on the Nook HD+.

Links:

  • http://wiki.cyanogenmod.org/w/Ovation_Info
  • http://download.cyanogenmod.org/?type=snapshot&device=ovation
  • http://wiki.cyanogenmod.org/w/Google_Apps
  • http://forum.xda-developers.com/showpost.php?p=42406126&postcount=7
  • http://forum.xda-developers.com/attachment.php?attachmentid=2849350&d=1405272804

Sunday, November 23, 2014

More fabric as a library

Recently I had to prepare a tool running some remote commands, so of course I decided to use Fabric, but I had a big problem controlling hosts. I remembered that I had written a short article on Fabric here some time ago, but it didn't help. I asked on the Fabric mailing list, but there was no help there either.


Manual host name control

In this tool I didn't need to run many parallel SSH connections, so I decided to control the remote host name from inside the loop, by setting env.host_string each time (this is very useful functionality). Like in the following example:


#!/usr/bin/env python
"""Example code to use Fabric as a library. 
It shows how to set up host manually.
 
Author: Wawrzek Niewodniczanski < main at wawrzek dot name >
"""
 
# import sys to deal with scripts arguments and of course fabric 
import sys
import fabric
from fabric.api import run, hide, env

env.hosts = ['host1', 'host2'] 
 
# Main function to run remote task 
def run_task(task='uname'):
    """run_task([task]) -
    runs a command on a remote server. If task is not specified, it will run 'uname'."""
    # hide some information (this is not necessary).
    with hide('running', 'status'):
        run(task) 
 
# Main loop
# take all arguments and run them on all hosts specified in the env.hosts variable
# if no arguments, run 'uname'
if len(sys.argv) > 1:
    tasks = sys.argv[1:]
    for task in tasks:
        for host in env.hosts:
            env.host_string = host
            run_task(task)
else:
    for host in env.hosts:
        env.host_string = host
        run_task()



Fabric in full control

The problem has bugged me since then. Yesterday I found some of my old code, analysed it, and quickly found a small but profound difference from my recent Fabric usage. The code above called the run_task function wrongly. Rather than calling it directly, I was supposed to use execute.

#!/usr/bin/env python
"""Example code to use Fabric as a library. 
It shows how to set up host manually.
 
Author: Wawrzek Niewodniczanski < main at wawrzek dot name >
"""
 
# import sys to deal with scripts arguments and of course fabric 
import sys
import fabric
from fabric.api import run, hide, env, execute

env.hosts = ['host1', 'host2'] 
 
# Main function to run remote task 
def run_task(task='uname'):
    """run_task([task]) -
    runs a command on a remote server. If task is not specified, it will run 'uname'."""
    # hide some information (this is not necessary).
    with hide('running', 'status'):
        run(task) 
 
# Main loop
# take all arguments and run them on all hosts specified in the env.hosts variable
# if no arguments, run 'uname'
if len(sys.argv) > 1:
    tasks = sys.argv[1:]
    for task in tasks:
        execute(run_task, task)
else:
    execute(run_task)


Links:

http://www.fabfile.org/
http://larryn.blogspot.co.uk/2012/11/fabric-as-python-module.html
http://lists.nongnu.org/archive/html/fab-user/2014-10/msg00002.html