Running invoke from other folders

While working on big projects, you sometimes have invoke tasks lying around in different places. It wouldn’t make sense to merge them all together, but they can still help each other out when needed.

One way to do this is to search for invoke tasks in other folders and run them directly when they apply.

I had to go for this approach in a monolithic repo where multiple projects were built in mostly the same style, with minor modifications. All of them had the same set of commands and the same style of running them. I didn’t want to set up the same invoke tasks for every individual project, but rather a common set of tasks that each of them could reuse.

Hence, here’s what I did:

  1. I knew for a fact that most sub-projects needed the same command to build themselves. I didn’t want to repeat that command in each project; instead, I kept it in the general space and overrode it only when a sub-project required a special version of the build command.
  2. When the general invoke was called to run a task, it would first check whether the command was already available in the sub-project it was being run for.
    • If yes, the sub-project intends to override the default command in its own style
    • If no, the default version runs

Here’s the simplified version of the code:

import os
import subprocess
from contextlib import contextmanager

@contextmanager
def cd(path):
    old = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(old)

def build():
    folder_to_run_the_command_on = '/home/folder'
    with cd(folder_to_run_the_command_on):
        print('Finding tasks...')
        # List all the possible commands that you can run on that folder
        res = subprocess.check_output(['invoke', '-l'])
        # Does it contain the command that we need to run?
        if 'build' in res:
            print('Found the build command in "{}" folder'.format(folder_to_run_the_command_on))
            subprocess.call(['invoke', 'build'])
        else:
            # We need to run the generic version of the build command
            pass

Systemd tutorial

Systemd usually requires two files:

  1. service file
  2. timer file

Service files

Here you provide the details you’d use to

  • Start/stop a service
  • Define the type of service
    • Can be simple, forking, oneshot, dbus, notify or idle
  • How to kill the service
  • Ability to restart
  • Path for starting up
  • Timeout for the service startup or shutdown

A service file is usually made up of 3 sections:

  1. Unit
  2. Service
  3. Install
    • Describes how the unit is installed/enabled, e.g. which target should pull it in

One example is as follows:
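Since the original example image is unavailable, here is a minimal sketch of a full unit file; the service name and paths are made-up placeholders:

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=My application daemon

[Service]
Type=simple
ExecStart=/usr/bin/myapp --serve
Restart=on-failure

[Install]
```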




Here’s what your regular Service section would look like:
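As a sketch (all values are illustrative), a typical [Service] section covering the options above could look like this:

```ini
[Service]
Type=forking
ExecStart=/usr/bin/myapp --daemonize
ExecStop=/usr/bin/myapp --stop
KillMode=process
Restart=on-failure
TimeoutStartSec=30
TimeoutStopSec=30
WorkingDirectory=/var/lib/myapp
```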



Timer files contain information about a timer controlled and supervised by systemd, for timer-based activation. This is possibly a better replacement for cron jobs; however, the configuration is a bit different.

To set up a timer, you need the following options:

  • OnActiveSec
  • OnBootSec
  • OnStartupSec
  • OnUnitActiveSec
  • OnUnitInactiveSec

They all help you set up your timers relative to different starting points.

Some other options you could use are:

  • OnCalendar
    • This is your friend if you are looking for a cron job replacement. Please check the references below for samples on how to set up your schedules in the correct format; it is not exactly the same as the cron job style
  • AccuracySec
    • Based on the timer, how close to the actual time should this timer wake up
    • Use the value of 1us to be the smallest and most accurate
  • Persistent
    • Use this if you want to save timestamp information whenever the service shuts down. The information is stored on the hard disk and used along with the boot and active sec settings

Here’s one simple sample for setting a timer
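A minimal sketch, assuming a service named myapp.service: this timer fires 15 minutes after boot and then once a day while the unit stays active.

```ini
# /etc/systemd/system/myapp.timer (hypothetical)
[Unit]
Description=Run myapp periodically

[Timer]
OnBootSec=15min
OnUnitActiveSec=1d
Persistent=true
AccuracySec=1us

[Install]
```

For wall-clock schedules, use OnCalendar= instead of the relative On*Sec options.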




Gitlab CLI API reference

Here’s a short tutorial on setting up the gitlab CLI for yourself. It is extremely user friendly, and you can take almost any action that you need. Anything the UI provides is also available over the CLI or the web API; both have examples here.

Let’s get started.


Installing the gitlab CLI

gem install gitlab
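Before the CLI can talk to your server, the gem expects the API endpoint and a private token in the environment; the URL and token below are placeholders:

```shell
# Point the gem at your GitLab instance and authenticate
export GITLAB_API_ENDPOINT=
export GITLAB_API_PRIVATE_TOKEN='<your_private_token>'
```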



Available commands

$ gitlab
|   Help Topics   |
| Branches        |
| Commits         |
| Groups          |
| Issues          |
| Labels          |
| MergeRequests   |
| Milestones      |
| Namespaces      |
| Notes           |
| Projects        |
| Repositories    |
| RepositoryFiles |
| Snippets        |
| SystemHooks     |
| Users           |

Sample CLI commands

# Check the list of Projects
$ gitlab projects

# Based on the response, we know reconwisev2 is ID 487928
# Let's find out the list of labels in it
$ gitlab labels 487928
|                                                      Gitlab.labels 487928                                                      |
| closed_issues_count | color   | description        | name         | open_issues_count | open_merge_requests_count | subscribed |
| 2                   | #ff0000 | null               | !Blocker     | 0                 | 0                         | false      |
| 2                   | #0033cc | null               | #AWS         | 8                 | 0                         | false      |
| 27                  | #428bca | null               | #Bug         | 2                 | 0                         | false      |
| 3                   | #0033cc | null               | #Feature     | 29                | 0                         | false      |
| 7                   | #5843ad | null               | #Improvement | 22                | 0                         | false      |
| 1                   | #428bca |                    | #Support     | 1                 | 0                         | false      |
| 28                  | #f0ad4e | null               | $GH          | 12                | 0                         | false      |
| 0                   | #f0ad4e |                    | $IFAST       | 4                 | 0                         | false      |
| 25                  | #ff0000 | null               | 1-Critical   | 7                 | 0                         | false      |
| 2                   | #ad4363 | null               | 2-Important  | 20                | 0                         | false      |
| 5                   | #ad4363 | null               | 3-Normal     | 18                | 0                         | false      |
| 2                   | #d491a5 |                    | 4-Trivial    | 6                 | 0                         | false      |
| 0                   | #a8d695 | null               | ^In-Progress | 3                 | 1                         | false      |
| 0                   | #69d100 | Completed/Finished | ^Resolved    | 0                 | 1                         | false      |

Sample CURL commands

Check the list of Projects
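The request itself, assuming the GitLab v3 API of that era and a private token (both the URL and token are placeholders), looks something like:

```shell
curl --header "PRIVATE-TOKEN: <your_private_token>" ""
```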

This will return you a big JSON with the list of your projects in gitlab.

Based on the response, we know that the project ID is 487928. Let’s find out the list of labels in it.
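Assuming the same v3-style endpoint and placeholder token as above, the labels request would be:

```shell
curl --header "PRIVATE-TOKEN: <your_private_token>" ""
```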

The response is a bit like this:


[
    {
        "name": "!Blocker",
        "color": "#ff0000",
        "description": null,
        "open_issues_count": 0,
        "closed_issues_count": 2,
        "open_merge_requests_count": 0,
        "subscribed": false
    },
    {
        "name": "#AWS",
        "color": "#0033cc",
        "description": null,
        "open_issues_count": 8,
        "closed_issues_count": 2,
        "open_merge_requests_count": 0,
        "subscribed": false
    },
    {
        "name": "#Bug",
        "color": "#428bca",
        "description": null,
        "open_issues_count": 2,
        "closed_issues_count": 27,
        "open_merge_requests_count": 0,
        "subscribed": false
    }
]

More documentation is available here

Protobuf on Docker

I found it really strange that nobody had mentioned on their blog how to compile the Python protobuf with the C++ implementation.

I had been having a lot of trouble compiling the Python protobuf. After struggling with it on and off for a few months, I decided to give Docker a try, as I realized my own Fedora OS might be the one having trouble. I started with an Ubuntu Docker image, as I’d had success with similar compilation scripts on it before. Luckily, it all worked out for protobuf again.

Then I tried Docker images for CentOS 7 and Fedora 23, neither of which had been working for me in any shape or form.

The source code of the Dockerfiles is available on Github here:


We are running all the steps through the docker image so that the steps can be replicated with any protobuf source code release.

Here’s what we will be doing:

  1. Create protoc compiler by compiling C++ files
  2. Compile C++ implementation for python using the just created protoc

Dockerfiles are available for the following Operating Systems:

  • Ubuntu
  • CentOS 7
  • Fedora 23

Where to find the files inside the Docker images

  • The protoc compiler is available in the /ws/protoc-3.2 folder inside the images
  • The python version (compiled from C++) is available in /ws/protobuf-3.0.0-beta-3.2/python/dist/

You can copy out the files using the following commands:

id=$(sudo docker create <image_name>)
sudo docker cp $id:/ws/protoc-3.2 ./
sudo docker cp $id:/ws/protobuf-3.0.0-beta-3.2/python/dist/*.gz ./

In case you get an error like the following, remove *.gz from the cp command:

zsh: no matches found: e7c8a9102e1cd07b4c471c331bc4deba2222278eb22be1e79ecaa14e914ed654:/ws/protobuf-3.0.0-beta-3.2/python/dist/*.gz

Your second cp command then becomes:

sudo docker cp $id:/ws/protobuf-3.0.0-beta-3.2/python/dist/ ./

Once done, you can remove the created container with the following command:

sudo docker rm -v $id

Just remember to change the ownership, as the files will belong to root by default. You can do that with the following command:

sudo chown -R <USERNAME>:<USERNAME> *

Using TODO in Fedora

I recently started using Fedora for work and had to manage a lot of tasks across various projects. The list was big enough, and there’s no proper Linux support for Evernote, my trusty todo list manager, or for the ToDo List manager by AbstractSpoon. I tried post-it notes, but my list was changing on an ad-hoc basis. Finally, I came across the Todo.txt extension.

Turned out this was just what I was looking for. I started putting all of my tasks in it, with proper categorization. Behind the scenes it is an extremely simple app with only two files, both saved in the ~/.local/share/todo.txt/ folder:

  • done.txt
  • todo.txt

The tasks initially go in as simple text in todo.txt and are moved to done.txt once marked complete. It is extremely useful that the Todo.txt app has a UI, and that the underlying files are user friendly as well.
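For illustration, a few made-up lines in the todo.txt format, using + markers for project categories:

```
Write the deployment script +projectA
Review the failing integration tests +projectB
2016-03-01 Update the wiki page for the release +projectA
```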

Finding tasks completed in the last week

During weekly meetings I found it difficult to mention all the tasks that I had been working on for the whole of the previous week. Thinking about the todo tasks, I thought of using a bash script to print out the tasks from the last 8 days. After all, the files did contain the whole list of tasks.

Here was the idea that I had in mind:

  • Read done.txt and todo.txt
  • Highlight the tasks differently from both files so it’s easy to see what has been completed already
    • I chose green for done and red for todo
  • Highlight the categories differently - I chose yellow
  • Show all the tasks completed in the last 8 days
    • Also provide the option to choose any number of days
    • Helps on those days when I want to see more than 8 days
  • Show all the tasks in todo

Here’s the script for that:

    TODOFILE=~/.local/share/todo.txt/todo.txt
    DONEFILE=~/.local/share/todo.txt/done.txt
    # echo $TODOFILE
    # echo $DONEFILE "\n"

    function search {
        DAY=$1
        egrep $(date '+%Y-%m-%d' --date=$DAY' days ago') $DONEFILE | GREP_COLOR="1;32" grep --color=always ' [a-Z].*' | GREP_COLOR="03;33" grep --color=always  "\+.*"
    }

    function lastXdays {
        END=$1
        cat $TODOFILE | GREP_COLOR="1;31" grep --color=always ' [a-Z[].*' | GREP_COLOR="3;33" grep --color=always  "\+.*"
        for i in $(seq 0 $END); do
            search $i
        done
    }

    # First arg, if given, or default value of 8
    DAYS=${1:-8}
    lastXdays $DAYS | sort -u

Now, when I run this command, it gives me the following:



Sharing the tasks

I work with different teams, which means sharing the latest updates with them on different days of the week. I used to run my todos command in bash before going to the meeting, but I realized this was getting very mundane, and I was spending a lot of time remembering the tasks I had done.

I decided to make it easy by sharing the tasks with the rest of the team automatically. Enter crontab and python’s invoke.

Here are the steps we will need:

  1. Setup cronjob
  2. Cron job will call upon a bash script
  3. Bash script will call python’s invoke
    • Here we call upon a bash script to provide us the results of todos in bash, and then use that to send an email based on the --mailgroup
  4. Simply taking the output of todos in bash will give us a lot of unreadable characters, especially where we color code the response to make it easy on the eyes (e.g. 3;33)
  5. You can install aha to convert the ANSI terminal colors to HTML color codes. This way, when we mail the contents to team members, they will display properly.


45 09 * * 1 /usr/bin/bash /PATH_TO_BASH_SCRIPT/ --mailgroup=<TEAM_MAIL>

The bash script itself is really simple; it calls upon the invoke task:

# ----- --------
# Activate the virtual env
source ~/code/venvs/ve_opt/bin/activate

# Go to the directory containing the invoke script
cd ~/code/scripts/

# Run the invoke, pass the cmd line params, as is (which means mailgroup)
inv share_todos $*

Now for the contents of the invoke task. We want to ensure that the font is big enough.

import subprocess, time

def share_todos(mailgroup):
    process_out = subprocess.check_output(['/FULL_PATH_TO/'])
    body = '<body style="font-weight:900; font-size:1.3em;">' + process_out
    # send_email is the (elided) internal mail helper
    send_email(mailgroup, "My todos @ {}".format(time.strftime('%c')), body)

We will use aha to convert the ANSI terminal colors to HTML color codes. We will also replace some of the color codes that aha creates, because they don’t look very nice.

source ~/.bashrc

todos | /usr/local/bin/aha | sed -e 's/color:olive/color:DeepSkyBlue; font-style:italic;/g' -e 's/color:green;/color:LimeGreen;/g' -e 's/<pre>/<pre style="color:gray;">/g'
exit 0

The result

The email look


Getting list of Issues from JIRA under current sprint

When you are working on Agile Boards in JIRA, you may want to retrieve all the issues related to a particular board or the sprint. Usually you’d find issues in progress under the dashboard of the sprint itself.

Python JIRA allows you only a few options out of the box.

As you will also notice from the jira docs, the sprints function there only provides you the sprints themselves.

What it fails to provide is the issues under a sprint, which are fetched through a different subquery under the hood.

The code here intends to provide a full list of all the issues - completed or incomplete - that belong to a given sprint name. You can modify the code easily to suit your needs.


First things first: you need to install jira from PyPI for the code.

pip install jira

The code

from jira.resources import Issue
from jira.client import JIRA

def sprints(username, ldp_password, sprint_name):
    def sprint_issues(cls, board_id, sprint_id):
        r_json = cls._get_json(
            'rapid/charts/sprintreport?rapidViewId=%s&sprintId=%s' % (
                board_id, sprint_id),
            base=cls.AGILE_BASE_URL)

        # Pull both the completed and the incomplete issues of the sprint
        type_of_issues_to_pull = ['completedIssues', 'incompletedIssues']
        issues = []
        for t in type_of_issues_to_pull:
            if t in r_json['contents']:
                issues += [Issue(cls._options, cls._session, raw_issues_json)
                           for raw_issues_json in r_json['contents'][t]]
        return {x.key: x for x in issues}.values()

    fmt_full = 'Sprint: {} \n\nIssues:{}'
    fmt_issues = '\n- {}: {}'
    issues_str = ''
    milestone_str = ''

    options = {
        'server': 'http://jira/',
        'verify': True,
    }
    gh = JIRA(options=options, basic_auth=(username, ldp_password))

    # Get all boards viewable by anonymous users.
    boards = gh.boards()
    board = [b for b in boards if == sprint_name][0]

    sprints = gh.sprints(

    for sprint in sorted([s for s in sprints
                          if s.raw[u'state'] == u'ACTIVE'],
                         key=lambda x: x.raw[u'sequence']):
        milestone_str = str(sprint)
        issues = sprint_issues(gh,,
        for issue in issues:
            issues_str += fmt_issues.format(issue.key, issue.fields.summary)

    result = fmt_full.format(milestone_str, issues_str)
    return result

You can call the function with the following command:

sprints(<username>, <password>, <sprint_name>)

You will get results that are similar to the following:


- PROJECT-437: Description of the issue
- PROJECT-447: Description of the issue

getopt vs getopts

Should you use getopt or getopts in your bash scripts?

The answer can be a bit tricky, but it is mostly straightforward.


Generally, try to stay away from getopt for the following reasons:

  • External utility
  • Traditional versions can’t handle empty argument strings, or arguments with embedded whitespace
  • For the loops to work perfectly, you must provide the values in the same sequence as the for loop itself; which is very hard to control
  • Mostly a way to standardize the parameters

The only time I could think of using getopt is when I really want to use a long option name and there’s just a single one.

Here’s a sample for getopt:


#Check the number of arguments. If none are passed, print help and exit.
NUMARGS=$#
# echo -e \\n"Number of arguments: $NUMARGS"
if [ $NUMARGS -eq 0 ]; then
  echo "No arguments passed"
  exit 1

OPTS=`getopt -o vhns: --long verbose,dry-run,help,stack-size: -n 'parse-options' -- "$@"`

eval set -- "$OPTS"

while true; do
  case "$1" in
    -v | --verbose ) VERBOSE=true; shift ;;
    -n | --dry-run ) DRY_RUN=true; shift ;;
    -s | --stack-size ) STACK_SIZE="$2"; shift 2 ;;
    -h | --help ) #show help
      echo "Usage: $0 [-v] [-n] [-s SIZE]"; exit 0 ;;
    -- ) shift; break ;;
    * ) #unrecognized option - show help
      echo -e \\n"Option $1 not allowed."; exit 1 ;;
  esac
done


Whereas, getopts is:

  • Portable and works in any POSIX shell
  • Lenient with usage of “-a -b” as well as “-ab”
  • Understands “--” as the option terminator
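A tiny illustration of that leniency (the option names are arbitrary): the same getopts loop accepts -a -b, the grouped -ab, and stops at --.

```shell
parse() {
  OPTIND=1            # reset between calls so getopts starts fresh
  a=0; b=0
  while getopts ab flag; do
    case $flag in
      a) a=1 ;;
      b) b=1 ;;
    esac
  done
  shift $((OPTIND-1)) # drop the parsed options, keep the positional args
  echo "a=$a b=$b rest=$*"
}

parse -a -b foo   # separate flags
parse -ab foo     # grouped flags
parse -- -a       # "--" ends option parsing; "-a" stays a plain argument
```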

Here’s a sample for getopts

SCRIPT=`basename ${BASH_SOURCE[0]}`

## Let's do some admin work to find out the variables to be used here
BOLD='\e[1;31m'         # Bold Red
REV='\e[1;32m'          # Bold Green
OFF='\e[0m'             # Reset

#Help function
function HELP {
  echo -e "${REV}Basic usage:${OFF} ${BOLD}$SCRIPT -d helloworld ${OFF}"\\n
  echo -e "${REV}The following switches are recognized. $OFF "
  echo -e "${REV}-p ${OFF}  --Sets the environment to use for installing python ${OFF}. Default is ${BOLD} /usr/bin ${OFF}"
  echo -e "${REV}-d ${OFF}  --Sets the directory whose virtualenv is to be setup. Default is ${BOLD} local folder (.) ${OFF}"
  echo -e "${REV}-v ${OFF}  --Sets the python version that you want to install. Default is ${BOLD} 2.7 ${OFF}"
  echo -e "${REV}-h${OFF}  --Displays this help message. No further functions are performed."\\n
  echo -e "Example: ${BOLD}$SCRIPT -d helloworld -p /opt/py27env/bin -v 2.7 ${OFF}"\\n
  exit 1
}

# In case you wanted to check what variables were passed
# echo "flags = $*"

while getopts p:d:v:h FLAG; do
  case $FLAG in
    p)  PYTHON_PATH=$OPTARG ;;
    d)  DIR=$OPTARG ;;
    v)  VERSION=$OPTARG ;;
    h)  HELP ;;
    \?) #unrecognized option - show help
      echo -e \\n"Option -${BOLD}$OPTARG${OFF} not allowed."
      HELP ;;
  esac
done

shift $((OPTIND-1))

What if I really wanted long options with getopts?

getopts function can be used to parse long options by putting a dash character followed by a colon into the OPTSPEC. Sharing the solution from this link.

#!/usr/bin/env bash
OPTSPEC=":vh-:"
while getopts "$OPTSPEC" optchar; do
    case "${optchar}" in
        -)
            case "${OPTARG}" in
                loglevel)
                    val="${!OPTIND}"; OPTIND=$(( $OPTIND + 1 ))
                    echo "Parsing option: '--${OPTARG}', value: '${val}'" >&2;;
                loglevel=*)
                    val=${OPTARG#*=}; opt=${OPTARG%=$val}
                    echo "Parsing option: '--${opt}', value: '${val}'" >&2;;
                *)
                    if [ "$OPTERR" = 1 ] && [ "${OPTSPEC:0:1}" != ":" ]; then
                        echo "Unknown option --${OPTARG}" >&2
                    fi;;
            esac;;
        h)
            echo "usage: $0 [-v] [--loglevel[=]<value>]" >&2
            exit 2;;
        v)
            echo "Parsing option: '-${optchar}'" >&2;;
        *)
            if [ "$OPTERR" != 1 ] || [ "${OPTSPEC:0:1}" = ":" ]; then
                echo "Non-option argument: '-${OPTARG}'" >&2
            fi;;
    esac
done

Color your process listings

Many a time I am grepping for a process running on a prod server with lots of different configuration parameters. However, since there are so many of them, it is very difficult to spot a particular parameter and find out what value was assigned to it. I wanted to make it easier on the eyes, so I decided to color code the parameters and separate them from their values. Here’s the bash function I pulled out.

function color_ps {
  gawk 'BEGIN {RS=" --| -"; }{print $0}' \
  | sed -e 's/\([[:alpha:]]\+\)=/\1 /g' \
  | gawk 'BEGIN    {printf "-----------------\n" ; }
          {
                if (NF > 2) printf "\n\033[41;5;1m%s\033[0m\n", $NF ;
                printf "\033[40;38;5;82m  %30s  \033[38;5;198m %s \033[0m \n", $1, $2
          }'
}

The idea is as follows:

  • Have a bash function that can be piped onto any command; perhaps ps -ef
  • Paragraph style viewing for each process
  • Break down every parameter into separate lines using gawk
  • Use sed to convert config params like --rate=10 into something like rate 10, just like the others
  • Use gawk again to add colors on every pair of key value line
  • Keys are right aligned and green; values are red, so it’s very easy to view

Here is a sample command I wanted to test out.

/opt/py27env/bin/python main-service-name --daemonize \
    --alias-svc=mainsvc01 --application-id=app03/mainsvc01 --monitoring-service-name=mainsvc01 \
    --log-level=DEBUG --solace-session-prop-username=testing \
    --solace-session-prop-password=testing --solace-session-prop-vpn=testing \
    --solace-session-prop-cache-name=test_dc \

Here is the result from my tests:

Color coded process listing


How to setup a local pypi mirror

It is quite easy to set up a local pypi server.

Some details can be found here. You can also use devpi if you prefer but it seems overly complicated for a job that is easily achieved by pip.

Let’s look at how to use pip for local installation. Firstly, it is possible to install all requirements beforehand in a separate directory. We can use the following commands:

pip install --download DIR -r requirements.txt

If you prefer wheel, then use the following:

pip wheel --wheel-dir DIR -r requirements.txt

Now, when we want to install from this given directory DIR, then the following command can help:

pip install --no-index --find-links=DIR -r requirements.txt

If you are using these commands in your current setup and it still feels slow, the reason is likely one of the first few commands, where the request still goes out to the internet. To speed up the whole process, only send a request to the internet when a new package appears in the requirements.txt file; otherwise skip that part altogether and go straight to pip install --no-index.

This will make your installation finish in a flash.

One quick and dirty way is to maintain a local copy of the requirements.txt file and figure out, on every commit of a code change in the project, whether a new requirement has been added to that list. In that case, install it and update your local copy.

Here’s a sample command to put all the changes in a single line that you can feed into pip install:

sdiff -s /tmp/1 /tmp/2 | sed -e 's/<//g' | awk 'BEGIN {ORS=" "} {print $1}'

Breaking it down:

  • sdiff checks if there are any new changes
  • sed ensures that you only get the relevant characters, not < or >
    • If you want you can put an egrep before sed to get only one side of the changes
  • awk puts all the lines together into a space separated values that can be fed into pip install
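As a quick demonstration with two hypothetical throwaway files, where six is the only newly added requirement (note the new file goes on the left, so additions show up with the < marker that the sed strips):

```shell
# old and new requirement lists; only "six" was added
printf 'flask\nrequests\n' > /tmp/old_reqs.txt
printf 'flask\nrequests\nsix\n' > /tmp/new_reqs.txt

NEW_PKGS=$(sdiff -s /tmp/new_reqs.txt /tmp/old_reqs.txt | sed -e 's/<//g' | awk 'BEGIN {ORS=" "} {print $1}')
echo "$NEW_PKGS"   # the space-separated list you would feed to pip install
```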

AngularJS 2.0 building blocks explained

Let’s explain the eight building blocks of any Angular 2 app:

  1. Module
  2. Component
  3. Template
  4. Metadata
  5. Data Binding
  6. Directive
  7. Service
  8. Dependency Injection


Module

  • Optional feature
  • Useful if you are using TypeScript which allows you to use interface or classes
  • export class AppComponent is like saying that this class is going to be public
  • Use relative file paths for importing modules

Component

A Component class is something you’d export from a module.


Components control views:

  • Logic to support the view can be inside a class
  • Angular creates/destroys components as user moves through UI


Template

A form of HTML that describes how to render the Component. It looks mostly like HTML syntax, except where you add Angular keywords.


Metadata

Metadata tells Angular how to process a class. Some @Component configuration options:

  • selector: css selector to be applied to that html element
  • templateUrl: address of the component’s HTML template
  • directives: array of components/directives that this component itself requires to function properly
  • providers: an array of dependency injection providers for services

Data Binding

Following are the four possible ways of data binding:

<hero-detail [hero]="selectedHero"></hero-detail>
<div (click)="selectHero(hero)"></div>
<input [(ngModel)]="">

  1. The {{...}} “interpolation” displays the component’s property value within the surrounding HTML
  2. The [hero] property binding passes the selectedHero from the parent HeroListComponent to the hero property of the child HeroDetailComponent
  3. The (click) event binding calls the Component’s selectHero method when the user clicks on a hero’s name
  4. Two-way data binding combines property and event binding in a single notation, using the ngModel directive


Directive

A class with directive metadata. Even Components are directives - directives with templates. Two other kinds are:

  1. Structural: they alter layout by adding, removing, and replacing elements in the DOM
  2. Attribute: attribute directives alter the appearance or behavior of an existing element. In templates they look like regular HTML attributes, hence the name


The ngModel directive, which implements two-way data binding, is an example of an attribute directive.

<input [(ngModel)]="">

Other examples: ngSwitch, ngStyle, ngClass


Service

A service can be any value, function, or feature that your app needs.

Dependency Injection

A way to supply a new class instance with all the requirements. In TypeScript this can be achieved by providing everything inside the constructor.

An Injector maintains a list of the service instances it has created previously, so that it can reuse them when needed. It achieves this through providers, registered within each Component.