Many apps do not come pre-built in RPM format for Fedora, so you have to download a tar file instead.
To run such an app, you go to the folder where you extracted it and either double-click the executable
or launch it from the command line. By default, that app won’t be accessible through the Super key’s
universal search or appear as a regular application in *Show Applications*.
Fortunately, there’s a way around this, and it is an easy one.
Fedora looks for .desktop files in ~/.local/share/applications/ folder.
Let’s say we are trying to create a shortcut for Eclipse. We will then create
a file by the name eclipse.desktop in the given folder.
The contents will be as follows:
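The file contents themselves are missing from this extract; a minimal eclipse.desktop, going by the freedesktop.org Desktop Entry conventions, might look like the following (the Exec and Icon paths are assumptions — point them at wherever you extracted Eclipse):

```ini
[Desktop Entry]
Type=Application
Name=Eclipse
Comment=Eclipse IDE
# Assumed install location -- change to your actual path
Exec=/home/user/eclipse/eclipse
Icon=/home/user/eclipse/icon.xpm
Terminal=false
Categories=Development;IDE;
```

Once the file is saved, the app shows up in the universal search and in Show Applications.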
While working on big projects, you sometimes have invoke tasks lying around
in different places. It wouldn’t make sense to merge them all together;
it’s better to let them help each other out as and when needed.
One such way for this would be to search for invoke tasks from other folders
and run them directly when they can be used.
I had to go for this approach for a monolithic repo where multiple projects
were being built in mostly similar style with minor modifications.
All of them would have the same set of commands along with same style of running those commands.
I didn’t want to set up the same invoke task for all individual projects but rather
a common set of tasks that could be re-used by each one of them.
Hence, here’s what I did:
I knew for a fact that most sub-projects needed the same command to build themselves.
I didn’t want to use the same command over and over again in each of the projects.
I would rather use the command in the general space and override it only when a sub-project
requires a special version of the build command.
When the general invoke was called to do any task, it would first check
whether the command was already available in the sub-project it was being run for.
If yes, that means the sub-project intends to override the default command in its own style.
If no, the default version is run.
Here’s the simplified version of the code.
import os
import subprocess
from contextlib import contextmanager

from invoke import task, run


@contextmanager
def cd(path):
    old = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(old)


@task
def build():
    folder_to_run_the_command_on = '/home/folder'
    with cd(folder_to_run_the_command_on):
        print('Finding tasks...')
        # List all the possible commands that you can run on that folder
        res = subprocess.check_output(['invoke', '-l'])
        # does it contain this command that we need to run?
        if 'build' in res.decode():
            print('Found the build command in "{}" folder'.format(folder_to_run_the_command_on))
            run('invoke build')
        else:
            # we need to run the generic version of build command
            build_internal()
Timer files contain information about a timer controlled and
supervised by systemd, for timer-based activation.
They are possibly a better replacement for cron jobs,
though the configuration is a bit different.
To set up a timer, you need the following options:
OnActiveSec
OnBootSec
OnStartupSec
OnUnitActiveSec
OnUnitInactiveSec
They all help you set up your timers relative to different starting points.
Some other options you could use are:
OnCalendar
This is your friend if you are looking for a cron job replacement.
Please check the references below for samples on how to set up
your schedules in the correct format. It is not exactly the same as the cron syntax.
AccuracySec
Defines how close to the actual time the timer should wake up.
Use the value 1us for the smallest window and the most accurate timer.
Persistent
Set this if you want the timestamp of the last run saved
on the hard disk whenever the service shuts down. It is used along with
the boot and active sec information, so a run missed while the machine
was off can be triggered on the next boot.
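As an illustration (the unit name and schedule are my own example, not from the original post), a cron-like daily timer could look like this, saved as backup.timer next to a matching backup.service:

```ini
# backup.timer -- hypothetical example unit
[Unit]
Description=Run the backup service once a day

[Timer]
# Fire daily; Persistent catches up on runs missed while powered off
OnCalendar=daily
Persistent=true
# Allow waking up within one minute of the scheduled time
AccuracySec=1min

[Install]
WantedBy=timers.target
```

Enabling it with `systemctl enable --now backup.timer` starts the schedule, and `systemctl list-timers` shows when it will fire next.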
Here’s a short tutorial on setting up the GitLab CLI for yourself.
It is extremely user friendly, and you can take almost any action you need.
Anything that the UI provides is also available over cli or web services -
both of which have examples here.
I found it really strange that nobody had mentioned on their blog how to
compile Protobuf in Python with the C++ implementation.
I had been having a lot of trouble with the compilation of Python protobuf.
After struggling with it on and off for a few months, I decided to give Docker
a try, as I realized that my own Fedora OS might be the one having trouble.
I thought of starting with an Ubuntu Docker image, as I’d had success earlier
with such compilation scripts on it. Luckily, it all worked out successfully again
for protobuf.
Then I tried Docker images for CentOS 7 and Fedora 23, both of which had not
been working for me in any shape or form.
The source code of the Dockerfiles is available on GitHub here:
I recently started using Fedora for work and had to manage a lot of tasks on various projects.
The list was big enough, and there’s no proper support in Linux for Evernote, my trusty todo list manager,
or for the ToDoList manager by AbstractSpoon. I decided to try post-it notes, but my list was changing on an
ad-hoc basis. Finally I came across an extension called Todo.txt.
It turned out this was just what I was looking for. I started putting all of my tasks in it, with proper categorization.
Behind the scenes it is an extremely simple app with only two files, both saved in the ~/.local/share/todo.txt/ folder:
done.txt
todo.txt
Tasks are initially put as simple text in todo.txt and are moved to done.txt once marked complete.
It is extremely useful that the Todo.txt app has a UI as well as files that are user friendly.
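As an illustration (these task lines are made up, following the plain-text conventions the app uses - a +category tag on tasks, and an x plus the completion date on finished ones):

```text
# todo.txt -- pending tasks
Fix login timeout +backend
Prepare demo for sprint review +meetings

# done.txt -- completed tasks
x 2016-05-10 Update deployment script +infra
```

The completion date at the start of each done.txt line is what the date-matching script below relies on.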
Finding tasks completed in the last week
During weekly meetings I found it difficult to mention all the tasks that
I had been working on for the whole of the previous week. Thinking about todo
tasks, I thought of using a bash script to print out the tasks from the last
8 days. After all, the files did contain a whole long list of tasks.
Here was the idea that I had in mind:
Read done.txt and todo.txt
Highlight the tasks differently from both files so it’s easy to
see what has been completed already
Chose green color for done and red for todo
Highlight the categories differently - chose yellow
Show all the tasks completed in the last 8 days
Also provide the option to choose any number of days
Helps on those days when I want to see more than 8 days
Show all the tasks in todo
Here’s the script for that:
todos() {
    TODOFILE=~/.local/share/todo.txt/todo.txt
    DONEFILE=~/.local/share/todo.txt/done.txt
    # echo $TODOFILE
    # echo $DONEFILE "\n"
    lastXdays() {
        search() {
            DAY=$1
            cat $TODOFILE | GREP_COLOR="1;31" grep --color=always ' [a-Z[].*' | GREP_COLOR="3;33" grep --color=always "\+.*"
            egrep "$(date '+%Y-%m-%d' --date="$DAY days ago")" $DONEFILE | GREP_COLOR="1;32" grep --color=always ' [a-Z].*' | GREP_COLOR="03;33" grep --color=always "\+.*"
        }
        END=$1
        for i in $(seq 0 $END); do
            search $i
        done
    }
    # First arg, if given, or default value of 8
    DAYS=${1:-8}
    lastXdays $DAYS | sort -u
}
Now, when I run this command, it gives me the following:
Sharing the tasks
I work with different teams, which means sharing the latest updates
with them on different days of the week. I used to run my todos command in bash
before going to the meeting, but I realized this was getting very mundane and
I was spending a lot of time remembering the tasks I had done.
I decided to make it easy by sharing the tasks with the rest of the team
automatically. Enter crontab and python’s invoke.
Here are the steps we will need:
Set up the cron job
The cron job will call upon a bash script
The bash script will call python’s invoke
Here we call upon a bash script to provide us the results of todos
in bash, and then use that to send an email based on the --mailgroup.
Simply taking the output of todos in bash will give us a lot of
unreadable characters, especially the ones where we try to color code
the response so it’s easy on the eyes - 3;33.
You can install aha to convert the ANSI terminal colors to HTML color
codes. This way, when we mail the contents to team members, they will display
properly.
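The steps above can be wired together with a crontab entry such as the following (the schedule, script path, and mail group here are my own assumptions):

```text
# Run the sharing script at 09:00 on weekdays
0 9 * * 1-5 /home/user/code/scripts/crons.sh --mailgroup=myteam@example.com
```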
The crons.sh itself is really simple, which calls upon the invoke task:
# ----- crons.sh --------
# Activate the virtual env
source ~/code/venvs/ve_opt/bin/activate

# Go to the directory containing the invoke script
cd ~/code/scripts/

# Run the invoke, pass the cmd line params, as is (which means mailgroup)
inv share_todos $*
tasks.py
Now it is time for the contents of invoke’s tasks.py.
We want to ensure that the font is big enough.
import datetime
import subprocess

from invoke import task


@task
def share_todos(mailgroup):
    process_out = subprocess.check_output(['/FULL_PATH_TO/_htmltodos.sh']) \
        .replace('<body>', '<body style="font-weight:900; font-size:1.3em;">')
    # mail() is a helper defined elsewhere that sends the body to the mail group
    mail(process_out,
         "My todos @ {}".format(datetime.datetime.now().strftime('%c')),
         mailgroup)
_htmltodos.sh
We will use aha to convert the ANSI terminal colors to HTML color codes.
We will also replace some of the color codes that aha creates, because
they are not really nice looking.
#!/usr/bin/bash
source ~/.bashrc
todos | /usr/local/bin/aha | sed -e 's/color:olive/color:DeepSkyBlue; font-style:italic;/g' -e 's/color:green;/color:LimeGreen;/g' -e 's/<pre>/<pre style="color:gray;">/g'
exit 0
When you are working on Agile boards in JIRA, you may want to retrieve
all the issues related to a particular board or sprint.
Usually you’d find the issues in progress under the dashboard of the sprint itself.
As you will also notice from the jira docs,
the sprints function there only provides you the sprints themselves.
What it fails to provide is the issues under a sprint,
which are fetched through a different subquery under the hood.
The code here intends to provide a full list of all the issues,
complete or incomplete, that belong to a given sprint name.
You can modify the code easily to suit your needs.
Requirements
First things first, you need to install jira through pypi for the code.
pip install jira
The code
from jira.resources import Issue
from jira.client import JIRA


def sprints(username, ldp_password, sprint_name,
            type_of_issues_to_pull=['completedIssues', 'incompletedIssues',
                                    'issuesNotCompletedInCurrentSprint',
                                    'issuesCompletedInAnotherSprint']):

    def sprint_issues(cls, board_id, sprint_id):
        r_json = cls._get_json(
            'rapid/charts/sprintreport?rapidViewId=%s&sprintId=%s'
            % (board_id, sprint_id),
            base=cls.AGILE_BASE_URL)
        issues = []
        for t in type_of_issues_to_pull:
            if t in r_json['contents']:
                issues += [Issue(cls._options, cls._session, raw_issues_json)
                           for raw_issues_json in r_json['contents'][t]]
        return {x.key: x for x in issues}.values()

    fmt_full = 'Sprint: {} \n\nIssues:{}'
    fmt_issues = '\n- {}: {}'
    issues_str = ''
    milestone_str = ''
    options = {
        'server': 'http://jira/',
        'verify': True,
        'basic_auth': (username, ldp_password),
    }
    gh = JIRA(options=options, basic_auth=(username, ldp_password))
    # Get all boards viewable by anonymous users.
    boards = gh.boards()
    board = [b for b in boards if b.name == sprint_name][0]
    sprints = gh.sprints(board.id)
    for sprint in sorted([s for s in sprints if s.raw[u'state'] == u'ACTIVE'],
                         key=lambda x: x.raw[u'sequence']):
        milestone_str = str(sprint)
        issues = sprint_issues(gh, board.id, sprint.id)
        for issue in issues:
            issues_str += fmt_issues.format(issue.key, issue.summary)
    result = fmt_full.format(milestone_str, issues_str)
    print(result)
    return result
You can call the function with the following command:
sprints(<username>, <password>, <sprint_name>)
You will get results that are similar to the following:
Sprint: SPRINT_NAME
Issues:
- PROJECT-437: Description of the issue
- PROJECT-447: Description of the issue
Should you use getopt or getopts in your bash scripts?
The answer can be a bit tricky, but it is mostly straightforward.
getopt
Generally, try to stay away from getopt for the following reasons:
It is an external utility.
Traditional versions can’t handle empty argument strings, or arguments with embedded whitespace.
For the loops to work perfectly, you must provide the values in the same sequence as the for loop itself, which is
very hard to control.
It is mostly a way to standardize the parameters.
The only time I could think of using getopt is when I really want to use a long option name and there’s just a single one.
Here’s a sample for getopt:
#!/bin/bash
# Check the number of arguments. If none are passed, print help and exit.
NUMARGS=$#
# echo -e \\n"Number of arguments: $NUMARGS"
if [ $NUMARGS -eq 0 ]; then
    HELP
fi

OPTS=`getopt -o vhns: --long verbose,dry-run,help,stack-size: -n 'parse-options' -- "$@"`
eval set -- "$OPTS"

# getopt has normalized the arguments; walk through them ourselves
while true; do
    case "$1" in
        -v | --verbose )    VERBOSE=true; shift ;;
        -n | --dry-run )    DRY_RUN=true; shift ;;
        -s | --stack-size ) STACK_SIZE="$2"; shift 2 ;;
        -h | --help )       HELP; shift ;;
        -- )                shift; break ;;
        * )                 break ;;
    esac
done
getopts
Whereas getopts is:
Portable and works in any POSIX shell
Lenient with usage of “-a -b” as well as “-ab”
Understands “--” as the option terminator
Here’s a sample for getopts
SCRIPT=`basename ${BASH_SOURCE[0]}`

## Let's do some admin work to find out the variables to be used here
BOLD='\e[1;31m'   # Bold Red
REV='\e[1;32m'    # Bold Green
OFF='\e[0m'       # Reset

# Help function
function HELP {
    echo -e "${REV}Basic usage:${OFF}${BOLD}$SCRIPT -d helloworld ${OFF}"\\n
    echo -e "${REV}The following switches are recognized. $OFF "
    echo -e "${REV}-p ${OFF} --Sets the environment to use for installing python ${OFF}. Default is ${BOLD} /usr/bin ${OFF}"
    echo -e "${REV}-d ${OFF} --Sets the directory whose virtualenv is to be setup. Default is ${BOLD} local folder (.) ${OFF}"
    echo -e "${REV}-v ${OFF} --Sets the python version that you want to install. Default is ${BOLD} 2.7 ${OFF}"
    echo -e "${REV}-h${OFF} --Displays this help message. No further functions are performed."\\n
    echo -e "Example: ${BOLD}$SCRIPT -d helloworld -p /opt/py27env/bin -v 2.7 ${OFF}"\\n
    exit 1
}

PYENV='/usr/bin'
DIR='.'
VERSION='2.7'

# In case you wanted to check what variables were passed
# echo "flags = $*"
while getopts p:d:v:h FLAG; do
    case $FLAG in
        d)  DIR=$OPTARG ;;
        p)  PYENV=$OPTARG ;;
        v)  VERSION=$OPTARG ;;
        h)  HELP ;;
        \?) # unrecognized option - show help
            echo -e \\n"Option -${BOLD}$OPTARG${OFF} not allowed."
            HELP
            ;;
    esac
done
What if I really wanted long options with getopts?
The getopts function can be used to parse long options by putting a dash character followed by a colon into the OPTSPEC.
I’m sharing the solution from this link.
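A minimal sketch of that trick (the option names here are my own examples): with `-:` in the OPTSPEC, getopts reports the option character `-` and puts the rest of `--name=value` into OPTARG, which you can split yourself.

```shell
#!/bin/bash
# Parse --dir=VALUE and --help long options via the "-:" OPTSPEC trick.
parse_args() {
    DIR=""
    local OPTIND=1 FLAG
    while getopts "h-:" FLAG "$@"; do
        case $FLAG in
            -)  # long option: everything after the second dash is in OPTARG
                case "$OPTARG" in
                    dir=*) DIR="${OPTARG#*=}" ;;   # strip the "dir=" prefix
                    help)  echo "usage: --dir=PATH" ;;
                    *)     echo "Unknown option --$OPTARG" >&2 ;;
                esac ;;
            h)  echo "usage: --dir=PATH" ;;
        esac
    done
}

parse_args --dir=/data
echo "DIR=$DIR"    # prints DIR=/data
```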
Often I am grepping for a process running on a prod server with lots of different configuration
parameters. However, since there are so many of them, it is very difficult to pick out a particular parameter
and find out what value was assigned to it. I wanted to make it easier on the eyes, and decided to color code the
parameters and separate them out from the values.
Here’s the bash function I pulled out.
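The function itself is not reproduced in this extract; a minimal sketch of the idea (the function name and color choice are my own, using GNU grep's GREP_COLOR as in the todos function above) could be:

```shell
#!/bin/bash
# Highlight --param names in yellow while passing every line through.
# The trailing "|$" alternative matches the empty end-of-line, so lines
# without any parameters are still printed, just uncolored.
colorps() {
    ps -ef | grep -v grep | grep -- "$1" \
        | GREP_COLOR='1;33' grep --color=always -E -- '--[A-Za-z0-9._-]+=?|$'
}
```

Calling `colorps java` then shows the java process with each `--param=` highlighted ahead of its value.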
Some details can be found here.
You can also use devpi if you prefer, but it
seems overly complicated for a job that is easily achieved by pip.
Let’s look at how to use pip for local installation.
Firstly, it is possible to install all requirements beforehand in a separate directory.
We can use the following commands:
pip install --download DIR -r requirements.txt
If you prefer wheel, then use the following:
pip wheel --wheel-dir DIR -r requirements.txt
Now, when we want to install from this given directory DIR, then
the following command can help:
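The command itself is missing from this extract; based on pip’s --no-index and --find-links options, it would be along these lines:

```shell
pip install --no-index --find-links=DIR -r requirements.txt
```

--no-index keeps pip away from PyPI entirely, and --find-links points it at the local directory populated above.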
If you are using these commands in a current setup and you feel it still slows you down, the reason would be
one of the first few commands, where the request is still going to the internet.
If you want to speed up the whole process, then perhaps you should send a request to the internet
only if a new package appears in the requirements.txt file; otherwise you can skip that part altogether
and go straight to pip install --no-index.
This will make your installation a flash.
One quick and dirty way is to maintain a local copy of the requirements.txt file and figure out, on every commit
of a code change in the project, whether a new requirement has been added to that list. In that case, install it and update your local copy.
Here’s a sample that puts all the changes on a single line that you can feed into pip install:
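The sample is not included in this extract; a sketch of the idea (the cached-copy path is an assumption) could diff the committed requirements.txt against a local copy and print only the new lines, space-separated:

```shell
#!/bin/bash
# Emit requirements present in $2 (the new requirements.txt) but not in $1
# (the cached local copy), joined on a single line for "pip install".
new_requirements() {
    comm -13 <(sort "$1") <(sort "$2") | tr '\n' ' '
}

# Typical use (paths are assumptions):
#   NEW=$(new_requirements ~/.cache/requirements.local.txt requirements.txt)
#   [ -n "$NEW" ] && pip install $NEW && cp requirements.txt ~/.cache/requirements.local.txt
```

comm -13 suppresses lines unique to the cached copy and lines common to both, leaving only the newly added requirements.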