CVS Cheat Sheet

I’ve worked with CVS for a long time now and realized I’d save a lot of time by writing aliases and functions for the numerous tasks I perform on it. I’ll put some of them down here so you can benefit from them too.

Silently update and inform about the status of the files (recursive)

cvs -q -n update

Same as above, but only for the current folder (and completely silent, thanks to -Q)

cvs -Q -n update -l

Finds the cv[s] [m]odified list of files, nicely indented with only the important data pulled out

cvs -Q status | egrep "File: " -A 4 | egrep -v "Up-to-date" | egrep "File: " -A 3 | sed -e "s/ Status:/\t\tStatus:/" -e "s/,v$/\n-------------------------------------------------------------------------\n/" -e "s/.*${PWD##*/}\//Location:\t\t /" -e "s/Attic\///" | egrep "Location:|Repository|Status:|File:|--------"

Recursively add all directories (skipping the CVS metadata folders) under the current directory so their contents can be committed

find . -type d -print | grep -v CVS | xargs cvs add

Doing a side by side diff (change the value of -W according to the width of the screen)

cvs -Q diff -t -y --suppress-common-lines -W 190 $*

Merging code from one branch to another

# Builds the command you can use to merge code from the dev head into the current branch.
# Ideally you want to run this from the folder you want the code merged into.
BRANCH=$(cat CVS/Tag | cut -c2-)
CMD=$(echo cvs update -j $BRANCH -j Version_2_17_dev .)
echo $CMD

Cursor Control in VIM Search

Found a great way to search for keywords and control the location of the cursor in Vim. It is excellent if you want to do a particular task multiple times. Usually a search takes the cursor straight to the start of the match. What if you want to go to the end of the word you are searching for instead?


/keyword/e

This takes you to the END of the keyword you are looking for.

That’s not all. What if you want to go to the second letter of that keyword? Change the pattern as follows:

/keyword/s+1

That’s great. You can likewise offset from the end of the keyword:

/keyword/e+2

Awesome. Let’s review it through examples. Let’s say our phrase is “the brown fox jumped over the lazy dog” and we originally want to search for “brown”.

/brown brown fox…. search and start at “brown”
/brown/s+2 own fox… start at “brown” but move 2 letters from ‘start’
/brown/s-4 the brown fox… start at “brown” but move 4 letters to the left from ‘start’
/brown/e n fox…. search for “brown” but move to the end
/brown/e+2 fox… search for “brown” but move 2 letters from the ‘end’
/brown/e-1 wn fox… search for “brown” but move 1 letter to the left from the ‘end’

Change Putty Window Title

How to modify the Putty window title to a specific string?

By default, you might want the window title to show the full path of the folder you are working from (the working directory). PROMPT_COMMAND will help you with that.

However, if you have too many windows open on the same folder, this may become confusing. To set your own title, use the title() function provided here. It sets the title to whatever you say.

Please note that a manually set title only lasts until PROMPT_COMMAND runs again. That is why the title() function also has to reset PROMPT_COMMAND.

Let’s have a look at the script needed. You may want to put it in your .bashrc file.
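
Here is a minimal sketch of what that .bashrc snippet could look like. The escape sequence `\033]0;…\007` is the standard xterm/PuTTY title sequence; the exact body of the original title() function may have differed:

```shell
# Default: show the working directory in the PuTTY window title.
# PROMPT_COMMAND runs before every prompt, so the title tracks `cd`.
PROMPT_COMMAND='echo -ne "\033]0;${PWD}\007"'

# Set a fixed title of your choosing, e.g. `title build-box`.
title() {
    # Stop PROMPT_COMMAND from overwriting our custom title at the next prompt.
    unset PROMPT_COMMAND
    echo -ne "\033]0;$*\007"
}
```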

Multiple Inheritance

What is the output to the following code?


Foo FooToo Bar FooTooBar


  1. An instance of FooTooBar needs to be created according to main()

  2. To create that instance, we first need the base classes too, hence FooToo and Bar classes

  3. Now notice the keyword virtual every time inheritance is defined

    • This does half the magic by ensuring that the member data instances are shared with any other inclusions of that same base in further derived classes

    • This is very handy for multiple inheritance

As an exercise you may try to remove virtual from some derived class and see what results you get.

Heap Sort

Heap Sort algo has the following properties:

  • The top element (root) is always the next in order

  • This allows you to remove one element at a time (the root) and ensure that you are pulling out items in a sorted order

  • Always takes O(n*log(n)) time, in both the worst and the best case

    • There are pros and cons to that predictability

  • Simple implementations require additional space to hold a heap of size n

    • Hence the space requirement is double the array size n

    • That extra space is not included in the big-O notation, so it is something to keep in mind

  • Not a stable sort

    • Original order of equal values may not be maintained

Let’s look at three versions of Heap Sort and see how they compare against each other. We will also find out what differentiates them from each other.

Testing Notes

  • I compared three versions of Heap Sort

    • The first version (HeapSort_1) sorts items while adding them one by one

    • The second (HeapSort_2) and third (HeapSort_3) versions are very similar, with only minor differences, yet their performance differs. For now HeapSort_3 seems to be better

      • There is still scope for both of these versions to improve

  • Post-increment can be replaced with pre-increment in most places, which should improve the numbers even more


As usual the code is available here:



### HeapSort_1

| | # of Items in Array | Time Taken | Average for 100 numbers |
| --- | --- | --- | --- |
| **Random** | 10 | 0.013543 | 0.13543 |
| | 1K | 3.22043 | 0.322043 |
| | 1M | 4770.14 | 0.477014 |
| **Sorted** | 10 | 0.007507 | 0.07507 |
| | 1K | 0.625425 | 0.0625425 |
| | 1M | 633.142 | 0.0633142 |

### HeapSort_2

| | # of Items in Array | Time Taken | Average for 100 numbers |
| --- | --- | --- | --- |
| **Random** | 10 | 0.019352 | 0.19352 |
| | 1K | 3.86284 | 0.386284 |
| | 1M | 8914.22 | 0.891422 |
| **Sorted** | 10 | 0.011289 | 0.11289 |
| | 1K | 3.49712 | 0.349712 |
| | 1M | 6661.45 | 0.666145 |

### HeapSort_3

| | # of Items in Array | Time Taken | Average for 100 numbers |
| --- | --- | --- | --- |
| **Random** | 10 | 0.016266 | 0.16266 |
| | 1K | 0.968032 | 0.0968032 |
| | 1M | 900.004 | 0.0900004 |
| **Sorted** | 10 | 0.012845 | 0.12845 |
| | 1K | 3.08637 | 0.308637 |
| | 1M | 3839.23 | 0.383923 |

Merge Sort

Merge Sort has the following properties:

  • Complexity is O(n log n)
  • Needs extra space to merge, proportional to the size of the array
  • Stable sort
    • Preserves the order of equal elements
  • Merge Sort does about 39% fewer comparisons, in its worst case, than Quicksort does in its average case
  • The algo almost always behaves the same way, taking roughly the same amount of time whether the array is sorted or unsorted

Testing Notes

  • Started testing the algo with two versions
    • The first version creates two temporary arrays
    • The second version creates only one temporary array
  • That sole difference is what makes the second implementation better


As usual the code is available here:


### Merge Sort

| | # of Items in Array | Time Taken | Average for 100 numbers |
| --- | --- | --- | --- |
| **Random** | 10 | 0.047464 | 0.47464 |
| | 1K | 5.41906 | 0.541906 |
| | 1M | 8444.11 | 0.844411 |
| **Sorted** | 10 | 0.027155 | 0.27155 |
| | 1K | 4.47016 | 0.447016 |
| | 1M | 6323.05 | 0.632305 |
### Merge Sort 2

| | # of Items in Array | Time Taken | Average for 100 numbers |
| --- | --- | --- | --- |
| **Random** | 10 | 0.033668 | 0.33668 |
| | 1K | 3.89374 | 0.389374 |
| | 1M | 7076.04 | 0.707604 |
| **Sorted** | 10 | 0.019034 | 0.19034 |
| | 1K | 2.7833 | 0.27833 |
| | 1M | 4664.16 | 0.466416 |

Quick Sort

Quick Sort is an efficient divide-and-conquer algorithm performed in two phases: a partition phase and a sorting phase.

Here are few pointers to remember about Quick Sort:

  • Partitioning places all the elements less than the pivot in the left part of the array and the greater elements in the right part
  • The pivot element ends up in its final place
  • After partitioning, no element moves to the other side of the pivot
    • This allows you to sort the elements to the left or right of the pivot independently of the other side
  • Complexity is O(n log n)
  • Often fast for small arrays, and for arrays with a few distinct values repeated many times
  • It is a divide-and-conquer algo, with most of the work happening during the partitioning phase
  • If you had to choose the optimum pivot, it would be the median of the given array
  • Not a stable sort

Testing Notes

  • Currently we have only one version of the code. We may try a non-recursive version, because putting the function calls on the stack takes up some memory and time
  • Another version could take the pivot from the middle, and then we could compare how random input fares against sorted input


As usual the code is available here:


| | # of Items in Array | Time Taken | Average for 100 numbers |
| --- | --- | --- | --- |
| **Random** | 10 | 0.072972 | 0.72972 |
| | 1K | 2.74698 | 0.274698 |
| | 1M | 4640.77 | 0.464077 |
| **Sorted** | 10 | 0.042773 | 0.42773 |
| | 1K | 1.84335 | 0.184335 |
| | 1M | 2473.42 | 0.247342 |

Insertion Sort

Insertion Sort has the following properties:

  • It works by moving elements one at a time
  • Works really well for small data sets
  • Consider it when the input data may already be sorted or partially sorted
    • It may not have to move elements around at all, thereby saving precious cycles
  • Stable sort
    • Keeps the original order of elements with equal values

Testing Notes

  • Had a very interesting time testing my code. I knew that swapping takes time, and std::swap takes particularly long, so I disabled it from the beginning
  • Even more interesting was how I went about fixing my code when it ran slowly
  • Initially even an array of size 1K was taking about 4s, so I made a minor change: removing “--insertIndex;” altogether and doing the calculation in the previous line itself. That did the trick. Before that, the code was taking hours and hours for the 1M array size and I had to stop running it
  • I tried improving the code a bit further by not computing insertIndex-1 so many times, but that didn’t really help; in fact it made things worse again
  • Sorted runs are much faster because there’s almost no work to be done in those cases
  • It would be a nice algo to use if your data is mostly sorted


As usual the code is available here:


| | # of Items in Array | Time Taken | Average for 100 numbers |
| --- | --- | --- | --- |
| **Random** | 10 | 0.006095 | 0.06095 |
| | 1K | 0.369859 | 0.0369859 |
| | 1M | 323.878 | 0.0323878 |
| **Sorted** | 10 | 0.005022 | 0.05022 |
| | 1K | 0.11696 | 0.011696 |
| | 1M | 122.427 | 0.0122427 |

Radix Sort

It is a non-comparative integer sorting algorithm. It sorts data by grouping keys by the individual digits which share the same significant position and value. Think Tens, Hundreds, Thousands etc. Some pointers about Radix Sort:

  • Even though it is an integer sorting algorithm, it is not restricted just to integers. Integers can also represent strings of characters
  • Two types of radix sort are:
    • LSD (Least Significant Digit): Short keys come before long keys
    • MSD (Most Significant Digit) Sorting: Lexicographic Order. Better for strings.
  • Uses Counting Sort as a subroutine (which takes extra memory)
    • If memory is not really a concern, forget about this issue
  • Radix Sort, with at most d digits, takes O(d*(n+b)) time, where b is the base
  • Prefer Radix Sort over Counting Sort when the numbers range from, for example, 1 to n²


Let’s look at the stats from the algorithm:

| | # of Items in Array | Time Taken | Average for 100 numbers |
| --- | --- | --- | --- |
| **Random** | 10 | 0.033351 | 0.33351 |
| | 1K | 3.22004 | 0.322004 |
| | 1M | 5650.9 | 0.56509 |
| **Sorted** | 10 | 0.020659 | 0.20659 |
| | 1K | 3.26273 | 0.326273 |
| | 1M | 5683.91 | 0.568391 |


As always, the code is available here for easier access.

Counting Sort

Counting Sort is an integer sorting algorithm. It is not very famous as sorting algorithms go, but it is great for sorting integers. In fact, it may often beat other sorting algorithms. The highlight of Counting Sort is that it creates a bucket array (to keep track of the frequency of numbers) whose size is the maximum element of the provided array.

We are looking to compare most of the sorting algorithms to find out which one performs better under different circumstances. One of the ways is to compare the complexity for each algorithm. The other way is to compare how well they perform based on the input they are all provided. I will post my code on Github but will start with Counting Sort here.

Here are some pointers about Counting Sort:

  • Linear sorting algo with O(n+k) complexity when the elements range from 1 to k

    • Effectively O(n) when k is of the order of n
  • It is really good when the numbers range from 1 to n, which means the maximum among them is not going to be very high

How do we compare all Sorting Algorithms?

We are going to look at the following scenarios across all Sorting Algorithms:

  • 10 numbers - Random + Sorted

  • 1000 numbers (1K) - Random + Sorted

  • 1,000,000 numbers (1M) - Random + Sorted

  • 20000 iterations for each one of these scenarios

  • For good measure we will look at the average time required to sort 100 numbers in each one of these cases

    • This will allow us to compare all algorithms against each other by looking at the average times

    • It means we will be multiplying the results for 10 numbers by 10 (to get the “average” for 100 numbers), dividing the 1K results by 10, and dividing the 1M results by 10,000

Machine Setup

The setup of my machine is as follows:

  • Hardware: x86_64
  • Processor: 16 × Intel® Xeon® E5540 @ 2.53 GHz
  • Compiler: gcc version 4.8.2 (GCC)
  • Red Hat Version: Red Hat 5.3

The Source Code

I am trying to do this all together in a single place so that it is easy for anybody to pick up the code, either for their own testing or going through the Sorting Algos. All of the source code for this can be found here:


Without further ado, let’s delve into the numbers for Counting Sorts. Here’s the result from the code:

| | # of Items in Array | Time Taken | Average for 100 numbers |
| --- | --- | --- | --- |
| **Random** | 10 | 0.022982 | 0.22982 |
| | 1K | 1.21822 | 0.121822 |
| | 1M | 1823.85 | 0.182385 |
| **Sorted** | 10 | 0.026815 | 0.26815 |
| | 1K | 1.19146 | 0.119146 |
| | 1M | 1612.58 | 0.161258 |