Wednesday, December 16, 2009

Mobile Application idea

I am thinking of writing a mobile application for Android. To create any application I could go down the route of reading through the Android Dev Guide and end up building something simple like a notepad, or something more complex like a home webcam viewer. The lack of an application idea makes researching the Android platform less motivating, and as a result I give up reading through the Dev Guide halfway through.
What I really need is an idea for an application. It should be unique and have a wow factor; it should be something that no one has thought of, and it should be generic so that more people can use it, rather than being something very specific to what I want. Something along the lines of the mobile-based web cam viewer application that I thought up and developed in June 2004.

To develop this idea I wanted to write down all the features provided by my Google phone, but then I thought that rather than starting from the features provided by the platform, I should think out of the box about the problem I have to solve, without being constrained by the set of features the phone provides.

Let's see how things go; I have to rush to a meeting now.

Friday, December 11, 2009

Adding Syntax Highlighting to Blogger

A good description for adding syntax highlighting to a blog post.

Extracting and creating a book from an online site

I have a subscription to an online book store (similar to Safari). Unfortunately I cannot share the store name. I wanted certain books from the store on my local machine as a printable pdf file, but the bookstore allowed me to print only 2 pages.

Here is what I did to extract the data from the book store:
1. Figured out the data format for each page: each page is a jpg file.
2. Figured out the http request for each page: the request contains the ISBN number, the page number, and the resolution.
3. Used httpfox, a firefox plugin, to introspect the http data sent across.
4. Wrote the application below to fetch all the pages for a set of books and save the images to disk; it also creates a pdf file from all the images.

Rather than using a Java application I could have used some kind of scripting language. I may look into that later if I get the time; for now I can print the few pages that I want to read :-)
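As a rough illustration, a shell script built around curl could do the same job. This is only a sketch: the hostname, URL layout, cookie value, and page limit below mirror the placeholders used in the Java code, not the real site's API.

```shell
# Build the page URL the same way the Java code does (placeholder hostname).
construct_page_url() {
  local isbn="$1" page="$2" scale="$3"
  echo "http://booksitehostname/${isbn}/${scale}/${page}.jpg"
}

# Fetch every page of one book until the server returns an HTTP error.
fetch_book() {
  local isbn="$1" cookie="$2" page=0 url
  mkdir -p "RawImages/${isbn}"
  while [ "$page" -lt 1500 ]; do
    url="$(construct_page_url "$isbn" "$page" 1200)"
    # -f makes curl exit non-zero on an HTTP error, which ends the loop
    # at the last page, like the responseCode != 200 check in the Java code.
    curl -f -s -H "Cookie: ${cookie}" \
      -o "RawImages/${isbn}/Page_${page}.jpg" "$url" || break
    page=$((page + 1))
  done
}
```

Assembling the saved jpg files into a pdf would still need a separate tool, which is one reason the Java + iText route was convenient.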

Here is the Java source code for the application.

package bookextracter;

import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.net.MalformedURLException;
import java.util.ArrayList;
import java.util.List;

import javax.imageio.stream.FileImageOutputStream;

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpException;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import com.itextpdf.text.BadElementException;
import com.itextpdf.text.Document;
import com.itextpdf.text.DocumentException;
import com.itextpdf.text.Image;
import com.itextpdf.text.PageSize;
import com.itextpdf.text.pdf.PdfWriter;

/*
 * Library dependencies:
 *   iText - used for creating pdf documents.
 *   Apache Commons HttpClient - used for the HTTP GET requests.
 *   Apache Commons Logging - a generic interface for logging.
 *
 * The high level steps are as follows:
 * 1. Log into the online book site using your browser.
 * 2. Use an HTTP monitor (something like httpfox for firefox) to introspect
 *    the request/response messages.
 * 3. As you are browsing through the online book pages, note the cookie
 *    headers in the GET request for each page. Note down this cookie string
 *    since you will be using it in this program.
 */
public class Main {

	static Log _logger = LogFactory.getLog(Main.class);
	/** The cookie that is unique to my login. */
	public static final String cookie = "not real cookie";
	/** Each page is a jpg image; this determines the scaling factor used when we request it from the server. */
	public static final int MAX_SCALE = 1200;
	/**
	 * I loop through this many pages until I reach the end or the server responds with code 500.
	 * Here I assume that all the books contain a max of 1500 pages.
	 */
	public static final int MAX_ESTIMATE_BOOK_SIZE_IN_PGS = 1500;
	public static final String bookSaveLocation = "C:\\Extract\\Books\\";
	public static final String rawImageSaveLocation = "C:\\Extract\\RawImages\\";

	public static void main(String[] args) throws HttpException, IOException,
			DocumentException, InterruptedException {

		// These are the ISBN numbers of the books that I am interested in. (not real)
		String[] bookIsbns = { "234234234", "1234234234", "234234234" };
		// Loop through the books and fetch the pages of each book in a different thread.
		List<Thread> fetchers = new ArrayList<Thread>();
		for (String isbn : bookIsbns) {
			Thread fetcher = new Thread(new BookFetcher(isbn));
			fetcher.start();
			fetchers.add(fetcher);
		}
		// Wait for the fetchers to finish.
		for (Thread fetcher : fetchers) {
			fetcher.join();
		}
	}

	private static class BookFetcher implements Runnable {
		private String isbn;

		public BookFetcher(String isbn) {
			this.isbn = isbn;
		}

		public void run() {
			try {
				extractAndSaveBook(isbn);
			} catch (Exception e) {
				_logger.error("Failed to extract book " + isbn, e);
			}
		}
	}

	private static void extractAndSaveBook(String isbn) throws IOException,
			HttpException, FileNotFoundException, DocumentException {
		// The format of the pdf doc.
		Document pdfDoc = new Document(PageSize.A2, 0, 0, 0, 0);
//		Document pdfDoc = new Document(PageSize.LETTER, 0, 0, 0, 0);
//		Document pdfDoc = new Document();
		// The file name and location of the pdf file.
		final String pdfFileName = bookSaveLocation + isbn + ".pdf";
		PdfWriter.getInstance(pdfDoc, new FileOutputStream(pdfFileName));
		pdfDoc.open();

		// Set up the http connection.
		_logger.debug("Creating an Http Client");
		HttpClient httpClient = new HttpClient();
		httpClient.getHostConfiguration().setProxy("webproxy", 80); // I use a proxy.
		_logger.info("Retrieving Book " + isbn);
		// Loop through each page.
		for (int currentPageNumber = 0; currentPageNumber < MAX_ESTIMATE_BOOK_SIZE_IN_PGS; currentPageNumber++) {
			// Create the page url.
			String pageUrl = constructPageUrl(isbn, currentPageNumber, MAX_SCALE);
			_logger.debug("Retrieve Page: " + pageUrl);
			// Print the page number to indicate progress.
			System.out.print(currentPageNumber + "..");
			if ((currentPageNumber + 1) % 30 == 0)
				System.out.println();

			// The retrieved image for each page will be saved at the following location
			// (file format: <rawImageSaveLocation>/<isbn>/Page_<currentPageNo>.jpg)
			String pagePathName = rawImageSaveLocation + isbn + "\\Page_" + currentPageNumber
					+ ".jpg";
			File imageFile = new File(pagePathName);
			// If the file doesn't exist at the location then fetch it from the server.
			// This prevents us from hitting the server during multiple runs, since fetching
			// each page is the most time consuming operation.
			if (!(imageFile.exists() && imageFile.isFile())) {
				GetMethod getPage = new GetMethod(pageUrl);
				getPage.setRequestHeader("Cookie", cookie); // Set up the http GET request with my cookie.

				int responseCode = httpClient.executeMethod(getPage);
				if (responseCode != 200) {
					_logger.info("Page not found, I assume we are done with the book");
					break;
				}
				// Get the data.
				_logger.debug("Get the Page");
				byte[] pageRawByteArray = getPage.getResponseBody();
				// Save the raw image to a file.
				saveRawImageToFile(pagePathName, pageRawByteArray);
			}
			// Save the image page to the pdf.
			savePageImageToPdfDoc(pdfDoc, pdfFileName, pagePathName);
		}
		pdfDoc.close();
		System.out.println("");
		_logger.info("Book " + isbn + " ....Done");
	}

	private static void savePageImageToPdfDoc(Document pdfDoc,
			final String pdfFileName, String pagePathName)
			throws BadElementException, MalformedURLException, IOException,
			DocumentException {
		_logger.debug("Saving to pdf Doc " + pdfFileName);
		// Image pagePdfImage = Image.getInstance(pageRawByteArray);
		Image pagePdfImage = Image.getInstance(pagePathName);
		// pagePdfImage.scalePercent(50);
		pdfDoc.add(pagePdfImage);
	}

	private static void saveRawImageToFile(String pagePathName,
			byte[] pageRawByteArray) throws FileNotFoundException, IOException {
		_logger.debug("Saving Image to a file : " + pagePathName);
		// Make sure the per-book directory exists before writing.
		new File(pagePathName).getParentFile().mkdirs();
		FileImageOutputStream imageWriter = new FileImageOutputStream(
				new File(pagePathName));
		imageWriter.write(pageRawByteArray);
		imageWriter.close();
	}

	private static String constructPageUrl(String bookIsbn, int pageNo, int scale) {
		return "http://booksitehostname/" + bookIsbn + "/"
				+ scale + "/" + pageNo + ".jpg";
	}
}


Friday, July 17, 2009


Some web applications that I would like to see.

Adding notes to sections of a webpage.
I need a web application (with a firefox plugin) that allows me to add notes to sections of a web page: something like selecting sections of a web page and highlighting them, similar to using a highlighter on a book and then writing some notes in the margin. I tried searching for something similar on the net, but all I could find was MyStickies, which lets me add post-it notes to an entire webpage, not to sections of a web page.

I would like to catalog all my SMS messages on a web site and view them along a timeline: something like a central portal for my SMS messages, call history, social networking updates, etc. I should be able to view them along a timeline or per contact. I still haven't started searching for a site that offers such a feature.

Finally found the site that I was looking for

Thursday, May 28, 2009

Must have songs...

  1. Velvet Revolver - Get Out The Door
  2. Bush - Head Full Of Ghosts
  3. Angels & Airwaves - Secret Crowds
  4. Moby - When I Reach for My Revolver
  5. Apocalyptica - I'm Not Jesus
  6. Louis XIV - Finding Out True Love is Blind
  7. Tool - Jambi (album ver)
  8. Saliva - Doperide
  9. Marcy Playground - Saint Joe On A School Bus
  10. Disturbed - Inside the Fire ****
  11. Disturbed - The Game
  12. Morningwood - Nth Degree
  13. Cake - Love you Madly
  14. Oleander - Why I'm Here
  15. The Refreshments - Banditos
  16. Big Wreck - The Oaf
  17. PearlJam - Go
  18. Stroke9 - 100 Girls
  19. Guns N' Roses - Chinese Democracy
  20. Powerman 5000 - When Worlds Collide
  21. Eve 6 - Inside Out
  22. Monster Magnet - Silver Future
  23. Buckcherry - Too Drunk
  24. Metallica - The Day that never comes
  25. Staind - All I want
  26. Shiny Toy Guns - Ghost Town
  27. All-American Rejects - Gives You Hell
  28. U2 - Get On Your Boots
  29. Papa Roach - Lifeline
  30. Shiny Toy Guns - Ricochet
  31. Thornley - Easy Comes
  32. 3 Doors Down - Its not my time
  33. The Prom Kings - Alone
  34. Beastie Boys - So What'cha Want
  35. Carolina Liar - Show Me What I'm Looking for
  36. Collective Soul - Vent
  37. Cage The Elephant - Ain't No Rest For The Wicked
  38. Loudermilk - Rock N Roll and the Teenage Desperation
  39. David Bowie & Trent Reznor - I'm Afraid of Americans
  40. Seether - Gasoline
  41. Big B - Sinner
  42. Lo-Fidelity All-Stars - Battleflag
  43. Shinedown - Heroes
  44. Limp Bizkit - Nookie
  45. Kings Of Leon - Notion
  46. Cake - Sheep Go To Heaven

Friday, April 03, 2009

Extracting audio from all CFA video files in one command.

Really impressed with myself today.
I have the CFA Schweser CDs with me, and these CDs have videos on them, but I am not interested in watching a person talk; I am more interested in listening to the lecture. In short, I would like to extract the audio channel from all these videos.
1. Find a tool that extracts audio from a video file: ffmpeg was able to extract the audio as an mp3 file.
2. Automate the process to go over all the video files and convert them to mp3.

Since all the video files have the same name, and I want the audio files to have different names based on the Study Session and Reading, here is what I did:
- Find all wmv files.
- For each file, extract the folder name (the folder names follow the format L2_SS18_P7).
- Extract the video into an audio file named after that folder.

find . -name '*.wmv' -exec sh -c 'ffmpeg -i {} `ls {} | cut -d / -f 3`.mp3' \;

Whoo hooo

I found that CD10 and CD11 are the same, so I have to mount and extract them again.

Intermediate commands to test
find . -iname '*.flac' -exec flac2vorbis {} \; -exec rm -v {} \;

find . -iname '*.flac' -exec sh -c "flac2vorbis {} && rm {}" \;

find . -name '*.wmv' -exec sh -c "ls {} | cut -d '/' -f 3" \;

ffmpeg -i ./CD16/L2_SS18_P7/L2_SS18_P7_files/0MM0.wmv `ls ./CD16/L2_SS18_P7/L2_SS18_P7_files/0MM0.wmv | cut -d '/' -f 3`.mp3
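The path parsing in the commands above relies on the output of `ls` and breaks if a path contains spaces. As a sketch (assuming the same ./CDxx/<session>/<session>_files/*.wmv layout), the folder name can instead be derived with shell parameter expansion:

```shell
# Derive the study-session folder name (e.g. L2_SS18_P7) from a wmv path.
session_name() {
  local p="${1#./}"   # strip the leading ./
  p="${p#*/}"         # strip the CDxx/ component
  printf '%s\n' "${p%%/*}"
}

# Quote the glob so the shell does not expand it, and batch files with '+'.
find . -name '*.wmv' -exec sh -c '
  for f do
    p="${f#./}"; p="${p#*/}"
    ffmpeg -i "$f" "${p%%/*}.mp3"
  done
' sh {} +
```

Passing the file names as positional arguments to sh also avoids relying on find substituting {} inside a quoted string, which is a GNU find behaviour.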

Tuesday, February 03, 2009

LPIC Linux Certification

You have to pass 2 exams to be certified; the first exam is more to do with commands, while the 2nd exam is more oriented towards system administration.
Try to get Ubuntu installed on your home PC to try out Linux. For Ubuntu you don't have to partition your hard drive: you can install Ubuntu directly into the Windows partition, and it will get installed as files under c:\ubuntu. The next time you boot, it will prompt you whether you would like to boot into Windows or Ubuntu.

Procedure for taking the exams

Couple of books

Friday, January 02, 2009

Connecting from Mac (mini) and Linux (Ubuntu) through Citrix to my work desktop

I have been trying to connect (through Citrix) to my desktop at work from my home machines: Ubuntu 8.04 (Linux) and a Mac Mini.

On the Mac I installed the Citrix client, but on connecting to the server I got the following error:

SSL Error 0: You have not chosen to trust "/C=US/ST=
/L=/O=Verisign, Inc./OU=Class 3 Public Primary Certification Authority/CN=", the issuer of the server's security certificate.

Error number 183

I remembered getting a similar error while connecting from Linux, but all I could recollect was that it was related to some missing certificates.

I logged into my Linux box and copied over the following certificates


from the following Linux folder
to the following mac folder
/Applications/Citrix ICA Client/keystore/cacerts

Actually I just need one certificate, and I can get it from here
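The copy step boils down to dropping the certificate file into the client's keystore directory. A small sketch (the certificate file and source folder are placeholders, since the original paths were lost from this post):

```shell
# Hypothetical helper: copy a CA certificate into the Citrix keystore.
# $1: the certificate file (e.g. the Class 3 Public Primary CA cert)
# $2: the keystore directory, e.g.
#     "/Applications/Citrix ICA Client/keystore/cacerts"
install_cert() {
  cert_file="$1"
  keystore_dir="$2"
  mkdir -p "$keystore_dir"  # create the keystore dir if it is missing
  cp "$cert_file" "$keystore_dir/"
}
```

After copying, restart the Citrix client so it re-reads the keystore.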

I got this information from here :