Tuesday, December 20, 2011

Android development on Ubuntu 11.10 Oneiric

I decided to play around with Android development but I had a hard time finding good instructions on how to set up the Android development environment on Ubuntu 11.10. With a little trial and error I've figured it out. I'm posting my notes in hopes that they will be useful to me in the future and possibly to others also. They aren't a complete step-by-step guide, and they assume you're comfortable with the Linux command line, but hopefully they'll be useful.

  1. Download and untar/gzip the Android SDK
  2. Install the JRE
    1. sudo apt-get install icedtea6-plugin openjdk-6-jre openjdk-6-jdk ia32-libs
  3. Install ant
    1. sudo apt-get install ant
  4. Set your JAVA_HOME and CLASSPATH
    1. export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
    2. export CLASSPATH=/usr/lib/jvm/java-6-openjdk/lib
  5. Update the build tools
    1. cd android*; ./tools/android update sdk --no-ui
  6. Download eclipse (Helios release) (Note the one in the Ubuntu repositories doesn't seem to work)
    1. google-chrome www.eclipse.org/downloads/
  7. Untar eclipse
    1. tar -xvf ./eclipse*.tar.gz
  8. Install the eclipse Android plugin
    1. eclipse : Help -> Install New Software ...
    2. enter: "Android Plugin"
    3. enter: "https://dl-ssl.google.com/android/eclipse/"
    4. Select 'Developer Tools'
    5. Click "Next"
  9. Restart eclipse
  10. Select the Android SDK
At this point you should have the Android SDK and eclipse installed and configured for Android development.

Here are some notes on doing Android development if you prefer to live on the command line instead of living in eclipse.

  1. Create an Android Emulator
    1. cd android*; ./tools/android -> Tools -> Manage AVDs
  2. Start the Android Emulator
    1. cd android*; ./tools/emulator -avd <avd_name>
  3. Create a new Android project
    1. cd android*; ./tools/android create project --target android-14 --name HelloAndroid --path ../HelloAndroid.git --activity HelloAndroidActivity --package com.mydomain.helloandroid
  4. Compile your project
    1. cd ../HelloAndroid.git; ant debug
  5. Push your project onto your running Android emulator
    1. ant debug install

Tuesday, August 23, 2011

Why you should get a college degree

I've been thinking about why it's important to get a college degree for many years and have finally decided to write down why I think it's important. I've heard many arguments over the years both for and against college. I'll try and address those arguments here. I should point out that my arguments may only apply to those going into a technical field (Computer Science, Electrical Engineering, Mechanical Engineering), but many of the arguments may apply to non-technical fields.  Since I don't fully understand these fields, nor do I fully understand the college experience for non-technical degrees, I am likely not qualified to discuss the related benefits or costs.

Since many of my arguments will be based on empirical evidence, I believe my personal background is relevant. My career goal has always been to be a Software Engineer. Computer Programming has long been a hobby and passion of mine. I started working as a Computer Programmer before I started college. Since I was working in my desired field, I didn't think that college would be that important. I did, however, decide to get my bachelor's degree as career insurance in case I later needed or wanted to change careers. At the time, I didn't think there would be much educational benefit (I felt I already had many of the skills and knowledge that I would need for my career), but it seemed like a good idea. Many of the advanced CS courses I took showed me that I did have a lot to learn from college, and I continue to use the knowledge I gained from my undergraduate studies at work.

A few years after getting my bachelor's degree, I decided to go back to get my Master's degree. Graduate school was more of a hobby than a career advancement strategy. I enjoyed reading papers and learning new things, and I wanted to get experience doing academic research and writing academic papers on my research. Just like with my bachelor's degree, I've been pleasantly surprised at how much I've learned while getting my Master's degree. I've now graduated, and since my school work is done, I have had a lot of time to review my schooling and to think about the costs and benefits of my undergraduate and graduate work.

Addressing the Myths
Over the years I have heard many arguments repeated against college which I simply don't believe are valid. Here are the common myths that I've heard, along with my counterarguments.

    1. A degree is just a piece of paper
    This is simply not true. A degree is a certification from an educational institution that you have successfully completed an academic program. A degree is intangible; a diploma is tangible and is typically a piece of high-quality paper.

    2. A diploma is just a piece of paper
    This is true, but the diploma is not really the goal of going to college. The diploma is a token that represents something of greater value: it is nothing more than a certificate to show you have a degree. The concept of a token that represents something of value is not limited to college diplomas. Cash, paychecks, and car and house titles are all just pieces of paper, but their value is much greater than the cost of the paper they're printed on. There is also a greater probability that you will be able to obtain the latter pieces of paper (cash, titles, paychecks) if you have first obtained the first piece of paper: a diploma.

    3. Just because you have a degree doesn't mean you're smart.
    This is certainly true. I have met and interviewed many degreed Software Engineers who managed to make it through college without retaining much of what they studied. Many of these unqualified Software Engineers were even able to pass their classes with good grades. So, I completely agree that the process of getting a degree will not guarantee competence. I have also met and interviewed non-degreed Software Engineers who were incompetent and unqualified. So if a degree does not guarantee competence, then why bother getting one? The answer is that a degree can help you become more competent, and therefore a degreed individual is much more likely to be competent. My experience talking with and interviewing Software Engineers leads me to believe that a higher percentage of degreed Software Engineers are competent than non-degreed ones. Just as safe driving habits won't guarantee you won't die in an auto accident, a degree won't guarantee you'll be competent, but both increase the likelihood of safety and competence respectively.

    4. I don't have a degree, and I'm smarter than those that do.
    This is entirely possible. As I mentioned before, I've met non-degreed Software Engineers who surpassed their average degreed counterparts in ability and competency. I do, however, think that this scenario is rare. Of the non-degreed Software Engineers that I've known and interviewed, many more had an exaggerated perception of their abilities than had abilities above those of the average degreed Software Engineer. The idea that there are a lot of non-degreed Software Engineers who erroneously think they are extra competent should not come as a surprise to those familiar with illusory superiority, where people tend to overestimate their abilities, or the Dunning-Kruger effect, where the least competent tend to overestimate their abilities the most. If college degrees do in fact increase competency, then we would expect non-degreed individuals not only to be less competent, but also to lack the knowledge required to grasp the depth of their incompetence.

    5. College is a waste of money.  They don't teach anything that you can't learn on your own.
    While I agree that college doesn't teach you anything that you can't learn on your own, I do not think that it is a waste of money. I believe that college provides several benefits over self-learning. Namely:
        A. Access to experts. Many of my classes were taught by professors who were considered experts in their fields. They kept up to date on the latest research and technology and shared that with their students. I remember discussing with a professor at the University of Utah an idea that I had for a research project for my Master's Thesis. Not only was he able to quickly point out that my research topic had already been fully explored about 10 years earlier (a fact I had not been able to discover on my own despite several internet searches), he was also able to point me to that research. If I hadn't had access to this professor, I would have spent a lot of time gaining knowledge that could have been gained in much less time by reading a handful of academic papers. This professor helped me to focus my research on areas that had not already been extensively explored.
        B. Immediate feedback. More than once I've done an assignment, taken a test, or participated in a class lecture thinking that I fully understood a subject, only to find out the next week, when the graded work was returned, that I didn't. If you study on your own you may likewise misunderstand a topic, but you may not get the feedback that you need.
        C. A well-rounded curriculum. Both my Master's and Bachelor's degrees surprised me with not only the depth of learning that I received but also the breadth. I think it would be possible (although much more difficult) for me to learn those topics on my own, but I'm less confident I would have known that some of those topics even existed or that they were important. I didn't know what I didn't know, and it is impossible to study a topic that you don't know exists. A good example of this is big-O notation. Many non-degreed Computer Scientists have never heard of big-O notation or have a shallow or incorrect understanding of what it is. In my job we use it all the time; if you don't know what it is, you'll be left behind in the conversation, or you'll require us to stop work and teach you about the topic (at work we pay you to be effective, not to get an education). It's not a terribly difficult topic, but you won't learn it if you don't know it exists, and it's easy to get confused about what it really means. A good degree in Computer Science will guarantee exposure to the topic, and a good professor will give ample feedback to ensure a proper understanding.
        D. Access to equipment and technologies. Some things you just can't learn (adequately) from books. You need to get your hands dirty and work with them. Much of this equipment and technology is out of the price range of many people. Colleges can provide access to these technologies at a far lower cost than if you purchased them all on your own.

    6. But if you did learn it all on your own and you are competent, why don't employers just interview you so you can prove that you're qualified?
    They could, but interviewing is expensive. I believe the right way to interview for a technical position is to have your most qualified employees ask candidates technical questions; this process is likely to result in hiring a qualified candidate. But your most qualified employees are also likely your highest paid employees, so you're paying them a lot of money to interview. They are also likely to be overworked, as they can perform the most tasks (hence the most qualified), and if they weren't overworked you wouldn't be hiring. If you interview every candidate, you will spend a lot of money paying your existing employees to interview, and it is likely that your employees will get tired of interviewing and may leave for another job where they can do real work instead of performing job interviews. Because interviewing is expensive, you must be selective about whom you interview, so it makes sense to only interview those candidates that are most likely to be qualified. In my experience, degreed individuals are the most likely to be qualified, so it may make good business sense to toss the resumes of candidates that don't have degrees. An obvious exception to this rule is if you simply do not have enough candidates, in which case you may end up interviewing everyone, but you should start with those who are most likely to be qualified.
    7. What about professors who live in their ivory towers and know nothing about the real world?
    This is a common criticism of professors, and one that I've probably uttered a time or two. Such professors do exist, but I've found that what they teach, while often not applicable to today's real-world problems, often becomes applicable in the future. I remember learning about closures while working on my Bachelor's degree. At the time I didn't think I'd ever use them, as C/C++ and Java ruled the industry. Since then Perl, Python, Ruby, JavaScript, and Google Go have all become more predominant, and all support closures. Even C++ recently added support for lambdas, which in C++ are close to full-grown closures. Had I ignored this topic, I would have had a more difficult time learning these languages and taking advantage of the power of closures. Additionally, many of the students complaining about ivory-tower professors do not have any real experience in the "real world" either. They are college students who haven't spent a lot of time working in the "real world," and are, therefore, likely unqualified to determine what knowledge will have real-world significance.


While I don't think college is perfect, I do believe the vast majority of people entering technical fields would greatly benefit from obtaining a college degree. I've talked to many who don't have degrees who feel that degrees aren't useful, but very few who have earned a technical degree question the benefit. If there are additional arguments against earning a college degree in a technical field that I haven't addressed, I would love to hear them.

Thursday, July 14, 2011

Using gnuplot to make simple line graphs.

Gnuplot is an amazing tool. It's great for generating quick plots from bash. I've had my programs generate statistical data and then thrown the data into gnuplot so I can better understand how my program is behaving. Unfortunately, every time I use it I end up having to go back to the web to relearn the basic syntax of creating a line graph. I'm going to try and save myself some time by documenting the basic usage here.

To draw a simple line graph, first place the points to draw in a separate file, in this case named "gnuplot.data".
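For illustration, here's one way to create such a data file from the shell (the points below are made up; each line is an "x y" pair):

```shell
# Create a sample data file; each line is one "x y" point.
# These values are illustrative -- use your own program's output.
cat > gnuplot.data <<'EOF'
0 0
1 1
2 4
3 9
4 16
EOF
```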

Now create the gnuplot script:
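A minimal script might look like the following sketch (written from the shell here; the title, labels, and file name are just placeholders):

```shell
# Write a minimal gnuplot script that plots gnuplot.data as a line graph.
# "pause -1" waits for a key press so the window doesn't close immediately.
cat > lineplot.gp <<'EOF'
set title "My line graph"
set xlabel "x"
set ylabel "y"
plot "gnuplot.data" with linespoints title "data"
pause -1 "Press any key to exit"
EOF

# Then run it with:
#   gnuplot lineplot.gp
```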

The pause line at the end keeps gnuplot from flashing the graph and then immediately exiting. When you execute gnuplot, the graph window will appear and stay open until you press a key.

There is a ton more you can do with gnuplot, like generating PNG files and creating different types of graphs, but I've found that as a simple programming tool this gets me 90% of the way there. For more information check out the gnuplot homepage. Happy plotting!

Wednesday, June 8, 2011

Test Second Development

Let me start off by stating that I'm a big fan of test-first development. I think that a (the?) major benefit of writing unit-tests is the fact that the first person to use a new API is the developer herself/himself. By writing tests against the API before it has been written, you increase the likelihood of creating usable, testable, and bug-free code. Test-first development is valuable even if you never run any of the unit-tests that you wrote. The purpose of this post is not to extol the virtues of test-first development but to share a scenario where I feel it may be better to not write the unit-tests right away.

Several weeks ago I started to work on a simple command line utility. Nothing fancy, just something to scratch an itch that I've had for a while. I had a vague idea of what I wanted it to do but hadn't quite worked out all of the details. Additionally, I wasn't sure how the utility would evolve. Finally, I was writing the utility in Google Go, in which I have done a bit of development, but I certainly haven't played with it long enough to fully think in the language. In short, I simply didn't know what I was doing, and I knew it would be some time before I fully understood the problem I was trying to solve. I started to write the code and neglected to start by writing unit-tests. After some time hacking on the utility I finally got to the point where I had a working prototype and I finally understood the problem. At that point I reviewed the code of my program and realized that I had (not surprisingly) gotten the architecture of the program wrong. I figured out the right architecture and started on the refactor. During the refactor the majority of the original code and internal APIs were radically changed. If I had written any unit-tests, they would have been discarded, as the code and APIs they would have tested simply did not exist in the refactored program. I could have preserved end-to-end tests, but not any unit-tests. During the refactor I wrote unit-tests, and I now have a program that has good unit-test coverage. I'll call this development process "Test Second Development". I'd like to propose the following criteria for when test-second development may be appropriate.

  1. You don't understand the program you are writing.
  2. You don't understand the language you are writing the program in.
  3. You will have the luxury of refactoring the program once you understand it, and before it "ships".
  4. You're disciplined enough to write unit-tests when you do your rewrite/refactor.
In other words, I think it may be OK to delay writing unit-tests when you are writing a true prototype.*

My thoughts on test-second development are still immature and evolving, but I thought I'd jot them down for future reference. In my case I don't think that I would have gained any benefit from using test-first development, and I can definitely see how it would have slowed down the discovery process.

* If you are writing code for your job and anyone else will know about your project, then I really doubt you will ever write a prototype. As soon as someone sees or hears about your project they'll want to ship it. I once worked for a company that shipped a simple utility that I had written to aid in testing my code. I only found out that they had shipped it after my manager came and talked with me about the utility and told me that I needed to pretty up the GUI. When I told him that it was a testing utility and was never meant to be shipped he told me that since it was shipped we'd have to support it. I started prettying up the GUI and started on a new testing utility. The new testing utility had a big banner across the front stating that it was for internal testing only. I later learned that calling testing tools something like "The enema tester" was a good strategy for ensuring that internal testing tools never got shipped. 

Saturday, April 23, 2011

Cross Compiling Google Go Code

This is more of a note to self, so I can look up this information in the future as needed. Google Go makes it really easy to cross-compile. Here is how you can compile your Go project for another architecture. The first step is to compile the cross-compiling compilers (I can't believe I just typed that). So let's check out the source code for the Go compiler.
hg clone -r release https://go.googlecode.com/hg/ go

Now to compile the Go compiler for the architecture of your machine, simply cd into the go/src directory and run the all.bash script
cd go/src; ./all.bash
If you want to create compilers for a different architecture, set the GOARCH environment variable and then run the make.bash script.
GOARCH=arm ./make.bash
GOARCH=386 ./make.bash
GOARCH=amd64 ./make.bash

The all.bash script compiles the compilers and runs the unit tests; the make.bash script just compiles the compilers.

This will create separate compilers, linkers, and assemblers for each of the three architectures. Because each compiler, linker, and assembler has a different name, you can support all architectures simultaneously.

Once you have the Go compilers compiled you can now compile your project for different architectures by creating a Go Makefile and setting the GOARCH environment variable before typing make.
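At the time of writing, a Makefile for a stand-alone Go command looks roughly like this (a sketch; the TARG and GOFILES values are illustrative and must match your own project):

```makefile
# Pull in the standard Go build definitions.
include $(GOROOT)/src/Make.inc

# Name of the binary to build and the source files that make it up.
TARG=hello
GOFILES=\
	hello.go\

# Make.cmd builds a command; use Make.pkg for a library package.
include $(GOROOT)/src/Make.cmd
```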

GOARCH=arm make
GOARCH=386 make
GOARCH=amd64 make

That's it! You can now cross compile your Go code. More information on the Go compilers, creating a Go Makefile and getting started with Go can be found here, here, and here respectively. Happy Hacking!

Friday, April 22, 2011

I'm back

Last year I decided that it was time to finish up my Masters degree. I recognized that in order to finish in a timely manner I would need to forgo all hobbies for a while and so this blog has been on hold. I'm now done, or at least I'm at the stage where I'm waiting for the final announcement that I'm done and so I'm now returning to normal life. After having dropped out more than a year ago I'm finding that it's taking some time to get adjusted to normal non-academic life. I'm also slowly starting to remember all the non-homework activities that I used to participate in. I wonder how long it will be before I become as busy as I was before.