## Thursday, February 28, 2008

### Interviews and colloquium

I have already had one interview today with Sandia National Laboratories and will have another one this afternoon with Raytheon. This afternoon I also have a faculty candidate colloquium. Luckily, the presentation I am giving tomorrow is only five minutes long and the slides are done (mostly stolen from my SenSys talk). Busy day, busy day.

## Wednesday, February 27, 2008

### Embedded systems workshop

There will be an embedded systems workshop in conjunction with the LCTES PC meeting. I mainly mention this because, since my advisor is the PC chair, I got roped into presenting. I'll be talking about the work Yang, John, David and I did to get Safe TinyOS integrated with the main distribution of TinyOS.

## Tuesday, February 26, 2008

### Marvell interview

I had a nice phone interview this morning with a manager from Marvell. Like my interview with Google, I don't think I did horribly, but I also definitely didn't hit a slam dunk. Here are some of the questions I was asked:


- How do you make a function reentrant?
- What does the stack frame look like during a function call?
- What are the different types of variables? local, global, static, register, pointer
- Write some code to determine the endian-ness of a machine.
- Design a file system for a scratchpad memory.

The final question was something like "what are your career goals for the next five years?", which had the goal of figuring out whether I really wanted to be an engineer or was just a researcher in disguise.

Unfortunately, I found out at the end of the interview that it was for a group in the Boise office. I do not have anything against Boise, but I would like to raise my children in a non-LDS dominated culture. I had indicated to the recruiters at the career fairs that I wanted to interview for the Santa Clara office for that reason. Oh well.

## Monday, February 25, 2008

### The Legend

I think the Great Ventilation and Telephone Riots of SrDt 3454 (see __Mostly Harmless__ by Douglas Adams) encapsulate most of what annoys me about Mac users. I should preface the rest of this post by saying my next laptop will probably be a Mac, and that some people may consider me a bit of a fanboy myself (even though I have never owned a Mac).

Anyway, the problem with a lot of Mac users is that they do not actually know how their machines work. The beauty of a Mac is that most of the time this knowledge is not necessary. However, either because the Mac hides it or because the user is ignorant, when the knowledge is necessary the user is helpless.

There is something to be said for eventually forcing users to take the training wheels off and actually understand their machines. Also, Apple should remember the legend required to be placed on all devices:

> The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair.

Fanboys often think their Macs cannot possibly go wrong...

## Friday, February 22, 2008

### Sitemap for Blogger

I added http://ncooprider.blogspot.com/atom.xml as my sitemap for this blog. That was mainly so the Google Webmaster Dashboard would stop asking me for one, but I wonder whether it will really help at all. Blogs are typically very spider-friendly, and I would hope Google indexes its own blogs well.

### Satish Chandra visit

Satish Chandra visited the School of Computing today. I attended his colloquium, had lunch with him at Sage's Cafe, and met with him one-on-one for a few minutes. He is definitely doing some really neat things and tackling some important problems there at IBM. He specifically mentioned the T.J. Watson Libraries for Analysis (WALA).

## Wednesday, February 20, 2008

### 2008 Organick Lecture Series presents Fran Allen

This year's lectures will be given by Fran Allen, IBM Fellow Emerita at IBM's T. J. Watson Research Laboratory. The lectures will be given at the following dates and times:

- **"High Performance Programs and Programmers: A Personal Perspective"** Wednesday, February 20, 2008, 7:30 p.m., 1230 Warnock Engineering Building. Reception to follow.
- **"Compilers and Parallel Computing Systems"** Thursday, February 21, 2008, 1230 Warnock Engineering Building. Refreshments 3:20 p.m., lecture 3:40 p.m.

Fran Allen is an IBM Fellow Emerita at the T. J. Watson Research Laboratory with a specialty in compilers and program optimization for high performance computers. Soon after joining IBM Research as a programmer in 1957 with a University of Michigan master's degree in mathematics, Fran found the technical goal that would drive her career. The goal was (and still is) to enable both programmer productivity and program performance when developing computer applications. One result of her work is that Fran was named the recipient of ACM's 2006 Turing Award "For pioneering contributions to the theory and practice of optimizing compiler techniques that laid the foundation for modern optimizing compilers and automatic parallel execution."

She is a member of the American Philosophical Society and the National Academy of Engineering, and is a Fellow of the American Academy of Arts and Sciences, ACM, IEEE, and the Computer History Museum. Fran has three honorary doctorate degrees and has served on numerous national technology boards, including CISE at the National Science Foundation and CSTB for the National Research Council. Fran is also an active mentor, advocate for technical women in computing, environmentalist, and explorer.


## Tuesday, February 19, 2008

### Google dice question

A fellow grad student in my lab is determined to figure out the dice question Google asked me in my phone interview. The basic idea is to accurately simulate a seven-sided die using any number of five-sided dice.

His first attempt was to roll seven five-sided dice, take the sum modulo seven, and add one to the result. In simulation this comes close to a uniform distribution, but the actual math works out to be non-uniform. I wrote this program for testing:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define COUNT 1000

int real[7];
int jun[7];

// Couldn't see anything wrong from this:
int test(void) {
    int i;
    for (i = 0; i < COUNT; i++) {
        int sum = 0;
        int j;
        for (j = 0; j < 7; j++) {
            int d5 = rand() % 5 + 1;       // one five-sided roll
            sum += d5;
        }
        int calculate = (sum % 7) + 1;     // proposed d7: seven d5 rolls, mod 7
        int d7 = rand() % 7 + 1;           // reference uniform seven-sided roll
        real[d7 - 1]++;
        jun[calculate - 1]++;
    }
    int suma = 0;
    int sumb = 0;
    for (i = 0; i < 7; i++) {
        printf("%d) real %10d jun %10d\n", i + 1, real[i], jun[i]);
        suma += real[i];
        sumb += jun[i];
    }
    printf("%d %d\n", suma, sumb);
    return 0;
}

// This comes from http://en.wikipedia.org/wiki/Dice
// probability(s, i, k) = chance that i s-sided dice sum to exactly k
float probability(int s, int i, int k) {
    int n;
    if (i == 1) {
        if ((k >= 1) && (k <= s))
            return 1 / ((float)s);
        else
            return 0;
    }
    float sum = 0;
    for (n = 1; n <= s; n++) {
        sum += probability(s, 1, n) * probability(s, i - 1, k - n);
    }
    return sum;
}

float sums[7];

int main(void) {
    srand(time(NULL));
    test();
    int x;
    for (x = 7; x <= 35; x++) {
        float prob = probability(5, 7, x);
        sums[x % 7] += prob;
    }
    for (x = 0; x < 7; x++)
        printf("%d %f\n", x + 1, sums[x]);
    return 0;
}

// since the probabilities are not all even, the algorithm does not work
```

He countered with an argument using six-sided dice, which was actually a degenerate case. My probability skills are fairly weak, so I had to answer with an enumeration of the possibilities in an example using three-sided dice to emulate a five-sided die.

His final answer was to look at sequences of die rolls. First prime the pump by rolling the five-sided die twice and taking the result modulo seven. Then, for each seven-sided roll we want after that, roll a five-sided die, add it to the previous result, and take the sum modulo seven. His argument is that, in the long run, this produces a uniform distribution.

Simulating this algorithm seems to produce a fairly uniform distribution, but so did the last one. Unfortunately, the probability math is beyond me. Well, it is beyond the time I have to put into it. I still do not believe this is a correct answer, but I cannot prove it. My best argument is a proof by contradiction.

1. Assume the algorithm is right.
2. Then after a certain number of rolls, the latest value accurately simulates a seven-sided die.
3. To get the next seven-sided value, we add a five-sided roll and take the result modulo seven.
4. That means a seven-sided roll equals the previous seven-sided roll plus a five-sided roll, modulo seven. But given the previous value, the next one can only land on five of the seven residues, so consecutive values are not independent the way true die rolls are.
5. Which is a contradiction.

Any other thoughts? How can I prove the algorithm wrong? Or is it right?

## Friday, February 15, 2008

### Google Street View for Salt Lake City

Just a quick note that Google Street View is now available for Salt Lake City. You cannot see my car because I park inside an apartment complex, but you can see a lot of really neat things. The coverage area extends up and down the Wasatch front, so I can go visit BYU if I want!

## Thursday, February 14, 2008

### Google phone interview

I had a phone interview with Google this morning. Overall it was a pleasant and slightly fun experience. I do not think I botched it up too badly, but I definitely do not think I had a slam dunk either. Here are the questions the Google engineer asked me in the interview:

**Describe pluggable abstract domains to me.** This one related directly to my research. It impressed me for two reasons: the interviewer had read and understood my resume, and the thing he asked about wasn't RAM compression (which is what everybody else asked about). He had some follow-up questions about mathematical functors (which he had studied in school), interval analysis (which he had done before), and dynamic typing (another potential application). He followed up the type theory question by using Lisp as an example.

**Simulate a seven-sided die using only five-sided dice.** The tricky part of this is generating a uniform distribution between one and seven. Just rolling seven of the five-sided dice does not work because the totals in the middle have higher probability. The solution I eventually arrived at (with a little poke in the right direction from my interviewer) is to turn each five-sided die into a binary digit (1-2 are zero, 3-4 are one, five is a re-roll) and then roll three of them. Interpret the three digits as a binary number, with zero being a re-roll.

**Come up with an algorithm to count number pairs which add to a certain sum in a list.** The "obvious" way to do this is to check the first element against all the others, then the second against all the others minus the first, and so on. The problem is that this is O(n²). The better solution is to sort the list first and then go through it looking for the missing half of each pair; in other words, if the first number is one and you want pairs that sum to six, look for a five in the list. The complexity depends on the sorting method. If a bin sort can be used (the number of possible values is small), the whole thing is O(n), because lookups and counting are easy with the bin-sort structure. Otherwise we have to use a comparison sort, and the lookups are more expensive as well, which results in O(n log n).

The interviewer always asked me about the complexity of my algorithms (makes sense, since this is Google).

## Wednesday, February 13, 2008

### Links in LaTeX pdfs

My advisor gave me a somewhat cryptic piece of advice that I should "add more links" to my CV. I noticed that his CV had more web addresses on it, and that when viewed in Adobe Acrobat Reader those addresses were "clickable." So I asked to see his .tex file and discovered the trick.

He uses the hyperref package by inserting this line:


`\usepackage[pdftex]{hyperref}`

When used in combination with the url package (`\usepackage{url}`), a hyperlink can show up in the PDF when it is written like this:

`\url{http://www.cs.utah.edu/~coop}`

## Monday, February 11, 2008

### cvs, svn, wget

While working on Safe TinyOS I have had to write up some instructions. These instructions involve getting sources from various different locations. The ideal write-up looks like a bunch of commands with comments, much like a script (maybe it is a script). While getting things from cvs and svn was fairly straightforward for me, I was not sure how to get things from locations not in a repository. Then my advisor suggested `wget`, which I had forgotten about. I just run `wget url` and it pulls down whatever is at that URL. Problem solved!

## Thursday, February 07, 2008

### Google phone call

Yesterday I received a phone call from a Google recruiter. This recruiter had previously contacted me by email to set up the call, so I thought this would be the "first round" interview. Prepared for a thirty-minute session riddled with tricky questions, I instead had a nice conversation about the Boston office of Google and set up the real first-round interview. Somewhat anti-climactic.

## Wednesday, February 06, 2008

### Ch-something

I had to set up some "sandbox" directories over the last few days in which some different members of my research group could mess around. This required using a few different commands:

- `chgrp -R embed *`: changes the group of the whole directory and its subdirectories to the group `embed`
- `chmod -R g+w *`: adds write privileges to the whole directory and its subdirectories for members of my group

## Monday, February 04, 2008

### BlockSite

I use the Firefox add-on BlockSite at work. I found myself spending a lot of time on "goof-off" sites (digg, facebook, linkedin, wikipedia, youtube, phdcomics, etc). While I wouldn't stay at any particular one for very long and each of those sites is useful to some degree, I think my productivity increases by just blocking them. Since I use Firefox I just installed the BlockSite extension and off I went.
