U of U career services rolled out a new tool this week. At first I laughed it to scorn, but it has grown on me. I might incorporate it into my suite of tools for job searching. I believe it is technically supposed to be only for U of U students and alumni, but there seemed to be nothing keeping other people from using it as well. I am sure that will change.
The tool is a school-specific version of Optimal Resume. The application guides the user through entering the necessary information for the resume and then pops out a fairly nice-looking resume in a variety of formats. This, by itself, is not that neat. It is my personal opinion that if you really care about the job search, you should learn how to format your resume nicely yourself. Plus, if all the students at the U start using this tool, a way to stand out will be to NOT use this tool and look different from the rest.
Although it is pitched as a resume builder, that is not the functionality that is cool. The cool functionality is all the additional tools. For example, it automatically adds functionality so that someone viewing the resume online will see mouse-over effects. There is also a letter builder, a skill assessment, and a few other useful builders. It also provides an easy way to publish the information on the web, hosting it for free.
You can find my web page for this site at http://utah.confidentialresume.com/coop. Right now it only contains a semi-direct copy of my resume, but I will be updating my information soon to see what it can really do!
The portal I use, and that you should probably check out (especially if you are a U of U alumnus or student), is https://utah.optimalresume.com. The big question is whether employers and recruiters will actually look at these resumes and whether the cool gimmicks and such will actually make a difference. I believe the answer is that only time will tell, but apparently career services thinks it will help and is worth it!
Tuesday, October 23, 2007
Thursday, October 18, 2007
Career fair
Today I attended the career fair at the University of Utah. As I told my brother before he went to the career fair down at BYU, I am not entirely sure of the best way to approach these things. I spoke with a number of different companies, the major ones being National Instruments, Marvell, Raytheon, and Sandia. The email I received from my career counselor indicated that Microsoft was also going to be there, but they were not. I collected some interesting fliers and information, plus some neat little gadgets. However, most recruiters just told us to look at the company web site and to submit a resume there.
In other news, my proposal presentation is in its final format. It is a good thing, since I am presenting it tomorrow.
Wednesday, October 17, 2007
Real-time issues
During my interview with Sandia this morning, the recruiter asked whether I dealt with real-time issues when working with cXprop. Unfortunately, I had to explain that I did not worry about those types of constraints when working on my tool. Basically, I worry about code size, data size, and duty cycle. That seems to be it. Occasionally we used to throw stack size in there as well, but John does not maintain stacktool anymore, so it does not work for TinyOS 2.
Other than that and a few fumbles here and there, the interview went fairly well. It sounds like they are doing some really neat stuff that I would be happy to work on. I just have to make sure they would be happy to have me work on it for them!
Monday, October 15, 2007
Local view
Last week I spent a little bit of time making internal information from cXprop available to Will Archer for use in his Caduceus-using tool. Unfortunately, I came at the problem with the wrong set of assumptions and it has ground the idea to a temporary halt. Hopefully we can resolve the issues during our group meeting on Thursday.
My faulty assumption was that cXprop was going to be used on whole-program code, and probably on TinyOS code. Although it works on other things, that is normally the target. It turns out Will is using it on single files that may or may not have anything to do with TinyOS. This means that I do not know anything about the order in which the functions execute, so I basically have to assume ALL orders of execution.
The problem is that this model does not exist in cXprop, except for interrupts. I would have to adapt that model to functions in files. That should not be too hard, but John wants me to stop hacking on code and keep working on my "critical path" writing/presenting projects, which is why I have not actually done it yet. There might also be little nuances I have not thought of yet. Until I create this model, Will uses a version that assumes "bottom" for incoming entry-point function state, which basically means nothing gets learned.
Entry-point functions are defined as those which are visible outside the file. This essentially means anything not labeled as static, although there are a few other less-common cases.
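To make the distinction concrete, here is a tiny made-up C file; the names are invented for illustration and are not from Will's code or from cXprop:

/* hypothetical example file -- the names are invented for illustration */

static int counter;               /* file-scope state */

/* Not an entry point: "static" keeps it invisible outside this file,
   so the analysis can see every caller and every calling order. */
static void bump(int amount)
{
    counter += amount;
}

/* Entry point: no "static", so code outside the file may call it at
   any time, in any order, any number of times.  With no model for
   that, the safe fallback is the "bottom" incoming state mentioned
   above, which tells the analysis nothing about counter. */
void reset_counter(void)
{
    counter = 0;
    bump(1);
}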
Friday, October 12, 2007
Car Mechanics
This post is mostly about car mechanics, but it also relates them to computer repair services so I feel justified in posting it here.
My car started acting up a few months ago. Various circumstances prevented me from taking it in until a week ago. Up until now I had been going to the dealer, since I knew they would know what they were doing and would have the parts. However, I called the dealer up this time and the mechanic said it would be at least a day and at least $95 to just look at it. That was such a blatant attempt to rip me off that I did not make the appointment and resolved to find someone else to fix my car.
Luckily, my wife listens to Car Talk and has often sucked me in as well. Although we have not listened as regularly as we used to, we still trust those two guys. I knew that their website had a forum for recommending auto mechanics, so I checked it out. After doing a bit more research, I settled on a group of three to try. The first one couldn't see me until next week, and my life starts getting REALLY busy again next week. The second one could see me today, so I made the appointment.
Dropping the car off this morning, I had a little apprehension as I walked away. The repair shop was definitely a lot less "classy" than the dealer. As I thought about it, though, this made sense. The mechanic actually knew how to fix cars. He didn't charge me to look at my car. As of writing, I am still waiting to hear back from him. I'll probably call him in another twenty minutes.
The point is that I realized that computer service repairmen are like auto mechanics. Places like Geek Squad know they can take the average person to the cleaners because the average person has no clue how to self-diagnose a computer and fix minor problems. The only way to REALLY protect oneself when dealing with computer repair people or auto mechanics is to actually know a fair amount about what they are fixing.
Otherwise, you end up like me, getting worked over by the mechanics. When I was ignorant/scared of computers, I had a "friend" help me fix an old laptop. Only it really didn't take much work, if any. Then the "friend" expected me to give him rides to school every day, even though he had only done about half an hour's worth of work.
Knowledge is power, and reading up a little bit can save a lot of money/time.
Tuesday, October 09, 2007
Digital Photos
I have several gigabytes of digital photos. Because they are essentially "free," I take a lot more digital photos than I would if I had a regular camera. I almost never get these digital photos actually developed. Part of me feels that this is wrong and we still need to have hard copies of photos.
But then part of me says that having a hard copy is no longer important, as long as the digital copies are backed up and dispersed. A set of hard copies can be destroyed in a fire, just like there are several ways to destroy the digital versions. Plus, with applications like Quark or Pagemaker, making "scrapbooks" can become an ALL digital process.
I guess then there wouldn't be scraps, though.
Sad but true
This sums up what it feels like to have a meeting with my advisor, John Regehr:
I'm not even kidding. That was definitely what it was like at first, at least. I think I have built up some immunity over the years.
Monday, October 08, 2007
PhD Proposal
I finally got my thesis proposal past my advisor and onto my committee. So I also put it up on my web page. You can read it here, or the main point is copied below:
Problem
A straightforward instantiation of traditional dataflow analysis techniques fails to provide adequate precision for aggressive optimization of interrupt-driven MCU systems. These systems contain features which either must be dealt with because they hinder the analysis, or should be leveraged because they represent an untapped potential for information. The pivotal features in interrupt-driven MCU systems are the interrupts and the microcontrollers themselves.
An interrupt complicates the dataflow analysis for a given code segment because during the segment it may fire at any time, it may fire repeatedly, or it may never fire at all. To avoid unnecessary degradation of the results, an analysis must have a better coping strategy for multiple flows due to interrupts than just modeling all possible interleavings of the flows. On the other hand, interrupts also provide useful divisions between program elements. Some data and code may be only accessed inside a single interrupt, inside multiple interrupts, or outside of any interrupt. Not capitalizing on the isolation of accesses due to interrupts will unnecessarily reduce analysis precision.
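To give a flavor of the problem, here is a tiny sketch of the kind of code I mean, written in roughly avr-gcc style; the vector name and the details are illustrative, not taken from the proposal:

#include <avr/interrupt.h>        /* assuming an AVR/avr-gcc target */

static volatile unsigned char data_ready = 0;

/* This handler may run zero, one, or many times during any stretch
   of main(), so the analysis cannot pin down where the assignment
   to data_ready lands relative to the reads below. */
ISR(TIMER0_OVF_vect)
{
    data_ready = 1;
}

int main(void)
{
    sei();                        /* enable interrupts */
    while (!data_ready) {
        /* spin: data_ready can change "out of nowhere" */
    }
    /* ...consume the data... */
    return 0;
}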
Low-level systems programming on MCUs often involves inline assembly and directly accessing specific parts of the MCU. For example, the status register may be directly read, shifted, and masked in order to determine the status of the interrupt bit. Naively and pessimistically analyzing all hardware accesses degrades the analysis of systems code very quickly. Leveraging the information is crucial for the success of the analysis.
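As a minimal sketch of such an access, assuming an AVR-style status register where bit 7 is the global interrupt enable bit (the register and bit position differ on other MCUs):

#include <avr/io.h>               /* SREG, on AVR targets */

/* Return 1 if global interrupts are currently enabled.
   A naive analysis sees an unknown byte shifted and masked and learns
   nothing; an analysis that knows what bit 7 means can, for example,
   prove this returns 0 right after interrupts have been disabled. */
static unsigned char interrupts_enabled(void)
{
    return (SREG >> 7) & 0x01;
}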
Solution
I am developing a framework to enable sound and accurate dataflow analysis for interrupt-driven microcontroller programs. This framework adapts existing abstract interpretation ideas to system-level C code. My framework integrates several synergistic analyses such as value-flow analysis, pointer analysis, and callgraph construction. Contributions of my work include using pluggable abstract interpretation domains, providing a novel model of interrupt-driven concurrency, allowing dataflow through volatile variables when safe to do so, and tracking interrupt firing dependencies. When compared to a highly optimizing C compiler, my framework improves traditional code optimizations such as conditional constant propagation, dead code elimination, redundant synchronization elimination, and inessential-safety-check removal. It also enables new transformations such as RAM compression, the sub-word packing of statically allocated global variables. These transformations help microcontroller programs meet stringent resource requirements.
Thursday, October 04, 2007
Nearly All Binary Searches and Mergesorts are Broken
A paper from POPL that we are reading in John's research group refers to this blog post: Official Google Research Blog: Extra, Extra - Read All About It: Nearly All Binary Searches and Mergesorts are Broken. Scary and funny at the same time.
The problem occurs with statements like int mid = (low + high) / 2;. If low and high were true mathematical integers, there would be no problem. Since they are fixed-width int values instead, there is a possibility for overflow. An easy solution is to compute the midpoint with a subtraction or do some fancy shifting (see the original Google blog post).
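Here is a minimal C sketch of the bug and the fix. The Google post's example is Java, where the overflowing sum silently wraps negative; in C, signed overflow is technically undefined behavior, which is arguably worse. The values below are just picked to trigger the problem with a 32-bit int.

#include <stdio.h>

int main(void)
{
    int low  = 2000000000;                    /* both fit in a 32-bit int... */
    int high = 2000000001;

    int bad_mid  = (low + high) / 2;          /* ...but the sum overflows */
    int good_mid = low + (high - low) / 2;    /* subtraction avoids the overflow */

    printf("bad midpoint:  %d\n", bad_mid);
    printf("good midpoint: %d\n", good_mid);
    return 0;
}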
John mentions this type of thing a lot. It seems that a lot of formal methods people make assumptions about an int acting like an integer. Software engineers, on the other hand, are painfully aware that the problems normally come because int types do NOT act like integers. This is why there are so many complicated transfer functions in cXprop. The paper we are reading is Types, Bytes, and Separation Logic by Harvey Tuch, Gerwin Klein, and Michael Norrish.
Wednesday, October 03, 2007
Distributed Cognition
I recommended we read a paper on distributed cognition as part of the seminar on ultra-large scale systems. The paper I chose was Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research by James Hollan, Edwin Hutchins, and David Kirsh (all at UCSD). The paper led to some interesting discussion about human computer interaction (HCI). Eric was particularly excited about it, as it directly relates to the FLUX workbench project, which he is in charge of.
I found the paper to be an interesting and informative read, but I would prefer to read the follow-up paper. The ethnographic studies they highlight emphasize the importance of considering the environment as part of the cognitive process, but they do not actually include multiple people in that environment. They explain how such a study does not exist yet, but that does not make me feel any better. Unfortunately, I have not been able to find a paper which does do this study.
Tuesday, October 02, 2007
RTSS PhD Student Forum
Yesterday I submitted my proposal to the RTSS PhD Student Forum. I am fairly happy with how my submission turned out.