Matt Miller (#4) (12/4/2013)
Perhaps it's me, but the primary flaw I noticed is that this is an attempt to use a de facto individual ranking system to measure performance in a team setting. You need to match the metrics/measurements to what it is you're trying to reward; that is, if you truly want to encourage positive teamwork, a dimension that measures the performance of the teams you participate in should be part of your review. You're essentially trying to rank apples by how much they taste like oranges.
We just went through our reviews in the Sept/Oct time frame. The 1's were below average and are probably going to be shown the door. But the management style dictated that less than 1% get a 3. The 2's were the expected norm, meaning you do your job at least at an average if not a 100% level.
It is not great, but it seems at least somewhat fair.
In addition, most rating systems are set up to groom people to be personnel managers in the future. I told them when I was hired that I can deal with managing project teams and doing training on <SW/HW/systems> for junior staff, but I don't want to manage people on a permanent basis. I don't like wetware, I like tech. The review system, as usual, emphasizes managing people.
So I always look at any rating system as a joke.
A little bit of this and a little byte of that can cause bloatware.