I fly a lot, as you might have guessed if you read my blog regularly. In 2025, I've been on 56 United planes as I write this, with about 10 left to go before the end of the year. One of the things United does is sometimes send out a quick "survey" after a flight, checking to see if everything went smoothly. I don't always fill these out, but recently I decided to give some feedback as I had a great experience.
I really wanted to just compliment the onboard crew, but the survey ran quite a few pages (10?) with a lot of questions. I tried to fill it out but lost focus after a few pages. It felt like a chore, and I started randomly clicking some of the selections asking me to rate things 1-10. I wasn't really rating the items; I was trying to get done. Eventually, I bailed on the survey without completing it, but it got me thinking about the data from these surveys.
I'm somewhat detail-oriented and I try to do a good job, but I couldn't finish the survey. How many others just click through things and don't really give an accurate picture of their feelings?
A similar situation occurs at work, where we have an HR rating system (Thymometrics), which I really like. Over time, it helps me keep an eye on how I feel about my job, the company, and my general attitude about work. We get quarterly reminders to fill it out, but I know quite a few people who don't fill it out at all, or just click through and save the ratings without thinking about them. Another place where the data might be suspect.
At work we get feedback on various product metrics, in addition to uninstall feedback and product feedback, sometimes with a rating that people click. Is that what they really think about their experience, or did they just click the first thing they saw? Or did they mis-click and have no way to change their rating (clicking 2 when they meant 9)?
There is a lot of data that organizations collect from people that is very subjective. Across a large group of users, this should provide some sort of indication of how people feel, but if the sample sizes are small, can you really use this data? I think it's easy for people in product management, marketing, and sales to view this data as much more accurate than it might be. I know I'm always wary of any outliers when I see feedback, and often I want to know how many people contributed.
Unless it's a decently large number (100s at least) and there is a clear trend from many people (> 5%), I tend to discount the data as an outlier and not representative.
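To make that instinct concrete, here's a small sketch (my own illustration, not anything United or Thymometrics actually does) showing how wide the uncertainty around an average rating is with a handful of responses versus a few hundred:

```python
import math

def margin_of_error(ratings, z=1.96):
    """Approximate 95% margin of error for the mean of a list of ratings."""
    n = len(ratings)
    mean = sum(ratings) / n
    # Sample variance (divide by n - 1)
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    return z * math.sqrt(var / n)

# Ten responses with a couple of possible mis-clicks (the 2 and the 3)
small_sample = [9, 8, 2, 9, 7, 8, 9, 3, 9, 8]
# The same distribution of answers, but from 300 people
large_sample = small_sample * 30

print(round(margin_of_error(small_sample), 2))  # roughly +/- 1.6 points
print(round(margin_of_error(large_sample), 2))  # roughly +/- 0.3 points
```

With ten responses, a single stray 2 swings the average noticeably and the plausible range spans well over a full point on a 1-10 scale; with hundreds of responses, the same stray clicks mostly wash out. That's the intuition behind wanting a decently large count before trusting a trend.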
I'm not sure how many of you do this, but I'd encourage you to critically examine data and be wary of drawing conclusions, especially when you're collecting impressions, feelings, and opinions from others.