SQLServerCentral Editorial

Don't Create Workslop


I remember a time before email. Some of my first jobs were largely based on paper being moved from person to person. I'm sure some of you remember interoffice envelopes being used to communicate between individuals in an organization. I used those to send and receive memos before we implemented email. Fortunately, our email implementation (cc:Mail) came soon after I started working in corporations.

Initially, people treated email much like paper mail inside organizations. Over time, though, people started to treat email differently. It was easy to send an email alongside other work, so people sent more messages than they ever would have on paper. They started to dash off notes quickly, sometimes too quickly, as one email might be followed by another with an "I forgot this." As instant messaging grew, we saw similar patterns: people were quick to send messages, regardless of whether they were important, well thought out, or even necessary.

As AI becomes more widely used in the workplace, there's a similar tendency. People are quick to use AI to generate something and send it to others, often without the due diligence to ensure the work meets the quality level the recipient expects. Some workers don't double-check what they received from the GenAI tool, and the result may not be complete enough to actually satisfy the requirements they were given. Maybe worse, it might not even be aimed at the problem that was supposed to be solved.

I ran across an article on workslop, which is defined as AI-generated work that masquerades as good work. Instead of actually being what the organization needs, it's sloppy, it's low quality, or it misses the mark.

To be fair, I don't think this is an AI issue. I have worked with plenty of people who produced low-quality output that wasn't good enough for me to use. I've seen plenty of people not really try to produce quality results and do a poor job of completing the tasks they were assigned. With AI, they can do it quicker, which can be a problem, especially if they are producing things other employees depend on or need. The result might be some people pushing their work onto others, who then have to spend time fixing (or completing) the copy/pasted GenAI results, taking time away from more important tasks.

In the technical world, we saw this in the 90s with VB6, when lots of technical and nontechnical people quickly produced code for applications that worked initially, but didn't perform well, couldn't scale to more users, and weren't stable enough to run every day. Sometimes not stable enough for an hour. I suspect we'll see a lot of AI-generated code that repeats this pattern. Not because the AI can't generate good code, but because the people using it won't know how to ask for good code, with instructions about the kinds of practices that create robust applications. They also won't know how (or won't bother) to check the code for quality.

My guess is that the adoption of GenAI across lots of work will result in a lot of things being produced, but at a lower quality than we might want. We'll also see this phenomenon create inefficiencies as other workers have to send back or redo work. Fortunately, there is a lot of room for inefficiency in many organizations, so they can likely continue to function.

Those who learn to use GenAI well will produce higher quality work faster and stand out from their peers. Of course, a big part of standing out is also developing strong soft skills and advocating for your accomplishments. Without that, you might find that those who produce workslop, but talk about it well to others, will stand out instead of you.
