Someone suggested a place where AI helps them, and I decided to try it. The person has an AI summarize their work, and if the result isn't what the author intended, they know their writing wasn't clear.
This post looks at how that worked for me.
This is part of a series of experiments with AI systems.
Checking an Editorial
I wrote an editorial on database DevOps that was published recently. I decided to have a few AIs summarize it and see if the result was what I intended. First up, Claude, with a simple prompt: summarize this text (insert here). See the top of the prompt here:
The result is shown below and seems to be an accurate summary of the text. This is basically what I was trying to say in the piece. Of course, it isn't much shorter than the original, but it gives me some confidence in the AI's ability to recognize what I've written.
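If you wanted to script this check rather than pasting into a chat window each time, a minimal sketch against the Anthropic Python SDK might look like the one below. The model alias, file name, and prompt wording are my assumptions, not what the chat interface does behind the scenes.

# A rough sketch of the same check using the Anthropic Python SDK.
# Assumptions: the editorial is saved in editorial.txt and the model alias
# below is available to you; neither comes from the original post.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("editorial.txt", "r", encoding="utf-8") as f:
    editorial = f.read()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=500,
    messages=[{"role": "user", "content": f"Summarize this text: {editorial}"}],
)

# Compare this summary with what you intended to say in the piece.
print(response.content[0].text)

The check itself is still manual: you read the summary and decide whether it matches your intent.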
In Perplexity, I got this result. It's similar, but it doesn't mention the author. Instead, it summarizes the text without trying to give the author a voice, which is interesting. It's very close to the result above, but slightly drier, treating the text as fact rather than opinion.
Perplexity also had some related items at the bottom, each of which injects a prompt back into the LLM for more info.
Last, Copilot. My laptop has a dedicated key for this, so I pressed it and entered my prompt in the app. This result is shorter and to the point. There are some additional links to click at the bottom.
Clicking on one of the items at the bottom injects that text as a new prompt and gets a new result.
I tried this with a couple of other pieces of work, some of which aren't published. In each case (3 attempts), the summary made sense. I don't know if that means I'm writing clearly, but it does help me get a sense of how what I've written comes across.