I've seen many creative solutions implemented in the data platform. Often these are data-driven solutions that help us work at scale. These are most often built with T-SQL or PowerShell (PoSh), as those are the two most common languages. Less often I see Perl, Bash, or Python, though I expect that last one to grow in usage over time.
Recently I saw a presentation where someone was using PoSh to solve a problem. In this case, they were reading data from a text file and using it to drive the solution. That's a common pattern I've seen, but what about maintaining that text file, or that data?
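The pattern here is simple: a file holds a list of targets, and a script loops over them. The presenter used PoSh, but the shape is the same in any language; here is a minimal sketch in Python, where the file name `servers.txt` and its contents are hypothetical stand-ins for whatever list drives your process.

```python
from pathlib import Path

# Hypothetical driving file: one server name per line.
# In the presentation this was read by a PoSh script; the pattern is identical.
config_file = Path("servers.txt")
config_file.write_text("sql01\nsql02\nsql03\n")  # stand-in for a maintained list

# Read the file and skip blank lines.
servers = [line.strip() for line in config_file.read_text().splitlines() if line.strip()]

for server in servers:
    # The real work (backups, health checks, deployments) would go here.
    print(f"Processing {server}")
```

The script itself is trivial; the interesting question, as noted above, is who keeps `servers.txt` accurate.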
Too often I've seen a process set up that uses a text file, or more often, data in a table. This gives us a data-driven, maintainable system, but no one maintains it. Or, most often, someone maintains it for a while but then stops.
That's natural. As humans, not many of us are good at doing the same type of action over, and over, and over again, without mistakes or forgetfulness. This is especially true in an organization where people change jobs, take vacation, and have other interruptions to their work.
One of the things that DevOps has tried to get organizations to embrace is the use of automation to keep a process going over time. We try to remove the dependency on any particular person, or people.
I like the idea of a PoSh script reading from a text file, but a more complete solution would also describe a way to maintain that text file. Is there some inventory system that produces it, or perhaps a deployment process that includes a step to update the file, ensuring the process grows as our organization does?
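One way to close that gap is to regenerate the driving file from an authoritative source on a schedule, rather than hand-editing it. A minimal sketch, again in Python, where the in-memory set stands in for a CMDB query, cloud API call, or deployment-time step (all hypothetical names here):

```python
from pathlib import Path

def refresh_server_list(inventory, path="servers.txt"):
    """Rewrite the driving file from an authoritative inventory.

    `inventory` is a stand-in for a CMDB query or cloud API call;
    the point is that the file is generated, not hand-edited.
    """
    target = Path(path)
    target.write_text("\n".join(sorted(inventory)) + "\n")
    return target

# Hypothetical inventory result; a real system would query it on each run,
# so new servers appear in the file without anyone remembering to add them.
current = refresh_server_list({"sql03", "sql01", "sql02"})
```

Scheduling this refresh ahead of the main script removes the dependency on any one person remembering to keep the list current.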
Building a complete solution that adapts to a changing environment is hard, but it ought to be our goal. Ensuring that we account for data maintenance is something we ought to consider as we close the loop between building, deploying, and operating software.