Questions About the Use of PowerShell That You Were Too Shy to Ask

Let's forget the actual PowerShell code for a moment: Why is PowerShell important? Why should I use it? Where did it come from? Why did it take so long to arrive? These and many other basic questions are answered in William Brewer's latest addition to the series that tackles those seemingly simple questions you were too shy to ask in public.

  1. What is the point of PowerShell?
  2. What are PowerShell’s antecedents? Where did it come from?
  3. Why didn’t Windows have a powerful scripting language like Korn?
  4. Why did it take so long to get PowerShell?
  5. How did PowerShell come about?
  6. Why is the PowerShell Pipeline Important?
  7. Why is PowerShell useful?
  8. What is PowerShell’s main use?
  9. Can I use PowerShell within an application or website?
  10. Can PowerShell cope with Parallelism and workflow?

What is the point of PowerShell?

It is for any task that requires scripting. It gives power back to the user, developer, or administrator by providing a tool that works at a very high level of abstraction and can quickly get a task done by creating a chain of software tools, without resorting to writing a compiled application. PowerShell is an extensible, open-source, cross-platform, object-oriented scripting language that uses .NET. It can use COM, WMI, WS-Management and CIM to communicate with, and interact with, any Windows-based process. It can execute scripts on the local workstation or remotely. It is ideal for automating all sorts of processes: simple enough to manage your workstation, yet robust enough to manage SQL Azure. It will become the built-in batch processing system in future versions of Windows. It is important as a configuration management and task automation tool, but it is versatile enough to be used as a general-purpose programming language.
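
For example, a single pipeline can chain built-in cmdlets into an ad-hoc tool without any compiled code. A minimal sketch using standard cmdlets (the 100 MB threshold is just an illustrative figure):

    # List the five busiest processes by memory, largest first
    Get-Process |
        Where-Object { $_.WorkingSet64 -gt 100MB } |
        Sort-Object WorkingSet64 -Descending |
        Select-Object -First 5 Name, Id, WorkingSet64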

What are PowerShell’s antecedents? Where did it come from?

We need to go back at least to the creation of Unix.

Unix inherited from its mainframe ancestors, such as the Data General machines, the strong culture of the batch and its script. The use of the script allowed UNIX to develop a group of specialized applications that each did one job and did it well. Data could be passed into an application through its standard input, and the results passed to the standard output, which meant that data could be streamed like items on a conveyor belt. It was like building repeatable processes out of Lego.

Scripting did more than encourage piped streams of data: it also encouraged batches and command-line configuration of machines and services. This made Unix easy to use for servers, because all administration tasks could be scripted. Scaling up to large groups of servers was smooth, since everything was in place to allow it to happen. Scripting also made the management of servers more precise and less error-prone. Once a script was developed, no further work was needed: it would do the same thing in the same order with, hopefully, the same result. It made operations work a lot easier.

If you’re asking where the main inspiration came from, it was the Korn shell, with ideas from the Bash shell.

Why didn’t Windows have a powerful scripting language like Korn?

Unlike the contemporary UNIX workstations, the first PCs had no pretensions to hosting server processes. They were low-specification, affordable personal computers that initially conquered the market previously occupied by dedicated word processors, before becoming ubiquitous upon the invention of the spreadsheet. They had the ability to run batches, but this was intended merely to ease the task of installing software. Scripting just seemed wrong, old-fashioned even.

Microsoft DOS could and did run batches from the command processor, and the earliest incarnations, such as autoexec.bat, are still there in Windows (called AUTOEXEC.NT and located in the %SystemRoot%\system32 directory). After MS-DOS borrowed from Xenix, Microsoft's UNIX variant, this command processor took on some of the features of UNIX shells, such as the pipe, but with limited functionality when compared to the UNIX shells. Microsoft Windows was originally booted from the command processor, and when, in later editions, it took over the tasks of the operating system, it incorporated the old MS-DOS command-line interface tool (shell).

The features of the batch were sufficient to allow it to do a lot of configuration, installation and software-maintenance tasks. The system wasn’t encouraged or enhanced after Xenix was abandoned, but it remained a powerful tool. Xenix’s replacement, Windows NT or WNT (increment each letter of DEC’s VMS to guess its parentage), did not add anything new to the command processor, and inherited the enhanced version from MS-DOS 3.3. This batch language still exists in the latest versions of Windows, though it is due to be deprecated. It has had quite a few enhancements over the years, but what came into MS-DOS is essentially still the basis of what is currently shipped. It has a major failing within a Windows environment: it cannot be used to automate all facets of GUI functionality, since that demands at least COM automation, and some way of representing data other than as text. There have been attempts to replace the DOS batch-file technology, including VBS and Windows Script Host (1998), but PowerShell has been by far the most effective replacement.

Why did it take so long to get PowerShell?

Microsoft under Bill Gates retained the vision that BASIC should remain the core language for Windows scripting and administration …

BASIC scripting driving COM automation

To this end, all the Office applications had, and still have, underlying BASIC scripting that can be controlled via COM automation. To keep batches consistent with this, the tasks done by batch scripting were to be done by Visual Basic for Applications (VBA). This was supplied with the operating system and was supposed to drive all automation tasks. Stranger things had happened in the past: Wang’s Office system, for example, was automated and scripted via COBOL!

Language-driven development and divergence

The policy held for a while, but each Office application developed a slightly different, incompatible dialect, and they could not be kept in sync. Visual Basic was inadequate for the task and was changed until, as VB.NET, it became a somewhat comical dialect of Java. It proved to be unpopular. The cracks were showing. VBA was never quite consistent with the Visual Basic used for building applications.

Windows Script Host

Windows Script Host was introduced as an automation and administration tool, designed to provide automation technology primarily for VBScript and JScript, but supporting a number of other interpretive languages such as BASIC, Perl, Ruby, Tcl, Delphi and Python. Initially, its introduction was marred by security loopholes that were only finally solved with digital signing in Windows XP. It is still installed with MS Windows and still provides a number of useful COM interfaces that can be accessed from PowerShell and any other application that can interact with COM. Unfortunately, Windows Script Host was designed before .NET, so it cannot directly use the .NET library. It also lacks the means to use WMI, WS-Management and CIM for administration and monitoring. It forced the administrator to manage the platform using very low-level abstractions such as complex object models, schemas and APIs. Although it was useful for systems programming, it was unusable for the typical small, simple and incremental task that is at the heart of administration, which needs very high levels of abstraction.

Microsoft competes in the server market

Microsoft became so focused on the desktop market that, for a long time, it did not realize the scale of the problem it faced in competing in the server market. The culture of the GUI, ‘the GUI-centric Microsoft culture and ecosystem’, was gripped by the idea that all configuration was a point-and-click affair. That was fine for one or two servers, but it was toil, and a source of error, for a server room.

How did PowerShell come about?

Lone voice of dissent

Due to the determined persuasive powers of Jeffrey Snover, Microsoft belatedly woke up to the fact that it didn’t have a viable solution for administering the number of servers one might find in a medium-sized company (see the Monad Manifesto). The GUI didn’t scale, and the batch system of the command line, though useful, was stuck in a mid-eighties time-warp.

Having realized that there had to be a replacement for the antiquated command-line tool, Microsoft had to consider what was required. Firstly, it had to replace the command line, so it needed all the things it and other interactive shells had, such as aliases, wildcard matching, running groups of commands, conditional running of groups of commands and editing previous commands. Then it had to replace VBA, and to integrate easily with Windows Management Objects. To cap it all, it had to take over the role of VBA in being embedded in applications to make automation easier.

Microsoft needed something that looked both backwards and forwards…

An industry-standard shell, backward compatible with the command line

PowerShell started with the POSIX standard shell of IEEE Specification 1003.2, the Korn shell, which is also available on Windows.

However, the Korn shell dealt only with strings, so it had to be altered to deal with objects so that it could access WMI, WS-Management, CIM and COM. Because it needed so much connectivity and data interchange, it had to be able to use the .NET library to process .NET objects and datatypes.

…but was object oriented

The obvious way to do this was for the new system to understand .NET, so that it could make use of the man-years of work that had gone into providing a common object model that could describe itself, and could be manipulated without converting to or from text. The new scripting system had to be resolutely object-oriented. This gave PowerShell the ability to use any .NET object or value.

…with standard naming conventions

To do things in PowerShell, the team decided on an intuitive naming convention based on the verb-noun pair, with simple conventions such as ‘Get’ to fetch an object, followed by a noun describing the object.
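
A few standard cmdlets show the pattern at work (these ship with PowerShell; the ‘Spooler’ service name is just an illustration):

    Get-Process                     # get the running processes
    Get-ChildItem C:\Windows        # get the items under a path
    Stop-Service -Name 'Spooler'    # stop a service by name
    Get-Verb                        # list the approved verbs themselves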

It had to allow terseness for typing in…

PowerShell had to replace the command line with something better. The whole point of a command shell is that it must be convenient to type short commands into it. This is the ‘REPL’ that we know and love in Python, and that command-line junkies have always loved. PowerShell had to be able to work with existing terse DOS command-line commands. This meant that the seasoned user wanted to be able to type very truncated commands that were obvious to the expert.

…but also encourage a full, intelligible script

On the other hand, PowerShell was also to be used in scripts that were stored on disk and repeatedly invoked with just a change in parameters. This meant that it had to be easy to read, with intuitive commands and obvious program flow. It wasn’t an easy compromise, but it was achieved by means of aliases. Aliases also helped to ‘transition’ users from the shells they were currently using to PowerShell (for CMD.EXE it is dir, type, copy and so on; for UNIX you’ll have ls, cat, cp and the rest). You can even define your own!
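
You can ask PowerShell what an alias resolves to, or define one of your own. A small sketch on Windows PowerShell (the alias name ‘ll’ is invented for the example):

    Get-Alias dir, ls                          # both resolve to Get-ChildItem
    Get-Alias cat, type                        # both resolve to Get-Content
    Set-Alias -Name ll -Value Get-ChildItem    # define your own alias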

…where functionality could be discovered

PowerShell took an idea from .NET, which was that everything should be learnable by discovery, without needing documentation. All the objects and cmdlets in PowerShell are self-documenting, in that you can use PowerShell to find out what they do, what functions can be called, and what parameters they take.
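
A brief sketch of that discovery process, using only standard cmdlets:

    Get-Command -Verb Get -Noun Service*    # find cmdlets by verb and noun
    Get-Help Get-Service -Examples          # read the built-in help and examples
    Get-Service | Get-Member                # discover the properties and methods of the objects it emits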

Why is the PowerShell Pipeline Important?

The pipeline in PowerShell inherited the concept of a pipe from UNIX. The PowerShell team had to solve the problem of dealing with Windows Management Objects and Instrumentation by passing objects, rather than text, down the pipe. Having done so, it found itself in possession of a radical and extraordinarily useful system. It had the means of processing objects as though they were on a conveyor belt, with the means of selecting and manipulating each one as it passed down the pipeline. This made the code easier to understand and also helped with memory management. A long file could be passed down a pipeline, line by line, searching for text as it went, instead of having to read the entire file into memory (you can do that too if you want, and if you have no fear of the large object heap; you have to do it if you want, for example, to order the lines). It also meant you needed only one cmdlet for selecting things, one for sorting, one for grouping, and one for listing things out in a table. PowerShell could do a lot in a line of code, far, far more than C# could.
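
As a minimal sketch of that conveyor belt, the following searches a large log file line by line without reading it all into memory (the path is purely illustrative):

    Get-Content -Path 'C:\Logs\huge.log' |
        Where-Object { $_ -match 'ERROR' } |
        Select-Object -First 20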

Suddenly, the task of tackling the huge range of data on the average server that one might need to know about was less frightening. It was already there, and was now easy to get at and filter.

Why is PowerShell useful?

Scripts don’t require special components …

By being a full participant in .NET, PowerShell had all the power of a compiled .NET language, if not the performance. One of the bugbears of automating processes using the Windows command line was that one had to rely on a collection of existing command-line utilities to determine settings and configure things. If nothing suitable existed, the developer had to write components in a compiled language, so part of the time spent developing scripts went on making small commands. This wasn’t necessary in PowerShell: everything was there, thanks to .NET.

…and can easily access any type of hierarchical data store

PowerShell simplifies the management of hierarchical data stores. Through its provider model, PowerShell lets you manage data stores such as the registry, or a group of SQL Servers using the same techniques of specifying and navigating paths that you already use to manage files and folders.
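
A short sketch of the provider model, assuming a Windows machine (the registry paths will vary):

    Get-PSDrive                      # list the available drives: FileSystem, Registry, Env, ...
    Set-Location HKLM:\SOFTWARE      # navigate the registry as though it were a folder
    Get-ChildItem .\Microsoft | Select-Object -First 5
    Get-ChildItem Env:               # environment variables exposed as a drive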

This doesn’t turn PowerShell into a rival to C#, VB.NET, ActionScript or F#. It is not for developing applications but for automating administrative tasks. Sure, it is actually possible to write a webserver in PowerShell, or an interactive GUI using Windows Presentation Foundation, but that isn’t what it is designed for.

What is PowerShell’s main use?

Deployment

Traditionally, the command line was used for complex deployments. Because PowerShell can work remotely on any computer in the domain, and far more information about the computer is available to you, it quickly became the default means of deployment for Windows. Chocolatey, which uses NuGet, is an example of this and shows just how simple it is to install almost anything. This is great for the developer: he develops his package in NuGet and can use Chocolatey to deploy it. Linux allows you to install a package with just one simple command; Chocolatey does the same, but also allows you to update and uninstall the package just as easily. A simple script can grab the latest source from Git, compile it, install it and any dependencies, and do any special configuration tasks. There are 4,511 packages you can install from the Chocolatey site. PowerShell now has its own package manager, but the current incarnation isn’t as versatile as Chocolatey.
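
For illustration, the basic Chocolatey commands look like this, assuming Chocolatey itself is already installed (git is just an example package):

    choco install git -y      # install a package and its dependencies
    choco upgrade git -y      # update it later
    choco uninstall git -y    # remove it just as easily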

Server Administration.

The release of PowerShell met its most enthusiastic response in the server teams. The Exchange Server team were early adopters and saw PowerShell as a godsend for the administration of Exchange. The SQL Server and Active Directory teams followed suit. These teams provided specialized cmdlets that covered all aspects of the administration of the server. You just installed them and you had the means to administer the server.
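
A sketch of what that looks like in practice, assuming the relevant modules are installed (Get-ADUser needs the Active Directory module from RSAT; Invoke-Sqlcmd needs the SqlServer or older SQLPS module):

    Import-Module ActiveDirectory
    Get-ADUser -Filter 'Enabled -eq $true' | Select-Object -First 10 Name, SamAccountName

    Import-Module SqlServer
    Invoke-Sqlcmd -ServerInstance 'localhost' -Query 'SELECT @@VERSION'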

Windows Server now has the capability of using Hyper-V to provide a ‘private cloud’, which allows companies to offer a degree of ‘self-service’ for server resources. This is all driven and maintained by PowerShell.

Provisioning.

Provisioning is one of the areas where PowerShell excels. PowerShell’s Desired State Configuration (DSC) feature allows a PowerShell script to specify the configuration of the machine being provisioned, using a declarative model in a simple, standard way that is easy to maintain and understand. It can either ‘push’ the configuration to the machine being provisioned, or get the machine to ‘pull’ the configuration. Chocolatey, itself a PowerShell script, can not only install a large range of software, but also update it or remove it. PowerShell has a built-in system called ‘PackageManagement’ that isn’t as versatile, but which allows you to install packages from a wider variety of sources.
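
A minimal DSC sketch, in which the configuration name, node name and output path are all illustrative:

    Configuration WebServerConfig {
        Node 'Server01' {
            WindowsFeature IIS {
                Name   = 'Web-Server'
                Ensure = 'Present'
            }
        }
    }

    WebServerConfig -OutputPath 'C:\DSC'                    # compile the configuration to a MOF document
    Start-DscConfiguration -Path 'C:\DSC' -Wait -Verbose    # 'push' it to the target node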

Can I use PowerShell within an application or website?

As well as providing a scripting environment, PowerShell can be embedded into an application by using System.Management.Automation, so that the user of the application can extend it via scripts. You can even do this in ASP.NET.
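
The hosting API can be sketched from PowerShell itself, since the System.Management.Automation types are directly usable; in a C# or VB.NET application the calls are essentially the same:

    $ps = [System.Management.Automation.PowerShell]::Create()
    [void]$ps.AddScript('Get-Date; Get-Process | Select-Object -First 3 Name')
    $results = $ps.Invoke()    # a collection of PSObjects handed back to the host application
    $ps.Dispose()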

Can PowerShell cope with Parallelism and workflow?

Although PowerShell is an interpreted, dynamic language (using .NET’s DLR), its performance is enhanced by its ability to run processes in parallel and asynchronously. It is also designed to run securely on other machines, remotely, and pass data between them. All this is possible without Workflow.
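
Background jobs and remoting illustrate this; a brief sketch (Server01 and Server02 are placeholder computer names, and remoting must be enabled on them):

    $job = Start-Job -ScriptBlock { Get-HotFix | Measure-Object }    # run work asynchronously
    Receive-Job -Job $job -Wait                                      # collect the results when ready

    # Run a command on several machines at once over remoting
    Invoke-Command -ComputerName Server01, Server02 -ScriptBlock { Get-Service W32Time }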

Where the processes that you are scripting have complex interdependencies, and need to be interruptible and robust, then PowerShell Workflow is the way to go. Workflow can be complicated, and will always be a niche technique for scripting, but much of the complexity is hidden. PowerShell is now able to run complex workflows within a domain, making it possible to script even the most difficult business processes: ones that contain long-running tasks requiring persistence, and that need to survive restarts and interruptions. Under the hood, PowerShell uses the Windows Workflow Foundation (WF) engine. A PowerShell workflow involves the PowerShell runtime compiling the script into Extensible Application Markup Language (XAML) and submitting this XAML document to the local computer’s Workflow Foundation engine for processing.
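
A minimal workflow sketch, for PowerShell 3.0 to 5.1 where Workflow Foundation support is available (the workflow name and computer name are invented for the example):

    Workflow Update-Inventory {
        parallel {
            Get-Process
            Get-Service
        }
        Checkpoint-Workflow    # persist state so the workflow can survive a restart or interruption
    }

    Update-Inventory -PSComputerName Server01    # common workflow parameters are added automatically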

PowerShell Workflow scripting is particularly useful in high-availability environments for processes such as ETL (data Extraction, Transformation and Loading) that potentially require throttling and connection pooling, and it is ideal where data must come from a number of sources and be loaded in a certain order.