You may have noticed that I’ve been harping for a while on the use of RESTful APIs with PowerShell in this Automation for Operations series. It’s been an interesting and enlightening journey to craft functions and scripts in PowerShell that turn common IT tasks into workflows.
Recently, I was asked why I’m so interested in APIs and why I think they are so important for infrastructure administrators and engineers moving forward. It’s a worthy question that I’ll try to answer here.
Historically, enterprise systems have either been closed or difficult to manipulate. They often used specialized forms of communication, vendor-specific protocols, or unpublished APIs of which only the internal development team could take advantage. Think back to command line tools like EMC’s navicli or talking to a NetApp filer using SMI-S. Not that I’m picking on them specifically, but I used them as a customer – so I remember the pain!
I found the tools clunky, and each required expertise in its own funky usage (parameters, switches, syntax, and so on). In a nutshell: fragmented methods of communicating with a device, each demanding mid-to-deep knowledge of how the tool itself worked.

The use of APIs really turns the model on its head. There are no tools to learn, nor specific languages or protocols to master – you can pick pretty much any scripting language you want, and it’ll support REST. My two favorites are PowerShell and Python, with an honorable mention to Ruby and vRealize Orchestrator.
How does using an API compare to the days of yore? Just find the list of published resources, pick a method, and fire over a request. Easy! Putting aside some nuances in body format (XML, JSON, YAML) and how the API is versioned, it really is that simple to send or receive data from any device that offers a RESTful API.
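To put that in concrete terms, here’s a minimal PowerShell sketch of a REST call. The endpoint, credential prompt, and “volumes” resource are all made up for illustration; the real resource names come from whatever list the vendor publishes in its API documentation.

```powershell
# Hypothetical endpoint and credentials for illustration only.
$baseUri = 'https://array01.lab.local/api/v1'
$cred    = Get-Credential

# GET a (made-up) list of volumes from the device's RESTful API.
$volumes = Invoke-RestMethod -Uri "$baseUri/volumes" -Method Get -Credential $cred

# The JSON response comes back as PowerShell objects, ready to filter or pipe onward.
$volumes | Where-Object { $_.capacityGB -gt 500 } | Select-Object name, capacityGB
```

Swap the URI and the resource, and the same handful of lines works against nearly any RESTful endpoint.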
Here are some of the most powerful benefits for infrastructure engineers who learn to work with APIs:
- Skills Overlay – Once you learn how to communicate with one RESTful API, the others are similar enough that only minor time investments are required to start operationalizing other endpoints.
- Code Re-Use – Within a specific API, it’s trivial to re-use the same bits of code over and over. This could take the form of modular code, shared functions, or templates.
- Standardization – Because you can choose any language to communicate with an API, you can slim down the amount of tooling required to automate the data center. This means choosing the languages and tools you want / need / like without having to load up on 30 of them. Plus, you don’t have to rely on the vendor to feed you a tool.
- Stateless – There’s no need to install software, clients, agents, Java (and its version dependency nightmare), or other junk on your desktop. Some vendor tools have even required a specific version of Internet Explorer or the Windows OS to run. That’s bogus!
- Feedback Loops – Every API call returns response headers and status information. It’s much easier to gather feedback from API calls, checking status codes and the like, to determine what’s working and what’s breaking. This saves a lot of headache when streamlining a workflow (see the sketch after this list).
- Cloudy with a Chance of Meatballs – It’s not possible to create a software defined data center / cloud environment without automation and APIs. Hands down.
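To make the feedback-loop point above a bit more concrete, here’s a minimal sketch of checking the status of a call in PowerShell. The URI is hypothetical, and vendors vary in how much detail they return on a failure, but the pattern of reading the status code and catching errors is the same everywhere.

```powershell
# Hypothetical URI and credentials for illustration only.
$uri  = 'https://array01.lab.local/api/v1/volumes'
$cred = Get-Credential

try {
    $response = Invoke-WebRequest -Uri $uri -Method Get -Credential $cred
    # Successful (2xx) calls land here; the status code and headers sit right on the response object.
    Write-Output "Call succeeded with status $($response.StatusCode)"
}
catch {
    # Failed calls throw, so the workflow can react instead of silently carrying on.
    Write-Warning "Call failed: $($_.Exception.Message)"
}
```

Baking a check like this into each step of a workflow is what makes it trustworthy enough to run unattended.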
It’s for these reasons, along with some other minor conveniences, that I leaned on vendors to provide a deeply integrated API – preferably the same one they use for their own GUI. Considering how long hardware lives in most data centers, it doesn’t make sense to invest in technology today that can’t be easily automated and dropped into an orchestration engine tomorrow.