As I started working on this post, I realized that I’ve been thinking about the design and consumption of APIs for about 7 years now. I still recall the first bits of code that I wrote to interact with an API and how mysterious and confusing it all was. At the time, most systems I worked with used the Simple Object Access Protocol (SOAP) or the generation of Remote Procedure Calls (RPCs) to read and mutate data with XML payloads. Today, it’s more of a Representational State Transfer (REST) driven world. However, I’m starting to see much more usage of GraphQL, especially across the traditionally sleepy infrastructure space.
Also, I saw this Tweet from Rosalie Marshall about a talk given by Kin Lane (aka the API Evangelist). As a fan, I knew there was some storytelling of my own that may be worth sharing.
In this post, I’m taking a step back and starting with a clean slate on the various extensible layers that data traverses in order to reach a user or service. My perspective centers on the life of Operations. If you finish this post with a greater understanding of the architecture and flows for data within the framework of an API, we’ll consider this a win for both of us.
The Extensible Layers of Data
Let me be blunt in saying that there is nothing magical or special about an API. However, like any architecture, there are numerous ways to do things right and numerous ways to do things not so right. To help visual learners (like myself), I’ve crafted my world view on “The Extensible Layers of Data.”

Starting at the bottom, we see databases, remote procedure calls (RPCs), and other data sources. This is where we can extract the “raw” data to be transformed higher up the stack. This layer is fed by numerous sources and workflows that I have omitted as out of scope from the diagram.
From there we move into the API abstraction layer. I’ve highlighted two popular architectures for constructing an API: REST and GraphQL. The underlying data sources are abstracted by the API so that any user, service, or tool can work with data while also gaining access to operations that map to business logic.
Note: The API should never be a direct mapping of the underlying data source! Otherwise, the purpose of abstraction is defeated. In other words – the API should not be a public facing database query. If you encounter one like this, it is OK to be sad.
The final top layer is all about use cases. I classify this as anything from the creation of software development kits (SDKs) and modular plug-ins to the tools, scripts, and flows used to deliver value for the organization. I’ve divided this into two parts because there is no strict dependency; you can have a tool that directly calls an API, if needed, or have the tool leverage an SDK to reduce the effort required for upkeep at the cost of total flexibility.
Are there other integrations and layers to consider? Yes. Clients, mobile, front-end development, API gateways, caching techniques, RBAC, pagination, and so on. However, I’m going to keep things fairly simple for now.
Let’s first get deeper into data and abstraction layers, including some code examples, and then finish with the use case layer.
The Data Source Layer
The data source layer is a mixture of data repositories, such as a relational database, and remote queries, such as gRPC calls and external APIs. For infrastructure solutions, this is frequently an internally placed metadata layer that can be queried for system and object states. I usually run into Apache Cassandra or CockroachDB for distributed data storage.
Database queries (and sub-queries) are constructed using a fairly in-depth understanding of the data’s structure, types, schema, and model. By using various operators to compare, evaluate, or logically join data in a query, one is able to return the desired data objects to perform a task. This doesn’t scale well when dealing with a variety of resources, and most databases cannot (or should not) be publicly queried.
I would prefer something that more closely aligns to the “Locality and Simplicity” of Gene Kim’s Five Ideals by avoiding the need to reach out to an expert and having a simple system with which to interact.
Database Example
Let’s assume a user wants to get information on an EC2 instance running in AWS and has access to a CockroachDB instance that stores that information along with a descriptive name. Their end goal is to figure out which instance was last powered on in the production VPC. So, the user decides to simply query a list of all instances and then parse through the data to find the most recent power on datetime.
Such a query may look like what you see below:
cockroach sql --insecure --host=localhost:26257

SELECT id, name, start FROM ec2_instance WHERE id = 'i-0764330c7bb59d8ed';

          id          |     name      |        start
----------------------+---------------+---------------------
  i-0764330c7bb59d8ed | workload_1739 | 2020-02-11-08:00:00
...snip...
This example does answer my question: through this query, I know that instance id i-0764330c7bb59d8ed was last powered on and is named workload_1739. Perhaps I sorted the results in descending order based on the start column, or pulled that data into another object to run a local query on all the start values. Either way, the entirety of this flow looks roughly as shown below:

There’s a lot of domain expertise and specific access required to do this. Plus, the SQL query itself wasn’t really focused on solving the business problem in its entirety. There’s nothing wrong with this data flow, but it should be possible to make this sort of question simpler and easier to answer.
The Abstraction Layer – Hello, APIs!
APIs provide an accessible interface to the resources required while also providing the first data abstraction layer. This shifts us away from having to be an expert in the database schema and allows for queries that map to business logic. An API should be designed to provide value to consumers looking to perform actions that ultimately complete a use case.

Let’s return to the earlier database query concept. An API is somewhat akin to using stored procedures in a database to craft a repeatable query that can be executed on demand and published to other consumers. I could, for example, take the earlier Cockroach query – or something much more complex – and drop it into a stored procedure for repeated use. However, an API takes this concept further and should be designed for a larger audience to consume using a framework that is secure, extensible, and simple.
This can be done using a RESTful query via CRUD (create, read, update, delete) methods or using a GraphQL query or mutation. Both architectures have different positives and negatives associated with them. However, both options should align to a business outcome that needs to be offered to something (or someone) higher in the stack.
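To make the contrast concrete, here is a quick sketch of the difference in shape. The endpoint path and query below are hypothetical, invented for illustration – they do not come from a real API. A REST design spreads meaning across resource-oriented endpoints and HTTP methods, while GraphQL exposes a single endpoint that accepts named operations.

```
# REST: the resource path and HTTP method carry the meaning (hypothetical path)
GET /api/v1/ec2-instances/i-0764330c7bb59d8ed

# GraphQL: one endpoint; the operation in the body carries the meaning
POST /api/graphql
{"query": "query { ec2Instance(id: \"i-0764330c7bb59d8ed\") { name start } }"}
```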
GraphQL at a Glance
Finally, some GraphQL! I apologize for teasing everyone for so long, but having some background information helps set context. If you have literally never heard of GraphQL before, bookmark the Introduction to GraphQL learning link and read through it as time permits.
GraphQL brings a few things to the table that REST does not. Here are some of my favorites:
- It’s strongly typed: you must explicitly specify the types of variables and objects. Thus, you can’t execute “A + 5” and expect it to work. Additionally, there is a single standard course of action upon encountering a type error, making collaboration and error handling simpler. Fans of Scala, C#, and Golang will appreciate this.
- It’s hierarchical in design: when querying the API, the structure of the schema is shaped in the same way as the data returned. It makes visualizing and productizing an API feel more natural. I think the SpaceX API is great at portraying this.
- It’s efficient across the network: with REST, you typically have to get a bunch of data, then throw away most of it. With GraphQL, you can be much more specific about what you want directly within the query, which can save a ton of round trips through the network. This is partially why Facebook crafted and open sourced GraphQL – they have a mountain of data to sift through, and REST required way too many requests to get the answers they needed.
- It’s introspective: tools such as GraphiQL and the Apollo GraphQL Playground (based on GraphiQL) make it trivial to view and interact with the schema and documentation for an API. Yes, REST has the OpenAPI Specification (OAS) and Swagger UI, but GraphQL makes it much easier to plug into an unknown API and parse incoming data into something useful for your code. This is no excuse to skimp on documentation!
Which One? Choosing an API design isn’t a “this versus that” sort of game. While GraphQL has a lot of attractive features that I’ve outlined above, there are definitely reasons to stick with REST: it’s extremely well known and understood, there’s a ton of tooling and education around it, and for some solutions the usage of GraphQL is overkill. As with anything in tech, find the right tool for the job and keep flame wars out of it.
GraphQL Query Example
Let us look at a GraphQL query named EC2InstancesListQuery that can retrieve information on any number of AWS EC2 instances. The API pulls from a cloud-based metadata repository (database) and has numerous queries available for consumption.
Note: In case you are curious, the example code comes from the GraphQL API used by Rubrik’s Polaris SaaS solution. I’ve snipped a fair bit of the entire query for brevity. Keep in mind that the implementation of GraphQL that you encounter may look different in form and structure, but the underlying principles remain the same.
{
  "operationName": "EC2InstancesListQuery",
  "variables": {
    "first": 20,
    "sortBy": "EC2_INSTANCE_ID",
    "sortOrder": "ASC",
    "filters": [
      { "field": "EC2_INSTANCE_NAME_OR_INSTANCE_ID", "texts": [ "i-0764330c7bb59d8ed" ] },
      { "field": "IS_ARCHIVED", "texts": [ "0" ] }
    ]
  },
  "query": "query EC2InstancesListQuery($first: Int, $after: String, $sortBy: HierarchySortByField, $sortOrder: HierarchySortOrder, $filters: [Filter!]) { ec2InstancesList: awsNativeEc2InstanceConnection(first: $first, after: $after, sortBy: $sortBy, sortOrder: $sortOrder, filter: $filters) { edges { node { id instanceId instanceName vpcName region vpcId isRelic instanceType …snip… } } } } }"
}
That’s certainly way more code than what was used for the SQL query.
Let’s walk through some of the major parts of this particular GraphQL query structure. Keep in mind that each API may structure things a bit differently – check out the GitHub GraphQL API v4 or the SpaceX GraphiQL Explorer to see a few live examples.
- Operation Name: The operation name describes the operation we want to use, which is EC2InstancesListQuery in this case. This operation is known to the API and is visible in the schema. Because there is only a single endpoint with GraphQL, this design is somewhat similar to REST endpoints, except that operations are much more flexible to design and iterate upon. I can also place RBAC-level controls directly on an operation, which is handy!
- Variables: Rather than defining values directly in the query, we abstract them using $variable syntax and store the values elsewhere. With REST, this would typically be done by placing information in the header (queries) or by using the body (parameters).
- Operation Type: This is the type of operation being executed. Besides a query, we can also choose a mutation or subscription for other use cases. In the Rubrik example, I’m requesting fields such as id, instanceId, vpcName, and region by calling them out in the query. It’s like saying “Hey, I need these things – go find the fields that match this query and return them to me!” With REST, this is not possible; you always get the complete list of fields known by the resource.
Looking at the Code
At the very bottom of this post are examples of the code required to send this query using curl, Golang, and PowerShell. There are a few things you’ll notice about GraphQL that differ from REST:
- There is only one URI used to communicate with GraphQL.
- The only method used is POST.
- Because line breaks are not allowed within a JSON string value, escape characters such as \n are used to generate a single-line JSON payload that can be read by GraphQL.
GraphQL Query Response
Most everything else – from a raw structure perspective – should be somewhat familiar to a REST consumer. The GraphQL response contains information on the desired instance and is sent over as a standard JSON payload in the body.
"id": "deb6759a59c8",
"instanceId": "i-0764330c7bb59d8ed",
"instanceName": "workload_1739",
"instanceType": "T2_MICRO",
"isExocomputeConfigured": true,
"isIndexingEnabled": true,
"isRelic": false,
"region": "US_WEST_2",
"slaAssignment": "Direct",
"vpcId": "vpc-6e689b",
"vpcName": "Workload VPC 37"
If you’re thinking “OK, you now have the same data but jumped through a lot of hoops to get it” – you’re right! This operation isn’t super helpful for my use case – it’s just returning information on the instances. I would still have to parse through them to find the one I want. However, what if we instead made a new operation that matched my business need?
Aligning GraphQL Operations to Outcomes
The operation used can be much more powerful than simply reporting back with information on an EC2 instance. The API could also provide an operation that uses logic, arguments, aliases, and directives – all tools that are available for a GraphQL schema – to provide more insightful answers.
Returning to our earlier example scenario, perhaps we still want to know which EC2 instance was most recently started in a specific production VPC, with a reminder as to what region the VPC resides in. An operation can be constructed for such a query. It can look across all instances in the region, focus on those that are part of the production VPC, and then order based on start time. The instance with the most recent start time is selected and returned. The return format can even be modified, if desired.

If that isn’t interesting enough to pique your curiosity, this can be taken further when other services are inserted between you, the user, and the API to pull in useful data for you.
Concepts like ChatOps usher in the idea of “talking” with a service via various chat platform APIs such as Slack, Microsoft Teams, and Mattermost. In this model, you have APIs talking to other APIs. I even worked with Brandon Olin over at Stack Overflow to craft a module for his popular PoshBot using PowerShell and REST.
This lowers the barrier to entry even further by using a webhook or service listener to capture user chat, determine the appropriate queries to send, and then format the response back for the user to read.

The most exciting thing about using an API to abstract data sources is how limitless the possibilities are. With some foundational understanding of API queries, you are able to make decisions and take action based on whatever data you have at your fingertips.
Flashback: One of my first use cases for an API was to query the temperature and snowfall prediction for the next day. If the response stated that it would snow the next day, I would be alerted when waking up in the morning. I could then check for snow school closures and, in many cases, just go back to sleep.
GraphQL Mutation Example
Thus far, most of the focus has been on querying information. What about modifying data? This is the job of a mutation: the GraphQL construct for server-side writes.
A mutation looks similar in structure to a query. However, the operation type is a mutation and the values supplied in the variables section hold the new or updated values for whichever fields need to be mutated.
Perhaps the instance identified in the earlier examples needs to have a backup taken before a release or deployment is allowed to proceed. A mutation can trigger just such an event. The example below provides the payload sent over to GraphQL, which is now using a mutation against the internally known id value of the instance.
{
  "operationName": "TakeEC2InstanceSnapshotMutation",
  "variables": {
    "ec2InstanceIds": [ "deb6759a59c8" ]
  },
  "query": "mutation TakeEC2InstanceSnapshotMutation($ec2InstanceIds: [UUID!]!) { createAwsNativeEc2InstanceSnapshots(ec2InstanceIds: $ec2InstanceIds) { taskchainUuids { ec2InstanceId taskchainUuid __typename } } }"
}
Once this payload is packaged up, the singular GraphQL API entry point is given the data via a POST method request. I then watched the Polaris UI and saw the EC2 instance start a new backup. For some reason I enjoy watching automated tasks like this unfold.

My hope is that all of this GraphQL information gives you more awareness as to what’s going on in this part of the API world, along with some more concrete examples and workflows to get started with your environment.
Let’s move on to the final layer, Use Cases, and dig into what people are building on top of all these fancy APIs.
The Use Case Layer
Now that we have a data abstraction layer that allows pretty much anything that can talk over HTTPS to query and mutate data, there exists the opportunity to construct SDKs, tools, integrations, scripts, flows, and all other types of use cases. I hinted at this when talking about ChatOps and crafting operations that align towards business outcomes. There is more that can be done.
This brings up another decision point:
- Do you want to directly consume the API to build your scripts, tools, and flows?
- Or, do you want to use SDKs and other modular functionality that further abstracts away the API?
Both are valid routes to take. I’ve zoomed in on the Use Case Layer in the diagram below to show some of the different directions you can explore.

Constructing an SDK or module allows for a tailored user experience in the language or framework of choice. For example, when I first joined Rubrik I wrote the beginnings of a PowerShell SDK. I knew that our audience was entirely made up of VMware administrators and engineers who statistically lean towards using PowerShell as the tool of choice. Thus, the SDK would take on the responsibility of working with the Rubrik API.
SDK and Module Example
There’s more to an SDK than simply being a familiar language. Let’s say that a user wants to find the last snapshot (backup) taken for a particular workload. I’ll use the aforementioned PowerShell SDK in this example by using this set of cmdlets piped together:
Get-RubrikVM -id 'VirtualMachine:::4c0f0c71-1390-4017-9206-f8b16bd7ca8c-vm-78' |
    Get-RubrikSnapshot | Select-Object -First 1

date                         : 2020-02-14 18:14:06
indexState                   : 1
slaName                      : Demo-12H-R07-A60_AWS_USW1
vmName                       : CWAHL-WIN
slaId                        : aea2f90f-5066-41d5-8154-0c884b6eb6c8
replicationLocationIds       : {}
archivalLocationIds          : {}
isOnDemandSnapshot           : False
cloudState                   : 0
id                           : 68f21d2b-5d65-4fb8-a9de-c6e0a5e1c6eb
consistencyLevel             : CRASH_CONSISTENT
isRetainedByRetentionLockSla : False
What’s actually happening behind the scenes is that numerous API queries are being sent: first a request to get information on a virtual machine, then one to get a list of snapshots for that virtual machine, and finally a local filter to return only the first object in the array. I’m assuming that the first snapshot object is the most current. But what if that changes?
Instead, the SDK can further map a business outcome to a simplified parameter that is less error prone and also easier to consume for a better user experience. Let’s add the parameter -Latest to the function and push all of the logic into the SDK.
Get-RubrikVM -id 'VirtualMachine:::4c0f0c71-1390-4017-9206-f8b16bd7ca8c-vm-78' |
    Get-RubrikSnapshot -Latest

date                         : 2020-02-14 18:14:06
indexState                   : 1
slaName                      : Demo-12H-R07-A60_AWS_USW1
vmName                       : CWAHL-WIN
slaId                        : aea2f90f-5066-41d5-8154-0c884b6eb6c8
replicationLocationIds       : {}
archivalLocationIds          : {}
isOnDemandSnapshot           : False
cloudState                   : 0
id                           : 68f21d2b-5d65-4fb8-a9de-c6e0a5e1c6eb
consistencyLevel             : CRASH_CONSISTENT
isRetainedByRetentionLockSla : False
The results are the same. Yet, now the onus of finding the latest snapshot is no longer pushed to the user. It is absorbed by the SDK. I find that examples like this reinforce the Locality and Simplicity ideal from the Five Ideals. Ask yourself: how do I make this experience even better and less error prone?
Seeing a pattern? At each abstraction layer we’re taking a data source, applying logic, and returning the values necessary to fulfill a business outcome. Each layer brings the opportunity to provide a greater user experience and get closer to the environment with which folks are familiar. GraphQL operations, PowerShell functions, and REST endpoints are all ways to ask questions and get answers so that something desired happens downstream.
Script, Tool, or Flow Example
Additionally, I can use scripts, tools, or flows to achieve a desired outcome. This is where having a solid set of SDKs and modules to consume comes in handy. Rather than writing my scripts and tools to connect directly to an API, I can consume the SDKs and modules instead. There are some significant benefits to this approach:
- API Dependencies: every time a “breaking change” is introduced into an API, I ultimately have to go in and fix the break. APIs will always reach points where they break, whether we’re talking about GraphQL or REST, because change is normal! However, the use of an SDK or module in my script or tool means that most of the downstream API breakages are invisible to me. This should result in fewer changes required to your scripts and tools.
- Quicker Time to Usage: SDKs and modules are designed for rapid consumption and a better user experience. Rather than expending time and effort on learning the API, you’re able to start getting value from the Extensible Data Layers quickly when using an SDK.
The drawback is that not all SDKs are created equal, and some are programmatically generated with zero effort put into the user experience. I’m personally not a fan of using software to write software without some sort of human touch applied. This comes from experience working with pretty horrible SDKs, so your mileage may vary.
If you need to send requests directly to the API, that’s fine, too. I’ve applied some lighter dotted lines to the diagram to show that this does happen – especially for lightweight calls that simply retrieve a status or perform a singular task, where deploying and managing an SDK feels like overkill. There is no one right answer to this!
Thoughts
This post took a bit of time to write and is a collection of lots of different sources, ideas, and experiences as I’ve become more familiar with GraphQL. I felt like most posts dig deeply into the “what” of using GraphQL without spending enough time in the “why” or the “how”.
Feedback is always welcome! I hope to make this a bit of a bulkier cornerstone piece that I can base other articles upon, so let me know if you want to explore more about APIs, the Extensible Layers of Data, GraphQL, or anything in between. Cheers! ✌
Next Steps
Please accept a crisp high five for reaching this point in the post!
If you’d like to learn more about APIs, or other modern technology approaches, head over to the Guided Learning page.
Appendix: Code Examples
curl
curl --location --request POST 'https://fun.example.com/api/graphql' \
  --header 'authorization: Bearer token' \
  --header 'content-type: application/json' \
  --header 'accept: */*' \
  --data-raw '{"operationName":"EC2InstancesListQuery","variables":{"first":20,"sortBy":"EC2_INSTANCE_ID","sortOrder":"ASC","filters":[{"field":"EC2_INSTANCE_NAME_OR_INSTANCE_ID","texts":["i-0764330c7bb59d8ed"]},{"field":"IS_ARCHIVED","texts":["0"]}]},"query":"query EC2InstancesListQuery($first: Int, $after: String, $sortBy: HierarchySortByField, $sortOrder: HierarchySortOrder, $filters: [Filter!]) {\n ec2InstancesList: awsNativeEc2InstanceConnection(first: $first, after: $after, sortBy: $sortBy, sortOrder: $sortOrder, filter: $filters) {\n edges {\n node {\n id\n instanceId\n instanceName\n vpcName\n region\n vpcId\n isRelic\n instanceType\n isExocomputeConfigured\n isIndexingEnabled\n isMarketplace\n effectiveSlaDomain {\n name\n ... on ClusterSlaDomain {\n fid\n cluster {\n id\n name\n __typename\n }\n __typename\n }\n ... on GlobalSla {\n id\n __typename\n }\n __typename\n }\n awsNativeAccount {\n id\n name\n status\n __typename\n }\n slaAssignment\n authorizedOperations {\n id\n operations\n __typename\n }\n effectiveSlaSourceObject {\n fid\n name\n objectType\n __typename\n }\n __typename\n }\n __typename\n }\n pageInfo {\n endCursor\n hasNextPage\n hasPreviousPage\n __typename\n }\n __typename\n }\n}\n"}'
Golang
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

func main() {
	url := "https://fun.example.com/api/graphql"
	method := "POST"

	// A raw string literal keeps the \n escape sequences intact as literal
	// two-character sequences within the JSON payload.
	payload := strings.NewReader(`{"operationName":"EC2InstancesListQuery","variables":{"first":20,"sortBy":"EC2_INSTANCE_ID","sortOrder":"ASC","filters":[{"field":"EC2_INSTANCE_NAME_OR_INSTANCE_ID","texts":["i-0764330c7bb59d8ed"]},{"field":"IS_ARCHIVED","texts":["0"]}]},"query":"query EC2InstancesListQuery($first: Int, $after: String, $sortBy: HierarchySortByField, $sortOrder: HierarchySortOrder, $filters: [Filter!]) {\n ec2InstancesList: awsNativeEc2InstanceConnection(first: $first, after: $after, sortBy: $sortBy, sortOrder: $sortOrder, filter: $filters) {\n edges {\n node {\n id\n instanceId\n instanceName\n vpcName\n region\n vpcId\n isRelic\n instanceType\n isExocomputeConfigured\n isIndexingEnabled\n isMarketplace\n effectiveSlaDomain {\n name\n ... on ClusterSlaDomain {\n fid\n cluster {\n id\n name\n __typename\n }\n __typename\n }\n ... on GlobalSla {\n id\n __typename\n }\n __typename\n }\n awsNativeAccount {\n id\n name\n status\n __typename\n }\n slaAssignment\n authorizedOperations {\n id\n operations\n __typename\n }\n effectiveSlaSourceObject {\n fid\n name\n objectType\n __typename\n }\n __typename\n }\n __typename\n }\n pageInfo {\n endCursor\n hasNextPage\n hasPreviousPage\n __typename\n }\n __typename\n }\n}\n"}`)

	client := &http.Client{}
	req, err := http.NewRequest(method, url, payload)
	if err != nil {
		fmt.Println(err)
		return
	}
	req.Header.Add("authorization", "Bearer token")
	req.Header.Add("content-type", "application/json")
	req.Header.Add("accept", "*/*")

	res, err := client.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer res.Body.Close()

	body, err := ioutil.ReadAll(res.Body)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(string(body))
}
PowerShell
$headers = @{
    "authorization" = "Bearer token"
    "content-type"  = "application/json"
    "accept"        = "*/*"
}

# A single-quoted string keeps the embedded double quotes and \n escape
# sequences literal, so the JSON payload is sent exactly as written.
$body = '{"operationName":"EC2InstancesListQuery","variables":{"first":20,"sortBy":"EC2_INSTANCE_ID","sortOrder":"ASC","filters":[{"field":"EC2_INSTANCE_NAME_OR_INSTANCE_ID","texts":["i-0764330c7bb59d8ed"]},{"field":"IS_ARCHIVED","texts":["0"]}]},"query":"query EC2InstancesListQuery($first: Int, $after: String, $sortBy: HierarchySortByField, $sortOrder: HierarchySortOrder, $filters: [Filter!]) {\n ec2InstancesList: awsNativeEc2InstanceConnection(first: $first, after: $after, sortBy: $sortBy, sortOrder: $sortOrder, filter: $filters) {\n edges {\n node {\n id\n instanceId\n instanceName\n vpcName\n region\n vpcId\n isRelic\n instanceType\n isExocomputeConfigured\n isIndexingEnabled\n isMarketplace\n effectiveSlaDomain {\n name\n ... on ClusterSlaDomain {\n fid\n cluster {\n id\n name\n __typename\n }\n __typename\n }\n ... on GlobalSla {\n id\n __typename\n }\n __typename\n }\n awsNativeAccount {\n id\n name\n status\n __typename\n }\n slaAssignment\n authorizedOperations {\n id\n operations\n __typename\n }\n effectiveSlaSourceObject {\n fid\n name\n objectType\n __typename\n }\n __typename\n }\n __typename\n }\n pageInfo {\n endCursor\n hasNextPage\n hasPreviousPage\n __typename\n }\n __typename\n }\n}\n"}'

$response = Invoke-RestMethod 'https://fun.example.com/api/graphql' -Method 'POST' -Headers $headers -Body $body
$response | ConvertTo-Json